HASCA2016

4th International Workshop on Human Activity Sensing Corpus and Application: Towards Open-Ended Context Awareness

HASCA Mailing List Application

Thanks for attending HASCA2016!

As we discussed at the closing session of HASCA2016, we will provide a mailing list for future collaboration around Human Activity Recognition.

If you would like to receive further information related to HASCA, please fill in the following form.

[ HASCA Mailing List Application ]


Welcome to HASCA2016

Welcome to the HASCA2016 website!

HASCA2016 is the fourth workshop on Human Activity Sensing Corpus and Application: Towards Open-Ended Context Awareness. The workshop will be held in conjunction with UbiComp2016.

** Workshop Program is now open **

Abstract

Technological advances enable the inclusion of miniature sensors (e.g. accelerometers, gyroscopes) in a variety of wearable/portable information devices. Most current devices use these sensors only for simple orientation and gesture recognition. In the future, however, the recognition of more complex and subtle human behaviours from these sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g. dementia care). This will require large-scale human activity corpora and much improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world.

As a special topic this year, we wish to reflect on the challenges of, and possible approaches to, recognizing situations, events or activities outside of a statically pre-defined pool - the current state of the art - and instead adopting an open-ended view on activity and context awareness. Following the success of previous years, we further plan to share experiences of current research on human activity corpora and their applications among researchers and practitioners, and to hold an in-depth discussion on the future of activity sensing, in particular towards open-ended contextual intelligence.

We solicit contributions on the following topics (but not limited to these).

Data collection / Corpus construction

Experiences or reports from data collection and/or corpus construction projects. This topic also includes papers describing formats, styles or methodologies for data collection. Crowd-sourced data collection and participatory sensing fall under this topic as well.

Effectiveness of Data / Data Centric Research

There is a field of research based on collected corpora, known as “Data Centric Research”. We also solicit reports on experiences of using large-scale human activity sensing corpora. When large-scale corpora are combined with machine learning technology, there is considerable room for improving the performance of recognition results.

Tools and Algorithms for Activity Recognition

With appropriate tools for managing sensor data, activity recognition researchers could focus more on their research themes. However, the development of tools and algorithms to be shared within the research community receives little recognition. In this workshop, we solicit reports on the development of tools and algorithms that move the community forward.

Real World Application and Experiences

Activity recognition usually works well “in the lab”, but this often does not hold in the real world. In this workshop, we therefore also solicit experiences from real-world applications. There is a huge gap between the lab environment and the real-world environment, and large-scale human activity sensing corpora will help to bridge it.

Sensing Devices and Systems

Data collection is not performed only with “off the shelf” sensors; obtaining some kinds of information requires the development of special devices. There is also a research area concerned with developing and evaluating systems and technologies for data collection.

In light of this year's special emphasis on open-ended contextual awareness, we wish to cover the following topics as well:

Mobile experience sampling, experience sampling strategies

Advances in experience sampling approaches, for instance intelligently querying the user or using novel devices (e.g. smartwatches), are likely to play an important role in obtaining user-contributed annotations of their own activities.

Unsupervised pattern discovery

Discovering meaningful repeating patterns in sensor data can be fundamental in informing other elements of a system that generates an activity corpus, such as querying the user or triggering crowd-sourced annotation.
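
To make the idea concrete, a minimal sketch (illustrative only, not tied to any submitted work) is shown below: fixed-length windows of accelerometer data are clustered, and windows that lie far from every cluster centre are flagged as unfamiliar patterns worth presenting to the user or a crowd for annotation. The data, window sizes and thresholds here are hypothetical placeholders.

```python
# Illustrative sketch: cluster sliding windows of accelerometer data and
# flag windows far from every cluster centre as candidates for annotation.
# `acc` stands in for a real (N, 3) tri-axial accelerometer recording.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
acc = rng.normal(size=(10_000, 3))             # placeholder sensor stream

win, step = 128, 64                            # window length and hop size
windows = np.stack([acc[i:i + win].ravel()
                    for i in range(0, len(acc) - win, step)])

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(windows)
dist_to_centre = kmeans.transform(windows).min(axis=1)

# Windows in the top 5% of distances do not resemble any known cluster;
# these are the ones a system might hand to the user (or a crowd) to label.
threshold = np.quantile(dist_to_centre, 0.95)
query_idx = np.where(dist_to_centre > threshold)[0]
print(f"{len(query_idx)} windows flagged for annotation")
```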

Dataset acquisition and annotation through crowdsourcing, web-mining

An abundance of sensor data is potentially within reach, with users instrumented with their mobile phones and other wearables. Capitalising on crowdsourcing to create larger datasets in a cost-effective manner may be critical to open-ended activity recognition. Online datasets could also be used to bootstrap recognition models.

Transfer learning, semi-supervised learning, lifelong learning

The ability to translate recognition models across modalities, or to use minimal supervision, would allow datasets to be reused across domains and reduce the cost of acquiring annotations.
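
As a purely illustrative sketch of the minimal-supervision idea (not drawn from any of the accepted papers), the snippet below uses scikit-learn's self-training wrapper to propagate a handful of labels to a large pool of unlabelled feature vectors; the dataset is a synthetic placeholder for real activity features.

```python
# Illustrative sketch: semi-supervised self-training with a small labelled
# set and a large unlabelled pool (unlabelled samples are marked with -1).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Placeholder features standing in for windows of wearable sensor data.
X, y = make_classification(n_samples=2_000, n_features=20, n_classes=3,
                           n_informative=5, random_state=0)

# Pretend only 40 windows were annotated; the rest remain unlabelled.
labels = np.full_like(y, -1)
labelled = np.random.default_rng(0).choice(len(y), size=40, replace=False)
labels[labelled] = y[labelled]

model = SelfTrainingClassifier(LogisticRegression(max_iter=1_000),
                               threshold=0.9)
model.fit(X, labels)
print("accuracy on all data:", model.score(X, y))
```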


Program

September 12th (Monday)

Long paper: 15 min talk including discussion
Short paper: 10 min talk including discussion (marked as [Short])

9:00--9:10

Opening
(Daniel Roggen)

9:10--10:00

Session 1: Keynote
(Chair: Kristof Van Laerhoven)

[Keynote] Visual Turing Test and Deep Learning vs. Privacy

Mario Fritz (MPI Saarbruecken)

abstract:
With the advance of sensing technology and the availability of abundant data resources, machines can get a detailed “picture” of the real world, unlike ever before. However, there is a big gap between the raw data and the semantic understanding a human may acquire by analyzing the same data. My goal is to narrow and eventually close this gap between these low-level representations and the rich semantic understanding acquired by humans. Progress in this direction will facilitate a seamless interaction and exchange of information between machines and humans. We quantify progress towards this overarching goal by formulating a Visual Turing Test, in which machine learning approaches are trained to answer natural language questions about data. In particular, I will describe how we address this challenging task with deep learning techniques that bring together state-of-the-art methods in natural language and image understanding. More broadly, recent deep learning techniques have provided us with modular, efficient and end-to-end trainable architectures, which I will illustrate on a few examples of our recent work, ranging from eye tracking and applications in computer graphics to recognition of ongoing activities. I will close by describing our latest work, which investigates the privacy implications of such effective machine learning techniques when applied to visual data on social networks.

10:00--10:30

Coffee Break

10:30--12:00

Session 2: Data / Corpus
(Chair: Nobuhiko Nishio)

UbiComp/ISWC 2015 PDR Challenge Corpus

Katsuhiko Kaji, Masaaki Abe, Wan Weimin, Kei Hiroi, Nobuo Kawaguchi

HASC-PAC2016: Large Scale Human Pedestrian Activity Corpus and Its Baseline Recognition

Haruyuki Ichino, Katsuhiko Kaji, Ken Sakurada, Kei Hiroi, Nobuo Kawaguchi

A Multi-Media Exchange Format for Time-Series Dataset Curation

Philipp M. Scholl, Kristof Van Laerhoven

Implicit Positioning Using Compass Sensor Data

Dennis Kroll, Rico Kusber, Klaus David

A Better Positioning with BLE Tag by RSSI Compensation through Crowd Density Estimation

Kei Hiroi, Yoichi Shinoda, Nobuo Kawaguchi

Discovery and Recognition of Unknown Activities

Juan Ye, Lei Fang, Simon Dobson

12:30--12:55

Session 2': Activity Recognition
(Chair: Nobuo Kawaguchi)

[Short] Wearable Electric Potential Sensing: A new modality sensing hair touch and restless leg movement

A. Pouryazdan, R. J. Prance, H. Prance, D. Roggen

Recognizing Unknown Activities Using Semantic Word Vectors and Twitter Timestamps

Moe Matsuki, Sozo Inoue

12:55--14:00

Lunch Break

14:00--15:00

Session 3: Data-centric Research
(Chair: Susanna Pirttikangas)

Enhancing Location Prediction with Big Data: Evidence from Dhaka

Dunstan Matekenya, Masaki Ito, Yoshito Tobe, Ryosuke Shibasaki, Kaoru Sezaki

[Short] Patterns of human activity behavior: From data to information and clinical knowledge

A. Paraschiv-Ionescu, S. Mellone, M. Colpo, E. A. F. Ihlen, L. Chiari, C. Becker, K. Aminian

Inhalation During Fire Experiments: an Approach Derived Through ECG

Raquel Sebastiao, Sandra Sorte, Joana Valente, Ana I. Miranda, Jose M. Fernandes

Exploring human activity annotation using a privacy preserving 3D model

Mathias Ciliberto, Daniel Roggen, Francisco Javier Ordonez

15:00--15:30

Coffee Break

15:30--16:40

Session 4: Sensing Technologies
(Chair: Sozo Inoue)

Detecting Group Formations using iBeacon Technology

Kleomenis Katevas, Hamed Haddadi, Laurissa Tokarchuk, Richard G. Clegg

Let the objects tell what you are doing

Gabriele Civitarese, Stefano Belfiore, Claudio Bettini

FPGA Based Hardware Acceleration of Sensor Matrix

Abdul Mutaal Ahmad, Paul Lukowicz, Jingyuan Cheng

Towards Recognizing Person-Object Interaction Gestures using a Single Wrist Wearable Device

Juhi Ranjan, Kamin Whitehouse

A Recognition Method for Continuous Gestures with an Accelerometer

Hikaru Watanabe, Masahiro Mochizuki, Kazuya Murao, Nobuhiko Nishio

16:40--16:50

Break

16:50--17:30

Discussion and Closing Remarks