
Multimodal Deep Learning for Activity and Context Recognition

Published: 08 January 2018

Abstract

Wearables and mobile devices see the world through the lens of half a dozen low-power sensors, such as barometers, accelerometers, microphones, and proximity detectors. But differences between sensors, ranging from sampling rates to discrete versus continuous readings, or even the data type itself, make principled approaches to integrating these streams challenging. How, for example, is barometric pressure best combined with an audio sample to infer whether a user is in a car, a plane, or on a bike? Critically for applications, how successfully sensor devices maximize the information contained across these multimodal sensor streams often dictates the fidelity at which they can track user behaviors and context changes. This paper studies the benefits of adopting deep learning algorithms for interpreting user activity and context as captured by multi-sensor systems. Specifically, we focus on four variations of deep neural networks, based either on fully-connected Deep Neural Networks (DNNs) or Convolutional Neural Networks (CNNs). Two of these architectures follow conventional deep models by performing feature representation learning from a concatenation of sensor types. This classic approach is contrasted with a promising deep model variant characterized by modality-specific partitions of the architecture that maximize intra-modality learning. Our exploration represents the first time these architectures have been evaluated for multimodal deep learning on wearable data -- and the convolutional form of the modality-partitioned design is an entirely novel architecture. Experiments show that these generic multimodal neural network models compete well with a rich variety of conventional hand-designed shallow methods (spanning feature extraction and classifier construction) and task-specific modeling pipelines, across a wide range of sensor types and inference tasks (four different datasets). Although the training and inference overhead of these multimodal deep approaches is in some cases appreciable, we also demonstrate that on-device execution on mobile and wearable hardware is feasible and not a barrier to adoption. This study is carefully constructed to focus on the multimodal aspects of wearable data modeling for deep learning, providing a wide range of empirical observations that we expect to have considerable value in the community. We summarize our observations into a series of practitioner rules-of-thumb and lessons learned that can guide the usage of multimodal deep learning for activity and context detection.
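
The contrast at the heart of the abstract -- feature learning over a simple concatenation of all sensor inputs versus modality-specific partitions of the network -- can be made concrete. Below is a minimal, hypothetical sketch in PyTorch (an implementation assumption for illustration only; the layer widths, input dimensions, and class count are invented and are not the paper's configuration): each sensor modality first passes through its own fully-connected branch to encourage intra-modality learning, and only then are the branch outputs concatenated and fed to shared layers that capture cross-modality structure.

```python
import torch
import torch.nn as nn

class ModalitySpecificDNN(nn.Module):
    """Hypothetical sketch: one fully-connected branch per sensor modality,
    followed by shared fusion layers over the concatenated branch outputs."""

    def __init__(self, modality_dims, branch_width=64, num_classes=4):
        super().__init__()
        # One small fully-connected stack per modality (intra-modality learning).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Linear(dim, branch_width), nn.ReLU(),
                nn.Linear(branch_width, branch_width), nn.ReLU(),
            )
            for dim in modality_dims
        ])
        # Shared layers fuse the per-modality representations
        # (cross-modality learning happens only from here on).
        self.head = nn.Sequential(
            nn.Linear(branch_width * len(modality_dims), 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, inputs):
        # inputs: one tensor per modality, e.g. [accel_window, gyro_window]
        fused = torch.cat([branch(x) for branch, x in zip(self.branches, inputs)], dim=1)
        return self.head(fused)

# Example: a batch of 8 windows, with accelerometer and gyroscope frames
# each flattened to 300 values (3 axes x 100 samples) -- illustrative sizes.
model = ModalitySpecificDNN(modality_dims=[300, 300])
logits = model([torch.randn(8, 300), torch.randn(8, 300)])
```

A conventional concatenation baseline would instead feed torch.cat(inputs, dim=1) straight into a single fully-connected stack; the partitioned variant forces the early layers to specialize per sensor before any cross-sensor mixing occurs.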

Supplementary Material

radu (radu.zip)
Supplemental movie, appendix, image, and software files for "Multimodal Deep Learning for Activity and Context Recognition"



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 1, Issue 4
December 2017
1298 pages
EISSN: 2474-9567
DOI: 10.1145/3178157
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 08 January 2018
Accepted: 01 October 2017
Revised: 01 August 2017
Received: 01 February 2017
Published in IMWUT Volume 1, Issue 4


Author Tags

  1. Mobile sensing
  2. activity recognition
  3. context detection
  4. deep learning
  5. deep neural networks
  6. multi-modal
  7. sensor fusion

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • European Union's Horizon 2020

Article Metrics

  • Downloads (Last 12 months): 377
  • Downloads (Last 6 weeks): 57
Reflects downloads up to 22 Oct 2024

Cited By

  • (2024) Evaluation of multimodal data-driven financial risk prediction methods for corporate green credit. Journal of Intelligent & Fuzzy Systems, 1-13. https://doi.org/10.3233/JIFS-237691. Online publication date: 3-Apr-2024
  • (2024) Data Preprocessing Techniques for Artificial Intelligence (AI)/Machine Learning (ML)-Readiness: Systematic Review of Wearable Sensor Data in Cancer Care (Preprint). JMIR mHealth and uHealth. https://doi.org/10.2196/59587. Online publication date: 16-Apr-2024
  • (2024) Users' Perspectives on Multimodal Menstrual Tracking Using Consumer Health Devices. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8:3, 1-24. https://doi.org/10.1145/3678575. Online publication date: 9-Sep-2024
  • (2024) RFBoost: Understanding and Boosting Deep WiFi Sensing via Physical Data Augmentation. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8:2, 1-26. https://doi.org/10.1145/3659620. Online publication date: 15-May-2024
  • (2024) IOTeeth. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8:1, 1-29. https://doi.org/10.1145/3643516. Online publication date: 6-Mar-2024
  • (2024) Malicious Attacks against Multi-Sensor Fusion in Autonomous Driving. Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, 436-451. https://doi.org/10.1145/3636534.3649372. Online publication date: 29-May-2024
  • (2024) CroSSL: Cross-modal Self-Supervised Learning for Time-series through Latent Masking. Proceedings of the 17th ACM International Conference on Web Search and Data Mining, 152-160. https://doi.org/10.1145/3616855.3635795. Online publication date: 4-Mar-2024
  • (2024) Demonstrating PANDALens: Enhancing Daily Activity Documentation with AI-assisted In-Context Writing on OHMD. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 1-7. https://doi.org/10.1145/3613905.3648644. Online publication date: 11-May-2024
  • (2024) Learning About Social Context From Smartphone Data: Generalization Across Countries and Daily Life Moments. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-18. https://doi.org/10.1145/3613904.3642444. Online publication date: 11-May-2024
  • (2024) PANDALens: Towards AI-Assisted In-Context Writing on OHMD During Travels. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-24. https://doi.org/10.1145/3613904.3642320. Online publication date: 11-May-2024
