
Emotion recognition in the wild challenge 2013

Published: 09 December 2013

Abstract

Emotion recognition is a very active field of research. The Emotion Recognition in the Wild Challenge and Workshop (EmotiW) 2013 Grand Challenge consists of an audio-video based emotion classification challenge that mimics real-world conditions. Traditionally, emotion recognition has been performed on laboratory-controlled data. While undoubtedly worthwhile at the time, such laboratory-controlled data poorly represents the environment and conditions faced in real-world situations. The goal of this Grand Challenge is to define a common platform for the evaluation of emotion recognition methods in real-world conditions. The database used in the 2013 challenge is the Acted Facial Expressions in the Wild (AFEW) database, which has been collected from movies depicting close-to-real-world conditions.
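To make the task concrete, the sketch below shows the kind of per-clip audio-video classification pipeline the challenge calls for. It is a minimal sketch only, not the challenge baseline: the feature extractors are random placeholders standing in for real per-clip video descriptors (e.g. spatio-temporal texture features) and acoustic descriptors, and the seven-way label set mirrors AFEW's categorical emotion classes.

    # Minimal audio-video emotion classification sketch in the spirit of
    # the EmotiW task. NOT the challenge baseline: the "features" are
    # random placeholders for real per-clip video/audio descriptors.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # AFEW's seven categorical emotion labels.
    EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

    rng = np.random.default_rng(0)
    n_clips = 200
    video_feats = rng.normal(size=(n_clips, 128))  # placeholder per-clip video descriptor
    audio_feats = rng.normal(size=(n_clips, 64))   # placeholder per-clip acoustic descriptor
    labels = rng.integers(0, len(EMOTIONS), size=n_clips)

    # Feature-level (early) fusion: concatenate each clip's video and audio vectors.
    X = np.hstack([video_feats, audio_feats])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.25, random_state=0)

    # A standard multi-class SVM over the fused features.
    clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

With random features the accuracy stays near chance (about 1/7); swapping in real descriptors and a tuned fusion scheme is where challenge entries differ.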





Published In

ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction
December 2013
630 pages
ISBN:9781450321297
DOI:10.1145/2522848
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 09 December 2013


Author Tags

  1. emotion recognition in the wild
  2. multimodal

Qualifiers

  • Research-article

Conference

ICMI '13

Acceptance Rates

ICMI '13 Paper Acceptance Rate: 49 of 133 submissions, 37%
Overall Acceptance Rate: 453 of 1,080 submissions, 42%



Article Metrics

  • Downloads (Last 12 months): 159
  • Downloads (Last 6 weeks): 7
Reflects downloads up to 21 Oct 2024


Citations

Cited By
  • (2024) Self-assessment of affect-related events for physiological data collection in the wild based on appraisal theories. Frontiers in Computer Science, 5. DOI: 10.3389/fcomp.2023.1285690. Online publication date: 11-Jan-2024.
  • (2024) Development of multimodal sentiment recognition and understanding. Journal of Image and Graphics, 29(6):1607-1627. DOI: 10.11834/jig.240017. Online publication date: 2024.
  • (2024) Transformer-Based Multimodal Emotional Perception for Dynamic Facial Expression Recognition in the Wild. IEEE Transactions on Circuits and Systems for Video Technology, 34(5):3192-3203. DOI: 10.1109/TCSVT.2023.3312858. Online publication date: May-2024.
  • (2024) MSSTNet: A Multi-Scale Spatio-Temporal CNN-Transformer Network for Dynamic Facial Expression Recognition. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3015-3019. DOI: 10.1109/ICASSP48485.2024.10446699. Online publication date: 14-Apr-2024.
  • (2024) Dual-STI: Dual-path Spatial-Temporal Interaction Learning for Dynamic Facial Expression Recognition. Information Sciences, article 120953. DOI: 10.1016/j.ins.2024.120953. Online publication date: Jun-2024.
  • (2024) Dynamic facial expression recognition based on spatial key-points optimized region feature fusion and temporal self-attention. Engineering Applications of Artificial Intelligence, 133:108535. DOI: 10.1016/j.engappai.2024.108535. Online publication date: Jul-2024.
  • (2024) A joint local spatial and global temporal CNN-Transformer for dynamic facial expression recognition. Applied Soft Computing, 161:111680. DOI: 10.1016/j.asoc.2024.111680. Online publication date: Aug-2024.
  • (2024) An Intelligent Teaching Evaluation System Integrating Emotional Computing and Cloud Platform. Computer Supported Cooperative Work and Social Computing, pages 515-521. DOI: 10.1007/978-981-99-9640-7_39. Online publication date: 5-Jan-2024.
  • (2023) Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models. Sensors, 24(1):126. DOI: 10.3390/s24010126. Online publication date: 26-Dec-2023.
  • (2023) Masked Face Emotion Recognition Based on Facial Landmarks and Deep Learning Approaches for Visually Impaired People. Sensors, 23(3):1080. DOI: 10.3390/s23031080. Online publication date: 17-Jan-2023.
