Unconsciously, humans evaluate situations based on environmental and social parameters when recognizing emotions in social interactions. Without context, even humans may misunderstand observed facial, vocal, or body behavior. Contextual information, such as the ongoing task (e.g., human-computer vs. human-robot interaction), the identity (e.g., male vs. female) and natural expressiveness of the individual (e.g., introvert vs. extrovert), as well as the intra- and interpersonal context, helps us to better interpret and respond to the environment around us. These considerations suggest that attention to contextual information can deepen our understanding of affect communication (e.g., discrete emotions, affective dimensions such as valence and arousal, and different types of moods and sentiment) and support the development of reliable real-world affect-sensitive applications.
The 6th CBAR workshop aims to investigate how to efficiently exploit and model context using cutting-edge computer vision and machine learning approaches in order to advance automatic affect recognition. Topics of interest include, but are not limited to:
  • Context-sensitive affect recognition from still images or videos.
  • Audio and/or physiological data modeling for context-sensitive affect recognition.
  • Context-based corpus recording and annotation.
  • Domain adaptation for context-aware affect recognition.
  • Multi-modal context-aware fusion for affect recognition to successfully handle:
      - Asynchrony and discordance among different modalities such as voice, face, and head/body.
      - Innate priorities among modalities.
      - Temporal variations in the relative importance of the modalities according to the context.
  • Theoretical and empirical analysis of the influence of context on affect recognition.
  • Context-aware applications:
      - Depression severity assessment, pain intensity measurement, and autism screening (e.g., the influence of age, gender, intimate vs. stranger interaction, physician-patient relationship, home vs. hospital).
      - Affect-based human-robot and human-embodied conversational agent interactions (e.g., autism therapy and storytelling, caregiving for the elderly).
      - Other applications, such as context-sensitive and affect-aware intelligent tutors (e.g., learning profile, personality assessment, student performance, content).

Submission Policy:
We call for submissions of high-quality papers. Submitted manuscripts must not be under concurrent submission to another conference or workshop. Each paper will receive at least two reviews. Acceptance will be based on relevance to the workshop, novelty, and technical quality.

At least one author of each paper must register for and attend the workshop to present the paper.

Workshop proceedings will be submitted for inclusion in IEEE Xplore.
Papers must be submitted via the following link (EasyChair).

The reviewing process for the workshop will be “double-blind”. All submissions should therefore be appropriately anonymized so as not to reveal the authors’ names or institutions.

Submissions must be in PDF format, in accordance with the IEEE FG conference paper style.

Organizers:

Zakia Hammal
The Robotics Institute, 
Carnegie Mellon University.

Merlin Teodosia Suarez
Center for Empathic Human-Computer Interactions,
De La Salle University

Important Dates:
Submission Deadline:         14 December 2018
Notification of Acceptance:  30 January 2019
Camera Ready:                15 February 2019

Program Committee (to be completed):
Anna Esposito, Università degli Studi della Campania “Luigi Vanvitelli”, Italy
Mohammad H. Mahoor, University of Denver, USA 
Yan Tong, University of South Carolina, USA 
Ursula Hess, Humboldt University of Berlin, Germany
Laurence Devillers, Université Paris-Sorbonne (Paris IV), France
Hongying Meng, Brunel University London, UK
Oya Aran, Idiap Research Institute, Switzerland
Khiet Truong, University of Twente, Netherlands