1. 8th Int. Workshop on Human Behavior Understanding in conjunction with 2nd Int. Workshop on Automatic Face Analytics for Human Behavior Understanding

The 8th Int. Workshop on Human Behavior Understanding (HBU) and the 2nd Int. Workshop on Automatic Face Analytics for Human Behavior Understanding (FaceHUB) are jointly organized at FG to gather researchers working on behavior analysis and analytics. The joint workshop will have two focus tracks, "face analytics" and "behavior analysis for smart cars". For track 1, application scenarios of "face analytics" include analyzing emotions while a person is watching emotional movies or advertisements, playing video games, driving a car, undergoing health monitoring or crime investigation, or participating in interactive tutoring. Long-term continuous monitoring and analysis of expressions for assessing personality and psychological disorders are also relevant. For track 2, "behavior analysis for smart cars," we look at the inside of the car to assess driver drowsiness and attention, in-car social signals such as frustration and boredom, improved driver/passenger assistance, safety, and comfort systems, and car biometrics.

Carlos Busso, Univ. of Texas at Dallas
Xiaohua Huang, Univ. of Oulu
Takatsugu Hirayama, Nagoya Univ.
Guoying Zhao, Univ. of Oulu & Northwest Univ. of China
Albert Ali Salah, Boğaziçi Univ. & Nagoya Univ.
Matti Pietikäinen, Univ. of Oulu
Roberto Vezzani, Univ. of Modena and Reggio Emilia
Wenming Zheng, Southeast Univ.
Abhinav Dhall, Indian Institute of Technology

Stephen Brewster, Univ. of Glasgow, "Designing new user interfaces for cars" (ACM Distinguished Speaker)

Special issue:
A journal special issue on "behavior analysis for smart cars" will be published by JAISE, and a second special issue on "automatic face analytics for human behavior understanding" will be published by IMAVIS; extended versions of selected papers will be invited to both.

2. Latest developments of FG technologies in China

In recent years, many researchers in China have been working in the field of FG, and many competitive companies are focusing on developing FG technologies. We therefore propose a half-day workshop at FG2018 to discuss the latest developments of FG technologies in China. Besides researchers from academia, outstanding researchers from industry will also report their progress in research and applications. The workshop will help academia and industry exchange ideas.

Prof. Qingshan Liu, Nanjing University of Information Science and Technology
Prof. Shiqi Yu, Shenzhen University
Prof. Zhen Lei, National Laboratory of Pattern Recognition, Chinese Academy of Sciences

3. First Workshop on Large-scale Emotion Recognition and Analysis

With advances in social computing, multimedia, and sensing technology, the amount of emotionally relevant data has grown enormously. It has become crucial for the affective computing community to develop new methods for understanding, at large scale, the emotion conveyed by media and the emotion felt by the user. Much progress has been observed in the computer vision community since large-scale databases such as ImageNet and MS COCO were released. This workshop invites researchers to submit original work proposing methods for creating data and new methodologies for large-scale analysis. The first LERA workshop at FG2018 aims to shift the current research focus from small-scale, lab-based environments to real-world, large-scale corpora.
Abhinav Dhall, Indian Institute of Technology, Ropar
Yelin Kim, State University of New York, Albany
Qiang Ji, Rensselaer Polytechnic Institute

4. IEEE FG 2018 Workshop on Dense 3D Reconstruction of 2D Face Images in the Wild

3D face reconstruction from 2D images has become a very active topic in computer vision and computer graphics. The workshop aims to bring together the community to explore and address challenges in 3D face reconstruction of 2D in-the-wild images. Topics of interest include but are not limited to:
-3D face dataset and models
-3D face reconstruction from single images and videos
-3D-based face analysis and biometrics
-Special session on 3D face reconstruction challenge (competition)
-Other applications of 3D face models

Special Session: Competition
As a special session of the workshop, we will release a new benchmark dataset consisting of a number of identities. For each identity there are a number of 2D in-the-wild face images and a high-resolution ground-truth 3D face scan. Alongside the dataset, we supply a standard protocol to allow independent comparison of different algorithms.

Dr. Zhenhua Feng, University of Surrey, UK
Dr. Patrik Huber, University of Surrey, UK
Prof. Josef Kittler, University of Surrey, UK
Prof. Xiaojun Wu, Jiangnan University, China

5. Face and Gesture Analysis for Health Informatics (FGAHI)

Healthcare applications and clinical research have long been fascinating and attractive areas for face and gesture analysis. Recent advances in computer vision and machine learning for automatic analysis and modeling of human behavior could play a vital role in overcoming some limitations of current clinical practice. For instance, depression assessment relies almost entirely on patients' verbally reported symptoms in clinical interviews and questionnaires (e.g., the BDI). Such assessment, while useful, fails to include behavioral indicators that are powerful indices of depression.

This workshop aims to discuss the strengths and major challenges of automatic face and gesture analysis for clinical research and healthcare applications. We invite scientists working in related areas of face and gesture analysis, affective computing, machine learning, psychology, and cognitive behavior to share their expertise and achievements in the emerging field of face and gesture analysis for health informatics.

Topics of interest include:
- Face, head, and body detection, analysis, and modeling for healthcare applications
- Human-Computer Interaction systems for home healthcare and wellness management
- Physiological sensing and processing platforms for healthcare applications (e.g., wearable devices for self-management)
- Clinically relevant corpora recording and annotation
- Clinical protocols and methods for secure collection and use of patient data (e.g., face and gesture de-identification)

Applications include but are not limited to:
Telemedicine, pain intensity measurement, depression severity assessment, autism screening, heart rate and breathing rate monitoring.

Kévin Bailly, Pierre and Marie Curie University, France
Liming Chen, Ecole Centrale De Lyon, France
Mohamed Daoudi, IMT Lille Douai, France
Arnaud Dapogny, Pierre and Marie Curie University, France
Zakia Hammal, Carnegie Mellon University, USA
Di Huang, Beihang University, China

Keynote speakers:
Jeffrey Cohn, University of Pittsburgh, USA

6. Facial Micro-Expression Grand Challenge (MEGC): Methods and Datasets

Facial micro-expressions (MEs) are involuntary movements of the face that occur spontaneously when a person experiences an emotion but attempts to suppress the facial expression, typically in high-stakes situations. Computational analysis and automation of micro-expression tasks is an emerging area of face research, with strong interest appearing as recently as 2014. The few available spontaneously induced facial micro-expression datasets have provided the impetus to advance the computational side of the field. Particularly comprehensive are two state-of-the-art FACS-coded datasets: the Chinese Academy of Sciences Micro-Expression Database II (CASME II) and the Spontaneous Micro-Facial Movement Dataset (SAMM). While much research has been done on these datasets individually, there have been no attempts to introduce a more rigorous and realistic evaluation. This inaugural workshop aims to promote interaction between researchers within this niche area and those from the broader communities of computer vision and psychology research. The workshop has two main agenda items: 1) to organize the first Grand Challenge for facial micro-expression research, involving CASME II-SAMM cross-database recognition of micro-expression classes; and 2) to solicit original work addressing modern challenges of ME research, such as spotting macro-/micro-expressions in long videos and deep learning techniques.

Moi Hoon Yap, Manchester Metropolitan University, UK
Sujing Wang, Chinese Academy of Sciences, China
John See, Multimedia University, Malaysia
Xiaopeng Hong, University of Oulu, Finland
Stefanos Zafeiriou, Imperial College London, UK

Advisory panel:
Xiaolan Fu, Chinese Academy of Sciences, China
Guoying Zhao, University of Oulu, Finland

7. The 1st International Workshop on Real-World Face and Object Recognition from Low-Quality Images (FOR-LQ)
Description: While visual recognition research has made tremendous progress in recent years, most models are trained, applied, and evaluated on high-quality (HQ) visual data, such as the LFW and ImageNet benchmarks. However, in many emerging applications such as video surveillance, robotics, and autonomous driving, the performance of visual sensing and analytics is largely jeopardized by low-quality (LQ) visual data acquired from complex unconstrained environments, suffering from various types of degradation such as low resolution, noise, occlusion, and motion blur. While sophisticated visual recognition models may compensate for mild degradations, their impact becomes much more notable once the level of degradation passes some empirical threshold. This half-day workshop (FOR-LQ 2018) will provide an integrated forum for researchers to review recent progress on robust face and object recognition from LQ visual data. We embrace the most advanced deep learning systems while remaining open to classical physically grounded models and feature engineering, as well as any well-motivated combination of the two streams. The workshop will consist of 1-2 invited keynote talks, together with peer-reviewed regular papers (oral and poster).

Dong Liu
Weisheng Dong
Zhangyang Wang
Ding Liu