Conference Program

The 13th IEEE Conference on Automatic Face and Gesture Recognition (FG 2018) will take place during the week of May 15-19, 2018. Workshops and tutorials will be held on Tuesday (May 15) and Saturday (May 19), while the main conference will take place on Wednesday through Friday (May 16-18). Please see the Registration page to register for the full conference (including workshops and tutorials) or for individual days.

General conference schedule:

Tuesday (05/15/2018): Morning: Workshops/Tutorials; Lunch Break; Afternoon: Workshops/Tutorials and Doctoral Consortium
Wednesday (05/16/2018): Morning: Main Conference; Lunch Break; Afternoon: Main Conference
Thursday (05/17/2018): Morning: Main Conference; Lunch Break; Afternoon: Main Conference
Friday (05/18/2018): Morning: Main Conference; Lunch Break; Afternoon: Main Conference
Saturday (05/19/2018): Morning: Workshops/Tutorials; Lunch Break; Afternoon: Workshops/Challenges


Wednesday (5/16/18)

8:45 – 9:00    Opening
9:00 – 10:00   Keynote: Professor Hillel Aviezer, Hebrew University
10:00 – 10:30  Coffee Break
10:30 – 11:30  Oral Session 1: Face Recognition
11:30 – 12:10  Oral Session 2: Facial Expression Recognition
12:10 – 14:00  Lunch Break
14:00 – 15:00  Oral Session 3: Special Session on Perception, Cognition and Psychophysiology of Gesture Interaction
15:00 – 15:30  Coffee Break
15:30 – 16:30  Oral Session 4: Databases and Tools
16:30 – 17:30  Poster Highlights I
17:30 – 19:00  Poster Session I
    - Posters from the Doctoral Consortium
    - Posters in the areas of Face, Gesture, Body, Affect, Technology and Applications
    - Posters from papers of Oral Sessions 1–4
17:30 – 19:00  Demos
9:00 – 19:00   Exhibits
19:30 – 21:30  Reception

Thursday (5/17/18)

9:00 – 10:00   Keynote: Professor Nadia Berthouze, University College London
10:00 – 10:30  Coffee Break
10:30 – 11:30  Oral Session 5: Facial Synthesis
11:30 – 12:10  Oral Session 6: Gesture Analysis
12:10 – 14:00  Lunch Break
14:00 – 15:00  Oral Session 7: Special Session on Deep Learning for Face Analysis
15:00 – 15:30  Coffee Break
15:30 – 16:50  Oral Session 8: Facial Biometrics and Face Technology Application
9:00 – 17:00   Exhibits
18:00 – 21:30  Banquet

Friday (5/18/18)

9:00 – 10:00   Keynote: Dr. Jian Sun, Face++
10:00 – 10:30  Coffee Break
10:30 – 11:30  Oral Session 9: Affect and Expression
11:30 – 12:10  Oral Session 10: Psychological and Behavioral Analysis
12:10 – 14:00  Lunch Break
14:00 – 15:20  Oral Session 11: Face Detection and Alignment
15:20 – 15:50  Coffee Break
15:50 – 16:30  Oral Session 12: Multimodal Data for Personal Wellness and Health
16:30 – 17:15  Poster Highlights II
17:15 – 18:45  Poster Session II
    - Posters in the areas of Face, Gesture, Body, Affect, Technology and Applications
    - Posters from papers of Oral Sessions 5–12
17:15 – 18:45  Demos
9:00 – 18:45   Exhibits


Wednesday (May 16, 2018)

 

Opening (8:45 – 9:00)

 

Keynote (9:00 - 10:00)

“Beyond Pleasure and Pain: The Case of Real-Life Extreme Emotional Expressions”, Professor Hillel Aviezer, Hebrew University

 

 

Oral Session 1: Face Recognition (10:30 – 11:30)

1.      One-Shot Face Recognition via Generative Learning

Zhengming Ding (Northeastern University), Yandong Guo (Microsoft), Lei Zhang (Microsoft), and Yun Fu (Northeastern University)

 

2.     RGB-D Face Recognition via Deep Complementary and Common Feature Learning

Hao Zhang (Institute of Computing Technology, Chinese Academy of Sciences), Hu Han (Institute of Computing Technology, Chinese Academy of Sciences), Shiguang Shan (Institute of Computing Technology, Chinese Academy of Sciences),  and Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences)

 

3.     Visualizing and Quantifying Discriminative Features for Face Recognition

Gregory Castanon (Systems & Technology Research) and Jeffrey Byrne (Systems & Technology Research)

 

Oral Session 2:  Facial Expression Recognition (11:30 – 12:10)

 

1.     Automatic 4D Facial Expression Recognition using Dynamic Geometrical Image Network

Weijian Li (Beihang University), Di Huang (Beihang University), Huibin Li (Xi'an Jiaotong University), and Yunhong Wang (Beihang University)

 

2.      Unsupervised Domain Adaptation with Regularized Optimal Transport for Multimodal 2D+3D Facial Expression Recognition

Xiaofan Wei (Xi'an Jiaotong University), Huibin Li (Xi'an Jiaotong University), Jian Sun (Xi'an Jiaotong University), and Limin Chen (University of Lyon)

 

Oral Session 3: Special Session on Perception, Cognition and Psychophysiology of

Gesture Interaction (14:00 – 15:00)

 

1.     Biomechanical-based Approach to Data Augmentation for One-Shot Gesture Recognition

Maria E Cabrera (Purdue University) and Juan Wachs (Purdue University)

 

2.     Kinematic Constrained Cascaded Autoencoder for Real-time Hand Pose Estimation

Yushun Lin (Institute of Computing Technology, Chinese Academy of Sciences), Xiujuan Chai (Institute of Computing Technology, Chinese Academy of Sciences), and Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences)

 

3.      Large-scale Isolated Gesture Recognition Using Masked Res-C3D Network and Skeleton LSTM

Chi Lin (Macau University of Science and Technology), Jun Wan (Institute of Automation Chinese Academy of Sciences), Yanyan Liang (Macau University of Science and Technology), and Stan Z. Li (Institute of Automation Chinese Academy of Sciences)

 

Oral Session 4: Databases and Tools (15:30 – 16:30)

 

1.      OpenFace 2.0: facial behavior analysis toolkit

Tadas Baltrusaitis (Carnegie Mellon University), Amir Zadeh (Carnegie Mellon University), Yao Chong Lim (Carnegie Mellon University), and Louis-Philippe Morency (Carnegie Mellon University)

 

2.     VGGFace2: A  dataset for recognizing faces across pose and age

Qiong Cao (University of Oxford), Li Shen (University of Oxford), Weidi Xie (University of Oxford), Omkar M. Parkhi (University of Oxford), and Andrew Zisserman (University of Oxford)

 

3.      Morphable Face Models - An Open Framework

Thomas Gerig (University of Basel), Andreas Morel-Forster (University of Basel), Clemens Blumer (University of Basel), Bernhard Egger (University of Basel), Marcel Lüthi (University of Basel), Sandro Schönborn (University of Basel), and Thomas Vetter (University of Basel)

 

Poster Highlights I (16:30 – 17:30)

Format: 2 minutes per regular poster paper, 1 minute per doctoral consortium poster

Posters from Poster Session I (DC posters and regular poster papers)

 

Poster Session I (17:30 – 19:00)

·       Doctoral Consortium Posters (TBA)

·       Posters of oral papers from Oral Sessions 1–4

·       Regular poster papers in the areas of Face, Gesture, Body, Affect, Technology and Applications

 

 

1.     (*) Face Verification: Strategies for Employing Deep Models

Ricardo Kloss (Universidade Federal de Minas Gerais), Artur Jordão (DCC UFMG), and William Robson Schwartz (Federal University of Minas Gerais)

 

2.     Emotion-Preserving Representation Learning via Generative Adversarial Network for Multi-View Facial Expression Recognition

Ying-Hsiu Lai (National Tsing Hua University) and Shang-Hong Lai (National Tsing Hua University)

 

3.     Task Specific Networks for Identity and Face Variation

Yichen Qian (Beijing University of Posts and Telecommunications), Weihong Deng (Beijing University of Posts and Telecommunications), and Jiani Hu (Beijing University of Posts and Telecommunications)          

 

4.     Context-sensitive Prediction of Facial Expressivity using Multimodal Hierarchical Bayesian Neural Networks

Ajjen Joshi (Boston University), Margrit Betke (Boston University), Stan Sclaroff (Boston University), Sarah Gunnery (Tufts University), Soumya Ghosh (IBM), and Linda Tickle-Degnen (Tufts University)

 

5.     Spotting the Details: The Various Facets of Facial Expressions

Carl Martin Grewe (Zuse Institute Berlin), Gabriel Le Roux (Zuse Institute Berlin), Sven-Kristofer Pilz (Zuse Institute Berlin), and Stefan Zachow (Zuse Institute Berlin)

 

6.     Identity-Adaptive Facial Expression Recognition Through Expression Regeneration Using Conditional Generative Adversarial Networks

Huiyuan Yang (Binghamton University), Zheng Zhang (Binghamton University), and Lijun Yin (Binghamton University)

 

7.     Island Loss for Learning Discriminative Features in Facial Expression Recognition

Jie Cai (University of South Carolina), Zibo Meng (University of South Carolina), Ahmed Shehab Khan (University of South Carolina), Zhiyuan Li (University of South Carolina), James O'Reilly (University of South Carolina), and Yan Tong (University of South Carolina)

 

8.     An Empirical Study of Face Recognition under Variations

Baoyun Peng (National University of Defense Technology), Heng Yang (ULSee Inc.), Dongsheng Li (National University of Defense Technology), and Zhaoning Zhang (National University of Defense Technology)

 

9.     Attributes in Multiple Facial Images

Xudong Liu (West Virginia University) and Guodong Guo (West Virginia University)

 

10.  Versatile Model for Activity Recognition: Sequencelet Corpus Model

Hyun-Joo Jung (POSTECH) and Ki-Sang Hong (POSTECH)

 

11.  Energy and Computation Efficient Audio-Visual Voice Activity Detection Driven by Event-cameras

Arman Savran (Istituto Italiano di Tecnologia), Raffaele Tavarone (Istituto Italiano di Tecnologia), Bertrand Higy (Istituto Italiano di Tecnologia), Leonardo Badino (Istituto Italiano di Tecnologia), and Chiara Bartolozzi (Istituto Italiano di Tecnologia)

 

12.  Predicting Body Movement and Recognizing Actions: an Integrated Framework for Mutual Benefits

Boyu Wang (Stony Brook University) and Minh Hoai (Stony Brook University)

 

13.  A Study on the Suppression of Amusement

Ifeoma Nwogu (Rochester Institute of Technology), Bryan Passino (Rochester Institute of Technology),  and Reynold Bailey (Rochester Institute of Technology)

 

14.  Say CHEESE: Common Human Emotional Expression Set Encoder Analysis of Smiles in Honest and Deceptive Communication

Taylan Sen (University of Rochester), Md Kamrul Hasan (University of Rochester), Minh Tran (University of Rochester), Matt Levin (University of Rochester), Yiming Yang (University of Rochester), and M. Ehsan Hoque (University of Rochester)

 

15.  Facial Expression Grounded Conversational Dialogue Generation

Bernd Huber (Harvard University) and Daniel McDuff (Microsoft)

 

16.  A Multi-task Cascaded Network for Prediction of Affect, Personality, Mood and Social Context Using EEG Signals

Juan Abdon Miranda-Correa (Queen Mary University of London) and Ioannis Patras (Queen Mary University of London)

 

17.  Letter-level Writer Identification  

Zelin Chen (Sun Yat-Sen University), Hongxing Yu (Sun Yat-Sen University), Ancong Wu (Sun Yat-Sen University), and Wei-Shi Zheng (Sun Yat-Sen University)

 

18.  (*) Online attention for interpretable conflict estimation in political debates

Ruben Vereecken (ibug, Imperial College London), Yannis Panagakis (ibug, Imperial College London), Stavros Petridis (ibug, Imperial College London), and Maja Pantic (ibug, Imperial College London)

 

19.  (*) Rich Convolutional Features Fusion For Crowd Counting

Chaochao Fan (Anhui University), Jun Tang (Anhui University), Nian Wang (Anhui University), and Dong Liang (Anhui University)

 

20.  (*) Cascade Multi-view Hourglass Model for Robust 3D Face Alignment

Jiankang Deng (Imperial College London), Yuxiang Zhou (Imperial College London), Shiyang Cheng (Imperial College London), and Stefanos Zafeiriou (Imperial College London)

 

21.  (*) A Data-augmented 3D Morphable Model of the Ear

Hang Dai (University of York), Nick Pears (University of York), William Smith (University of York), and Christian Duncan (Alder Hey Children's Hospital)

 

22.  Expressive Speech-Driven Lip Movements with Multitask Learning

Najmeh Sadoughi (The University of Texas at Dallas) and Carlos Busso (The University of Texas at Dallas)

 

Demos (17:30 – 19:00)

1.     Real-time emotion recognition on mobile devices, Denis Sokolov, Mikhail Patkin; (WeSee, London, UK)

 

2.     Fast Face and Saliency Aware Collage Creation for Mobile Phones, Love Mehta, Abhinav Dhall; (Indian Institute of Technology at Ropar)

 

3.     Human Computer Interaction with Head Pose, Eye Gaze and Body Gestures, Kang Wang, Rui Zhao, Qiang Ji; (Rensselaer Polytechnic Institute)

 

4.     End-to-end, automatic face swapping pipeline, Yuval Nirkin (The Open University of Israel), Iacopo Masi (University of Southern California), Anh Tuan Tran (University of Southern California), Tal Hassner (University of Southern California), Gerard Medioni (University of Southern California)

 

5.     Letter-level Writer Identification, Zelin Chen, Hong-Xing Yu, Ancong Wu, Wei-Shi Zheng; (Sun Yat-sen University)

 

6.     OpenFace 2.0: facial behavior analysis toolkit, Tadas Baltrusaitis, Amir Zadeh, Yao Chong Lim, Louis-Philippe Morency; (Carnegie Mellon University)

 

 

Exhibits (9:00 – 19:00) (TBA)

 

Thursday (May 17, 2018)

 

Keynote (9:00 - 10:00)

“The Affective Body in A Technology-Mediated World”, Professor Nadia Berthouze, University College London

 

 

Oral Session 5: Facial Synthesis (10:30 – 11:30)

 

1.     High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks

Lidan Wang (Rutgers University), Vishwanath Sindagi (Rutgers University), and Vishal Patel (Rutgers University)

 

2.     Symmetric Shape Morphing for 3D Face and Head Modelling

Hang Dai (University of York), Nick Pears (University of York), William Smith (University of York), and Christian Duncan (Alder Hey Children's Hospital)

 

3.     On Face Segmentation, Face Swapping, and Face Perception

Yuval Nirkin (The Open University of Israel), Iacopo Masi (University of Southern California), Anh Tuan Tran (University of Southern California), Tal Hassner (University of Southern California), and Gerard Medioni (University of Southern California)

 

Oral Session 6: Gesture Analysis (11:30 – 12:10)

 

1.     Deep Learning for Hand Gesture Recognition on Skeletal Data

Guillaume Devineau (MINES ParisTech, PSL Research University), Wang Xi (Shanghai Jiao Tong University), Fabien Moutarde (MINES ParisTech, PSL Research University), and Jie Yang (Shanghai Jiao Tong University)

 

2.     Learning to recognize touch gestures: recurrent vs. convolutional features and dynamic sampling

Quentin Debard (LIRIS / Itekube), Christian Wolf (INRIA / CITI / LIRIS / INSA-Lyon), Stéphane Canu (LITIS / INSA-Rouen), and Julien Arné (Itekube)

 

Oral Session 7: Special Session on Deep Learning for Face Analysis (14:00 – 15:00)

 

1.     ExpNet: Landmark-Free, Deep, 3D Facial Expressions

Fengju Chang (University of Southern California), Anh Tuan Tran (University of Southern California), Tal Hassner (The Open University of Israel), Iacopo Masi (University of Southern California), Ram Nevatia (University of Southern California), and Gerard Medioni (University of Southern California)

 

2.     (*) Cross-generating GAN for Facial Identity Preserving

Weilong Chai (Beijing University of Posts and Telecommunications), Weihong Deng (Beijing University of Posts and Telecommunications), and Haifeng Shen (AI Lab)

 

3.     Unsupervised Learning of Face Representations

Samyak Datta (IIIT at Hyderabad), Gaurav Sharma (IIIT at Hyderabad), and C. V. Jawahar (IIIT at Hyderabad)

 

 

Oral Session 8: Facial Biometrics and Face Technology Application (15:30 – 16:50)

 

1.     Kinship Classification through Latent Adaptive Subspace

Yue Wu (Northeastern University), Zhengming Ding (Northeastern University), Hongfu Liu (Northeastern University), Joseph Robinson (Northeastern University), and Yun Fu (Northeastern University)

 

2.     Automatic detection of amyotrophic lateral sclerosis (ALS) from video-based analysis of facial movements: speech and non-speech tasks

Andrea Bandini (Toronto Rehabilitation Institute - University Health Network), Jordan R. Green (MGH Institute of Health Professions), Babak Taati (Toronto Rehabilitation Institute - University Health Network), Silvia Orlandi (Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital), Lorne Zinman (Neurology, Sunnybrook Health Sciences Centre), and Yana Yunusova (University of Toronto)

 

3.     Human Behaviors-based Automatic Depression Analysis using Hand-crafted Statistics and Deep Learned Spectral Features

Siyang Song (University of Nottingham), Linlin Shen (Shenzhen University), and Michel Valstar (University of Nottingham)

 

4.     (*) Harnessing Label Uncertainty to Improve Modeling: An Application to Student Engagement Recognition         

Arkar Min Aung (Worcester Polytechnic Institute) and Jacob Whitehill (Worcester Polytechnic Institute)

 

Exhibits (9:00 – 17:00) (TBA)

 

Friday (May 18, 2018)

 

Keynote (9:00 - 10:00)

“Towards ‘Human-Level’ Visual Human Understanding”, Dr. Jian Sun, Face++

 

Oral Session 9: Affect and Expression (10:30 – 11:30)

 

1.     Edge Convolutional Network for Facial Action Intensity Estimation

Liandong Li (Beijing Normal University), Tadas Baltrusaitis (Carnegie Mellon University), Bo Sun (Beijing Normal University), and Louis-Philippe Morency (Carnegie Mellon University)

 

2.     Perceptual Facial Expression Representation

Olga Mikheeva (KTH Royal Institute of Technology), Carl Henrik Ek (University of Bristol), and Hedvig Kjellström (KTH Royal Institute of Technology)

 

3.      Facial Action Unit Recognition Augmented by Their Dependencies

Longfei Hao (University of Science and Technology of China), Shangfei Wang (University of Science and Technology of China), Guozhu Peng (University of Science and Technology of China), and Qiang Ji (Rensselaer Polytechnic Institute)

 

 

Oral Session 10: Psychological and Behavioral Analysis (11:30 – 12:10)

 

1.     Generative Models of Nonverbal Synchrony in Close Relationships

Joseph Grafsgaard (University of Colorado Boulder), Nicholas Duran (Arizona State University), Ashley Randall (Arizona State University), Chun Tao (Arizona State University), and Sidney D'Mello (University of Colorado Boulder)

 

2.     The What, When, and Why of Facial Expressions: An Objective Analysis of Conversational Skills in Speed-Dating Videos

Mohammad Rafayet Ali (University of Rochester), Taylan Sen (University of Rochester), Dev Crasta (University of Rochester), Viet-Duy Nguyen (University of Rochester), Ronald Rogge (University of Rochester), and M Ehsan Hoque (University of Rochester)   

 

Oral Session 11: Face Detection and Alignment (14:00 – 15:20)

 

1.     Face Alignment across Large Pose via MT-CNN-based 3D Shape Reconstruction

Gang Zhang (Institute of Computing Technology, Chinese Academy of Sciences), Hu Han (Institute of Computing Technology, Chinese Academy of Sciences), Shiguang Shan (Institute of Computing Technology, Chinese Academy of Sciences), and Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences)

 

2.     Deep & Deformable: Convolutional Mixtures of Deformable Part-based Models

Kritaphat Songsri-In (Imperial College London), George Trigeorgis (Imperial College London), and Stefanos Zafeiriou (Imperial College London)

 

3.     Enhancing Interior and Exterior Deep Facial Features for Face Detection in the Wild

Chenchen Zhu (Carnegie Mellon University), Yutong Zheng (Carnegie Mellon University), Khoa Luu (Carnegie Mellon University), and Marios Savvides (Carnegie Mellon University)

 

4.     PersonRank: Detecting Important People in Images

Wei-Hong Li (Sun Yat-sen University), Benchao Li (Sun Yat-sen University), and Wei-Shi Zheng (Sun Yat-sen University)

 

 

Oral Session 12: Multimodal Data for Personal Wellness and Health (15:50 – 16:30)

1.     The OBF Database: A Large Face Video Database for Remote Physiological Signal Measurement and Atrial Fibrillation Detection

Xiaobai Li (University of Oulu), Iman Alikhani (University of Oulu), Jingang Shi (University of Oulu), Tapio Seppanen (University of Oulu), Juhani Junttila (Oulu University Hospital and University of Oulu), Kirsi Majamaa-Voltti (Oulu University Hospital and University of Oulu), Mikko Tulppo (Oulu University Hospital and University of Oulu),  and Guoying Zhao (University of Oulu)

 

2.     Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities

Mohammad A. Haque (Aalborg University), Ruben B. Bautista (University of Barcelona), Kamal Nasrollahi (Aalborg University), Sergio Escalera (University of Barcelona), Christian B. Laursen (Aalborg University), Ramin Irani (Aalborg University), Ole K. Andersen (Aalborg University), Erika G. Spaich (Aalborg University), Kaustubh Kulkarni (University of Barcelona), Thomas B. Moeslund (Aalborg University), Marco Bellantonio (University of Barcelona), Golamreza Anbarjafari (University of Tartu), and Fatemeh Noroozi (University of Tartu)

 

 

Poster Highlights II (16:30 – 17:15)

Format: 2 minutes per regular poster paper

Posters from Poster Session II (regular poster papers)

 

Poster Session II (17:15 – 18:45)

·       Posters of oral papers from Oral Sessions 5–12

·       Regular poster papers in the areas of Face, Gesture, Body, Affect, Technology and Applications

 

1.     Deep Transfer Network with 3D Morphable Models for Face Recognition

Zhanfu An (Beijing University of Posts and Telecommunications), Weihong Deng (Beijing University of Posts and Telecommunications), Tongtong Yuan (Beijing University of Posts and Telecommunications), and Jiani Hu (Beijing University of Posts and Telecommunications)

 

2.     Hand-crafted Feature Guided Deep Learning for Facial Expression Recognition    

Guohang Zeng (Shenzhen University), Jiancan Zhou (Shenzhen University), Xi Jia (Shenzhen University), Weicheng Xie (Shenzhen University), and Linlin Shen (Shenzhen University)

 

3.     (*) A Parametric Freckle Model for Faces

Andreas Schneider (University of Basel), Thomas Vetter (University of Basel), and Bernhard Egger (University of Basel)

 

4.     What is the Challenge for Deep Learning in Unconstrained Face Recognition?

Guodong Guo (West Virginia University)  and Na Zhang (West Virginia University)

 

5.     (*) Barycentric Representation and Metric Learning for Facial Expression Recognition

Anis Kacem (IMT Lille Douai), Mohamed Daoudi (IMT Lille Douai) and Juan-Carlos Alvarez-Paiva (University of Lille)

 

6.     (*) Reverse Engineering Psychologically Valid Facial Expressions of Emotion into Social Robots

Chaona Chen (University of Glasgow), Oliver G.B. Garrod (University of Glasgow), Jiayu Zhan (University of Glasgow), Jonas Beskow (Furhat Robotics), Philippe G. Schyns (University of Glasgow),  and Rachael E. Jack (University of Glasgow)

 

7.     (*) Deep Unsupervised Domain Adaptation for Face Recognition

Zimeng Luo (Beijing University of Posts and Telecommunications), Jiani Hu (Beijing University of Posts and Telecommunications), Weihong Deng (Beijing University of Posts and Telecommunications), and Haifeng Shen (AI Lab)

 

8.     Multi-channel Pose-aware Convolution Neural Networks for Multi-view Facial Expression Recognition

Yuanyuan Liu (China University of Geosciences), Jiabei Zeng (Institute of Computing Technology, CAS), Shiguang Shan (Institute of Computing Technology, CAS), and Zhuo Zheng (China University of Geosciences)

 

9.     Accurate Facial Parts Localization and Deep Learning for 3D Facial Expression Recognition

Asim Jan (Brunel University), Huaxiong Ding (Ecole Centrale de Lyon), Hongying Meng (Brunel University),  Liming Chen (Ecole Centrale de Lyon),  and Huibin Li (Xi’an Jiaotong University)

 

10.  Eigen-Evolution Dense Trajectory Descriptors

Yang Wang (Stony Brook University), Vinh Tran (Stony Brook University), and Minh Hoai (Stony Brook University)               

                 

11.  (*) Improve Accurate Pose Alignment and Action Localization by Dense Pose Estimation

Yuxiang Zhou (Imperial College London), Jiankang Deng (Imperial College London), and Stefanos Zafeiriou (Imperial College London)

 

12.  Toward Marker-less 3D Pose Estimation in Lifting: A Deep Multi-view Perceptron Solution

Rahil Mehrizi (Rutgers University), Xi Peng (Rutgers University), Shaoting Zhang (UNC Charlotte), Xu Xu (North Carolina State University), Dimitri Metaxas (Rutgers University), and Kang Li (Rutgers University)

 

13.  Linear and Non-Linear Multimodal Fusion for Continuous Affect Estimation in-the-Wild 

Yona Falinie Binti Abd Gaus  (Brunel University) and Hongying Meng (Brunel University)

 

14.  (*) Detecting Decision Ambiguity from Facial Images

Pavel Jahoda (Czech Technical University in Prague), Antonin Vobecky (Czech Technical University in Prague), Jan Cech (Czech Technical University in Prague), and Jiri Matas (Czech Technical University in Prague)

 

15.  Predicting Folds in Poker Using Action Unit Detectors and Decision Trees

Doratha Vinkemeier (University of Nottingham), Jonathan Gratch (University of Southern California),  and Michel Valstar (University of Nottingham)

 

16.  (*) A New Computational Approach to Identify Human Social Intention in Action

Mohamed Daoudi (IMT Lille Douai), Yann Coello (University of Lille), Paul-Audain Desrosier (University of Lille), and Laurent Ott (University of Lille)

 

17.  An Immersive System with Multi-Modal Human-Computer Interaction

Rui Zhao (Rensselaer Polytechnic Institute), Kang Wang (Rensselaer Polytechnic Institute), Rahul Divekar (Rensselaer Polytechnic Institute), Robert Rouhani (Rensselaer Polytechnic Institute), Hui Su (IBM), and Qiang Ji (Rensselaer Polytechnic Institute)

 

18.  (*) Clinical Valid Pain Database with Biomarker and Visual Information for Pain Level Analysis

Peng Liu (Binghamton University), Idris Yazgan (Binghamton University), Sarah Olsen (Binghamton University), Alecia Moser (Binghamton University), Umur Ciftci (Binghamton University), Saeed Bajwa (SUNY Upstate Medical University at Syracuse), Christian Tvetenstrand (United Health Services Hospital at Binghamton), Peter Gerhardstein (Binghamton University), Omowunmi Sadik (Binghamton University), and Lijun Yin (Binghamton University)

 

19.  (*) Toward Visual Behavior Markers of Suicidal Ideation

Naomi Eigbe (Rice University), Tadas Baltrusaitis (Microsoft), Louis-Philippe Morency (Carnegie Mellon University), and John Pestian (Cincinnati Children's Hospital Medical Center)

 

20.  (*) Semi-Supervised Learning for Monocular Gaze Redirection

Daniil Kononenko (Skolkovo Institute of Science and Technology) and Victor Lempitsky (Skolkovo Institute of Science and Technology)

 

21.  Head Pose Estimation on Low-Quality Images

Qiang Ji (Rensselaer Polytechnic Institute), Kang Wang (Rensselaer Polytechnic Institute), and Yue Wu (Tesla)

 

22.  LCANet: End-to-End Lipreading with Cascaded Attention-CTC

Kai Xu (Arizona State University),  Xiaolong Wang (Samsung Electronics), and Dawei Li (Samsung Electronics)

 

23.  HeadNet: Pedestrian Head Detection Utilizing Body in Context

Gang Chen (Institute of Computing Technology, CAS), Xufen Cai (Communication University of China), Hu Han (Institute of Computing Technology, CAS), Shiguang Shan (Institute of Computing Technology, CAS), and Xilin Chen (Institute of Computing Technology, CAS)

 

Demos (17:15 – 18:45)

1.     Real-time emotion recognition on mobile devices, Denis Sokolov, Mikhail Patkin; (WeSee, London, UK)

 

2.     Fast Face and Saliency Aware Collage Creation for Mobile Phones, Love Mehta, Abhinav Dhall; (Indian Institute of Technology at Ropar)

 

3.     Human Computer Interaction with Head Pose, Eye Gaze and Body Gestures, Kang Wang, Rui Zhao, Qiang Ji; (Rensselaer Polytechnic Institute)

 

4.     End-to-end, automatic face swapping pipeline, Yuval Nirkin (The Open University of Israel), Iacopo Masi (University of Southern California), Anh Tuan Tran (University of Southern California), Tal Hassner (University of Southern California), Gerard Medioni (University of Southern California)

 

 

5.     Letter-level Writer Identification, Zelin Chen, Hong-Xing Yu, Ancong Wu, Wei-Shi Zheng; (Sun Yat-sen University)

 

6.     OpenFace 2.0: facial behavior analysis toolkit, Tadas Baltrusaitis, Amir Zadeh, Yao Chong Lim, Louis-Philippe Morency; (Carnegie Mellon University)

 

 

Exhibits (9:00 – 18:45)  (TBA)



May 15th, 2018 (3 Workshops + 5 Tutorials + 1 Doctoral Consortium)

Morning sessions: 9:00am – 12:30pm; afternoon sessions: 1:00pm – 5:30pm

Room 1 (W):    8th Int. Workshop on Human Behavior Understanding in conjunction with 2nd Int. Workshop on Automatic Face Analytics for Human Behavior (W1)
Room 2 (T/W):  Morning: Statistical Methods For Affective Computing (T5); Afternoon: Latest developments of FG technologies in China (W2)
Room 3 (T/W):  Morning: Person Re-Identification: Recent Advances And Challenges (T1); Afternoon: Facial Micro-Expression Grand Challenge (MEGC): Methods and Datasets (W3)
Room 4 (T/T):  Morning: Representation Learning For Face Alignment And Recognition (T3); Afternoon: MS-Celeb-1M: Large Scale Face Recognition Challenge Tutorial (T2)
Room 5 (T/DC): Morning: Reading Hidden Emotions From Micro-Expression Analysis (T4); Afternoon: Doctoral Consortium (DC), starting at 12:30pm with lunch with all participants


May 19th, 2018 (3 Workshops + 4 Tutorials + 2 Challenges)

Morning sessions: 9:00am – 12:30pm; afternoon sessions: 1:00pm – 5:30pm

Room 1 (W):   First Workshop on Large-scale Emotion Recognition and Analysis (W4)
Room 2 (C/W): Morning: Challenge 1 “Holoscopic Micro-Gesture Recognition Challenge 2018” (C1); Afternoon: Face and Gesture Analysis for Health Informatics (FGAHI) (W5)
Room 3 (T/W): Morning: Sign Language Recognition And Gesture Analysis (T7); Afternoon: The 1st International Workshop on Real-World Face and Object Recognition from Low-Quality Images (FOR-LQ) (W6)
Room 4 (T/T): Morning: Active Authentication In Mobile Devices: Role Of Face And Gesture (T8); Afternoon: Physiological Measurement From Images And Videos (T6)
Room 5 (T/C): Morning: Introduction To Deep Learning For Facial Understanding (T9); Afternoon: Challenge 2 “Recognizing Families In the Wild (RFIW) 2.0” (C2)

Note: Each oral presentation follows this format:
- Long paper: 17-minute presentation plus 3 minutes for questions
- Short paper (*): 12-minute presentation plus 3 minutes for questions
- Each oral paper is also required to present its poster in the designated poster session.


Note: Each poster presentation follows this format:
- Poster format: display boards are 2.5m (8.20 ft) x 1.0m (3.28 ft), portrait orientation. Suggested poster size (portrait orientation): 1.5m to 1.8m in height x 0.9m in width. We will provide clear adhesive tape for poster mounting.