Call for Special Session and Panel Proposals
The organizers of FG 2018 invite proposals for Special Sessions and Panels to be held during the main conference, which will take place May 16 – May 18, 2018 in Xi’an, China.
Proposals may be submitted on any topic that falls into the broad areas of face and gesture recognition, modeling, and analysis. We particularly encourage proposals that highlight emerging new fields and novel challenges related to face and gesture as well as interdisciplinary topics that bring new perspectives to the FG community. Proposals that plan to hold a special session with an integrated panel discussion are particularly encouraged.
Proposers should have a strong track record in the proposed field.

SUBMISSION GUIDELINES
Proposals should be submitted to the Program Committee with the subject field “FG 2018 [Special Session/Panel] Proposal: [title of session]”. A proposal must include the following information:
- The title of the proposed Special Session or Panel.
Special Session papers will be part of the main conference proceedings, but the review process should be organized by the proposers in collaboration with the FG Program Chairs. Papers submitted to a Special Session should describe original work and should follow the general FG submission guidelines.

IMPORTANT DATES
Special Session & Panel proposals due: 7 July 2017
All proposals should be sent to the FG 2018 Program Chairs.
List of Proposed Topics of FG 2018 Special Sessions

The following is the list of proposed topics for the FG 2018 Special Sessions. Papers submitted for Special Sessions will be double-blind reviewed, following the same review process as other standard FG18 papers. All papers should follow the general FG submission guidelines and be submitted through the FG18 EasyChair system (selecting the corresponding special session track) by the deadline of September 22, 2017. Accepted papers will be published in the main conference proceedings.
1. Perception, Cognition and Psychophysiology of Gesture Interaction
Gesture recognition algorithms have shown a dramatic jump in performance thanks to deep learning and convolutional neural network techniques, and a plethora of recent papers report exceptional performance over the state of the art using such techniques. In contrast, this special session focuses on the mechanisms underlying the generation of gestures and their inherent function. While understanding such mechanisms may not necessarily lead to improved recognition accuracy or a lower false alarm rate, it would enable more “human-like” interaction with machines. These mechanisms include (but are not limited to):
(1) perceptual - where we focus when observing gestures, what we actually “see,” and how this is represented in the brain;
(2) cognitive - what we remember and naturally associate with gestures, and their higher-level semantics;
(3) sensorimotor - how the gestures are planned and physically generated, including their associated action-perception feedback control loops and judgment of exerted physical and physiological effort;
(4) affective - the ways in which gestures impart rich emotive communication and personality during gesture interactions; for instance, is there an equivalent of action units for expressing emotive themes?
Dr. Juan P Wachs, Purdue University, USA; (point of contact: firstname.lastname@example.org)
Dr. Adar Pelah, University of York, UK;
Dr. Richard Voyles, Purdue University, USA;
2. Is Deep Learning Always the Best Solution for Face Recognition?
In recent years, Deep Learning (DL) has become a dominant method for a wide variety of computer vision tasks. One of its biggest successes has been in face recognition, where performance has improved dramatically. So, will DL make other face recognition algorithms obsolete? Is deep learning always the best solution in every scenario? Is it still worthwhile for researchers to investigate traditional face recognition techniques in the DL era? In fact, deep learning is not perfect. For instance, it depends heavily on big data, which can be expensive to collect and is sometimes simply unavailable. Due to this limitation, conventional methods achieve superior or comparable performance against DL methods in areas such as facial landmark detection, face recognition across large poses, thermal/near-infrared face recognition, and 3D face recognition. It would be interesting to explicitly compare DL methods with traditional methods in terms of accuracy, efficiency, and model complexity. We aim to investigate the scenarios where conventional methods can outperform DL methods through explicit comparison and in-depth analysis.
We welcome submissions on topics related to the investigation of (i) the advantages of traditional methods over DL methods and (ii) insights into the reasons for these advantages.
Dr. Guosheng Hu, Anyvision, UK (point of contact: email@example.com)
Dr. Neil Robertson, Queen's University Belfast, UK
Dr. Josef Kittler, University of Surrey, UK
Dr. Stan Z Li, Institute of Automation, CAS, China
Dr. Zhen Lei, Institute of Automation, CAS, China
3. Face and Gesture Recognition on Mobile Devices
Nowadays, mobile devices (e.g., phones, drones, AR gear) are equipped with high-resolution cameras that enable many exciting face-related applications, e.g., selfie masks, facial authentication, face detection/tracking, and liveness detection. At the same time, gesture recognition provides novel ways to control and interact with mobile devices. However, deploying existing computer vision and image processing methods directly on mobile devices poses challenges: users may experience noticeable delays due to limited computing power, and rapid battery drain due to limited battery capacity.
In this special session, we call for contributions on the following topics:
(1) Efficient algorithms for face and gesture recognition.
(2) Model compression and acceleration for deep learning based approaches for face and gesture analysis.
(3) Gesture recognition with mixed inputs (e.g., vision, mobile sensors).
(4) Novel algorithms and applications for interacting with mobile devices.
(5) Optimized processing with heterogeneous computing units (e.g., CPU, GPU, DSP).
Dr. Xiaolong Wang, Samsung Research America, USA (point of contact: firstname.lastname@example.org)
Dr. Shiguang Shan, Institute of Computing Technology, CAS, China
Dr. Yan Tong, University of South Carolina, USA
Dr. Guo-Jun Qi, University of Central Florida, USA
Dr. Dawei Li, Samsung Research America
Dr. Chandra Kambhamettu, University of Delaware, USA
4. Kinect-based Kinematic Data Analysis and Evaluation for Clinical Applications
In recent years, there has been increased interest in automated methods for the detection, analysis, and quantification of human motion (e.g., physical activity, performance execution, and rehabilitation), driven by the increased availability of low-cost, multi-modality, marker-less RGBD capture devices. These devices are clinically important because they allow assessment in home-based and remote settings by extracting kinematic features for use in post-stroke recovery, fall prevention, and in-home elderly monitoring.
We welcome submissions on topics related to clinical applications of RGBD, including:
(1) Geometric approaches for human motion analysis;
(2) Mathematical models for fall prevention and fall detection;
(3) Balance performance analysis;
(4) Normal and pathological gait analysis;
(5) Postural abnormalities;
(6) Moving body symmetry analysis;
(7) Elderly coaching and monitoring;
(8) Post-stroke rehabilitation using RGBD;
(9) Evaluation of Kinect-like sensors in kinematic data analysis;
(10) Face recognition and analysis for clinical application;
(11) Encoding musculoskeletal measurements from depth and/or motion capture data;
(12) “In-the-wild” encoding and measuring kinematic signals;
(13) Micro-non-mico facial analysis.
Dr. Daniel Leightley, King’s College London, UK (point of contact: email@example.com)
Dr. Boulbaba Ben Amor, IMT Lille Douai/CRIStAL CNRS 9189, France
Dr. Moi Hoon Yap, Manchester Metropolitan University, UK
Dr. Pavan Turaga, Arizona State University, USA
Dr. Anuj Srivastava, Florida State University, USA
5. Measurable Social Impact Systems and Applications
Advances in face and gesture (FG) recognition technology can be measured using multiple technical criteria, such as detection and recognition accuracy as well as responsiveness. Along with novelty, another important criterion in the FG application domain is the humanitarian and compassionate benefit of the presented technology, i.e., its social impact. This impact can be assessed along the following three aspects:
(1) the number of people/users who can directly benefit from the described FG applications;
(2) the amount of time and/or money saved by the described FG systems; and
(3) the reduction of emotional stress and degree of satisfaction measured from survey data.
The described FG methodology should bring measurable benefits to humanity and be applicable in any country or culture. Considered FG technologies may target medical applications, health and safety monitoring, disaster relief efforts, family and pet reunification systems, etc. This set excludes all military and intelligence applications, but may include some law enforcement and counterterrorism systems, as long as they are universally applicable. Papers must explicitly discuss (in the Conclusion section):
* Social Impact via verifiable estimates of at least two of the aforementioned three aspects;
* Technical and Practical Challenges that the authors encountered in applying FG technology in the real world, formulated in terms of problems that the FG community can collectively work on.
Dr. Babak Taati, University of Toronto, Canada (point of contact: Babak.Taati@uhn.ca)
Dr. Eugene Borovikov, National Institutes of Health / US National Library of Medicine, USA
6. Generalized Face Spoofing Detection in Real-World Applications
It is well known that most existing face recognition systems are vulnerable to spoofing attacks. A spoofing attack occurs when a person tries to bypass a face biometric system by presenting a fake face in front of the camera. The anti-spoofing methods proposed in the literature have shown very encouraging results on individual databases but lack generalization to the varying nature of spoofing attacks that can be encountered in real-world applications. As the field evolves, new and more challenging databases are expected. Recent advances in machine learning (e.g., deep learning) are expected to play a key role in spoofing detection as well. This special session aims to bring together researchers working on face spoofing detection and related disciplines to present and discuss recent developments in the field. We hope this session will open a debate on new opportunities and challenges in the area, and help unify efforts toward the development of adequate tools, protocols, and databases for evaluating and monitoring progress in the field.
Dr. Abdenour Hadid, University of Oulu, Finland (point of contact: firstname.lastname@example.org)
Dr. Alex Kot, Nanyang Technological University, Singapore
7. Automatic Kinship Verification from Face
Face-based kinship (or family) verification is a relatively new research topic that has attracted increasing interest in recent years. Kinship verification aims at determining whether two persons belong to the same family based only on “facial patterns,” including appearance and movements. The challenge is to automatically learn and extract the similarity between family members in unconstrained settings, regardless of gender, age, and identity. This special session aims to bring together researchers working on kinship verification and related disciplines to present and discuss recent developments in the field. We hope this session will open a debate on new opportunities and challenges in the area, and help unify efforts toward the development of adequate tools, protocols, and databases for evaluating and monitoring progress in the field.
Dr. Abdenour Hadid, University of Oulu, Finland (point of contact: email@example.com)
Dr. Xiaoyi Feng, Northwestern Polytechnical University, China