BEYOND PLEASURE AND PAIN: THE CASE OF REAL-LIFE EXTREME EMOTIONAL EXPRESSIONS

Date and time:
Room:


Professor Hillel Aviezer, Hebrew University
Hillel Aviezer graduated from the clinical neuropsychology program at the Hebrew University of Jerusalem. He completed his thesis on contextualized emotion perception under the joint supervision of Professors Shlomo Bentin and Ran Hassin. After obtaining his PhD, he went on to a postdoc at Princeton University, where he worked with Prof. Alex Todorov. In 2012, Aviezer joined the faculty of the psychology department at the Hebrew University, where he is currently an Associate Professor.


Abstract:
The distinction between positive and negative facial expressions is assumed to be clear and robust. Nevertheless, research with intense real-life faces has shown that viewers are unable to differentiate the valence of such expressions without the use of body context. Using FACS analysis, we supplied participants with valid information about objective facial activity that could easily be used to differentiate positive from negative expressions. Strikingly, ratings remained virtually unchanged and participants failed to differentiate between positive and negative faces. We propose that participants' immunity to objectively useful facial information results from stereotypical (but erroneous) inner representations of extreme positive and negative expressions. These results have several important implications for automated expression recognition efforts. First, they demonstrate that felt and expressed emotion may dissociate; theories of basic expressions may therefore have serious limitations in real life. Second, they suggest a surprising dissociation between the information present in isolated facial expressions and the information used by human perceivers. Finally, they highlight the critical role of context in the perception of facial expressions.

THE AFFECTIVE BODY IN A TECHNOLOGY-MEDIATED WORLD

Date and time:
Room:


Professor Nadia Berthouze, University College London
Professor Nadia Berthouze is a Full Professor in Affective Computing and Interaction at the University College London Interaction Centre (UCLIC). Her research focuses on designing technology that can sense the affective state of its users and use that information to tailor the interaction process. She has pioneered the field of Affective Computing by investigating how body movement and touch behaviour can be used as a means to recognize and measure the quality of the user experience. She has also studied how full-body technology and body sensory feedback can be used to modulate people's perception of themselves and of their capabilities, in order to improve self-efficacy and coping. Her work has been motivated by real-world applications such as physical rehabilitation (EPSRC Emo&Pain), textile design (EPSRC Digital Sensoria), education (H2020 WeDraw), and wellbeing on the industrial work floor (H2020 Human Manufacturing). She has published more than 200 papers in Affective Computing, HCI, and Pattern Recognition.


Abstract:
Body movement and touch behaviour are important agents in people's affective lives. With the emergence of full-body sensing technology come new opportunities to support people's affective experiences and needs. Although we can now track people's body movements almost ubiquitously through a variety of low-cost sensors embedded in our environment as well as in our accessories and clothes, the information gathered is typically used for activity tracking rather than for recognising and modulating affect. In my talk I will highlight how we express affect through our bodies in everyday activities and how technology can be designed to read those expressions and even to modulate them. Among various applications, I will present our work on technology for chronic pain management and discuss how such technology can lead to more effective physical rehabilitation by integrating it into everyday activities and supporting people at both the physical and affective levels. I will also discuss how this sensing technology enables us to go beyond simply measuring and reflecting on one's behaviour, by exploiting embodied bottom-up mechanisms that enhance the perception of one's body and its capabilities. I will conclude by identifying new challenges and opportunities that this line of work presents.

TOWARDS "HUMAN-LEVEL" VISUAL HUMAN UNDERSTANDING

Date and time:
Room:


Dr. Jian Sun, Face++
Dr. Jian Sun is Chief Scientist of Megvii Technology (Face++), a computer vision/AI startup (800+ FTEs, 600M USD total funding, ranked 11th among MIT Technology Review's 50 Smartest Companies of 2017) located in Beijing, China. He received his B.S., M.S., and Ph.D. in Electrical Engineering from Xi'an Jiaotong University in 1997, 2000, and 2003, respectively. Following his dissertation, he joined the Visual Computing Group at Microsoft Research Asia, where he was promoted to Lead Researcher, Senior Researcher, and Principal Researcher in 2008, 2010, and 2013. He relocated to Microsoft Research US in 2015 and was promoted to Principal Research Manager and Microsoft Partner in 2016.

His primary research interests are deep-learning-based image understanding, face recognition, and computational photography. Since 2002, he has published 100+ scientific papers in five tier-one conferences and journals (CVPR, ICCV, ECCV, SIGGRAPH, and PAMI). As of February 2018, he has 40,000+ Google Scholar citations and an h-index of 68. He is the recipient of the Best Paper Awards of CVPR 2010 and CVPR 2016. In 2015, his team won five first places in the ImageNet and COCO Visual Recognition Challenges with its invented "Residual Networks" (ResNet) and "Faster R-CNN" algorithms, which have since been widely used in both academia and industry, including in DeepMind's AlphaGo Zero in 2017. Jian was named one of the world's top 35 young innovators (TR35) by MIT Technology Review in 2010. He served as an area chair for ICCV 2011 and CVPR 2012/2015/2016/2017, and on the papers committee of SIGGRAPH 2011. He is also a recipient of the National Natural Science Award of China (second class, 2016). His team at Megvii Research won three first places in the COCO & Places Visual Recognition Challenges in 2017. He holds 40 international and US patents.


Abstract:
In the first part of this talk, I will briefly present my views on the revolution in computer vision brought about by the rise of deep learning (deep neural networks), and the challenges that remain. I will introduce some of the research I have done at Microsoft Research and Megvii Research, including ResNet, Faster R-CNN, ShuffleNet, and MegDet, as well as some real-world applications I have built.
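For readers less familiar with ResNet, the following is a minimal illustrative sketch (in PyTorch; not the speaker's implementation, and with batch normalization omitted for brevity) of the residual connection the architecture is named for: each block learns a residual function F(x) and outputs F(x) + x, an identity shortcut that makes very deep networks trainable.

    # Minimal sketch of a ResNet-style residual block (PyTorch).
    # The block learns a residual F(x) and outputs F(x) + x via an
    # identity shortcut, which eases optimization of very deep networks.
    import torch
    from torch import nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            # 3x3 convolutions with padding=1 preserve the spatial size,
            # so the shortcut can be added without any reshaping.
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.relu = nn.ReLU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            out = self.conv2(self.relu(self.conv1(x)))
            return self.relu(out + x)  # identity shortcut: F(x) + x

    # A 64-channel feature map passes through with its shape unchanged.
    block = ResidualBlock(64)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])

Because the shortcut is an identity mapping, gradients flow directly through the addition, which is why stacking many such blocks remains trainable where plain deep stacks degrade.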

The second part of the talk focuses on the impact of deep learning on the visual understanding of humans, the most important "objects" in the world. This part will cover face recognition, anti-spoofing, human pose estimation, and pedestrian re-identification.