<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Zaraki, A</style></author><author><style face="normal" font="default" size="100%">Mazzei, D</style></author><author><style face="normal" font="default" size="100%">Giuliani, M</style></author><author><style face="normal" font="default" size="100%">De Rossi, D</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Designing and Evaluating a Social Gaze-Control System for a Humanoid Robot</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Transactions on Human-Machine Systems</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style face="normal" font="default" size="100%">04/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;arnumber=6736067</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">44</style></volume><pages><style face="normal" font="default" size="100%">157-168</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This paper describes a context-dependent social gaze-control system implemented as part of a humanoid social robot. The system enables the robot to direct its gaze at multiple humans who are interacting with each other and with the robot. The attention mechanism of the gaze-control system is based on features that have been proven to guide human attention: nonverbal and verbal cues, proxemics, the visual field of view, and the habituation effect. Our gaze-control system uses Kinect skeleton tracking together with speech recognition and SHORE-based facial expression recognition to implement the same features. As part of a pilot evaluation, we collected the gaze behavior of 11 participants in an eye-tracking study. We showed participants videos of two-person interactions and tracked their gaze behavior. A comparison of the human gaze behavior with the behavior of our gaze-control system running on the same videos shows that it replicated human gaze behavior 89% of the time.
</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue><section><style face="normal" font="default" size="100%">157</style></section></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>10</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Zaraki, Abolfazl</style></author><author><style face="normal" font="default" size="100%">Giuliani, M</style></author><author><style face="normal" font="default" size="100%">Dehkordi, Maryam Banitalebi</style></author><author><style face="normal" font="default" size="100%">D'Ursi, A.</style></author><author><style face="normal" font="default" size="100%">De Rossi, Danilo</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">An RGB-D based social behavior interpretation system for a humanoid social robot</style></title><secondary-title><style face="normal" font="default" size="100%">2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM)</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style face="normal" font="default" size="100%">Oct/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=6990898&amp;isnumber=6990766</style></url></web-urls></urls><pages><style face="normal" font="default" size="100%">185-190</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">
Humanoid social robots that interact with people need to be capable of interpreting the social behavior of their interaction partners in order to respond in a socially appropriate way. In this paper, we present a social behavior interpretation system that enables a humanoid robot to recognize human social behavior by analyzing communicative signals. The system receives the constructed RGB-D scene from a Kinect sensor, extracts information about body gesture and head pose from the scene using the Microsoft Kinect SDK, and recognizes eight human social behaviors using a Hidden Markov Model (HMM). We trained the eight-state HMM with a corpus of 35 recorded human-human interaction scenes. The evaluation of the system shows a weighted average recognition rate of 81% across all states.
</style></abstract></record></records></xml>