Current Search: Direct Perception
- Title
- IS PERCEIVED INTENTIONALITY OF A VIRTUAL ROBOT INFLUENCED BY THE KINEMATICS?.
- Creator
-
Sasser, Jordan, McConnell, Daniel S., University of Central Florida
- Abstract / Description
-
Research has shown that in human-human interactions, kinematic information reveals that competitive and cooperative intentions are perceivable, and it suggests the existence of a cooperation bias. The present study poses the same question for human-robot interaction by investigating the relationship between the acceleration of a virtual robot in a virtual reality environment and the participant's perception of the situation as cooperative or competitive, attempting to identify the social cues underlying those perceptions. Participants experienced five conditions: mirrored acceleration, faster acceleration, slower acceleration, varied acceleration with a loss, and varied acceleration with a win, randomized within two blocks of five for a total of ten events. Results suggest that when the virtual robot's acceleration pattern was faster than the participant's, the situation was perceived as more competitive. Additionally, while slower acceleration was perceived as more cooperative, that condition was not significantly different from mirrored acceleration. These results may indicate that kinematic information in faster accelerations invokes stronger competitive perceptions, whereas slower and mirrored accelerations may blend together in perception; furthermore, the models used in the slower-acceleration and mirrored-acceleration conditions provide no single identifiable contributor to perceived cooperativeness, possibly due to a similar cooperation bias. These findings serve as a baseline for understanding movements that can be used in the design of better social robotic movements. Such movements would improve interactions between humans and these robots, ultimately improving a robot's ability to help in a given situation.
- Date Issued
- 2019
- Identifier
- CFH2000524, ucf:45668
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH2000524
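The trial structure described in the abstract above (five acceleration conditions, randomized within two blocks of five, for ten events) can be sketched as follows. This is a minimal illustration, not the study's actual code; the condition labels are paraphrased from the abstract.

```python
import random

# Condition labels paraphrased from the abstract; the study's exact
# naming and implementation are not given in the record.
CONDITIONS = [
    "mirrored",
    "faster acceleration",
    "slower acceleration",
    "varied acceleration with a loss",
    "varied acceleration with a win",
]

def build_trial_order(seed=None):
    """Return ten events: two blocks of the five conditions,
    each block independently shuffled."""
    rng = random.Random(seed)
    order = []
    for _ in range(2):
        block = CONDITIONS[:]   # copy so CONDITIONS itself is untouched
        rng.shuffle(block)
        order.extend(block)
    return order
```

Shuffling within each block (rather than over all ten trials at once) guarantees every condition appears exactly once per block, matching the two-groups-of-five design described.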
- Title
- THE ROLE OF CUES AND KINEMATICS ON SOCIAL EVENT PERCEPTION.
- Creator
-
Berrios, Estefania, McConnell, Daniel S., University of Central Florida
- Abstract / Description
-
The belief that intentions are hidden away in the minds of individuals has circulated for many years. Theories of indirect perception, such as Theory of Mind, have been developed to explain this phenomenon. Conversely, research in human kinematics and event perception has given rise to theories of direct perception. The purpose of this study was to determine whether intentionality can be directly perceived rather than requiring inferential processes. Prior research on the kinematics of cooperative and competitive movements has pointed toward direct perception, demonstrating that participants can accurately judge a movement as cooperative or competitive simply by observing point-light displays of the isolated arm movements. Because competitive movements are often performed faster than cooperative movements, speed was perturbed in this study to determine whether participants rely on cues or can indeed perceive a unique kinematic pattern that corresponds to intentionality. Judging the clips correctly despite the perturbation would suggest that perception is direct. Additionally, we hypothesized that judgment accuracy would be higher in the presence of two actors, pointing to the use of interpersonal affordances. Twenty-eight participants from the University of Central Florida were asked to judge 40 clips presented in random order: normal or perturbed competitive actions with one or two actors, and normal or perturbed cooperative actions with one or two actors. Percent-correct and reaction-time data were analyzed in SPSS using a repeated-measures ANOVA. Results rejected the hypothesis that social perception is direct and supported indirect perception, indicating that participants relied on cues to make judgments, and provided potential support for the interpersonal-affordance hypothesis.
- Date Issued
- 2019
- Identifier
- CFH2000514, ucf:45681
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH2000514
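The analysis named in the abstract above groups each participant's accuracy by the crossed factors (perturbation × intention × number of actors) before running the repeated-measures ANOVA. A minimal sketch of that grouping step, assuming a simple tuple-per-trial data layout (the study's actual data format is not given in the record):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trial records: (perturbed, intention, actors, correct).
# Field names and layout are illustrative, not from the study.
def percent_correct(trials):
    """Group trial outcomes into the 2x2x2 design cells
    (perturbed x intention x actors) and return mean accuracy per cell,
    the quantity fed into a repeated-measures ANOVA."""
    cells = defaultdict(list)
    for perturbed, intention, actors, correct in trials:
        cells[(perturbed, intention, actors)].append(1 if correct else 0)
    return {cell: mean(vals) for cell, vals in cells.items()}
```

The ANOVA itself (run in SPSS per the abstract) would then compare these cell means within participants across the eight conditions.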
- Title
- DIRECT MANIPULATION OF VIRTUAL OBJECTS.
- Creator
-
Nguyen, Long, Malone, Linda, University of Central Florida
- Abstract / Description
-
Interacting with a virtual environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of a virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally provide only a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision, as they pertain to the task of object manipulation, were thoroughly reviewed. Other sensory modalities (proprioception, haptics, and audition) and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by VE type and object-interaction technique. While object-interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest attainable accuracy for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. The experiments comprised two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum: the Immersive Virtual Environment (IVE) and the Reality Environment (RE). This validated, linked, and extended several previous research findings using one common test bed and participant pool, and the results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional relevant sensory modalities. It consisted of two full-factorial experiments, providing rich data and key insights into the effect of each environment type and each modality on the accuracy and timeliness of virtual-object interaction. The empirical results clearly showed that mean depth-perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual, and mean task completion time was less than one second. Key to the high accuracy and quick task performance observed was the correct presentation of visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance already near optimal when accurate visual cues were presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
- Date Issued
- 2009
- Identifier
- CFE0002822, ucf:48060
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002822