It is believed that visual self-motion perception (vection) can be effectively induced only when the inducer's motion is defined by luminance modulation. In this study, psychophysical experiments examined the potential effects of visual motion defined by features other than luminance on vection, employing orientation-defined rotation (so-called fractal rotation) as a visual inducer. The experiments clearly indicate that orientation-defined visual rotation can strongly induce an observer's perceived self-rotation (roll vection), although the effect was significantly weaker than that induced by luminance-defined rotation. When the orientation and luminance rotations were combined and presented simultaneously, whether rotating in consistent or inconsistent directions, perceived self-rotation was determined mainly by the luminance rotation. These results suggest that feature-defined visual motion containing no luminance modulation can contribute to visual self-motion perception.

Accurate perception of self-motion is crucial to our behavioral adaptation to the environment. In natural circumstances, many kinds of sensory information contribute to self-motion perception, including visual, vestibular, kinesthetic, and somatosensory information, and these signals are thought to be integrated in harmony to achieve robust spatial self-orientation (e.g., Howard, 1982; for a recent review, see Britton & Arshad, 2019).