This paper describes an integrated system for the automatic
construction of image-based virtual reality models of a real
environment. A mobile robot navigates autonomously through the
environment, using a camera to make observations. At locations deemed
sufficiently interesting, it collects panoramic images, which are used
to construct a multi-node VR movie. Images of the environment are classified in terms of two features related to human attention: edge element density and edge orientation. The system deems a location interesting if it is sufficiently different from its surroundings. The parameterization of the surrounding environment is computed either in a pre-computation pass or on-line, using a technique termed alpha-backtracking. The panoramic images describing the environment are automatically joined into a navigable movie that simulates motion in the real environment.