This paper describes the fully automatic creation of an
environment's description using an image-based representation. This
representation is a collection of cylindrical sample images combined
into an "image-based virtual reality". The locations at which the
environment will be sampled are chosen automatically using an
operator inspired by models of human visual attention and saccadic
motion. Image acquisition is performed by a mobile robot, and vantage points are selected by analyzing the edge structure of sampled panoramic images. To trade off the optimality of the generated description against the navigation effort required to solve the on-line problem, a concept referred to as alpha-backtracking is introduced. The paper illustrates sample data acquired by the procedure.
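As a rough illustration of the kind of attention operator the abstract describes, the sketch below scores azimuth sectors of a panoramic image by their edge energy and returns the most edge-rich heading as the next saccade direction. This is a hedged, hypothetical simplification, not the paper's actual operator: the function name, the gradient-based edge measure, and the sector layout are all assumptions introduced here for illustration.

```python
def select_saccade_direction(panorama, n_sectors=36):
    """Return the azimuth (degrees) of the most edge-rich sector.

    Hypothetical sketch: `panorama` is a list of rows (lists of
    intensities) whose columns span 360 degrees of azimuth.
    """
    width = len(panorama[0])
    # Horizontal gradient magnitude serves as a crude per-column edge measure.
    col_energy = [0.0] * width
    for row in panorama:
        for x in range(1, width):
            col_energy[x] += abs(row[x] - row[x - 1])
    # Accumulate edge energy within equal-width azimuth sectors.
    per = width / n_sectors
    sector_energy = [0.0] * n_sectors
    for x, e in enumerate(col_energy):
        sector_energy[min(int(x / per), n_sectors - 1)] += e
    # The most edge-rich sector gives the next saccade heading.
    best = max(range(n_sectors), key=sector_energy.__getitem__)
    return best * (360.0 / n_sectors)
```

A robot using such an operator would repeatedly face the returned heading, travel toward it, and resample, accumulating cylindrical images along the way; the real criterion in the paper is more elaborate than this one-dimensional edge score.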