This paper deals with automatically learning the spatial distribution of a set of images. That is, given a sequence of images acquired from well-separated locations, how can they be arranged to best determine where they were viewed from, relative to one another? The solution to this problem can be viewed as an instance of robot mapping, although it can also be used in other contexts. We examine the problem where only limited prior odometric information is available, employing a feature-based method derived from a probabilistic pose estimation framework. Initially, a set of visual features is selected from the images and correspondences are found across the ensemble. The images are then localized by first assembling the small subset of images for which odometric confidence is high, then sequentially inserting the remaining images, localizing each against the previous estimates and taking advantage of any priors that are available. We present experimental results validating the approach and demonstrating metrically and topologically accurate results over two large image ensembles. Finally, we discuss the results, their relationship to the autonomous exploration of an unknown environment, and their utility for robot localization and navigation.
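To make the incremental localization procedure described above concrete, the following is a minimal sketch of its control flow, not the paper's implementation. The helpers `extract_features` and `estimate_pose`, the 2D pose parameterization, and the seed/prior inputs are all assumptions standing in for the feature matching and probabilistic pose estimation machinery.

```python
import numpy as np

# Hypothetical placeholders for the paper's feature extraction, matching,
# and probabilistic pose-estimation steps.
def extract_features(image):
    """Return a feature set for one image (placeholder)."""
    return image

def estimate_pose(features, placed_features, placed_poses, prior=None):
    """Estimate a pose (x, y, theta) of a new image relative to the
    already-localized images, optionally biased by an odometric prior
    (placeholder for the probabilistic pose-estimation step)."""
    return prior if prior is not None else np.zeros(3)

def localize_ensemble(images, seed_indices, seed_poses, odometry_priors):
    """Incremental localization: start from the high-confidence seed
    subset, then insert the remaining images one at a time, localizing
    each against the poses estimated so far."""
    features = [extract_features(im) for im in images]
    poses = {i: p for i, p in zip(seed_indices, seed_poses)}
    for i in range(len(images)):
        if i in poses:
            continue  # already placed as part of the seed subset
        placed = sorted(poses)
        poses[i] = estimate_pose(
            features[i],
            [features[j] for j in placed],
            [poses[j] for j in placed],
            prior=odometry_priors.get(i),
        )
    return poses
```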