Reconstruction of 3D Models from Intensity Images and Partial Depth

This paper addresses the probabilistic inference of geometric structure from images: specifically, the synthesis of range data to enhance the reconstruction of a 3D model of an indoor environment from video images and (very) partial depth information. In our method, we interpolate the available range data using statistical inferences learned from the concurrently available video images and from those (sparse) regions where both range and intensity information are available. The spatial relationships between variations in intensity and range are captured efficiently by the neighborhood system of a Markov Random Field (MRF). In contrast to classical approaches to depth recovery (e.g., stereo, shape from shading), we need only weak prior assumptions about specific surface geometries or surface reflectance functions, since we compute the relationship between the existing range data and the images we start with. Experimental results show the feasibility of our method.
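To illustrate the core idea of MRF-guided depth interpolation, the sketch below fills in missing range values by iteratively averaging each unknown pixel's 4-neighbors, weighted by intensity similarity, so that depth discontinuities tend to align with intensity edges. This is a minimal illustrative sketch, not the paper's learned model: the function name, the Gaussian intensity weighting, and the fixed iteration count are all assumptions introduced here for demonstration.

```python
import numpy as np

def interpolate_depth(intensity, depth, known, iters=200, sigma=0.1):
    """Fill unknown depth pixels via a simple MRF-style relaxation.

    Each unknown pixel is repeatedly replaced by a weighted average of its
    4-neighbors, where the weight is a Gaussian in the intensity difference
    (similar intensity -> similar depth).  Illustrative sketch only; the
    weighting scheme here is an assumption, not the paper's learned model.
    """
    d = depth.astype(float).copy()
    H, W = d.shape
    for _ in range(iters):
        new = d.copy()
        for y in range(H):
            for x in range(W):
                if known[y, x]:
                    continue  # observed range data is kept fixed
                wsum, dsum = 0.0, 0.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Intensity-similarity weight (Gauss-Seidel-style:
                        # reads a mix of updated and old neighbor values).
                        w = np.exp(-(intensity[y, x] - intensity[ny, nx]) ** 2
                                   / (2.0 * sigma ** 2))
                        wsum += w
                        dsum += w * new[ny, nx]
                if wsum > 0.0:
                    new[y, x] = dsum / wsum
        d = new
    return d
```

With uniform intensity the weights become equal and the update reduces to harmonic (Laplace) interpolation between the known depth samples, which matches the intuition that, absent intensity evidence, the smoothness prior of the MRF dominates.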