Exploration of underwater environments, such as coral reefs and shipwrecks, is a difficult and potentially dangerous task for humans, which makes the use of an autonomous robotic system very appealing. This paper presents such an autonomous system, which can be used to assist humans in exploring challenging underwater environments. We use an amphibious underwater robot to autonomously collect image data while taking into account the semantic structure of the environment. The output of the exploration task is a video and a summary that highlights any surprising observations encountered by the robot. The robot controls only its speed as it traverses a predefined path, so that more time can be spent around surprising regions. An online spatiotemporal topic modeling framework is used to understand the scene and to compute a surprise score based on previous observations. Low-speed experiments on the amphibious robot revealed that its existing depth controller had to be improved.
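The speed-modulation idea described above can be sketched as a simple mapping from a per-frame surprise score to a forward speed. This is only an illustrative sketch, not the paper's implementation; the function name, thresholds, and speed bounds are all hypothetical.

```python
# Illustrative sketch (not the paper's implementation): modulate traversal
# speed along a predefined path using a per-frame surprise score.
# All names and threshold values below are hypothetical assumptions.

def surprise_to_speed(surprise, v_max=1.0, v_min=0.2, s_low=0.3, s_high=0.8):
    """Map a surprise score in [0, 1] to a forward speed.

    The robot slows down in surprising regions so it spends more time
    observing them, and moves at full speed through unsurprising ones.
    """
    if surprise <= s_low:
        return v_max                      # nothing unusual: full speed
    if surprise >= s_high:
        return v_min                      # highly surprising: slowest speed
    # Linear interpolation between the two thresholds.
    t = (surprise - s_low) / (s_high - s_low)
    return v_max + t * (v_min - v_max)
```

In such a scheme, the surprise score itself would come from comparing the topic distribution of the current observation against those of previous observations, as in the online spatiotemporal topic modeling framework the paper describes.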