A resource-rational model of physical abstraction for efficient mental simulation

Tina O. Zhu, Jessica Hamrick, Kevin R. McKee, Raphael Koster, Jan Balaguer, Peter Battaglia, & Matthew Botvinick

Abstract

Physical simulation enables people to make intuitive predictions about physical scenes and to interact flexibly with the objects around them, from a stack of books balanced on a ledge to the turrets and moats of a sandcastle. We hypothesize that when the number of objects in a scene makes simulation intractable, people use “chunked” abstractions that reduce the number of objects they need to simulate while also minimizing simulation error. We tracked participants’ gaze while they viewed complex towers of blocks and predicted whether the towers would remain stable under gravity. We developed a resource-rational model of how people might optimally partition towers into chunks of blocks, and compared this model’s chunkings to participants’ fixations over the scenes. We explore how efficient, resource-rational chunkings of physical scenes might underlie people’s ability to make rapid and robust inferences in this domain.
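As a toy illustration of the trade-off the abstract describes (a sketch under our own assumptions, not the paper's model): score each candidate partition of a tower's blocks by a placeholder simulation-error term plus a cost that grows with the number of chunks to be simulated, then select the cheapest partition. The error function, the weight lam, and all names here are invented for illustration.

def simulation_error(partition):
    """Placeholder error model: coarser chunkings (fewer chunks) lose more
    detail. A real model would estimate error by comparing chunked physics
    against full per-block simulation."""
    n_blocks = sum(len(chunk) for chunk in partition)
    # Zero when every block is its own chunk; grows as blocks are merged.
    return (n_blocks - len(partition)) / n_blocks

def resource_rational_cost(partition, lam=0.3):
    """Trade off expected simulation error against the number of objects
    (chunks) that must be simulated; lam is an assumed cost per chunk."""
    return simulation_error(partition) + lam * len(partition)

def all_partitions(blocks):
    """Enumerate all partitions of a small block set (exponential in the
    number of blocks, so suitable for toy scenes only)."""
    if not blocks:
        yield []
        return
    first, rest = blocks[0], blocks[1:]
    for sub in all_partitions(rest):
        # Place the first block in its own chunk...
        yield [[first]] + sub
        # ...or merge it into each existing chunk in turn.
        for i in range(len(sub)):
            yield sub[:i] + [[first] + sub[i]] + sub[i + 1:]

blocks = ["A", "B", "C", "D"]
best = min(all_partitions(blocks), key=resource_rational_cost)
print(best)

With a larger lam, coarser chunkings win (fewer simulated objects at the price of more error); as lam approaches zero, the cheapest partition puts every block in its own chunk.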


Venue

Proc. of the 41st Annual Conference of the Cognitive Science Society

Year

2019

Links

CogSci Proceedings