One of the striking findings in visual science over the past several decades is that the brain retains only a small amount of information about a visual scene from one moment to the next, for example across saccadic gaze shifts. This finding has led researchers to investigate the capacity limits of visual working memory and how that information is represented in the brain. We develop and test information-theoretic computational models that emphasize both the limits on and the flexibility of memory encoding. We are also extending "classic" visual working memory experiments, which rely on explicit visual memory tasks of a kind rarely performed in everyday life, to study how observers use visual working memory during natural behavior. In these experiments, we examine how visual working memory guides hand movements during naturalistic tasks in virtual reality. This approach allows us to study how the demands of natural tasks shape the way the brain represents information in working memory.