I am a Senior Research Scientist in the Center for Learning and Memory at the University of Texas at Austin. My research uses structural and functional neuroimaging, virtual reality environments, and machine learning approaches to uncover novel insights into how people perceive where they are located in the world and use that knowledge to map their memories.
I have developed several virtual reality (VR) environments to examine how humans learn and remember in naturalistic environments:
- Volt - virtual object location task
- Voltage - Volt adapted for ECoG experimentation
- MoshiGO - virtual paradigm to examine the development of spatial cognition
- MoshiGO Eyes - MoshiGO with built-in eye-tracking capabilities
- XMaze - virtual paradigm to examine the formation of hierarchical associative knowledge
- XMaze ECoG - XMaze developed for ECoG experimentation
- XMaze Eyes - XMaze with built-in eye-tracking capabilities
- XMaze Replay - XMaze with eye-gaze behavior superimposed on a replay of the participant's actions
- Honeycomb - complex virtual environment to test online human decision making
Note: Several of these repositories are private or unlinked because data collection and/or analysis is still ongoing; they will be made public upon peer review and publication. For a flavor of my coding style and expertise, please see Volt.
Further bespoke Python and MATLAB code for data analysis will be made public soon.