2/14/2025

NEI Research Grant for Vision-Related Secondary Data Analysis 

Human Visual System Understanding Through Deep Artificial Neural Networks

Research in human visual neuroscience has identified brain regions responsible for recognizing human actions in dynamic scenes (e.g., video clips). Our work will address how the human brain recognizes human actions in static images (e.g., photographs), yielding fundamental knowledge about the workings of the human brain and how it naturally identifies the actions, goals, and intentions of the people around us.

We will focus on identifying links between the organization of the human visual system and that of artificial neural networks. Highlighting key differences between humans and current artificial systems will allow us to design more biologically inspired models, potentially improving their performance. State-of-the-art deep learning models, such as EHOI, EPK-CLIP, and CLIP4HOI, have achieved some success in identifying human actions from photographs; characterizing where these networks diverge from the human brain will provide a path toward improving them.
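As an illustrative sketch only (not the proposed pipeline, and using a generic pretrained vision backbone rather than the EHOI, EPK-CLIP, or CLIP4HOI models named above), the kind of layer-wise network activations that could be compared against brain responses can be extracted with forward hooks in PyTorch:

    # Sketch: extract per-layer activations from a pretrained vision model
    # for a set of static action images. The backbone and layer names here
    # are placeholders, not the specific HOI models cited above.
    import torch
    from torchvision import models
    from PIL import Image

    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    # Register hooks on a few intermediate stages to capture activations.
    activations = {}
    def save_activation(name):
        def hook(module, inputs, output):
            activations[name] = output.detach().flatten(start_dim=1)
        return hook

    for name in ["layer2", "layer3", "layer4"]:
        getattr(model, name).register_forward_hook(save_activation(name))

    # "image_paths" is a hypothetical list of files showing human actions.
    def extract_features(image_paths):
        batch = torch.stack(
            [preprocess(Image.open(p).convert("RGB")) for p in image_paths])
        with torch.no_grad():
            model(batch)
        return {name: act.numpy() for name, act in activations.items()}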

In summary, our goal is to provide insight into the workings of both the human brain and artificial neural networks. Our first objective is to identify which areas of the brain are active while viewing static images of human actions. Our second objective is to compare and contrast activity in the human visual system with activity in artificial neural networks, as sketched below.
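As a minimal sketch of how such a brain-network comparison is often carried out (assuming representational similarity analysis, which the proposal does not commit to, and using hypothetical arrays of voxel responses and network activations):

    # Sketch: representational similarity analysis (RSA) comparing brain
    # responses with network activations over the same set of images.
    # "brain_responses" (n_images x n_voxels) and "net_activations"
    # (n_images x n_units) are hypothetical arrays supplied by the user.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(responses):
        """Representational dissimilarity matrix, as a condensed vector
        of pairwise correlation distances between image-wise patterns."""
        return pdist(responses, metric="correlation")

    def rsa_similarity(brain_responses, net_activations):
        """Spearman correlation between the two RDMs: higher values mean
        the brain region and the network layer encode similar image geometry."""
        rho, _ = spearmanr(rdm(brain_responses), rdm(net_activations))
        return rho

    # Example with random data, just to show the expected shapes:
    rng = np.random.default_rng(0)
    brain_responses = rng.standard_normal((50, 200))   # 50 images x 200 voxels
    net_activations = rng.standard_normal((50, 1024))  # 50 images x 1024 units
    print(rsa_similarity(brain_responses, net_activations))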