Neuromatch NeuroAI Reflections

2 minute read

This summer, I completed the NeuroAI course offered by Neuromatch Academy, a volunteer-led organization providing research education and training at the intersection of neuroscience and machine learning. The course introduced key topics in biologically inspired AI, including neural coding, learning dynamics, and open problems in cognition, while emphasizing hands-on coding, peer collaboration, and engagement with current research. As someone who had been passively interested in NeuroAI, I found the course foundational in moving from curiosity to genuine direction. It gave me the conceptual framework, technical skills, and intellectual community to explore the field seriously.

I was especially drawn to tutorials like Macrolearning and Mysteries, which encouraged thinking beyond standard ML benchmarks. The Macrolearning module focused on how biological systems acquire and generalize knowledge over long timescales—from concept formation to learning to learn. I found it compelling because it addressed phenomena that current AI struggles with, such as continual learning and transfer across tasks, while offering frameworks for how these abilities might emerge from hierarchical and distributed representations. The Mysteries module took a different angle, introducing unsolved questions at the core of both AI and neuroscience—such as the nature of consciousness, how the brain balances generalization with specificity, and whether predictive coding frameworks can explain cognition. I appreciated its speculative nature and the invitation to engage with ambiguity and open-ended inquiry.

As part of the course, my group completed a project comparing spiking neural networks (SNNs), recurrent neural networks (RNNs), and convolutional neural networks (CNNs) on visual working memory tasks. We examined how each model encoded, maintained, and retrieved visual information, and how performance degraded under noisy conditions. Using tools such as Representational Similarity Analysis (RSA), Dynamical Similarity Analysis (DSA), and fixed-point analysis, we investigated the robustness and representational dynamics of these architectures. The project deepened my interest in how neural geometry and temporal coding support cognition, and in how these insights could inform better AI design.
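To give a flavor of the RSA step, here is a minimal sketch (not our actual project code) of how one might compare the representational geometry of two models: build a representational dissimilarity matrix (RDM) from each model's activations on a shared stimulus set, then correlate the RDMs. The function names and the toy activations are illustrative stand-ins.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Condensed representational dissimilarity matrix: pairwise
    correlation distances between stimulus activation patterns.
    activations: (n_stimuli, n_units) array."""
    return pdist(activations, metric="correlation")

def rsa_score(acts_a, acts_b):
    """Second-order comparison: Spearman correlation between the two
    models' RDMs, computed over the same stimulus set."""
    rho, _ = spearmanr(rdm(acts_a), rdm(acts_b))
    return rho

# Toy example with hypothetical activations (stand-ins for, say, RNN
# and CNN hidden states recorded on the same visual stimuli).
rng = np.random.default_rng(0)
acts_rnn = rng.normal(size=(20, 64))
acts_cnn = acts_rnn + rng.normal(scale=0.5, size=(20, 64))  # noisy copy

print(rsa_score(acts_rnn, acts_cnn))  # near 1.0: geometries largely match
```

Because the comparison happens at the level of RDMs rather than raw units, RSA sidesteps the problem that an SNN, RNN, and CNN have incompatible unit counts and coordinate systems: only the pairwise stimulus geometry matters.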

What draws me to NeuroAI isn’t just the technical novelty—it’s the potential to build more responsible, interpretable AI systems by grounding them in the principles of biological intelligence. I’m particularly interested in how internal representations, goal structures, and dynamics in models shape behavior, topics increasingly relevant to efforts in interpretability and AI safety. Understanding how the brain handles ambiguity, motivation, and generalization could be key to creating AI systems that behave more reliably and in alignment with human values. I believe that NeuroAI offers not only scientific insight but also a principled foundation for ethical and robust AI development.

This fall, I’ll be starting my MS in Computer Science at Columbia University, where I hope to explore these topics further!