
Brain Inspired

Paul Middlebrooks

Available episodes

5 of 99
  • BI 201 Rajesh Rao: From Predictive Coding to Brain Co-Processors
    Support the show to get full episodes, full archive, and join the Discord community. Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-directs the Center for Neurotechnology. Back in 1999, Raj and Dana Ballard published what became quite a famous paper, which proposed how predictive coding might be implemented in brains. What is predictive coding, you may be wondering? It's roughly the idea that your brain is constantly predicting incoming sensory signals, and it generates that prediction as a top-down signal that meets the bottom-up sensory signals. Then the brain computes a difference between the prediction and the actual sensory input, and that difference is sent back up to the "top," where the brain updates its internal model to make better future predictions. That was 25 years ago, and it was focused on how the brain handles sensory information. But Raj recently published an update to the predictive coding framework, one that incorporates action and perception and suggests how it might be implemented in the cortex - specifically, which cortical layers do what - something he calls "active predictive coding." We discuss that new proposal, and we also talk about his engineering work on brain-computer interface technologies, like BrainNet, which basically connects two brains together, and neural co-processors, which use an artificial neural network as a prosthetic that can do things like enhance memories, optimize learning, and help restore brain function after strokes, for example. Finally, we discuss Raj's interest and work on deciphering an ancient Indian text, the mysterious Indus script. Raj's website. Related papers A sensory–motor theory of the neocortex. Brain co-processors: using AI to restore and augment brain function. Towards neural co-processors for the brain: combining decoding and encoding in brain–computer interfaces.
BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains. Read the transcript. 0:00 - Intro 7:40 - Predictive coding origins 16:14 - Early appreciation of recurrence 17:08 - Prediction as a general theory of the brain 18:38 - Rao and Ballard 1999 26:32 - Prediction as a general theory of the brain 33:24 - Perception vs action 33:28 - Active predictive coding 45:04 - Evolving to augment our brains 53:03 - BrainNet 57:12 - Neural co-processors 1:11:19 - Decoding the Indus Script 1:20:18 - Transformer models relation to active predictive coding
    --------  
    1:37:22
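The predict-compare-update loop described in the episode summary above (the brain predicts its input, computes the prediction error, and uses that error to update both its internal state and its internal model) can be sketched in a few lines. This is a deliberately minimal, single-layer linear toy in the spirit of the Rao and Ballard framework, not their actual model; the dimensions, learning rates, and variable names are all illustrative assumptions.

```python
import numpy as np

# Minimal predictive coding sketch: a single linear generative layer.
# All parameters here are illustrative, not taken from Rao & Ballard (1999).

rng = np.random.default_rng(0)
n = 4                                          # toy dimensionality
x = rng.normal(size=n)                         # "sensory" input
W, _ = np.linalg.qr(rng.normal(size=(n, n)))   # generative (top-down) weights
r = np.zeros(n)                                # internal state: inferred causes

lr_r, lr_W = 0.1, 0.01                         # inference and learning rates
for _ in range(300):
    prediction = W @ r                # top-down prediction of the input
    error = x - prediction            # prediction error (the bottom-up signal)
    r += lr_r * (W.T @ error)         # fast update of the internal state
    W += lr_W * np.outer(error, r)    # slow update of the internal model

print(np.mean(error ** 2))            # residual error is driven toward zero
```

Both updates are just gradient descent on the squared prediction error: the internal state changes quickly (inference) while the weights change slowly (learning), mirroring the fast/slow split in predictive coding accounts.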
  • BI 200 Grace Hwang and Joe Monaco: The Future of NeuroAI
    Support the show to get full episodes, full archive, and join the Discord community. Joe Monaco and Grace Hwang co-organized a recent workshop I participated in, the 2024 BRAIN NeuroAI Workshop. You may have heard of the BRAIN Initiative, but in case not, BRAIN is a huge funding effort across many agencies, one of which is the National Institutes of Health, where this recent workshop was held. The BRAIN Initiative began in 2013 under the Obama administration, with the goal of supporting the development of technologies to help understand the human brain, so we can cure brain-based diseases. The BRAIN Initiative just turned a decade old, with many successes like recent whole-brain connectomes and the discovery of the vast array of cell types. Now the question is how to move forward, and one area they are curious about, that perhaps has a lot of potential to support their mission, is the recent convergence of neuroscience and AI... or NeuroAI. The workshop was designed to explore how NeuroAI might contribute moving forward, and to hear from NeuroAI folks how they envision the field moving forward. You'll hear more about that in a moment. That's one reason I invited Grace and Joe on. Another reason is that they co-wrote a position paper a while back that is impressive as a synthesis of lots of cognitive science concepts, but also proposes a specific level of abstraction and scale in brain processes that may serve as a base layer for computation. The paper is called Neurodynamical Computing at the Information Boundaries of Intelligent Systems, and you'll learn more about that in this episode. Joe's NIH page. Grace's NIH page. Twitter: Joe: @j_d_monaco Related papers Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognitive swarming in complex environments with attractor dynamics and oscillatory computing. Spatial synchronization codes from coupled rate-phase neurons. Oscillators that sync and swarm.
Mentioned A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications. Recalling Lashley and reconsolidating Hebb. BRAIN NeuroAI Workshop (Nov 12–13) NIH BRAIN NeuroAI Workshop Program Book NIH VideoCast – Day 1 Recording – BRAIN NeuroAI Workshop NIH VideoCast – Day 2 Recording – BRAIN NeuroAI Workshop Neuromorphic Principles in Biomedicine and Healthcare Workshop (Oct 21–22) NPBH 2024 BRAIN Investigators Meeting 2020 Symposium & Perspective Paper BRAIN 2020 Symposium on Dynamical Systems Neuroscience and Machine Learning (YouTube) Neurodynamical Computing at the Information Boundaries of Intelligent Systems | Cognitive Computation NSF/CIRC Community Infrastructure for Research in Computer and Information Science and Engineering (CIRC) | NSF - National Science Foundation THOR Neuromorphic Commons - Matrix: The UTSA AI Consortium for Human Well-Being Read the transcript. 0:00 - Intro 25:45 - NeuroAI Workshop - neuromorphics 33:31 - Neuromorphics and theory 49:19 - Reflections on the workshop 54:22 - Neurodynamical computing and information boundaries 1:01:04 - Perceptual control theory 1:08:56 - Digital twins and neural foundation models 1:14:02 - Base layer of computation
    --------  
    1:37:11
  • BI 199 Hessam Akhlaghpour: Natural Universal Computation
    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/ To explore more neuroscience news and perspectives, visit thetransmitter.org. Hessam Akhlaghpour is a postdoctoral researcher at Rockefeller University in the Maimon lab. His experimental work is in fly neuroscience, mostly studying spatial memories in fruit flies. However, we are going to be talking about a different (although somewhat related) side of his postdoctoral research. This aspect of his work involves theoretical explorations of molecular computation, which are deeply inspired by Randy Gallistel and Adam King's book Memory and the Computational Brain. Randy has been on the podcast before to discuss his ideas that memory needs to be stored in something more stable than the synapses between neurons, and how that something could be genetic material like RNA. When Hessam read this book, he was re-inspired to think of the brain the way he used to think of it before experimental neuroscience challenged his views. It re-inspired him to think of the brain as a computational system. But it also led to what we discuss today: the idea that RNA has the capacity for universal computation, and Hessam's development of how that might happen. So we discuss that background and story, why universal computation hasn't been discovered in organisms yet since surely evolution has stumbled upon it, and how RNA and combinatory logic might implement universal computation in nature. Hessam's website. Maimon Lab.
Twitter: @theHessam. Related papers An RNA-based theory of natural universal computation. The molecular memory code and synaptic plasticity: a synthesis. Lifelong persistence of nuclear RNAs in the mouse brain. Cris Moore's conjecture #5 in this 1998 paper. (The Gallistel book): Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. Related episodes BI 126 Randy Gallistel: Where Is the Engram? BI 172 David Glanzman: Memory All The Way Down Read the transcript. 0:00 - Intro 4:44 - Hessam's background 11:50 - Randy Gallistel's book 14:43 - Information in the brain 17:51 - Hessam's turn to universal computation 35:30 - AI and universal computation 40:09 - Universal computation to solve intelligence 44:22 - Connecting sub and super molecular 50:10 - Junk DNA 56:42 - Genetic material for coding 1:06:37 - RNA and combinatory logic 1:35:14 - Outlook 1:42:11 - Reflecting on the molecular world
    --------  
    1:49:07
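Combinatory logic, mentioned in the episode summary above, is a minimal formalism in which just two symbols, S and K, suffice for universal computation. The toy reducer below is a generic illustration of those two rewrite rules; it has nothing to do with the specifics of Hessam's RNA-based construction, and the tuple encoding and function name are my own assumptions for the sketch.

```python
# Toy combinatory logic reducer. The only rewrite rules are:
#   K x y   -> x
#   S x y z -> (x z) (y z)
# Terms are nested pairs (f, a) meaning "apply f to a"; strings are atoms.

def reduce_term(t, fuel=1000):
    """Repeatedly apply the K and S rules at the head until no redex remains."""
    while fuel > 0:
        fuel -= 1
        # K x y -> x   (term shape: ((K, x), y))
        if isinstance(t, tuple) and isinstance(t[0], tuple) and t[0][0] == "K":
            t = t[0][1]
            continue
        # S x y z -> (x z) (y z)   (term shape: (((S, x), y), z))
        if (isinstance(t, tuple) and isinstance(t[0], tuple)
                and isinstance(t[0][0], tuple) and t[0][0][0] == "S"):
            x, y, z = t[0][0][1], t[0][1], t[1]
            t = ((x, z), (y, z))
            continue
        return t
    return t

# I = S K K: the identity combinator, built from S and K alone.
I = (("S", "K"), "K")
print(reduce_term((I, "a")))  # prints: a
```

The identity combinator shows how new behavior arises purely by composition: S K K a rewrites to (K a)(K a), which rewrites to a, and the same two rules are enough to encode arbitrary computation.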
  • BI 198 Tony Zador: Neuroscience Principles to Improve AI
    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/ To explore more neuroscience news and perspectives, visit thetransmitter.org. Tony Zador runs the Zador lab at Cold Spring Harbor Laboratory. You've heard him on Brain Inspired a few times in the past, most recently in a panel discussion I moderated at this past COSYNE conference - a conference Tony co-founded 20 years ago. As you'll hear, Tony's current and past interests and research endeavors are of a wide variety, but today we focus mostly on his thoughts on NeuroAI. We're in a huge AI hype cycle right now, for good reason, and there's a lot of talk in the neuroscience world about whether neuroscience has anything of value to provide AI engineers - and how much value, if any, neuroscience has provided in the past. Tony is team neuroscience. You'll hear him discuss why in this episode, especially when it comes to ways in which development and evolution might inspire better data efficiency, looking to animals in general to understand how they coordinate numerous objective functions to achieve their intelligent behaviors - something Tony calls alignment - and using spikes in AI models to increase energy efficiency. Zador Lab Twitter: @TonyZador Previous episodes: BI 187: COSYNE 2024 Neuro-AI Panel. BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys BI 034 Tony Zador: How DNA and Evolution Can Inform AI Related papers Catalyzing next-generation Artificial Intelligence through NeuroAI. 
Encoding innate ability through a genomic bottleneck. Essays NeuroAI: A field born from the symbiosis between neuroscience, AI. What the brain can teach artificial neural networks. Read the transcript. 0:00 - Intro 3:28 - "Neuro-AI" 12:48 - Visual cognition history 18:24 - Information theory in neuroscience 20:47 - Necessary steps for progress 24:34 - Neuro-AI models and cognition 35:47 - Animals for inspiring AI 41:48 - What we want AI to do 46:01 - Development and AI 59:03 - Robots 1:25:10 - Catalyzing the next generation of AI
    --------  
    1:35:04
  • BI 197 Karen Adolph: How Babies Learn to Move and Think
    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released. To explore more neuroscience news and perspectives, visit thetransmitter.org. Karen Adolph runs the Infant Action Lab at NYU, where she studies how our motor behaviors develop from infancy onward. We discuss how observing babies at different stages of development illuminates how movement and cognition develop in humans, how variability and embodiment are key to that development, and the importance of studying behavior in real-world settings as opposed to restricted laboratory settings. We also explore how these principles and simulations can inspire advances in intelligent robots. Karen has a long-standing interest in ecological psychology, and she shares some stories of her time studying under Eleanor Gibson and other mentors. Finally, we get a surprise visit from her partner Mark Blumberg, with whom she co-authored an opinion piece arguing that "motor cortex" doesn't start off with a motor function, oddly enough, but instead processes sensory information during the first period of animals' lives. Infant Action Lab (Karen Adolph's lab) Sleep and Behavioral Development Lab (Mark Blumberg's lab) Related papers Motor Development: Embodied, Embedded, Enculturated, and Enabling An Ecological Approach to Learning in (Not and) Development An update of the development of motor behavior Protracted development of motor cortex constrains rich interpretations of infant cognition Read the transcript.
    --------  
    1:29:31


About Brain Inspired

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Podcast website

v7.1.0 | © 2007-2024 radio.de GmbH
Generated: 12/19/2024 - 3:53:05 AM