Neural Control of Movement Satellite Meeting

Join us on April 28 for the NCM Satellite Meeting, “New frontiers at the intersection of cognition and motor control.” The satellite meeting will be held at the Westin Playa Bonita Hotel in advance of the annual Society for the Neural Control of Movement Meeting.

New frontiers at the intersection of cognition and motor control

The satellite is organized by:

Sam McDougle, Yale University
Dan O’Shea, Stanford University
Saurabh Vyas, Columbia University

The NCM conference has long been a venue for showcasing exemplary studies of intelligent behaviors. Many approaches have focused on how the sensorimotor system learns and generates movement. Separate studies have focused on how cognitive areas implement abstract thought and logical reasoning to guide actions over long timescales. Critically, despite the interconnectedness of these systems, current research rarely explores their interaction. How might we go about identifying and understanding the distributed circuits that implement cognitive-motor computations that convert thought into action? Perhaps the recent advent of large-scale neural measurement and manipulation technology will play a central role. However, are observation and causal manipulation sufficient, or are there still conceptual hurdles to overcome? Is this just a data science problem, or a theory problem? What does a more holistic understanding of the neural control of movement even look like?

There is a growing consensus that motor control and cognition are inherently intertwined, accompanied by a recognition that our field must focus on understanding their interaction within naturalistic behavior. However, the methods to achieve this remain unclear. We believe a central cause of this uncertainty is the absence of a unifying goal—a guiding beacon to aim for and a benchmark against which to measure progress. In essence, what will it mean to understand motor cognition? Will a deep learning model capable of goal-driven manipulation across diverse contexts suffice? Will we require a structured algorithmic understanding based on well-defined cognitive principles? Should we work backwards from neural recordings, reverse-engineering mechanisms from neural population dynamics, brain network interactions, and causal perturbations? Or should we proceed forward from tasks with well-understood solutions and clear hypotheses? Against this backdrop, the primary aim of this satellite meeting is to establish moonshot-style goals for our field in tackling motor cognition, setting forth a bold, measurable set of research objectives. Our speakers, drawn from robotics, cognitive science, and neurophysiology, will bring diverse perspectives on what progress will look like and what research programs are needed to achieve it. Each panel discussion will center on a “challenge question.” We will ask panelists to broadly explore what they believe should be a measurable overarching goal for the study of their topic (e.g., long-range planning), and to offer perspectives on how exactly they would go about achieving it. These discussions should inspire innovative research collaborations, elucidate a set of open problems, and guide the emerging scholars in our community who are now asking, “What’s our field’s big question for the next decade?”

The satellite meeting will focus on three broad areas of motor cognition that address many of these timely and critical questions. The day will be organized into three sessions: 1) long-range cognitive-motor planning, 2) skill learning and performance and its association with cognitive-motor flexibility, and 3) imitation and few-shot learning. Each session will include 3–4 invited talks and a carefully designed panel discussion. To facilitate a broad discussion, each panel will be composed of experimental neurobiologists, engineers (e.g., roboticists), theoreticians, and cognitive psychologists. The satellite will have one opening and one closing keynote; one will focus on the theory and philosophy of mind while the other will focus on systems motor neuroscience.

 

Tentative Satellite Meeting Program

*Please note: the program will be updated as speakers are confirmed.

08:00 – 08:30

Registration


08:30 – 08:45

Opening remarks


08:45 – 09:30

Opening Keynote: Daniel Wolpert, Columbia University

Computational principles underlying the learning of sensorimotor repertoires

Context is widely regarded as a major determinant of learning and memory across numerous domains, including classical and instrumental conditioning, episodic memory, economic decision-making, and motor learning. However, studies across these domains remain disconnected due to the lack of a unifying framework formalizing the concept of context and its role in learning. I will present a principled theory of motor learning based on the key insight that memory creation, updating, and expression are all controlled by a single computation – contextual inference. Unlike dominant theories of single-context learning, our repertoire-learning model accounts for key features of motor learning that had no unified explanation and predicts novel phenomena, which we confirm experimentally. Although this model was developed for motor learning, the principles underlying it are domain general. Our results suggest that contextual inference is a key principle underlying how a diverse set of experiences is reflected in behavior.
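
For readers less familiar with the idea, the sketch below is a deliberately minimal, assumed illustration of contextual inference (not the speaker’s published model): a learner keeps several context-specific motor memories, infers how probable each context is from the latest outcome, and both updates and expresses those memories in proportion to the inferred probabilities. All class and parameter names are hypothetical.

```python
import numpy as np

class ContextualLearner:
    """Toy contextual-inference learner: memory creation, updating, and
    expression are all governed by inferred context probabilities."""

    def __init__(self, lr=0.2, obs_noise=1.0, stay_prob=0.9):
        self.memories = np.array([0.1, -0.1])   # per-context perturbation estimates
        self.prior = np.full(2, 0.5)            # current belief over contexts
        self.lr, self.obs_noise, self.stay_prob = lr, obs_noise, stay_prob

    def motor_output(self):
        # Expression: memories are mixed according to the context probabilities.
        return self.prior @ self.memories

    def update(self, observed_perturbation):
        # Inference: posterior probability of each context given the outcome.
        lik = np.exp(-0.5 * ((observed_perturbation - self.memories) / self.obs_noise) ** 2)
        responsibility = self.prior * lik
        responsibility /= responsibility.sum()
        # Updating: each memory learns in proportion to its responsibility.
        self.memories += self.lr * responsibility * (observed_perturbation - self.memories)
        # Carry the belief forward, allowing a small chance of a context switch.
        self.prior = self.stay_prob * responsibility + (1 - self.stay_prob) / len(self.memories)


# Example: alternating perturbation blocks (+10, then -10) end up stored as two
# separate memories rather than overwriting a single estimate.
learner = ContextualLearner()
for trial in range(40):
    perturbation = 10.0 if (trial // 10) % 2 == 0 else -10.0
    learner.update(perturbation)
print(learner.memories, learner.motor_output())
```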


09:30 – 11:10

Session 1

Theresa Desrochers, Brown University

The control of cognitive and motor sequences in frontal and parietal cortex

Sequential tasks are an integral component of the daily lives of humans and other species. These sequences are multifaceted: they can contain a set of abstract tasks (e.g., when making a meal: cut vegetables, heat the pan) that are independent of the precise motor actions needed to carry them out, and they can also include a series of the precise muscle contractions needed to produce a motor sequence (e.g., picking a fruit off a tree). Presumably, control processes operate at the abstract level and monitor progress throughout sequences. Despite this presumption, the vast majority of sequence studies have focused on motor actions (e.g., a series of joystick movements, eye movements, or finger taps) while neglecting the control and sequential monitoring processes that occur simultaneously. We have made progress in understanding the neural bases of nonmotor sequential tasks in human and nonhuman primates, as well as their similarities, differences, and interactions with the execution of motor sequences. Using fMRI and transcranial magnetic stimulation (TMS) in humans and awake fMRI and electrophysiology in nonhuman primates, we show that the lateral prefrontal cortex represents sequential information across sequence types and species. Further, these representations share a common dynamic: increasing activity across individual sequences (“ramping”). New studies and collaborative work are investigating these dynamics in the parietal and other cortical areas during motor sequences. Together, these studies provide a unique view of complex cognitive processes across species, take a step toward understanding functional homology, and provide insight into sequential processes in health and disorder.

Andrew Pruszynski, Western University

Future movement plans during sequential reaching

Whether tying shoes or playing piano, real-world actions often comprise a rapid sequence of movements that unfold continuously and cannot be fully planned in advance. How the brain plans multiple future movements while an overall action is ongoing remains largely unknown. In this talk, I will present a new behavioral paradigm that addresses this question by manipulating how many future reach targets participants can see ahead of time (i.e. the planning horizon) while precisely controlling the execution of the ongoing reach. I will show that people can plan at least two future movements while simultaneously executing the ongoing reach. This ability occurs without training, from the very first trial, and eye tracking reveals that future target information is extracted parafoveally. I will then show that planning processes for future movements are not fully independent. Specifically, we find that (1) correcting for an unexpected change in the position of the current reach target is slower when more future reaches are planned and that (2) the curvature of the current reach is modified based on the next reach, but only when their planning processes overlap in time. Finally, I will share preliminary neural data from macaques that sheds light on how these future movement plans are organized in the primate motor circuit. Consistent with previous work, when only one future target is visible, activity in primary motor cortex (M1) and dorsal premotor cortex (PMd) is largely related to the current movement, with next-target information only emerging near the end of the current movement. Strikingly, when two future targets are visible, next-target information is present even in the planning phase of the current movement. The geometry of this planning activity appears to support efficient sequence production by selecting a subset of potential movement trajectories which optimize the current movement based on the biomechanical requirements of future movements.

Hanna Hillman, Yale University

Exploring Motor Working Memory

Motor Working Memory (WM) allows the brain to temporarily store, update, and manipulate movement-related information. However, motor WM remains largely unexplored despite WM’s integral role in motor planning, adaptation, and long-term learning. Here, we synthesize recent and ongoing research on motor WM to provide new insights into this understudied cognitive system. We designed a behavioral reaching paradigm to distinguish effector-specific (i.e., proprioceptive) from effector-independent (i.e., abstract) content stored in motor WM. Without visual feedback, participants encoded one or more reaching movements and later recalled them by retracing their trajectories with either the same or opposite arm. Same-arm recall allowed access to the full range of motor WM information, whereas opposite-arm recall relied solely on effector-independent information.

Unexpectedly, participants did not always perform better when recalling a movement with the same arm that encoded it. Using this framework and variations of the task, we investigated motor WM under different cognitive loads, time delays, and interference conditions. Evidence from multiple experiments indicates that subsequent reaching movements (i.e., proprioceptive interference) degrade effector-specific representations in motor WM while leaving effector-independent representations unaffected. Additionally, we explored the potential effects of visuospatial and mental rotation dual-tasks on effector-independent WM. Increasing visuospatial WM load during the motor WM task affected neither same- nor opposite-arm recall, suggesting that visual and nonvisual spatial information can be stored separately in WM. However, when a mental rotation task was introduced between motor encoding and recall, we observed a rotation magnitude-dependent decrease in opposite-arm (effector-independent) accuracy, which may reflect overlapping cognitive processes involved in spatial transformation and abstract motor memory. To further investigate how these content distinctions in motor WM relate to motor learning, we conducted a correlational study using our motor WM task alongside a visuomotor rotation task. We found positive relationships between effector-specific motor WM and implicit motor learning, as well as between effector-independent motor WM and explicit motor learning. These results suggest selective functional and/or anatomical parallels and provide insight into how motor WM supports adaptation and long-term motor memory. Taken together, our findings bridge WM and motor cognition research by identifying and characterizing effector-specific and effector-independent components in motor WM. This work challenges WM models that subsume abstract motor information into visuospatial WM and highlights the need to revisit and fully integrate motor WM into models of cognitive motor processes.

Panel Discussion


11:10 – 11:30

Coffee break


11:30 – 13:10

Session 2

Emily Oby, Queen’s University

Dynamical constraints on neural population activity

The manner in which neural activity unfolds over time is thought to be central to sensory, motor and cognitive functions in the brain. Network models have long posited that the brain’s computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain–computer interface to challenge monkeys to violate the naturally occurring time courses of neural population activity that we observed in the motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.

Nuo Li, Duke University

A combinatorial neural code for long-term motor memory

In our lifetime we stably retain our motor repertoire. How are learned actions stored in motor memory? Moreover, how are existing motor memories maintained as we continuously acquire new motor skills? To explore these questions, we used automated home-cage training to establish a continual learning paradigm in mice. Mice learned to perform directional licking in multiple tasks for up to 6 months. We combined this paradigm with chronic two-photon imaging to track motor cortex activity across continual learning. Learned directional licking actions are evoked by distinct patterns of preparatory activity. Within the same task context, activity driving directional licking was stable over time with little representational drift. When learning new task contexts, new preparatory activity emerged to drive the same licking actions. Learning created parallel new motor memories instead of modifying existing representations. Re-learning to make the same actions in the previous task context re-activated the previous preparatory activity, even months later. Continual learning of new task contexts kept creating new preparatory activity patterns. Context-specific memories, as we observed in the motor system, may provide a solution for stable memory storage throughout continual learning.

Maria Herrojo Ruiz, University of London

Performance anxiety is associated with biases in learning from reward and punishment in skilled performers

Previous research has linked anxiety disorders to altered learning processes, with clinical and subclinical anxiety associated with biases towards negative outcomes. Despite this evidence, learning biases in highly trained individuals with performance anxiety (PA) are not well understood, partly due to limited methodological integration. Across three experiments (N = 95 pianists), we combined hierarchical Bayesian beta regression, a generative model of motor variability, and electroencephalography to examine differential learning from reward and punishment in skilled performers. Pianists inferred hidden target dynamics from melodies using graded reinforcement feedback in two conditions: reward (scores from 0 to 100) and punishment (scores from –100 to 0). Contrary to our hypothesis, pianists with higher PA levels learned faster from rewards, while those with lower PA relied more on punishment feedback to refine their learning. These differences in learning speed were explained by changes in the regulation of motor variability in response to lower scores—specifically in keystroke dynamics, the task-relevant variable. A second experiment replicated these findings. At the neural level, frontocentral theta activity (4–7 Hz) encoded unsigned differences in feedback and predicted upcoming motor variability, helping to explain learning biases. The findings suggest that theta signals the need for behavioural adjustment, particularly under reward, where greater theta amplitude predicted increased motor variability in higher-PA individuals. The findings support the role of theta in prefrontal control and align with evidence of elevated frontal-midline theta in high-anxiety individuals, especially in response to errors requiring adjustment—here reflected in greater motor variability following poorer performance under reward. In a final experiment, we reduced task uncertainty by presenting four dynamics contour options, including the unknown correct one. This allowed us to evaluate the separate effects of reinforcement and PA on categorical decision-making and decisions made on a continuous scale. Learning biases were not explained by either decision-making component alone but rather by their combined effect. In this setting, the interaction between PA and reinforcement condition reversed: higher-PA pianists learned faster from punishment, while those with lower PA learned faster from reward. These patterns were driven by the effect of outcomes on motor variability, with higher-PA individuals showing greater adaptation to poor outcomes through increased variability under punishment. Collectively, our findings indicate that skilled performers with increased predisposition to PA learn faster from punishment in low-uncertainty environments but increasingly rely on reward as uncertainty escalates. These effects are mediated by reinforcement-driven motor variability and by changes in frontocentral theta activity that encodes graded feedback and signals exploratory behavioural control.
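
As a rough illustration of the modeling approach mentioned above, the sketch below fits a simple (non-hierarchical) beta regression to bounded performance scores, so that the fitted slope stands in for learning speed in a condition. It is an assumed, simplified stand-in for the study’s hierarchical Bayesian model; all variable names and the synthetic data are hypothetical.

```python
import numpy as np
from scipy import stats, optimize

def neg_log_lik(params, trial, score):
    """Beta regression with a logit link: mean score rises (or falls) across trials."""
    b0, b1, log_phi = params
    mu = 1.0 / (1.0 + np.exp(-(b0 + b1 * trial)))     # expected score on each trial
    phi = np.exp(log_phi)                             # precision of the Beta distribution
    return -stats.beta.logpdf(score, mu * phi, (1 - mu) * phi).sum()

# Synthetic example: scores (rescaled to the 0-1 interval) improve over 60 trials.
rng = np.random.default_rng(1)
trial = np.arange(60) / 60.0
mu_true = 1.0 / (1.0 + np.exp(-(-1.0 + 3.0 * trial)))
score = rng.beta(mu_true * 20, (1 - mu_true) * 20)

fit = optimize.minimize(neg_log_lik, x0=[0.0, 0.0, 1.0], args=(trial, score))
print(fit.x)   # fitted intercept, learning slope, and log-precision
```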

Panel Discussion


13:10 – 14:30

Lunch


14:30 – 16:10

Session 3

Adrian Haith, Johns Hopkins University

What is the role of cognition in motor skill learning?

It has long been recognized that cognition plays an important role in motor skill learning, but the exact nature of cognitive contributions to learning is not clear. Work in visuomotor adaptation has suggested that cognition contributes to motor learning through action selection – directly identifying actions that will lead to success. These cognitively discovered actions can then be automatized through repetition. I will argue that this view of cognition’s role in learning is unlikely to scale to more complex skills learned over longer timescales. I will show evidence that, in more complex skill tasks, cognition plays a more minimal role. I will argue that, instead, most skill learning occurs through model-free reinforcement learning, rather than through cognitive selection of actions. Building on this theory, I will speculate on how we might reimagine the role of cognition in motor skill learning.

Randy Flanagan, Queen’s University

Real-world object manipulation tasks: where action meets cognition

Many of the tasks we perform on a daily basis, such as making and serving coffee, involve manually interacting with multiple objects in what has been referred to as “reachable space”. Skillful performance of such tasks requires the ability to accurately predict the weights of the objects being acted upon. In addition, successful task performance requires the ability to form a representation of reachable space, which supports moving the hand to target objects while avoiding obstacles. I will begin my talk by reviewing behavioural and neuroimaging work investigating how the brain represents the mechanical properties of objects (e.g., weight) in memory. I will then discuss new work examining how people represent reachable space and how these representations change with experience. Using a novel haptic maze task, inspired by research on spatial navigation, and a hybrid model that combines model-based (MB) and model-free (MF) reinforcement learning methods, we can assess the contributions of MB and MF learning to the encoding of reachable space. Our findings suggest that the contribution of MB learning is stronger than MF learning when first moving in a given maze, but that the contribution of MF learning increases with experience moving within the maze, gradually surpassing the contribution of MB learning. Finally, I will contrast reinforcement learning of reachable space with reinforcement learning of navigable space.
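
To make the hybrid-model idea concrete, here is a minimal, assumed sketch (not the speakers’ implementation) of a combined model-based (MB) and model-free (MF) learner for a small discrete maze: MF values are learned by temporal-difference updates, MB values come from planning over a learned transition model, and a mixing weight w, fit to behavior, quantifies each system’s contribution. All names and parameters are hypothetical.

```python
import numpy as np

n_states, n_actions = 6, 4
alpha, gamma, w = 0.3, 0.9, 0.6                           # w: MB weight, fit per participant

q_mf = np.zeros((n_states, n_actions))                    # model-free values (TD learning)
trans_counts = np.ones((n_states, n_actions, n_states))   # learned transition model
reward_est = np.zeros(n_states)                           # learned reward per state

def q_model_based(n_sweeps=50):
    """Value iteration on the learned model yields the model-based action values."""
    probs = trans_counts / trans_counts.sum(axis=2, keepdims=True)
    v = np.zeros(n_states)
    for _ in range(n_sweeps):
        q = probs @ (reward_est + gamma * v)              # shape: (states, actions)
        v = q.max(axis=1)
    return q

def choose(state, beta=3.0):
    """Softmax choice over the w-weighted mixture of MB and MF action values."""
    q = w * q_model_based()[state] + (1 - w) * q_mf[state]
    p = np.exp(beta * (q - q.max()))
    return np.random.choice(n_actions, p=p / p.sum())

def update(state, action, next_state, reward):
    # MF update: temporal-difference learning on the experienced transition.
    td_error = reward + gamma * q_mf[next_state].max() - q_mf[state, action]
    q_mf[state, action] += alpha * td_error
    # MB update: refine the transition and reward model.
    trans_counts[state, action, next_state] += 1
    reward_est[next_state] += alpha * (reward - reward_est[next_state])

# Tiny demo with placeholder dynamics, just to exercise the learner.
rng = np.random.default_rng(0)
state = 0
for _ in range(200):
    action = choose(state)
    next_state = rng.integers(n_states)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    update(state, action, next_state, reward)
    state = next_state
```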

Joonhee (Leo) Lee, Johns Hopkins University

Cognitive maps of sensorimotor programs

Humans have an exceptional ability to link movements with external cues. For example, playing the violin requires adjustment of the force and timing of bow movements with the sheet music notes. However, how the brain relates sensorimotor features with external cues remains unclear. One potential explanation lies in the cognitive map theory, which posits that the brain constructs mental representations of spatial and abstract environments to guide memory formation and future action. Cognitive maps are neurally instantiated in the brain’s memory network, which consists of the hippocampus, retrosplenial, and entorhinal cortex. Notably, grid cells in the entorhinal cortex spatially represent environments through receptive fields arranged in a regular, hexagonal pattern. This study investigated whether cognitive maps support sensorimotor representations and how memory and sensorimotor brain regions interact to link external cues with motor actions. Twenty-four participants underwent a multi-day study, learning to associate isometric hand-grip exertions with external cues that varied in force and time. The cues were organized in a 2D force-time space. After training, participants performed a motor-space navigation task while undergoing functional magnetic resonance imaging. The task consisted of two phases: (1) localization, where participants performed an exertion and matched the exertion’s force and time to the corresponding cue, and (2) navigation, where participants mentally computed the distance between the localized cue and a target cue in the force-time space. During navigation in the force-time space, activity in the entorhinal cortex exhibited six-fold periodicity, consistent with a hexagonal grid-like code. This finding extends the entorhinal cortex’s role in encoding bodily states such as position and velocity to include force and time, which are essential features for interacting with the physical world. Using dynamic causal modeling, we found that during localization, the retrosplenial cortex integrated inputs from the hippocampus and sensorimotor regions (e.g., primary motor cortex, supplementary motor area). Additionally, pattern component modeling revealed that the primary motor cortex and cerebellum prioritized force over time when encoding exertions, suggesting a warped representation of force-time space. This was consistent with participants’ subjective ratings, which revealed that changes in force levels felt more effortful than changes in time levels in the force-time space. In contrast, the retrosplenial cortex had a balanced representation of force and time, reflecting the demands of the navigation task, which required participants to evaluate both dimensions equally. These results suggest that different brain regions have different representations of force-time space, which are integrated to allow for the flexible navigation of multidimensional sensorimotor environments. Together, these findings reveal that the human brain encodes sensorimotor information within cognitive maps. Memory and sensorimotor regions interact to associate precise motor actions with external cues, with the entorhinal cortex representing higher-order bodily states and the retrosplenial cortex acting as a hub for integrating and balancing sensorimotor information.
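
For orientation, six-fold (hexadirectional) periodicity is typically tested by regressing activity on the sine and cosine of six times the trajectory direction through the abstract 2D space. The sketch below is an assumed, simplified illustration of that analysis (not the study’s pipeline); variable names and the synthetic data are hypothetical.

```python
import numpy as np

def periodic_modulation(activity, angles, n_folds=6):
    """Amplitude of n-fold directional modulation in trial-wise activity.

    activity : trial-wise responses (e.g., entorhinal BOLD estimates)
    angles   : trajectory direction of each trial, in radians
    """
    X = np.column_stack([
        np.ones_like(angles),           # intercept
        np.cos(n_folds * angles),
        np.sin(n_folds * angles),
    ])
    beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
    return np.hypot(beta[1], beta[2])   # amplitude of the periodic component

# Synthetic example: activity modulated with 60-degree periodicity should show a
# larger amplitude at 6-fold than at control periodicities (4, 5, 7, 8).
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
activity = 1.0 + 0.5 * np.cos(6 * (angles - 0.3)) + rng.normal(0, 0.2, 200)
print({k: round(periodic_modulation(activity, angles, k), 3) for k in (4, 5, 6, 7, 8)})
```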

Panel Discussion


16:10 – 16:30

Coffee break


16:30 – 17:15

Closing Keynote: Myrto Mylopoulos, Carleton University

From Intention to Action: A Theoretical Treatment of the Interface Between Cognition and Motor Control

Interactions between cognition and motor control display two distinctive, yet seemingly contradictory, features: (i) remarkable smoothness and flexibility, as exemplified by skilled actions in both everyday and elite contexts (e.g., driving a car; Serena Williams on the tennis court), and (ii) significant limitations and constraints, evident through everyday action slips and findings from empirical work on sensorimotor adaptation. In this talk, it is proposed that this apparent tension can be resolved, at least in part, by examining fundamental architectural differences between cognitive and motor systems and the distinctive representational formats characteristic of each. By exploring how these underlying structures shape dynamic cognitive-motor interactions, the aim is to contribute to a unified, coherent framework for understanding the cognition–motor interface.


17:15 – 17:30

Closing remarks

Thank you to our sponsors, exhibitors, and supporters

Confirmed Speakers

Keynotes

Opening Keynote:
Daniel Wolpert, Columbia University

Closing Keynote:
Myrto Mylopoulos, Carleton University

Invited Speakers

Theresa Desrochers, Brown University
Randy Flanagan, Queen’s University
Adrian Haith, Johns Hopkins University
Nuo Li, Duke University
Emily Oby, Queen’s University
Andrew Pruszynski, Western University