Join us at the Society for the Neural Control of Movement Annual Meeting to hear our Distinguished Career Award Winner and the 2020 & 2021 Early Career Award Winners deliver keynote presentations.
Distinguished Career Award Winner Presentation
Thursday April 22, 2021
Michael E. Goldberg
David Mahoney Professor of Brain and Behavior, Departments of Neuroscience, Neurology, Psychiatry, and Ophthalmology, Columbia University
Dr. Michael E. (Mickey) Goldberg is David Mahoney Professor of Brain and Behavior in the Departments of Neuroscience, Neurology, Psychiatry, and Ophthalmology at the Columbia University College of Physicians and Surgeons. He is a graduate of Harvard College and Harvard Medical School. He is a pioneer in the technique of single-neuron recording in awake, behaving monkeys to understand the physiology of cognitive processes, and he has made major contributions to our understanding of the mechanisms underlying cognitive processes such as the generation of eye movements, attention, and spatial perception. Among his many honors are election to the National Academy of Sciences of the United States, fellowship in both the American Academy of Arts and Sciences and the American Association for the Advancement of Science, the Heller Lecture in Computational Neuroscience of the Hebrew University of Jerusalem, Israel, and the Patricia Goldman-Rakic Award for Cognitive Neuroscience of the Brain and Behavior Research Foundation. He is a past president of the Society for Neuroscience and a former trustee of the Neural Control of Movement Society. In addition to his scientific work, he is an active clinical neurologist at the Columbia campus of the New York-Presbyterian Hospital and the 2006 awardee of the Lewis P. Rowland Award for the Teaching of Clinical Neurology at Columbia University.
“Corollary discharge and oculomotor proprioception: Two strategies for spatially accurate movement”
In order to link perception and action, the brain must have a spatially accurate representation of the visual world, so it can generate actions appropriate to the objects it perceives. The only way visual information enters the brain is through the retina, which moves constantly between brief fixations. The retinal location of targets for action is therefore not, by itself, useful for calculating movements to acquire those targets. Two strategies have been postulated for calculating the accurate location of movement targets. Helmholtz suggested that the brain knows the command to move the eye, and can therefore use that motor command to update the sensory representation; this feedback from the motor system to the sensory system is now known as corollary discharge. Sherrington suggested that the brain can calculate accurate target location if it knows the position of the eye in the world, and that the first step in this process is knowing the position of the eye in the orbit; he postulated that this signal arose from oculomotor proprioceptors. The lateral intraparietal area (LIP) is a brain region important in choosing targets for saccadic eye movements, and it solves the spatial accuracy problem using both Helmholtz's and Sherrington's strategies: a rapid, relatively accurate corollary discharge mechanism, and a slower but more accurate proprioceptive mechanism, which is dependent upon the sensory representation of eye position in Area 3a of the primary somatosensory cortex.
Early Career Award Winner Presentations
2020 Early Career Award Winner
Tuesday April 20, 2021
Tamar Makin
Professor of Cognitive Neuroscience, University College London
Tamar Makin is a Professor of Cognitive Neuroscience at University College London, UK, and leader of the Plasticity Lab (www.plasticity-lab.com). Her main interest is in understanding how our body representation changes in the brain (brain plasticity). Her primary model for this work is the study of individuals with hand loss. Tamar graduated from the Brain and Behavioural Sciences programme at the Hebrew University of Jerusalem in 2009. She was then awarded several career development fellowships to establish her research programme on brain plasticity in amputees at the University of Oxford, first as a Research Fellow and later as a Principal Investigator. In 2016 she joined the faculty of UCL to continue this work. She is currently supported by the European Research Council (Starting Grant) and the Wellcome Trust (Senior Research Fellowship).
“Homo Cyberneticus: Neurocognitive embodiment of artificial limbs”
Technology is progressing at a remarkable pace, providing us with wearable robotic technologies to substitute, and even supplement, our own limbs, freeing humans from the biological constraints of their own bodies. But can the human brain embody these exciting technologies as new body parts? I will describe very recent neuroimaging and behavioural studies we have been conducting in amputees who use prosthetic limbs to substitute their missing hand function. I will then present ongoing studies examining what happens to people's (intact) biological body representation after they are provided with robotic augmentation – a Third Thumb. We find that although brain resources originally devoted to body representation can be utilised to represent an artificial limb, the representational features of a prosthesis do not mimic those of a biological hand. These studies provide a first glimpse into the neurocognitive opportunities and limitations of artificial limb embodiment.
2021 Early Career Award Winner
Wednesday April 21, 2021
Ilana Nisky
Associate Professor of Biomedical Engineering, Ben-Gurion University of the Negev
Prof. Ilana Nisky received all her academic degrees in Biomedical Engineering from Ben-Gurion University of the Negev, where she is now an Associate Professor of Biomedical Engineering. After a postdoctoral fellowship at Stanford University as a Marie Curie International Outgoing Fellow, she returned to BGU and established the Biomedical Robotics Lab. Recently she also joined the Negev Translational Neurorehabilitation Lab at Adi-Negev Nahalat Eran as the principal investigator for rehabilitation with haptic interfaces. She is the recipient of the 2019 IEEE Robotics and Automation Society Early Academic Career Award, the Alon Fellowship of the Israeli Council for Higher Education, and the Toronto Award for excellence in research, and was selected as one of 40 promising young Israelis by TheMarker magazine. Her research interests include human motor control, haptics, robotics, human and machine learning, teleoperation, and robot-assisted surgery. Nisky and her students apply neuroscience theories about human sensorimotor control, perception, adaptation, learning, and skill acquisition to the development of human-operated medical and surgical robotic systems. They also use robots, haptic devices, and other mechatronic devices as a platform to understand the human sensorimotor system in real-life tasks like surgery, in virtual tasks like virtual reality games or surgical simulation, and in rehabilitation following neurological disorders. She hopes that this research will improve the quality of treatment for patients, facilitate better training of surgeons, advance the technology of teleoperation and haptics, and advance our understanding of the brain.
“Modeling human sensorimotor control for better control of surgical robots”
During everyday interaction with the external world, for example during surgery, our brain gracefully deals with a task that control engineers find very challenging: closed-loop control of movement and contact forces using outdated and noisy information that arrives from multiple sensors. Robot-assisted minimally invasive surgery (RAMIS), in which a surgeon manipulates a pair of joysticks that teleoperate instruments inside a patient's body, requires precise control of movement, object and tissue manipulation, and perception. Despite many advantages for both the patient and the surgeon, the full potential of RAMIS and other teleoperation applications is yet to be realized. Two of the major progress-impeding gaps, the lack of touch feedback and limited knowledge of how to measure skill and optimize training, could be bridged by applying models of human sensorimotor control. We use behavioral studies to investigate how the sensorimotor system integrates information across time, space, and modalities for movement, object manipulation, and perception, and how the system changes following adaptation and skill acquisition. I will present our recent results on the integration of tactile and kinesthetic information during interaction with virtual objects, and on the acquisition of RAMIS skill in dry-lab tasks and interaction with real objects.