Dr Michael Reinsborough, King’s College London, discusses the leading international neuroscience projects and the growing interest of the military.
Article from SGR Newsletter no.44; online publication: 1 September 2016
Are computers a biotechnology? One place from which the future of computing and robotics technologies is being thought about is a bit unexpected – the neuroscience lab. Over the past three years, a number of large research initiatives on the brain have been announced. Following the launch of the European Union’s Human Brain Project (HBP)1 and the BRAIN Initiative in the USA,2 other large-scale cross-laboratory collaborative initiatives have begun in Japan, Australia, Israel and now China. While each project varies in its objectives, one similarity is the emphasis on using computers to draw together large amounts of experimental brain data for analysis. Not only are computers being used to think about how the brain is organised, the brain is being used to think about how computers are organised – and there is a lot of interest.
Brain or computer?
The 86 billion neurons (with up to 860 trillion synaptic interconnections) that fit neatly within the human skull use about 20 watts of power and can solve complex problems like recognising a face. In comparison, an exascale supercomputer – probably the size of a football field, and requiring the equivalent of a small coal-fired power station to run it – would be necessary to simulate this amount of neuronal interconnection.3 Most visual or other pattern-matching tasks that are necessary for moving around an environment, and quite simple for a human, are beyond the capability of advanced computers and robots. Researchers who think the brain is comparable to a computer are very interested in learning from biology. One might even satirise some computer scientists as having ‘brain envy’.
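To put those numbers side by side, the back-of-envelope arithmetic below is a sketch only: the 20 MW power draw assumed for an exascale machine is an illustrative figure, not one taken from the article.

```python
# Back-of-envelope comparison of the brain's power budget with that of an
# exascale supercomputer. The 20 MW figure for the supercomputer is an
# illustrative assumption, not a number from the article.

BRAIN_POWER_W = 20.0        # approximate power use of the human brain (watts)
EXASCALE_POWER_W = 20e6     # assumed power draw of an exascale machine (watts)
SYNAPSES = 860e12           # upper estimate of synaptic interconnections

print(f"Power ratio (machine / brain): {EXASCALE_POWER_W / BRAIN_POWER_W:,.0f}x")
print(f"Brain power per synapse: {BRAIN_POWER_W / SYNAPSES:.1e} W")
```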
Of course, increasing our knowledge of the brain is potentially beneficial. On the medical side, the research could help to improve our mental health or our treatment of brain diseases.
Lesser known, however, are the possible benefits that neuroscience might bring to computing. Two examples are better pattern recognition and greater energy efficiency. Ever since Santiago Ramón y Cajal drew the first pictures of neurons in the 1890s, scientists have tried to understand the electrical properties of our constantly changing brains. A key step was the 1952 discovery of the relationship between charge and ion exchange at the synaptic cleft between neurons. The changing relationship between neurons was summarised by Carla Shatz in 1992 as ‘cells that fire together wire together’. This neural plasticity allows the brain to strengthen connections that encode patterns in its environment.
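As a rough illustration of that principle, the short sketch below applies a simple Hebbian update rule to a toy network; the activity pattern, learning rate and number of repetitions are arbitrary choices for illustration.

```python
import numpy as np

# Minimal sketch of Hebbian plasticity: connections between units that are
# active at the same time are strengthened ("fire together, wire together").
# The pattern, learning rate and repetition count are arbitrary illustrations.

pattern = np.array([1, 0, 1, 1, 0], dtype=float)  # a recurring activity pattern
weights = np.zeros((5, 5))                        # synaptic strengths between units
learning_rate = 0.1

for _ in range(20):                               # repeated exposure to the pattern
    weights += learning_rate * np.outer(pattern, pattern)

np.fill_diagonal(weights, 0)                      # ignore self-connections
print(weights)  # strong links appear only between units that fired together
```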
This same principle is emulated when building neuromorphic computer chips – chips that mimic the decentralised memory and unusual firing patterns of the brain. Since much of the energy lost in computing is spent moving data between the memory store and the central processor, a decentralised structure – with memory held in or near the firing units that carry out simple calculations – can be much more energy efficient. This is crucial for supercomputers.
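To make the idea concrete, the sketch below models a single leaky integrate-and-fire neuron, the kind of spiking unit neuromorphic chips implement, with its synaptic weights held locally alongside its state; all constants are illustrative assumptions rather than parameters of any particular chip.

```python
import numpy as np

# Minimal sketch of a leaky integrate-and-fire neuron, the kind of spiking
# unit neuromorphic chips implement, with its synaptic weights held locally
# alongside its state rather than in a distant central memory.
# All constants are illustrative.

rng = np.random.default_rng(1)

weights = rng.uniform(0.0, 0.5, size=10)   # local synaptic weights for 10 inputs
potential = 0.0                            # membrane potential
leak = 0.9                                 # decay of the potential per time step
threshold = 1.0                            # firing threshold

for t in range(50):
    spikes_in = (rng.random(10) < 0.2).astype(float)  # which inputs spike now
    potential = leak * potential + weights @ spikes_in
    if potential >= threshold:             # the neuron fires and resets
        print(f"t={t}: spike")
        potential = 0.0
```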
Military interest
While most funding for the leading brain projects comes from civilian (especially medical) research budgets, it is important to realise that there is also military interest. In the USA, the Defense Advanced Research Projects Agency (DARPA) is one of several agencies providing the overall budget for the BRAIN Initiative. DARPA’s goals relate primarily to veterans’ post-combat mental health, but there is also interest in enhancing the combat effectiveness of soldiers. In the EU, all funding for the HBP comes from a science budget earmarked to develop innovation in, and improve the competitiveness of, the EU computer industry. The HBP specifically does not accept military research funding.
Many advances in science and neuroscience (regardless of how they were funded) have resulted in applications with both military and civilian uses. For example, shortly after acetylcholine was discovered to be a neurotransmitter, the G-series nerve agents (including sarin) were discovered during civilian research into pesticides. Other civilian discoveries led to the more deadly V-series, as well as the development of ‘incapacitants’ (also potentially lethal). Early warnings from researchers studying neurotoxicity helped raise the alarm. Work since then has limited the use of these agents under international law, but with very poor verification and enforcement mechanisms.
There are parallels here with current research in artificial intelligence. ‘Brain-like machines’ are likely to have numerous civilian applications – for example, self-driving cars and medical informatics. Their development may also, directly or indirectly, lead to complex autonomous weapons systems and new possibilities for intelligence gathering and other surveillance.
While some science fiction imaginings of artificial intelligence are either not possible or a long way off, there are still clearly many serious causes for concern.
The International Committee for Robot Arms Control4 and other initiatives are presently pushing for international treaties to prevent advances in drone warfare. But, in addition to the vigilance of individual scientists, we must continue to challenge commercial, military and government institutions to be open and accountable.
Dr Michael Reinsborough is a Research Associate at King’s College, London, where he contributes to the Human Brain Project.
References
1. https://www.humanbrainproject.eu/discover/the-project/overview
2. http://www.braininitiative.nih.gov/about/newg.htm
3. An exaflop is a billion billion floating point calculations per second, a thousand-fold increase in the performance of the first petascale computer developed in 2008.