Deloitte estimates that global spending on AI technology will exceed $78 billion (USD) by 2022. To date, 23 countries have created national artificial intelligence programs, each with its own focus but all sharing the expectation that the technology will drive innovation. The result will likely redefine what is possible in our social, political, and economic organizations, supporting more sustainable and resilient societies. However, many continue to speculate about how disruptive the technology will be, with no way of knowing what limitations it will have to overcome.
One of the challenges in the future of AI is collaboration. At present, there are few opportunities to build partnerships across sectors, disciplines, and borders. While some countries have research institutes and partnerships between governments, universities, and the private sector, many of those working in artificial intelligence today are doing their research and development independently.
Recently, physicist Timothy Koeth received a uranium cube traced back to Germany during WWII, when it had been in the possession of Nazi scientists attempting to build a nuclear reactor. Subsequent research revealed that German scientists held enough uranium cubes among them during the war to have built a nuclear reactor had they pooled their resources – fortunately, something they never realized at the time. Uranium was a scarce resource, and the supply was so spread out among different teams that none had enough to succeed on its own.
While not completely analogous, researchers working in artificial intelligence could be facing a similar Prisoner’s Dilemma: teams and institutions are not collaborating with one another, even though it may be in everyone’s best interest to do so.
The discoveries and innovations made possible by AI research and technology could be stymied if teams do not collaborate. Researchers tend to be overprotective of their data, findings, and mistakes. Not everyone in the field is reading academic journals, and authors may overlook the value their failed experiments hold for other researchers. Open data and collaboration among diverse teams may lead to greater discoveries much more quickly than a competitive model in which individuals or small teams work in secret.
Another challenge for the future of AI is public perception. For the general public, images and notions from science fiction tend to enter the conversation quickly. More realistically, people appear to be as afraid of a robot uprising as they are of losing their jobs to automation. While some jobs will be replaced by automation, it is just as likely that many jobs will not be replaced but enhanced by the technology, which may even lead to the development of entirely new positions. We cannot fully anticipate what changes will come with systems that make work more efficient.
Achieving that vision will require some failures and adjustments along the way. There has been considerable research on bringing diverse perspectives into programming: who codes matters just as much as what they code. The goal is to confront the unconscious biases that arise in a system where the majority of people share similar backgrounds and cultures. Having diverse developers, though, does not solve every problem related to human behaviour.
At the 2019 Canadian Science Policy Conference, a popular theme across several panels was the importance of bringing social scientists into AI development. Validating training data with subject matter experts is important, but it is only a first step toward pushing the technology to do better.
Social scientists are adept at assessing different perspectives and perceptions, and at inferring what values and issues matter to different groups and why. Because AI will have such a significant impact on the world, it is important that we ask the right questions about its ethical implications to ensure that the adoption of the technology is positive for all. That means ensuring that no group is harmed by overlooked biases in the data and that results are not manipulated by anyone.
The development of artificial intelligence will need people who study human behaviour. They will need to understand expectations, compliance, and how people react to new information. Fields like anthropology, sociology, economics, and political science will be key in interpreting the results generated by algorithms, and social science experts will likely take a leading role in outlining the ethical considerations.