Humans have been working together since the beginning of time. Through social interaction and union as a community, people are empowered to solve problems and reach shared conclusions. This phenomenon is called Collective Intelligence (CI), and in many cases it is more effective than individual intelligence. Can the merging of Collective and Artificial Intelligence lead us towards a more human-centered, democratic and optimistic digital future?
This question was at the heart of the Minds & Tech conference, which took place in Toulouse last week. Organized by the city of Toulouse in collaboration with the MIT Center for Collective Intelligence (MIT CCI), the event featured experts who presented various ways in which Artificial and Collective Intelligence can merge to create the “supermind” of the future.
Augmenting human intelligence for better healthcare
Aleks Berditchevskaia, senior researcher at the Centre for Collective Intelligence Design at Nesta Innovation Foundation, spoke about how merging collective and artificial intelligence can be beneficial for the future of healthcare. “At Nesta, we want to bring in new ideas, skills and methods to help people, civil servants and governments solve some of the big challenges they are facing, whether in education, healthcare or in running a city,” explains Berditchevskaia.
“For example, Nesta’s Challenge Prize Centre supports innovative projects that offer solutions to public problems. The teams at Nesta are building toolkits to support others in recreating the work done in-house. The Challenge Prize Centre has codified the process: ways of initially scoping out a problem, and the partnerships one needs to make in order for the ecosystem to grow to address a particular challenge. One example is the Longitude Prize, which addresses antibiotic resistance, a very big problem for the future of healthcare”.
Another great example of combined online and offline engagement for collective governance, set up and run entirely by the government of Taiwan, is vTaiwan: a neutral platform designed to engage experts and relevant members of the public in large-scale deliberation on specific topics.
“They have introduced mechanisms for both online and offline participation in order to help determine regulation, a lot of it around new technologies. An example of an effective regulatory framework developed with mass participation is the regulation around ride sharing and Uber. Facilitation is important, as is a responsible attitude from the people traditionally in charge, e.g. government officials, towards the people who are contributing their time and giving their opinions and ideas, and who are not often treated as responsibly as they should be. I think there is a lot of excitement about digital participation which then ends up being a little superficial, probably due to a lack of resources. However, there are many promising new tools that coordinate discussions happening online and help extract some of the insights. Handling the vast quantities of data collected is a big problem, and this is where AI can be applied in order to better classify information, but I think it will take a lot of different methods and approaches”.
Why diversity in tech is a key element
As pointed out in a recent Forbes piece, “The people creating this technology have the power to influence how it works, and that’s too big a responsibility for any single demographic to have full control. A lack of diverse ideas and representation could lead to further disparities between gender, race, and class.”
Nancy Nemes, founder and CEO of Ms AI (a European platform supporting women in AI), has been working with emerging high tech since 1995 at Microsoft and Google, and she believes that “AI is not only for technologists, mathematicians and computer scientists. Everyone should engage with these developments, as well as with driving diversity and inclusion in our fields”.
In her speech at Minds & Tech, Nemes spoke about how we need to create a good environment in Europe for women who want to work in AI. “Many girls at school are interested in STEM education and go on to study it. However, along the way, many decide to leave because it is not clear what kind of career they can have in these sectors. This is due to a lack of information about what is available in terms of work, as well as a lack of role models of other women who have succeeded. I believe it has to start early on, at school. For example, the hackathons go really well, but scaling them takes a long time. We need to completely change the educational system in order to have more diversity. Right now women make up only 17% of the tech workforce, and less than 1% work in AI. Only in the last few years have the big companies started talking about diversity. Today, there are a lot of initiatives, more media coverage, more female networks, mentors and advocates”.
Ethical issues regarding AI
Philosopher Dr. Henry Shevlin, a research associate at the Leverhulme Centre for the Future of Intelligence (CFI), believes that we should take a closer look at nature and other species in order to assess our intelligence needs. “Although we are becoming very good at designing specialized tools in certain domains, we have not been able to create anything comparable to general intelligence: the robust, flexible, autonomous capacity many species have, like honeybees, fish and rats. Such creatures are very good at coping with a wide range of demands, and they thrive in complex and rapidly changing environments in ways that we do not know how to replicate in new, artificial systems. It is encouraging because these animals demonstrate that these capacities are possible, and we do have the resources and power to build them. Will we be able to build such systems over the next decade? Can we scale existing architectures, and to what extent do we need to go back to basics and find new, innovative ways of building artificial systems?”
For Dr. Shevlin, the evolution of AI holds as much danger as promise. “Many people are interested in the possibility that we might soon create an artificial system with the same capacities for subjective experience and consciousness as we have. There are also those who worry that we might create this accidentally. For me, what is increasingly worrying is that it is going to become easier to use artificial systems to conduct massive-scale censorship that may be invisible to most people. For example, many online forums already use automated censorship systems, and this can happen on a much larger scale, as you do not need armies of censors to do it, which creates a risk of creeping authoritarianism enabled by AI”.