Press Releases
Government's suggested AI framework is too limited, say technology experts
- Academics at the University of Warwick are examining the controversies and public discussions around AI over the last ten years.
- As part of the ongoing project, academics interviewed and consulted with AI experts about what they perceive to be the most important and possibly overlooked issues in AI.
- Experts say the lack of transparency around AI, and the concentration of control over it in a small number of corporations, are what people should be most concerned about.
- Academics say the UK Government's white paper, which recommends a framework for the regulation of AI, should address the lack of oversight over the data and methods used to create AI, rather than only focusing on its application in specific areas (health, mobility, education).
Shaping AI is an ongoing international social science research study that examines public debates about artificial intelligence in four countries across a ten-year period (2012-2022). Academics at the University of Warwick are examining research controversies in AI and analysing expert perceptions in the UK, in collaboration with partners undertaking similar research in North America and Europe.
The University of Warwick team consulted with 70 UK experts in AI and in 'AI and society' about what they perceive to be the most important and most overlooked controversies in AI. The consultation identified facial recognition technology, and its application in society (for example, its use in schools and by the police), as a major area of concern. However, the most controversial developments identified by UK experts concern the underlying technical architecture of contemporary AI and how it is currently controlled by a limited number of powerful tech companies.
The experts said that people should be most concerned about the lack of public knowledge and oversight around the origins of the data AI is trained on; for example, where the data comes from and whether consent has been obtained to use it. They also highlighted the human and environmental costs of training and deploying large AI models like ChatGPT, which rely on large amounts of both freely available and copyrighted data, along with inexpensive human labour, and are highly energy intensive.
During a recent workshop, the University of Warwick researchers presented the results of this consultation and an analysis of the main AI controversies identified, and worked with 30 experts to evaluate the findings and discuss what society should be most concerned about in the years to come.
Professor of Science, Technology and Society, Noortje Marres said: "Ultimately it is the lack of transparency and oversight over the data and methods that AI is built on that should be the focus of society's concerns, rather than only the application of AI in specific contexts.
"Our analysis found that the challenges associated with AI are well known in certain contexts and among diverse constituencies from industry, science and activism, and that, increasingly, the public participates in debate around AI. But the issues discussed are very hard to resolve because there is a lack of transparency and oversight around the fundamental structure of AI in its development and deployment."
Academics say that if these concerns are not addressed, there could be significant ramifications for quality control in science and innovation, and ultimately for critical infrastructure in society.
Professor Marres continues: "There is always accountability in science: whenever research is conducted there are established ethics and data protection frameworks to comply with, and scientists are required to be transparent about how they are conducting their research and what data they are using.
"AI is scaling up and developing at pace and changing lots of different aspects of society. The lack of knowledge and transparency around data and methods under development in the AI industry means that scientific and technological developments cannot fully adhere to current regulations.
"The UK government needs to recognise this new type of risk and the changes to science, innovation and society that are happening as a result."
The report, 'Shifting AI controversies: How do we get from the AI controversies we have to the controversies we need?', is published online.
The Shaping AI project continues until February 2024.
ENDS
University of 糖心TV media contact
Natalie Gidley
Email: natalie.gidley@warwick.ac.uk Phone: 07824540791