
Cambridge Consultants report highlights recipe for responsible governmental AI

  • Date 12 Feb 2018

Cambridge Consultants today unveiled five missing ingredients for the responsible governance of Artificial Intelligence (AI).

A new report, to be published at Mobile World Congress, aims to combat the fear stoked by the multitude of headlines portraying deep learning and machine learning as an all-encompassing force taking over industry, society and politics.

It states that while the research and application of AI techniques is quickly coming to the attention of governments across the globe, those governments often lack a holistic framework to govern such adoption appropriately.

The report identifies five factors as the key to successful collaboration and responsible AI deployment in government:

  • Responsibility: There needs to be a specific person responsible for the effects of an autonomous system’s behaviour. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes.
  • Explainability: It needs to be possible to explain to people impacted (often laypeople) why the behaviour is what it is. This is vital for trust.
  • Accuracy: Sources of error need to be identified, monitored, evaluated and, if appropriate, mitigated or removed.
  • Transparency: It needs to be possible to test, review (publicly or privately), criticise and challenge the outcomes produced by an autonomous system. The results of audits and evaluations should be available publicly and explained.
  • Fairness: The way in which data is used should be reasonable and respect privacy. This will help remove biases and prevent other problematic behaviour becoming embedded.

The timing of the report is crucial: the UK’s House of Commons Select Committee investigation into robotics and AI concluded that it was too soon to set a legal or regulatory framework, but did highlight priorities that would require public dialogue and eventually standards or regulation. These were: verification and validation; decision-making transparency; minimising bias; privacy and consent; and accountability and liability. This is now being followed by a further House of Lords Select Committee investigation, which will report in Spring 2018 [1].

In February 2017 the European Parliament Legal Affairs Committee made recommendations about EU-wide liability rules for AI and robotics. MEPs also asked the European Commission to review the possibility of establishing a European agency for robotics and AI, which would provide technical, ethical and regulatory expertise to public bodies [2].

As governmental interest in AI continues to evolve, so does the range of problems and markets to which AI techniques are applicable. This breadth of applicability raises a difficulty: AI is not specific to an industry or sector, but regulations often are, meaning its foundational infrastructure could be left vulnerable.

AI has been hyped as both the solution to business and personal challenges and a threat to our creativity, autonomy and livelihoods. Businesses face education gaps and are therefore concerned about where AI is heading and what impact it will have on their development. With so much focus on, and preparation for, what the future of AI might look like in many years’ time, we can’t lose sight of the immediate priorities, and must instil a framework that shapes the way governments and businesses deliver today’s narrow AI applications.

Commenting on the report, Michal Gabrielczyk, Senior Technology Strategy Consultant, said: “These principles, however they might be enshrined in standards, rules and regulations, give a framework for the field of AI to flourish within government whilst minimising risks to society and industry from unintended consequences. Only by laying the groundwork and guidelines for effective, reliable AI today can we build consumer faith and enable an exciting future, while maintaining a firm control of costs as AI-based outputs evolve.”

Cambridge Consultants will release its full report, titled “AI: UNDERSTANDING AND HARNESSING THE POTENTIAL”, at Mobile World Congress, taking place in Barcelona from 26 February to 1 March. Register to receive the full report at https://www.cambridgeconsultants.com/press-releases/new-report-highlights-recipe-responsible-governmental-ai or visit Cambridge Consultants in Hall 7, stand 7B21.
