BioNews

AI and ethics: What does it look like and what’s the agenda?

BioCentre is delighted to invite you to this afternoon symposium on AI and ethics. 

Chaired by Prof Nigel Cameron, Executive Chairman of BioCentre and President Emeritus of the Center for Policy on Emerging Technologies. His latest book is Will Robots Take Your Job? A Plea for Consensus (Polity/Wiley).

Guest panellists include:


  • Christian Byk, Chair of UNESCO’s Intergovernmental Bioethics Committee; Chair of the science ethics committee of the French National Commission for UNESCO (CNFU)
  • Natasha McCarthy, Head of Policy, Data – The Royal Society
  • Lorna McGregor, Professor of International Human Rights Law, University of Essex
  • Huw Price, Bertrand Russell Professor of Philosophy; Academic Director, Centre for the Study of Existential Risk (CSER), University of Cambridge
  • Richard Sargeant, Chief Commercial Officer at Faculty; Board member of the UK Government’s Centre for Data Ethics and Innovation

The afternoon will consist of short presentations from the panel, followed by Q&A with the panel and audience.

For further details and to RSVP (the event is free to attend), CLICK HERE.

 

Symposium brief



Recent reports published by the UK House of Lords, the EU High-Level Expert Group on Artificial Intelligence, the OECD and the IEEE all help to shape and inform the development of ethical codes, frameworks and principles. Such convergence of thinking and ideas is welcome, but principles alone are not enough. Principles need to come together with practice and an appreciation of the social environment (including the ethical, legal and social implications) in which these technologies are applied, so that the advantages AI affords can be harnessed whilst mitigating the risks.

AI and autonomous systems concentrate enormous power in a small number of huge companies, which scoop up data about us and use it for business purposes. Governments are acquiring more and more power through the use and manipulation of digital information about our lives. The Chinese “social credit” system shows how far authoritarian regimes can go in controlling the lives of their citizens. The technology may be new, but the power dynamics are very old and well defended. How can regulation best be used to encourage effective implementation of AI systems whilst preserving privacy?

The implications for weapons systems have also attracted widespread concern. Should machines make the decision to kill – with no human supervision? Then there is the superhuman intelligence question and the prospect of creating a form of “digital supreme being.” This may happen soon, it may not happen for many years, or it may never happen. But the fact that we cannot rule it out means we need to take it very seriously.


BioCentre is hosting this afternoon symposium to contribute to this ongoing process by bringing together key specialists and thought leaders to explore the opportunities and to stimulate understanding and solutions.

 
