Artificial Intelligence: Stephen Hawking And Elon Musk Pair Up Again

As the future draws nearer and development accelerates, Artificial Intelligence is becoming increasingly prevalent. A significant number of experts therefore say it is humanity's responsibility to ensure the technology is developed ethically, without undermining human values. In line with this, renowned physicist Stephen Hawking and Tesla CEO Elon Musk, along with hundreds of other academics and researchers, have recently endorsed a set of principles designed to guide AI development.

Why Should There Be Principles For Artificial Intelligence?

According to reports from Computer Business Review, Prof. Hawking and Musk agree that machines are becoming more prevalent and more intelligent, and could eventually rival human intelligence. Back in 2014, Professor Hawking warned that AI has the potential to threaten humanity, and Musk said that AI could be more dangerous than nuclear weapons. That concern led them to put their support behind the 23 Asilomar AI Principles, drawn up by the Future of Life Institute and designed to ensure that machines exist to serve humanity, never to rule over it.

The Principles For Artificial Intelligence

Meanwhile, as Inverse reports, some of the principles, such as transparency and open research sharing among competing companies, are less likely to be adopted than others. However, experts note that even if they are not fully implemented, the 23 principles could go a long way toward improving A.I. development, keeping it ethical and preventing the rise of a Skynet-like scenario. The principles also hold that the goal of Artificial Intelligence research should be to create not undirected intelligence, but beneficial intelligence that serves all of humanity. The principles include:

Research Goal, Research Funding, Science-Policy Link, Research Culture, Race Avoidance, Safety, Failure Transparency, Judicial Transparency, Responsibility, Value Alignment, Human Values, Personal Privacy, Liberty and Privacy, Shared Benefit, Shared Prosperity, Human Control, Non-subversion, A.I. Arms Race, Capability Caution, Importance, Risks, Recursive Self-Improvement and, last but not least, Common Good.
