The ethics of Artificial Intelligence (AI) has been a hot topic to discuss recently, especially with how controversial facial recognition has become. Now mathematicians have created a model that will help those in charge of commercial AI systems find out if the AI makes unethical choices in their mission to accomplish their goals.
The use of AI has been helpful across many industries and processes. When trying to find the best choice in any given situation, for example, such as determining the most profitable price for a product, or the most efficient way to distribute the resources of a company, AI can help.
However, artificial intelligence doesn't consider the context that a human would consider when deciding on a similar matter, and it is especially blind to ethics.
Unethical Artificial Intelligence Has To Go
One example of how AI would make an unethical choice compared to a person would be in determining the price of medicine during a pandemic or some health crisis. A person wouldn't increase the price because it would be unethical to do so while many are suffering. An AI, however, would only see the profits that would come from raising the price during a crisis.
A paper from the Royal Society Open Science has shown that AI will be more likely to go for the unethical strategy even in general conditions when going for maximized returns. It wasn't a challenge for the researchers to predict when the AI would pick an unethical approach, so they could quickly modify the AI to avoid doing so.
It seems intuitive for the AI to be more likely to pick the strategies that didn't consider ethics. If you can get away with unethical business practices, there's the potential to reap substantial rewards. Other companies and such wouldn't dare do anything unethical since there would be a significant regulatory and reputational backlash.
The repercussions that could come with going for unethical business practices should be something that companies that use AI should always consider.
Artificial Intelligence Without Guidance Is a Bad Idea For Now
Efforts to implement ethical principles into AI are going strong, but they will still need to pick from many potential strategies. These AIs will be making the decisions with little to no guidance from a human, making it a challenge to predict what they will choose.
The authors of the paper have mathematically proven that AI that has the purpose of maximizing returns will most likely choose an unethical strategy. However, the impact can be estimated, which will help detect all of the unethical strategies that could be picked.
The focus is to keep an eye on the strategies that will be giving the most returns, which is what AI will want. The paper's authors suggest ranking the strategies by the returns and then inspecting the strategies that give the most returns to see if they're unethical.
It won't only have unethical strategies, but it will help develop intuition in the AI. The AI will understand how to solve problematic strategies better.
The researchers hope that this research makes it possible to redesign AI so that they avoid unethical strategies. They found that there are only a small number of strategies that have the probability of massive returns. Statistical techniques will be used to estimate the chance that the AI will pick an unethical strategy.
However, if the chance of returns is distributed evenly, then the optimal strategy will most likely be unethical. Companies wouldn't allow these decisions to be made by AI without a human's guidance.
It's a good idea to have humans watch over the majority of AI's decision-making since it's unclear when AI will be reworked to weed out unethical strategies from the get-go.