What are the ethical nightmares of AI?

With new and exciting AI products coming out every week, what ethical issues do companies need to consider before creating new AI devices or implementing them in the workplace?

Click here to listen to TBT on Air’s latest podcast, ‘The ethical nightmares of AI’ 

AI is a transformative technology that will become part of our everyday lives, from the office to our homes and cars. As we introduce AI into our lives, we need to recognise that building these systems is no longer purely a matter of functional skill but also one of ethics. With AI spanning industries such as healthcare, retail and manufacturing, there are various ethical issues we need to be vigilant about to ensure that AI is not doing more harm than good. So, what issues do we need to look out for to ensure AI remains a helpful part of our world?

Threatening human jobs

One of the biggest issues that needs to be dealt with is AI replacing human workers. AI has brought mixed emotions, with people worrying about it taking over their jobs; however, that is not the only possible outcome. Companies need to be open and honest with their employees about how their responsibilities will change and what new categories of jobs can be created.

In research by Cognilytica analysts Kathleen Walch and Ronald Schmelzer, it was found that companies adopting augmented intelligence approaches, where AI helps humans do their jobs better rather than fully replacing them, see faster and more consistent ROI, and that these approaches are welcomed much more warmly by employees. People feel more comfortable working with machines than being replaced by them.

Misuse of AI

Another major issue that needs to be addressed is the misuse of AI for surveillance and the manipulation of human decisions. AI allows governments to keep tabs on what people are doing through technologies such as facial recognition. It has been estimated that over 176 countries have been using AI surveillance. Even tech giants have raised their concerns about AI surveillance and the possibility of governments and companies abusing the technology. For example, Microsoft President Bradford Smith told the US Congress that "we live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology".

In the near future, we may well live in a world where everyone knows who we are, where we are and what we want. It's uncomfortable to think that companies and governments will hold vast amounts of knowledge about our lives and, in turn, may influence those lives by manipulating our decisions. Companies therefore need to examine how they implement AI surveillance and ensure they are not infringing on their employees' privacy. Unless you have a very good reason to access an employee's computer camera or emails, it simply should not be done.

AI-powered analytics have been in use for a few years now, giving companies an idea of what you will purchase, who you would vote for and what content you would read. Unfortunately, companies and governments have abused such analytics to manipulate the decisions people make. This is ethically indefensible, as the Cambridge Analytica case showed: American voter data was unlawfully obtained from Facebook to build voter profiles, and AI was then used to automate social media accounts that created and spread misinformation across the internet to manipulate voters' decisions.

Companies need to remain cautious about what they do with their employee and client information while protecting that information from malicious attacks. 

Malicious users

Malicious users are becoming a major issue that needs to be acknowledged. One growing threat is deepfakes, which can have a major impact on the decisions we make and how we make them. Deepfakes are falsified images or videos in which one person's likeness is replaced with another's. Malicious internet users may use this technology to misrepresent political leaders' speeches and actions.

The need for action is now

With technology constantly evolving, there has been an increase in major AI-powered threats that are becoming harder to detect, more adaptable across systems and environments, and more accurate at identifying vulnerable areas within a system. Companies and governments need to act now to build a strong, reliable digital infrastructure that can withstand the force of these attacks. In the report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation", the authors found that companies and governments can expect novel attacks that exploit human vulnerabilities, existing software vulnerabilities or the vulnerabilities of AI systems themselves. The use of AI to automate the tasks involved in carrying out attacks with drones and other physical systems may expand the threats associated with such attacks, and the authors also expect novel attacks that subvert cyber-physical systems or involve physical systems that would be infeasible to direct remotely. Similarly, the use of AI to automate surveillance and deception may expand threats associated with privacy invasion and social manipulation, along with novel attacks that exploit an improved capacity to analyse human behaviours, moods and beliefs from available data. These concerns are most significant in the context of authoritarian states but may also undermine the ability of democracies to sustain truthful public debate.

Finally, with machines becoming more intelligent by the day, we need to understand how, as a society, we should treat and view them. At the moment, this issue raises only questions, as we simply don't know yet. For example, when machines start to replicate emotions while also acting similarly to humans, how should they be governed? Should we consider machines as humans, animals or inanimate objects? Will we consider the feelings of machines?

With no answers to these questions yet, we will have to wait and see what happens over the coming years; perhaps machines will one day be seen as beings that deserve and require protection.

Click here to discover more of our podcasts now

For more news from Top Business Tech, don’t forget to subscribe to our daily bulletin!

Follow us on LinkedIn and Twitter 

Amber Donovan-Stevens

Amber is a Content Editor at Top Business Tech
