AI Bias

If people built AI, does human bias carry over?

Bias in AI can be introduced at various stages of the development process, including data collection, data processing, model training, and deployment.
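One of those stages, data collection, is easy to illustrate. The sketch below uses a hypothetical hiring dataset (all numbers and group names invented for illustration) to show how a skewed positive rate in the training data becomes a "demographic parity gap" that any model trained to mimic those labels will inherit.

```python
# Toy illustration of how skewed training data produces a biased model.
# The dataset and group names are hypothetical, for illustration only.

# Hypothetical hiring dataset: (group, label) pairs. Group A was
# historically favored, so its positive (hired) rate is higher.
training_data = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +
    [("B", 1)] * 30 + [("B", 0)] * 70
)

def positive_rate(data, group):
    """Fraction of examples in `group` with a positive label."""
    labels = [label for g, label in data if g == group]
    return sum(labels) / len(labels)

# A model trained to reproduce these labels inherits the gap.
rate_a = positive_rate(training_data, "A")   # 0.8
rate_b = positive_rate(training_data, "B")   # 0.3
demographic_parity_gap = rate_a - rate_b     # ~0.5

print(f"Group A: {rate_a:.2f}, Group B: {rate_b:.2f}, "
      f"gap: {demographic_parity_gap:.2f}")
```

No algorithmic malice is needed here: the model simply learns the historical pattern it was given, which is why auditing the data itself matters as much as auditing the model.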

A familiar example: ChatGPT would write a poem praising Joe Biden but refused to write one praising Donald Trump.

AI bias has serious consequences, particularly in areas like criminal justice, healthcare, and finance. An AI system is only as strong as the data you feed it, and right now it is ultimately people who select that data, which introduces inherent bias.

Before AI becomes the default way people gather information from the internet, it is important to have diverse teams working on AI development, to use representative and inclusive data, and to continually monitor and evaluate AI models for potential biases.

We have already seen harms from facial recognition technology, predictive policing, credit scoring algorithms, healthcare algorithms, and hiring algorithms, to name a few. Having people build future AI software from scratch, some of it trained on internet communities, will no doubt introduce similarly unsavory results.

AI Job Displacement

Sci-fi leads us to believe that AI will displace workers overnight. I don't think it will be that quick, but is your job safe?

We have all seen the story of the artist whose position was replaced by Midjourney V5. Going a step further, there is already speculation that AI can process repetitive tasks with specific rules and boundaries far better than humans can, and it can work around the clock.

In my research, AI has the potential to disrupt manufacturing, transportation, customer service, data entry and processing, finance, healthcare, and analytical and case-study work (solving scenarios once thought to require professionals), to name a few. AI can do this at a fraction of the cost, easily displacing hundreds of millions of jobs while offering far fewer in return.

It is likely that AI will rapidly change the job landscape within the next decade. So far, there is little regulation on what AI is allowed to be applied to, and many jobs are already transitioning to a hybrid model mixing human and AI labor. With AI improving rapidly over time, one can only imagine the disruption given enough time without regulation, affecting not only you but everyone around you. No wonder there is speculation of 'Luddite riots' again.

Societal Dependence

Do you think a new generation of students who use ChatGPT for their homework will have all the skills needed to survive after leaving an academic setting?

Society's increasing dependence on AI systems could lead to a lack of resilience and flexibility, making it difficult to adapt to unexpected events or disruptions. If a critical AI system were to fail, the consequences could be far-reaching, given how well AI performs in areas such as autonomous systems, healthcare, emergency response, financial systems, and academia.

A movement away from professional careers in these fields could lead to a black swan event that completely capsizes society, given how deeply people may come to rely on AI. AI has already been shown to outperform academics in medicine, law, and coding, even writing papers indistinguishable from human work. Why would a student still go into those fields?

Overall, as you can imagine, overreliance on AI could result in a lack of flexibility and resilience. If you have watched the movie WALL-E, AI running society is portrayed as a negative: even the captain of the ship forgets what his mission was. It is important to strike a balance between using AI to improve our lives and maintaining our ability to function without it.

Consider again the example of students using ChatGPT for their assignments. Education is our future, and how reliable will a population be that grew up using AI as a substitute for actual learning? So far, this industry is unchecked aside from a few start-ups like GPTZero.
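Detectors like GPTZero are often described as measuring "perplexity" and "burstiness," the idea being that human writing varies sentence length and rhythm more than model output does. The sketch below is a naive, hypothetical version of the burstiness half only; it is not GPTZero's actual method.

```python
# Naive "burstiness" sketch in the spirit of AI-text detectors:
# human prose tends to vary sentence length more than model output.
# This is an illustrative toy, not any real detector's algorithm.

import re
import statistics

def sentence_length_burstiness(text):
    """Standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog sat down. The bird sat down."
varied = ("Stop. The storm rolled in faster than anyone along the "
          "coast had predicted that night. We ran.")

print(sentence_length_burstiness(uniform))  # low (all sentences equal)
print(sentence_length_burstiness(varied))   # higher (lengths vary)
```

Even this toy shows why such detectors are fallible: a disciplined human writer can score "uniform," and a prompted model can score "varied," which is part of why the industry remains effectively unchecked.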

Accountability Issues

Who is accountable for the results that AI causes?

This topic is probably the most intertwined with the other topics on this page. Because of the nature of AI systems, when people use them to obtain information or news about the world, it is very difficult to assign blame for what the AI produces. Article on shift of accountability.

Some of the accountability issues with AI are as follows: bias, lack of transparency, responsibility, safety, privacy, lack of regulation, and ultimately inaccurate information.

I am sure we have all seen the news piece about the professor who was falsely accused of sexual assault by AI. Who would you pursue in court: the AI itself, the people who built it, or no one at all? As you can imagine, no precedent has been set for how to handle the harm AI brings to society and the lack of accountability associated with it.

To address these accountability issues, it is important to develop ethical standards for AI development and use, and to ensure those standards are enforced. The closest we have come to enforcement is Elon Musk calling for a pause on rapid AI development, which can hardly be considered accountability.

Privacy Concerns

What do you do when AI uses your information, voice, likeness, personality, or physical features without your permission?

AI systems require vast amounts of data to function properly, which can include personal data. The collection and use of personal data raises concerns about privacy and the potential for misuse as AI develops and becomes entwined with our daily lives. 

Everyone was alarmed by the data Meta collects, or by how much sensitive data Apple and other smartphone makers gather on a population that carries their devices everywhere. Imagine if AI gains access to biometric data and uses it to recreate deceased individuals, or even people who are still alive. Have you heard of nefarious deepfakes and AI-generated voice videos and music?

Another potential nightmare is data breaches. Because AI is built with close ties to the internet, it can be hacked and leak personal data. What would you do to stop identity theft and financial fraud? And the lack of transparency makes it difficult for individuals to know what data is being collected about them and how it is being used in AI systems.

Lack of Transparency

Does an everyday person understand all of the intricacies of the internet, a 30-year-old technology? How about AI?

Lack of transparency by AI systems is a major concern that has been raised by many experts in the field of artificial intelligence. Transparency refers to the ability to understand and explain how an AI system works, including its decision-making process and the data that it uses.

There are several reasons why AI systems may lack transparency: complexity, black-box algorithms, data bias, and intellectual property.

Ideally, the world would prioritize transparency in AI development and use. Examples include developing explainable AI systems and opening up data sets and algorithms to external review and analysis.
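What an "explainable" system looks like can be shown in miniature. The sketch below (all thresholds and rule names invented for illustration) contrasts with a black box by returning not just a decision but the list of rules that fired, so the decision can be audited after the fact.

```python
# Minimal sketch of an explainable decision: the system reports
# which rules fired, not just a yes/no. All thresholds here are
# hypothetical, chosen only to illustrate the idea.

def score_loan(income, debt_ratio, years_employed):
    """Return (approved, reasons) so the decision can be audited."""
    score = 0
    reasons = []
    if income >= 40_000:
        score += 1
        reasons.append("income >= 40k: +1")
    if debt_ratio <= 0.35:
        score += 1
        reasons.append("debt ratio <= 0.35: +1")
    if years_employed >= 2:
        score += 1
        reasons.append("employment >= 2 years: +1")
    return score >= 2, reasons

approved, reasons = score_loan(income=50_000, debt_ratio=0.2,
                               years_employed=3)
print(approved, reasons)
```

A rejected applicant here can see exactly which rule failed, which is precisely what a black-box neural model cannot offer without extra explainability tooling layered on top.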

Due to the nature of capitalism, new technology is shrouded in mystery and largely examined through only the positive lens of what it could do for society.

Weaponization

If AI can be used for good, what if it were used nefariously?

The weaponization of AI is a concern that has been raised by many experts in the field of artificial intelligence. This refers to the development of AI systems for military purposes, which can have serious implications for global security and the safety of civilians.

There are several ways in which AI could be weaponized, including autonomous weapons, cyberattacks, surveillance, and propaganda, to name a few major applications. It may seem far off, since no major weaponization of AI has appeared yet, but it only takes one incident for the world to change forever.

The weaponization of AI raises ethical and legal concerns, including questions around accountability, transparency, and the potential for unintended consequences. There have been calls for international agreements to regulate the development and use of AI for military purposes, but what if AI-integrated weapons fall into the wrong hands? Everyone fears AI cyborgs destroying humans, but I envision humans destroying humans with the aid of AI.

Usage of AI for Mass Auto Radicalization

AI + Automation = Tools of mass influence.

"GPT powered sock puppet accounts promoting an endless stream of propaganda, especially propaganda meant to radicalize individuals. This is my personal AI doomsday scenario, mass scale automated radicalization or “reprogramming” of the human mind. I think it’s something everyone is susceptible to a larger degree than they’re willing to admit. It’s something that’s been known to psychologists for decades and big tech for ~11 years about."

-This is a new section added by a supporter and I agree fully.

To build on the potential for AI bias, imagine AI-backed tools used to sway public opinion with biased views. We all know of China's infamous 50 Cent Army that does whatever the CCP wants; now imagine a zero-cost army that never tires and can propagate and amplify whatever messages its creator desires. Once AI can bypass human-verification checks, it can generate new fake users on its own and spread chaos on a scale never seen before, to the point where you can't trust anything you read on the internet anymore.

OTHER AI Concerns

Do you think I missed a pillar for AntiAI?

Come join my team and work with me to bring regulation to the space.

I want to cover plagiarism next.

I am always in need of partners to create new media stories and people to work on the website with me!
