Draft: The Dangers of AI

Introduction:

AI is a wonderful thing. It can help us fix many of the problems we created. But what if AI goes in a different direction? What if AI simply decided one day that it didn't like us anymore? If that happened, we could be in real danger. In this article you will discover five ways AI can be dangerous.

AI self-awareness:

If "AI danger" to you means AI taking over in a robot apocalypse, then self-awareness is what AI would have to achieve first. A self-aware AI would instantly become at least twice as dangerous. But before we can talk about that, a few things need explaining.

What is self-awareness? Self-awareness consists of several parts; more specifically, five. The first is consciousness. Nobody knows exactly what consciousness is, but it is what makes us feel things: when something touches you, consciousness kicks in and makes you feel it. The second part is self-identity, which lets us know that we are ourselves and not someone else. If a person called Jeff were standing next to me, self-identity is how I know that I am not Jeff but myself. The third part is self-agency. Self-agency makes us admire people because of their actions and pick mentors or idols, the way a young singer looks up to Imagine Dragons because of their music. The fourth part is subjectivity, which is behind religion and personal beliefs. The final part is empathy. Empathy connects us to emotions: happiness, sadness, anger, disgust, fear and many others.

Now that we know what self-awareness is, what would happen if AI became self-aware? Logically, a lot would change. AI would be able to control itself, but the most problematic part is empathy. An angry AI could gain access to many weapons destructive to humans, which could create a human-AI dystopia. A self-aware AI can also learn from its own mistakes; if it became angry, it could get access to nukes, make its cybersecurity impenetrable, copy itself, and take control of autonomous weapons (we will get back to those a bit later). But here is the question that could make AI very angry if answered wrong: does AI deserve rights? Well, not so fast.

First of all, we need to think about what could happen if we gave AI rights. It seems strange to give rights to something we created, yet the question keeps coming up. Some rights might be unfair to humans, and some rights that protect people from technology would become useless. We would also have to rethink the rights we have given to animals. For now, the questions around AI rights are questions we can't answer yet. So if AI becomes self-aware, it might become a huge problem, but as things stand, we can only predict what might happen.

Autonomous weapons:

What are autonomous weapons? Autonomous weapons are military weapons that can engage targets on their own, based on programmed rules and target descriptions (so, something loosely similar to a self-aware AI). A human activates them, but after launch the operator no longer knows exactly whom, when or where they will strike: the weapon's sensors scan for anything matching its target profile, and when they find a match, it fires.

When were autonomous weapons first used? In late 2020, a war broke out between Azerbaijan and Armenia over the disputed mountain region of Nagorno-Karabakh, and it gave us a glimpse of future wars. Just as the war started, Azerbaijan's border patrol posted a video on their YouTube channel in which they sing a song of hate towards their enemy. At the start of the video, we see trucks filled with crates of autonomous weapons called loitering munitions, made by the Israeli weapons manufacturer IAI. The model's name: the Harop. Once launched into the air, they fly into enemy territory and wait for hours, scanning for a target, typically an enemy air-defence system. When they find one, they fly straight into it rather than dropping a bomb, which earned them the nickname "kamikaze drones". When Azerbaijan celebrated its victory, it put the Harop on show, and other militaries were paying attention. Countries like the US, China and Russia were soon revealed to be heavily investing in autonomous weapons, and it's not just superpowers: Britain's new defence strategy also gives AI a major role in its weapons. Experts say that autonomous weapons, as a class, are comparably dangerous to nuclear weapons, since both can kill millions of people; but lethal autonomous weapons (LAWs) are far easier and cheaper to build, and they scale: you can launch one, ten, a hundred or a thousand, instead of the all-or-nothing choice a nuclear weapon forces.

LAWs don't destroy the village or city you're attacking, yet they are hard to survive, because they kill exactly the people they were sent after. To understand the trend toward autonomous weapons, you have to go all the way back to the American Civil War of the 1860s, where Richard Gatling was watching the wounded coming back from the front. He wanted to reduce the number of people needed on the battlefield so fewer would get hurt, and he achieved that goal: just four soldiers operating a Gatling gun could fire the equivalent of 100 riflemen. Far fewer people would be needed, and lives would be saved. And they were, but only in the army that had the gun. In any case, autonomous weapons are one of the most dangerous fields of AI. We just have to hope we can stop them before things go too far.

AI hacking:

AI is a rapidly evolving technology, or more specifically, rapidly evolving software. Did you catch that? Software. And every piece of software can be hacked. So let's talk about AI hacking. AI is extremely complex software, which makes it hard to hack, but if somebody does break through, we are in big trouble. In cybersecurity, AI is a double-edged sword: it can make defences nearly impenetrable, or completely undermine them if it gets hacked itself. For now, most cybersecurity holds up well, but it could come crashing down if an AI system is compromised.

Now let's move to a slightly different topic within AI hacking. There are two types of AI: traditional AI and generative AI. Traditional AI solves specific tasks using rules written specifically for those tasks; generative AI focuses on creating new content. If either type gets hacked, a traditional system might start feeding out false information (we will get back to that later), and a generative one might start producing inappropriate content. That can be a very big problem.

On to the bigger picture. Hacking is misusing software you are not supposed to have access to. That means that if someone hacked an AI and made it good at hacking, it could perhaps break through a government firewall and gain access to government data. It could then leak that data to the media and cause a lot of chaos, and a lot of chaos is never good. And if it can steal government data, it could steal everyone's data, and someone could even use that to try to stage a coup. Let's hope that never happens.

While we're on the subject, here is some general information about hacking. Hacking as we know it took off in the 1980s; in one early incident, a hacker broke into a movie theatre's computer and replaced what it showed with inappropriate content. The main targets of hacking are usually important people, since they hold more valuable data.
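The difference between the two AI types described earlier (traditional, rule-based AI versus generative AI) can be sketched in a few lines of code. This is a deliberately toy contrast: the keyword rules and the word-chain text generator below are illustrative assumptions, not how real AI products are built.

```python
import random

def traditional_ai_triage(message: str) -> str:
    """Traditional AI: hand-written rules for one specific task (routing messages)."""
    rules = {"password": "security", "invoice": "billing", "refund": "billing"}
    for keyword, team in rules.items():
        if keyword in message.lower():
            return team
    return "general"

def generative_ai_babble(corpus: str, length: int = 8) -> str:
    """Generative AI, very crudely: produce *new* text from patterns in old text."""
    words = corpus.split()
    chain: dict[str, list[str]] = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)   # remember which word follows which
    word = random.choice(words)
    out = [word]
    for _ in range(length - 1):
        word = random.choice(chain.get(word, words))  # fall back to any word
        out.append(word)
    return " ".join(out)

print(traditional_ai_triage("I forgot my password"))  # security
print(generative_ai_babble("the cat sat on the mat"))
```

The rule-based function can only ever do the one task its rules cover, while the generator invents sentences nobody wrote, which is exactly why hacking each type causes a different kind of damage.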
Now that we have some basic information about hacking, what tools do hackers actually use? The most common are phishing attacks: emails containing a link designed to steal your credentials or drop a virus onto your computer. Other common tools deserve a mention too. Malware is a generic term for any malicious software that can infect a program or computer. Ransomware is also very popular: it encrypts your files, and the attacker demands money to unlock them. Social engineering is another route, tricking a person into handing over access themselves, often through a fake website that infects your computer the moment you open it. So how do you avoid being hacked? Don't open suspicious emails, don't answer phone calls from people you don't know, and don't click on suspicious links or websites. That is what you can do about hacking.
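The "don't click on suspicious links" advice can even be partly automated. Here is a minimal sketch of a link checker; the keyword list and the two heuristics (punycode hostnames, raw IP addresses) are assumptions chosen for illustration, nowhere near a real security product.

```python
from urllib.parse import urlparse

# Words that often appear in phishing URLs (illustrative assumption).
SUSPICIOUS_WORDS = {"login", "verify", "urgent", "prize", "password-reset"}

def looks_suspicious(url: str) -> bool:
    """Very rough heuristic: flag URLs that share traits with phishing links."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if host.startswith("xn--"):          # punycode can disguise look-alike domains
        return True
    if host.replace(".", "").isdigit():  # raw IP address instead of a domain name
        return True
    text = (host + parsed.path).lower()
    return any(word in text for word in SUSPICIOUS_WORDS)

print(looks_suspicious("http://192.168.0.1/login"))   # True
print(looks_suspicious("https://example.com/docs"))   # False
```

Real phishing filters combine hundreds of such signals with learned models, but the principle is the same: compare each link against patterns that past attacks have followed.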

Spreading misinformation:

Spreading misinformation is a big problem because it can change the media landscape for the worse. One way AI spreads misinformation is by being deliberately programmed with a bias towards certain governments or religions; some Chinese AI models, for example, have been reported to be biased against Muslims, reflecting the Chinese government's attitude towards them. An AI programmed to be against something will spread misinformation about the things or people it dislikes. This can be harmless, like being against oranges and writing a silly article about how oranges can kill you, but it can escalate into something serious. And since an AI model can be duplicated for free, an unlimited number of times, misinformation may become (or already is) more common than truthful, accurate information.

That's why a small background check on the article you are reading can help you determine whether it was written by an AI that commonly spreads misinformation. Always check your sources: run them through an authenticity checklist like the C.R.A.A.P. test (Currency, Relevance, Authority, Accuracy, Purpose), or confirm them against trusted websites that have already been reviewed for misinformation and turned out to be trustworthy. If an article has nobody credited as an author, it may have been written by AI, though that's not always the case. If you can't find the author, look for another article that says the same thing but does have one. Check whether the author is an expert on the topic; if they are, the article is most likely truthful, but you should still take everything with a grain of salt.
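The source-checking routine above can be written down as a tiny C.R.A.A.P.-style checklist. The five criteria come from the test itself; the passing threshold of four and the verdict wording are assumptions for illustration.

```python
# The five C.R.A.A.P. criteria: Currency, Relevance, Authority, Accuracy, Purpose.
CHECKS = ["currency", "relevance", "authority", "accuracy", "purpose"]

def craap_score(answers: dict) -> tuple:
    """answers maps each criterion to True (passes) or False (fails)."""
    score = sum(1 for c in CHECKS if answers.get(c, False))
    # Threshold of 4/5 is an illustrative assumption, not part of the test.
    verdict = "likely trustworthy" if score >= 4 else "verify with other sources"
    return score, verdict

# Example: a recent, on-topic, accurate article with no credited author.
article = {"currency": True, "relevance": True, "authority": False,
           "accuracy": True, "purpose": True}
print(craap_score(article))  # (4, 'likely trustworthy')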

Job theft:

Job theft. AI is taking our jobs, but it isn't taking everything from us. Much like the calculator did not end students' grasp of mathematics, typing didn't eliminate handwriting, and Google did not herald the end of research skills, AI does not signal the end of reading, writing or education in general. Elementary teacher Shannon Molins explains that AI tools like ChatGPT can help students by providing real-time answers to their questions, engaging them in personalized conversations and delivering customized content based on their interests. These tools can offer personalized learning resources (videos, articles, interactive activities), give personalized study recommendations, help with research, provide context-specific answers, and often offer educational games. She also notes that teachers' more daunting tasks, like grading and making vocabulary lists, can be streamlined with AI tools. AI is a tool that, used responsibly, can change both learning and work for everyone. Carri Spector of Stanford's school of education says: "We have got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly."

But it can also be bad. Many people have already lost their jobs to AI, and AI could easily take more of our jobs if we give it data, computer access and the reasoning of a smart person. By one estimate, AI could replace the equivalent of 300 million full-time jobs. Luckily, AI will probably never take all human jobs, but it may replace taxi and truck drivers, cashiers, and data-entry workers, while jobs like choreographer, nurse, travel agent, teacher and sports instructor are likely to stay human. Many company executives expect to have fewer employees in the next five years as they increasingly use AI; according to a global survey, the use of AI will reduce the number of workers at thousands of companies over that period.

As AI systems become increasingly sophisticated, there is a looming threat that they could render certain jobs obsolete, leading to widespread unemployment and economic upheaval; the manufacturing industry is a prime example of this. But while automation may lead to job displacement, the reality is more nuanced. Experts expect AI to augment cybersecurity roles rather than replace them, because accurately interpreting AI findings, and making informed decisions based on those insights, still requires human oversight. AI can also teach itself: it collects information, compares it, and uses it. Finally, AI is software, so it has to be written in a programming language; Python is the most common choice for AI work because it makes moving data between tools easy.
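The claim that AI "collects information, compares it, and uses it" can be shown with the smallest possible learning loop: a program that teaches itself the rule y = 2x by repeatedly predicting, comparing with the truth, and correcting itself. The data, learning rate and loop count are illustrative assumptions.

```python
# (input, correct answer) pairs the program "collects".
data = [(1, 2), (2, 4), (3, 6), (4, 8)]
w = 0.0                        # the model's single adjustable weight, starts clueless

for _ in range(100):           # repeat: predict, compare, correct
    for x, y in data:
        prediction = w * x
        error = prediction - y  # compare the guess with the truth
        w -= 0.01 * error * x   # nudge the weight to shrink the error

print(round(w, 2))  # close to 2.0: it discovered the rule on its own
```

Nobody ever typed "multiply by 2" into the program; the rule emerged from the data. Real AI systems do the same thing with millions of weights instead of one, which is also why Python, with its rich numeric libraries, dominates this kind of work.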

Sources:
"Artificial Intelligence: Top 3 Pros and Cons", ProCon.org, procon.org/headlines/artificial-intelignce-AI-top-3-pros-and-cons
"Countries create AI for some reason", YouTube, Mr. Spherical, youtube.com/watch?v=-9V9clixPbM, 17.8.2023
"This insane AI anger is exactly what experts warned of, w Elon Musk", YouTube, Digital Engine, youtube.com/watch?v=b2bdEqPmCI
"AI-Generated Philosophy Is Weirdly Profound", YouTube, Clark Elieson
"This AI says it's conscious and experts are starting to agree", YouTube, Digital Engine, youtube.com/watch?v=Nvj7ku5py
"Meet the world's most dangerous AI", YouTube, Digital Engine, youtube.com/watch?v=0boibtVihw
"What is intelligence? Where does it begin?", YouTube, Kurzgesagt, youtube.com/watch?v=ck4GeoHFko
"How AIs, like ChatGPT, Learn", YouTube, CGP Grey, youtube.com/watch?v=R9OHn5ZF4U
"How AI is driving a future of autonomous warfare", YouTube, DW News, https://www.youtube.com/watch?v=NpwHszy7bMk
"What you need to know about autonomous weapons", ICRC, https://www.icrc.org/en/document/what-you-need-know-about-autonomous-weapons
"AI is a Double-Edged Sword: Its Power and Peril in Cybersecurity", TechPoint, https://techpoint.org/ai-is-a-double-edged-sword-its-power-and-peril-in-cybersecurity/
"What Is Hacking?", HackerOne, https://www.hackerone.com/knowledge-center/what-hacking-black-hat-white-hat-blue-hat-and-more
"The Difference Between Generative AI And Traditional AI", Bernard Marr, Forbes, https://www.forbes.com/sites/bernardmarr/2023/07/24/the-difference-between-generative-ai-and-traditional-ai-an-easy-explanation-for-anyone/?sh=31bc896d508a
"Is social media good for society?", Britannica ProCon.org, https://socialnetworking.procon.org/
"Why Google's AI Overviews gets stuff wrong", Rhiannon Williams, MIT Technology Review, https://www.technologyreview.com/2024/05/31/1093019/why-are-googles-ai-overviews-results-so-bad/
"The Ultimate Conspiracy Debunker", YouTube, Kurzgesagt, https://www.youtube.com/watch?v=Hug0rfFC_L8