{"id":40244,"date":"2024-12-12T10:58:48","date_gmt":"2024-12-12T16:58:48","guid":{"rendered":"https:\/\/sites.imsa.edu\/acronym\/?p=40244"},"modified":"2024-12-12T10:58:48","modified_gmt":"2024-12-12T16:58:48","slug":"artificial-intelligence-real-consequences-a-call-for-action","status":"publish","type":"post","link":"https:\/\/sites.imsa.edu\/acronym\/2024\/12\/12\/artificial-intelligence-real-consequences-a-call-for-action\/","title":{"rendered":"Artificial Intelligence, Real Consequences: A Call for Action"},"content":{"rendered":"<p><span style=\"font-weight: 400\">Artificial Intelligence (AI) is fundamentally changing our daily lives, from powering self-driving cars to solving complex healthcare problems. But as AI applications and use cases evolve, there is an increasing debate over the need for regulations. True to its capitalist roots, the U.S. government hasn\u2019t imposed any restrictions on AI yet, while Europe has already moved forward with <\/span><a href=\"https:\/\/artificialintelligenceact.eu\/\"><span style=\"font-weight: 400\">strict rules.<\/span><\/a><span style=\"font-weight: 400\"> As such, The Acronym decided to assess the pros and cons of AI regulations, evaluating the European Union&#8217;s approach and considering what it could mean for the U.S.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Advocates for regulating AI development point to the inherent risks of allowing a technology with such a profound impact on everything around us to go unchecked. We don\u2019t have to look far for examples\u2014consider how social media, without an adequate regulatory framework, started as a fun way to connect with friends and family, but has become a tool for <\/span><a href=\"https:\/\/www.apa.org\/topics\/journalism-facts\/how-why-misinformation-spreads\"><span style=\"font-weight: 400\">misinformation <\/span><\/a><span style=\"font-weight: 400\">and voter manipulation. 
Or how the lack of regulations around firearms has made gun violence a pervasive issue in our society\u2014so much so that the sight of elementary school students doing <\/span><a href=\"https:\/\/youthtoday.org\/2023\/12\/95-of-public-schools-conduct-active-shooter-drills-are-students-safer\/\"><span style=\"font-weight: 400\">active shooter drills<\/span><\/a><span style=\"font-weight: 400\"> doesn\u2019t shock us anymore. None of these examples are normal, yet their overall impact is nowhere close to the risks AI poses. Therefore, there is a solid case for smart, strategic regulations on AI.<\/span><\/p>\n<p><span style=\"font-weight: 400\">AI programs are now driving our cars\u2014a development that, while once the stuff of science fiction, poses significant public safety risks. As a teenager going through tedious but important Driver\u2019s Ed lessons, I wonder whether self-driving vehicles will face comparable regulatory scrutiny to ensure AI capabilities are road-ready (or, worse, whether ambitious corporations will overpromise capabilities). Beyond that, as consumers, how can we ensure our data isn\u2019t used in these AI models without our permission? And how will our privacy be respected when AI software knows more about us than our closest friends and family? Will consumer privacy depend on each company\u2019s ethics and policies? There\u2019s a need for regulations to ensure all companies are doing the right thing. Furthermore, AI regulations are essential to increase transparency and fairness so we can all understand how decisions are being made. 
If AI begins to take on key roles in decision-making within society, we must ensure these models are fair and unbiased, unlike past iterations of such <\/span><a href=\"https:\/\/www.cbsnews.com\/news\/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi\/\"><span style=\"font-weight: 400\">models.<\/span><\/a> <span style=\"font-weight: 400\">Finally, regulations could provide a safeguard against <\/span><i><span style=\"font-weight: 400\">Terminator\u2019s <\/span><\/i><span style=\"font-weight: 400\">\u201cSkynet\u201d scenario. With the rapid pace of innovation and investment, AI is becoming more intelligent at an exponential rate and is on a fast track to surpass human intelligence. At some point, we need to ensure AI doesn\u2019t become too powerful to be controlled by humans. We cannot rely on corporations driven by shareholder wealth maximization to self-govern. The government, with its primary responsibility for public safety, needs to step in and do its job.<\/span><\/p>\n<p><span style=\"font-weight: 400\">While the case for regulation is strong and logical, it may not hold up in the practical realities of today\u2019s world. The key argument is that even if the U.S. slows down AI evolution to ensure social responsibility, it won\u2019t stop other countries like China, where AI development is progressing rapidly with minimal regulatory constraints. Moreover, if companies find U.S. regulations too restrictive, they might relocate to other countries, taking innovation and jobs with them. We saw this play out when, just a few weeks ago, SAP, a European technology giant, <\/span><a href=\"https:\/\/www.cnbc.com\/2024\/10\/22\/sap-ceo-urges-europe-not-to-regulate-ai-says-will-put-region-behind.html\"><span style=\"font-weight: 400\">warned <\/span><\/a><span style=\"font-weight: 400\">that the EU AI Act could cause Europe to fall behind the U.S. and China. As with climate change, an AI act requires multilateral commitment. 
Moreover, regulation could place a heavy burden on developers and small AI startups, who may lack the resources to comply with complex regulations. Overly restrictive policies could discourage investment in AI research and development, stalling technological progress. Finally, could any regulatory framework even keep up with such rapidly evolving technology? Any regulation is likely to become outdated quickly, ultimately proving ineffective or even harmful.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Consider the EU AI Act as a case study: it uses a risk-based approach to categorize AI systems based on their potential impact on public safety and fundamental rights. The EU Act puts stringent restrictions on high-risk systems\u2014like those used in healthcare, law enforcement, and transportation\u2014while also prioritizing citizen privacy by banning <\/span><a href=\"https:\/\/cdt.org\/insights\/eu-ai-act-brief-pt-2-privacy-surveillance\/#:~:text=Prohibited%20Practices&amp;text=The%20AI%20Act%20purports%20to,enforcement%20in%20publicly%20accessible%20spaces.\"><span style=\"font-weight: 400\">real-time biometric surveillance in public spaces<\/span><\/a><span style=\"font-weight: 400\">. While it\u2019s too early to say whether these regulations will succeed or how they will impact the EU economy and AI development, the EU has undoubtedly taken the lead in setting a regulatory precedent and offering a template for what can be achieved. This leaves the U.S. in unfamiliar territory\u2014we certainly have the resources and know-how to lead the world in addressing this challenge. 
The question is whether our already fragile political system can stand up to Big Tech and whether diplomacy could encourage other nations to follow suit (similar to how we got <\/span><a href=\"https:\/\/obamawhitehouse.archives.gov\/the-press-office\/2015\/09\/25\/us-china-joint-presidential-statement-climate-change\"><span style=\"font-weight: 400\">China to agree on climate change<\/span><\/a><span style=\"font-weight: 400\"> 10 years ago). The goal is to ensure we remain the masters of AI in the future, not the other way around.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial Intelligence (AI) is fundamentally changing our daily lives, from powering self-driving cars to solving complex healthcare problems. But as AI applications and use cases&#8230;<\/p>\n","protected":false},"author":925,"featured_media":40245,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"ngg_post_thumbnail":0,"footnotes":""},"categories":[2724,12],"tags":[2849,2206,2047],"coauthors":[4238],"class_list":["post-40244","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","category-opinions","tag-ai","tag-eu","tag-regulations"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/posts\/40244","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/users\/925"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/comments?post=40244"}],"version-history":[{"count":5,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/posts\/40244\/revisions"}],"predecessor-version":[{"id":40288,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/posts\/
40244\/revisions\/40288"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/media\/40245"}],"wp:attachment":[{"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/media?parent=40244"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/categories?post=40244"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/tags?post=40244"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/coauthors?post=40244"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}