Artificial Intelligence (AI) is an emerging field of science and technology with a promising future in countless industries. Recent releases such as ChatGPT and DALL-E have demonstrated the breadth of AI's applications. From interpreting and summarizing large amounts of text to creating images from a prompt, it seems as though these models can learn anything. However, the ever-growing potential of AI also raises some questions: What about its future? Will it ever replicate human intelligence? Will it take all of our jobs? Should I be scared of a robot uprising in the near future?
“It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.”
—Dr. Stephen Hawking on Artificial Intelligence (at the launch for the Centre for the Future of Intelligence)
Fortunately, a robot uprising is not a legitimate threat in the near future. However, AI is improving at a surprising pace, and the relationship between humans and AI therefore requires some regulation and balance. To explore this relationship, The Acronym interviewed Dr. Ben Shneiderman, a professor in the University of Maryland's Department of Computer Science. Shneiderman has conducted research in the field of human-computer interaction and has developed concepts such as the Eight Golden Rules of Interface Design and the direct manipulation interface. In 2022, he published Human-Centered AI, which extensively covers AI's future and how it can work alongside humans. In our interview, The Acronym aimed to explore the future of this relationship and how we can avoid errors that could undermine the implementation and impact of AI.
Q: After reading some of your notes and articles, it seems as if the idealistic and captivating future of AI often overshadows the reality of how artificial intelligence works and the extent of its abilities. Based on what you currently see and hear in the media, how do you envision the future of artificial intelligence and its applications?
Shneiderman: I believe that the future of AI is to amplify, augment, empower, and enhance human performance. These super-tools give people superpowers that build their self-efficacy, creativity, responsibility, and social connectedness. Ensuring human control while increasing the level of automation is the path to success.
Q: Why do you believe that the ethics behind the use of AI are essential? What would you say to someone who argues that looking at AI from an ethical perspective inhibits its true potential?
Shneiderman: Ethics are an important foundation for technology that is to be deployed for wide use or in life-critical applications. They improve the quality of your work, and your development as a thoughtful, caring, kind, and compassionate person. However, ethics need to be put into practice, so it's important to learn the human-centered social structures for the governance of technology. For instance, civil aviation is remarkably safe because of the devoted efforts of many people.
According to Dr. Shneiderman, AI is best utilized under human control. As he put it, "ensuring human control while increasing the level of automation" will ensure a successful future. Without a balanced relationship, the distinction between humans and AI blurs, making it harder to deploy AI effectively. In his Medium article "Guidelines for journalists and editors about reporting on robots, AI, and computers," Shneiderman raises concerns about recent news headlines that incorrectly imply that AI is "thinking" and acting on its own initiative. To him, these headlines fail to give proper credit to the humans who developed the AI and exaggerate the computer's abilities instead of celebrating human intelligence. Maintaining the distinction between humans and computers, in addition to crediting and celebrating human intelligence, can also provide a better picture of how future models should be designed. Artificial intelligence is an extremely beneficial tool that can do wonders when used effectively, but it is not the magical sentient machine that the media often describes.
“Will it ever replicate human intelligence?” and “Will it take all of our jobs?” seem like genuine issues we may face in the future, but they are largely fueled by the perception that AI is exceeding human capabilities and becoming self-aware. As much as some movie fanatics may want it, a line of Python code doesn't "know" that it is a piece of code, and it only runs when a human tells it to. Although recreating true intelligence in software is an enticing goal, pursuing it distracts us from AI's potential to aid humans. AI is best regarded as a powerful tool, and for the near future, that is what it will remain.
— Ishan Buyyanpragada