AI is increasingly pervasive in our everyday lives, from serving as a personal assistant to writing college essays or even enhancing student research projects. While very useful and exciting, AI also raises important questions about its implications for scientific research, ethical considerations, and the challenges and risks that young researchers must navigate. In an interview with the Acronym, Dr. Peter Dong, one of the IMSA SIR program coordinators, who leads stellar research in particle physics and has extensive experience with AI tools suited to research applications, shares his insights on these matters.
In summary, Dr. Dong recognizes the significant benefits AI brings by automating routine tasks, freeing researchers to focus on the more complex and creative aspects of their work. However, he cautions that AI's capacity to contribute may reach its limits as it moves from this "busy" work to roles requiring creativity and problem-solving. Even so, he highlights the substantial long-term implications of AI for research, stressing that students and young researchers should view AI as a tool rather than a replacement for their own curiosity and eagerness to learn new concepts. Below are excerpts from the interview that shed light on these perspectives.
Applications in Research
Dr. Dong expects AI applications in science mostly to remove busy work: the tasks humans don't want to do, such as removing noise from signals or separating signal from background. In his words, "There's so much time wasted and duplicated effort that happens at the lab right now. If we could get rid of that, that would be really great. So I hope that's the direction. You know, it's the same thing as automation and mechanization. We learned and figured that out, and in the long run, that was a huge gain for society, for humanity, to be able to mechanize stuff. So in the same way, I hope it works out."
While Dr. Dong sees AI (or expects future versions of it) as adept at handling routine work we have already done, or at catching things humans might have forgotten, he believes AI is still a long way from handling creative tasks. As he sees it, "And all it [ChatGPT or Gen AI] could do was give me vague boilerplate. And so that's what I am seeing: it breaks down when you get too specific and isn't able to handle it. It can only do simple stuff. Because what it does so far is it's good at making stuff that looks right, but not so good at the stuff that is right. And so if it's trying to come up with the design, it'll look like the right design, but it may not be what you actually wanted; it may not work very well."
Applications in the Specific Steps of the Research Process
Source collection
"Absolutely. That's what all search engines are designed for. Search engines such as Perplexity are doing the same thing as Google search, i.e., trying to remove all the difficulty in finding the sources you want."
Literature review
"Useful, but a researcher needs to be very careful. You just give AI engines a web page to read, and they will give you a good summary. And you can tell it how detailed you want the summary to be. So it's good at it. But there are some key dangers here. One is the hallucination problem [when AI generates plausible-sounding but false information]. Then, if you want it to write a literature review for you, you should have read all those papers. And you should read over what it says and make sure it all makes sense. Ultimately, you will have to double-check the references; AI is a random process. You can't trust it to be right all the time."
Report writing
"Some aspects of report writing aren't very intellectual and can be easily automated. For example, if you upload chemistry data and say, 'I did a titration; write me a lab report,' AI software will be able to do that and do a pretty good job."
But evidently, other portions of creating a report may not lend themselves well to automation.
"To me, the biggest danger is accountability: making sure what's in there is right. The discussion portion of a research paper is the hardest part; that part, I think, should never be automated."
Could It Hinder Learning?
Applying AI in research and learning poses various challenges. A common argument against AI development is that it will hinder student learning. Dr. Dong compares it to the calculator: it makes certain tasks easier, but it does not replace everything. That said, he also advises students to understand its limitations.
"There's going to be an adjustment period where we figure out how to teach students, and more importantly, where students realize what they need to know and what they don't. As long as this generation of kids like you, who are being raised with GPT, understand what GPT is and what it's not, then we're fine. And if you treat GPT as God, then we're in trouble. The problem with Skynet in Terminator 2 wasn't the AI. The problem was that someone gave AI control over nuclear weapons."
Dr. Dong also emphasized that just because AI can do certain tasks does not mean students no longer need to build those skills.
“GPT can write better than you. But you need to learn how to write. And frankly, I think that’s already where most students are. I don’t see students believing that this replaces their need to know how to write.”
General Words of Wisdom
“I think all technologies hit a wall. Notice that the vacuum cleaner was invented 50 years ago, and no one’s been able to make any real big improvements. And it’s just run slightly more efficiently, quieter, or longer battery life. Even Roomba isn’t a game changer because it can’t go up stairs. Similarly, AI will be able to find and point out interesting connections that humans might not have known, but it will still need humans to verify those.”
“AI is merely a tool – it’s like a knife. When you make a knife, you figure out safeguards: you store it in a knife block or you keep it out of reach of children. We need to stop thinking about AI as a consciousness, or an alien mind that we’re communicating with. It’s just a tool, and it’s a useful one. And we’re gonna make a few mistakes. But the biggest thing is to put safeguards on your tools.”
“It’s okay to lose some skills. For example, COBOL programming. So people, in fact, are looking at using AI to update their COBOL code base without humans, because the old COBOL programmers are retiring. That’s what we want to happen. So what will happen is some skills get lost, and it’s okay. There will be a transitionary period where you find people short on some skills, but you know what will happen? They’ll discover them, and then they’ll learn them. Because you don’t have to learn everything before you’re age 20. You can learn something when you’re 35.”
Something Worth Sharing Beyond AI
During our conversation, I gained a new appreciation for considering how impacts and potential solutions evolve when assessing challenges. Throughout our 40-minute discussion, Dr. Dong never judged challenges against the status quo; instead, he acknowledged problems and offered insights into how he expects the ecosystem to evolve. For example, he acknowledged that increasing use of AI could hinder skill development in future researchers, but he also expects that students will learn those skills when they need them. Similarly, he views hallucination in AI as a challenge, but one he sees as a bug that will be fixed once it becomes a significant constraint. In his view, an increased focus on AI will itself lead us to figure out how to manage it. The key takeaway: it's acceptable to encounter unintended consequences, provided we act on them and learn from them.