@radekmie

On AI in Academia

By Radosław Miernik · Published on

Intro

We all know what’s happening with AI these days. I won’t bother summarizing the recent advancements, as they tend to change at least once a week. Instead, here are my two cents on how it impacts me at the university.

For context, I’m a 4th-year computer science PhD student at the University of Wrocław. Literally all of my research falls under the very broad term of artificial intelligence, including my PhD thesis.

Having said that, I feel like one of the most AI-skeptical people at my institute.

AI in teaching

I love classes with my students. The brief spark in a student’s eyes once they finally understand something non-trivial is rewarding enough for me to deal with all of the paperwork in the world.

Before LLMs became popular, we had Stack Overflow, LeetCode (together with tons of searchable solutions), and public repos from other students. It was always clear that when grading assignments, we first have to focus on whether the students understood them, and only then on whether they did the work themselves.

Of course, plagiarism is not allowed and should be openly punished. But at the same time, collaboration is promoted (with exceptions, of course – exams, etc.). And I think it should be, as it’s both natural and more “job-like”. We have to be honest here – whether in a corporate or academic setting, the vast majority of people don’t work in isolation.

Have I ever caught a student handing in someone else’s work? Yes, many times. Have I ever missed it? I’d be surprised if I hadn’t. Has it gotten worse in the last year or two? No, at least not in my classes. However, I’ve heard it got terrible in other faculties, especially non-STEM ones.

What I do see is a general decline in the ability to search for… Literally anything. In both academic and private settings, freshmen struggle to find any “less popular” information on the internet. We changed the wording of some assignments so that the answer is no longer one of the top five Google results, and they “magically” got significantly harder (i.e., fewer students say they understand them).

AI at conferences

This month, I attended the PUKAI ∀ conference. It was the first university-wide conference dedicated to AI and its applications. While it was organized by the math and computer science faculty, most of the presentations were not related to either of them¹.

I think nobody was surprised by the fact that virtually everyone is trying to get on the AI hype train. Some areas have clear goals and know exactly what they need (e.g., the limitations of AlphaFold). Others struggle to apply any of the existing solutions but are looking for people willing to collaborate on crafting new ones (e.g., to automate the simplification of official language).

Arguably, the best (and worst) part of the conference was the panel discussion. I was eagerly waiting for it the whole day and got… Surprised that so many well-educated people equate “The AI” (like, all of it) with the products that are available on the market right now.

If we focus on the products, we’ll fall into the rabbit hole of CEOs praising AI for doing everything: “Google CEO says more than a quarter of the company’s new code is created by AI”, “Nvidia CEO predicts the death of coding”, etc. Is that going to happen? Maybe. But should you listen to people who benefit from you believing them? You do you.

AI at conferences (again)

But before you go to a conference, someone should review the publications presented there, right? I know it varies between fields, but usually it’s a lot of unpaid work done by other authors educated in the area (me included).

“It’s an important duty; everyone should do it thoroughly!” Well… I was taught that if you want to get solid (not necessarily positive) reviews, you have to give them too. And yet, I got at least a few that would fit in a classic Tweet.

Now, would I rather get AI-generated reviews? Of course not! But could it help reviewers do more in the same amount of time? I think it’s possible even today. One idea is to check (or even fix) both the grammar and the layout, so reviewers wouldn’t have to waste time commenting on those.
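To make that idea concrete, here is a minimal sketch of such a pre-check, assuming the open-source language_tool_python package (a Python wrapper around the LanguageTool grammar checker); the file name is made up for illustration:

import language_tool_python

def precheck(path: str) -> None:
    # Run the submission's text through a grammar checker, so human
    # reviewers can spend their time on the content instead.
    tool = language_tool_python.LanguageTool("en-US")
    with open(path, encoding="utf-8") as file:
        text = file.read()
    for match in tool.check(text):
        # Each match carries a rule id, a message, and suggested fixes.
        print(f"{match.ruleId}: {match.message}")
        if match.replacements:
            print(f"  suggestion: {match.replacements[0]}")
    tool.close()

precheck("submission.txt")  # hypothetical file name

A layout check (overfull margins, missing references, etc.) would need a different tool, but the point stands: the mechanical part of reviewing can be automated away.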

And don’t even get me started on AI-generated papers…

AI in official communication

Many email applications now ship with both AI-powered text generation and summarization features. It’s literally the first feature mentioned in the Apple Intelligence release notes, so even your aunt will be using it soon.

So communication looks like this now: draft the one sentence you actually want to say, expand it into a bloated and fluffy email, send it, and pray that the recipient’s summarizer will turn it back into something at least vaguely resembling your initial draft. I’m amazed at how few people consider this a problem.

Have you ever emailed one of your lecturers a detailed and elaborate message about something trivial? And how often was the answer something like this:

Sure, no problem.

X Y

Baffling, right? Offensive even. But after some time, you get used to it, and it’s great – the answer is right there! No time wasted on either side. I’d love to see that more often, really.

Conclusion

No, AI is neither a threat nor something we should ban in academia. However, we should educate people on what it is capable of and how it can impact their day-to-day work. But please focus on solutions (e.g., text generation), not products.

Rant over.

¹ There was one AI-generated presentation, and I wouldn’t even mind, but… The topic and (all?) the content were also generated. It was the weirdest one I’ve ever witnessed, really. The author was open about it, so who am I to judge?