Editor’s note: sorry this post was submitted late; I had midterm exams to prepare for at the time this assignment was due. -Ben
Dr. Mariel Miller hosted a fascinating guest lecture for Week 6 on GenAI usage by students, and I wanted to reflect on my takeaways here. During her lecture, Dr. Miller covered the pros and cons of GenAI for supporting student learning, a study she conducted, and a framework for using GenAI ethically.
Benefits of GenAI Models for Learning

- They’re easily accessible. Chatbots such as ChatGPT usually run on freemium business models, giving students a very low barrier to entry, with the option to pay a subscription fee for an upgraded version of the same chatbot if they want to.
- They can be customized to the needs of the individual. I mentioned this before in my reflection on Lucas Wright’s guest lecture: chatbots can be ‘typecast’ into a specific field of expertise by prompting them to specialize in your subject of choice (for example, mechanical engineering). You also don’t have to wait for the information to come to you. With the right prompting, you essentially have a 24/7 tutor that lives in your phone and can be configured to support you in whatever subject you’re taking at the time.
- They’re awesome at automating the boring bits for you. Repetitive, formulaic tasks, like solving routine math problems or breaking down French grammar, are where a chatbot can excel, saving a large chunk of the time you would normally spend on those tasks yourself.
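To make the ‘typecasting’ idea above concrete, here’s a minimal sketch of how a subject-specific tutor prompt could be assembled in Python. The function name, prompt wording, and subject are my own illustration, not anything from the lecture; the message format follows the role/content convention common to mainstream chatbot APIs.

```python
# Sketch: 'typecasting' a chatbot into a subject tutor via a system prompt.
# The wording below is illustrative -- tune it to your own subject and needs.

def tutor_messages(subject: str, question: str) -> list[dict]:
    """Build a chat-message list that casts the model as a tutor for one
    subject, asking it to guide the student rather than hand over answers."""
    system_prompt = (
        f"You are a patient {subject} tutor. "
        "Guide the student toward the answer with hints and questions; "
        "do not state the final answer outright."
    )
    return [
        {"role": "system", "content": system_prompt},  # sets the persona
        {"role": "user", "content": question},         # the student's question
    ]

msgs = tutor_messages("mechanical engineering",
                      "How do I size a shaft for torsion?")
print(msgs[0]["role"])  # system
```

A list like this would then be passed to whichever chatbot API you use; the key point is that the system message, not the model itself, is what gives you the specialist tutor.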
Challenges of GenAI Models for Learning

- Potential to violate academic integrity guidelines. Naturally, this depends on the school. While some allow the use of AI provided it doesn’t outright replace human work, others forbid it entirely. If you aren’t sure, your school typically publishes a set of guidelines on AI usage, and I recommend reading them if you can find them.
- Some people rely on them too much as a replacement for their own work and creativity. Ironically, for some people, too much AI usage can worsen their creativity rather than enhance it. Dr. Miller mentions in her lecture that people who rely heavily on GenAI often use it to get the direct answer to a question rather than to be guided toward it. In turn, this means they’re outsourcing thinking that should be done in their own heads, degrading their creativity and critical thinking as a side effect.
- AI can present misinformation based on its training data. GenAI models are typically trained on data en masse, but not everything on the internet is correct, meaning that chatbots can sometimes misinform you with full confidence that it’s real. This is called ‘hallucination,’ and it’s common among many mainstream language models. A related problem is verbatim reproduction, where the AI directly quotes lines from material it was trained on. Just last year, in 2024, Perplexity, an AI-based search engine, was accused of directly reproducing articles from the New York Post and Dow Jones, the Wall Street Journal’s parent company.
A major thing I took away from Dr. Miller’s lecture was that I needed to develop a specific kind of critical thinking around GenAI usage and awareness. This was where she presented a study she conducted with students who were using GenAI, to find out what they primarily used it for and why.

What Dr. Miller Found in Her Study
- 91% of the students she surveyed were using GenAI, and over half of them occasionally used it to break down cognitively challenging work.
- The second most popular use case was to improve students’ time management.
- The least common use case in Dr. Miller’s study was motivation and encouragement, i.e., regaining the initiative to do a task they needed to complete.
“When people are running out of time, they tend to turn to AI to help them manage their time, get back on track, and achieve the goals they want to achieve within that time frame.”
Dr. Mariel Miller
This leads us back to the competencies needed to use and understand GenAI ethically.

- Situational awareness: know the expectations and policies surrounding the use of GenAI and, if it’s approved, which tools you can safely use without violating your academic integrity.
- Create a goal that you want to achieve. You want the goal to be balanced, letting your AI model of choice guide you in the right direction while leaving the key work to be done by you.
- Adapt based on the conditions you’re given. Read the policies you’re given carefully, and decide when to use GenAI within the conditions they create.
- Tool selection. Offload the ‘boring stuff’ to the AI model, and be careful to take any information it gives you with a healthy grain of salt.
I personally use ChatGPT to enhance my own learning all the time. In the future, I’ll try using Dr. Miller’s framework myself, assessing what goal I want it to help me achieve and adapting to the circumstances I’m in. I’ll talk about this more when I write about how I’ve used ChatGPT to prepare for my midterm exams. Overall, this was an incredibly fascinating lecture, and I’d love to see Dr. Miller present in person one day.