The AI horse is out of the barn: How we approach it now will determine whether it hurts us.

Updated: Jan 23

Generative AI systems like ChatGPT, recently made available by OpenAI, are a fascinating use of emerging technology that has captured the world's attention. Search interest in "AI" hit an all-time high in December 2022 as a direct result of conversations about that system (trends.google.com). It's no longer just the "tech nerds" who know about this stuff, and to say that many in the education industry are "worried" is a colossal understatement (e.g., Downs, 2023; Warner, 2022). These systems hold a lot of promise for positively impacting our lives. Still, they challenge our notions of intellectual property and of who rightfully owns or created a piece of art, a written passage, or an idea. Certainly, those of us in the education industry will need to wrestle with whether using these tools is "cheating," whether students should be allowed to use them, and under what circumstances doing so is acceptable.

A difficult part of deciding whether using these tools counts as cheating is defining what acceptable help looks like. For example, Grammarly is an AI-powered tool for improving writing. It doesn't usually generate long passages, but it effectively suggests changes. It can be a powerful way for students who are not native speakers to participate more fully in their education, but its applications are broader than that. Most of our company uses a Grammarly license, including very competent writers. I'm using it now because it helps me produce a better final product with less effort. Helping people do a better job more quickly is exactly what good tools are for.