A generative AI language model is a deep-learning-based tool that generates human-like text, typically taking an initial piece of text as a prompt. These systems are also commonly known as large language models. The most popular of these tools is GPT-3, or Generative Pre-trained Transformer 3. It is built on a state-of-the-art neural network architecture known as the transformer. Such a model can be viewed as probabilistic: it predicts the next word in a sentence given the words that precede it.
The automatic word suggestions on our smartphones typically rely on a basic word-frequency algorithm to make predictions. These neural networks, however, take many more variables into account, such as combinations of words, linguistic repertoire, contextual uses of certain words, and statistical estimates of the probability of each candidate next word, which makes them extremely powerful and robust. These models have been trained on human-written text sources such as books, online articles, academic papers, journals, and even open-source code. Training on this vast amount of text with high-performance computing makes GPT-3 one of the most powerful natural language models created so far.
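The contrast above can be sketched in code. A minimal word-frequency predictor of the kind a smartphone keyboard might use, shown here as a hypothetical illustration rather than any vendor's actual algorithm, simply counts which word most often follows the current one:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, how often each following word occurs."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Suggest the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the sofa"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # → cat ("the" is followed by "cat" most often)
print(predict_next(model, "on"))   # → the
```

A transformer model differs precisely in that it does not look at just the previous word: it conditions its probability estimate on the entire preceding context, which is what makes its suggestions so much richer.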
The purpose of a text-prediction service in a smartphone or word processor is to help humans type faster and better. However, these GPT-3-powered suggestion tools have become so capable that they can perform intelligent tasks such as summarizing an academic paper or writing a function that calculates the area of a sphere. While these abilities can be incredibly useful in certain areas, they raise ethical concerns in academics. Take student essays, for example. A student could feed their opening prose into GPT-3, and the program would generate the rest of the essay. Because that text is newly generated, plagiarism-detection tools cannot flag it. Coding assignments are another example. They are designed to help students develop and improve their cognitive abilities so that they can devise their own approaches and innovate, but generative tools can produce the required code or text, letting students sidestep that work. Relying on these tools could prevent students from developing key skills such as critical thinking, reading, and writing. Because plagiarism checkers offer no help here, schools and relevant authorities will have a tough time acting against such use. The practice could also demotivate students who put genuine effort into their assignments, or tempt them to turn to these tools for quick solutions themselves.
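To make the sphere example concrete, the kind of code such a tool can produce is genuinely trivial for it. A plausible sketch of what a model might emit for that prompt, assuming "area" means surface area (4πr²), and not any actual GPT-3 output, looks like this:

```python
import math

def sphere_surface_area(radius):
    """Surface area of a sphere: 4 * pi * r^2."""
    if radius < 0:
        raise ValueError("radius must be non-negative")
    return 4 * math.pi * radius ** 2

print(sphere_surface_area(1.0))  # → 12.566370614359172 (i.e. 4π)
```

That a correct, documented function like this can be generated from a one-line prompt is exactly why such assignments no longer measure what they were designed to measure.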
Are These Tools Helpful?
In general, students are tech-savvy and likely to adopt new technologies faster than older generations, but these models do not help them with the real work of writing. They can only predict text from previously written text; they cannot help students express their own thoughts and ideas. These models are good at generating impressive text, but they can produce just as much complete nonsense, and they often require human intervention to pick the right suggestion for a given use case. They can be helpful when a person needs to fill in a few words, but they cannot write non-trivial computer programs, complex documentation, or a play or novel that expresses genuine feeling.
Instructors must spend time crafting creative assignments that are unique to the course they are currently teaching, incorporating context, location, and time in ways that cannot be easily replicated or reproduced. These text-generation tools are among the latest technological advancements, and some have already been incorporated into word processors and email clients because they help people solve a particular problem or complete a task quickly. It is not difficult to imagine that future generations of students will use these tools, and we may see academic institutions allowing students to use them to improve their writing in some situations.