By Kean Huy Alado
One day, a group of digitally inclined humans came together to build and push the limits of technology. With their ingenuity and persistence, they produced, in less than a decade, artificial intelligence (AI) capable of communicating with humans in an unfathomably realistic manner.
The history of ChatGPT began this way with OpenAI, an American organization founded in 2015 to research and develop AI. Its founders included Sam Altman, Greg Brockman, Elon Musk, Ilya Sutskever, Wojciech Zaremba, and John Schulman; Elon Musk has since exited the organization. Altman has served as CEO since 2019.
These AI assistants have been dubbed “chat AI” or “chatbots,” and they can understand a myriad of languages with a sensitivity to context, humor, and slang. You can hold a conversation with them, even on Snapchat through the app’s own built-in chatbot if you’re bored.
But for this generation’s tech-savvy college students, these bots can be a great asset, making life easier when an unfinished essay is due in a few hours. This reality worries educational institutions and has them questioning the authenticity of turned-in assignments. A few school districts across the country have moved to ban chat AI use, including New York City public schools. At the university level, including at Barry, the policy has been left up to professors to set for each of their courses.
The first iteration of OpenAI’s language model series, GPT-1 (Generative Pre-trained Transformer), was released back in June 2018. It held 117 million parameters and laid the groundwork for the word prediction of later models, trained through unsupervised language comprehension of online texts and books.
The second model of the series, GPT-2, was unveiled in February 2019 with improved text generation and a far larger parameter count, which allowed it to produce coherent, multi-paragraph responses. But it was not yet ready for the public.
Before releasing the next model, the organization vowed to continue studying AI and the ways it could be exploited. In June 2020, GPT-3 launched with greater capabilities than its predecessors, generating essays, emails, text messages, poetry, programming code, and translations between languages from simple prompts.
Two years later, last November, a fine-tuned version of GPT-3 became the model powering ChatGPT, the free public version available on the OpenAI website. However, despite the company’s efforts to limit misuse of the language AI, the public version would still induce fear among OpenAI’s top brass.
“I’m particularly worried that these models could be used for large-scale disinformation,” Altman, OpenAI’s CEO, told ABC News in March.
Coincidentally, GPT-4 came out around the same time as the interview and was integrated into ChatGPT. By April, it was trending online, creating widespread shock and awe at its power and the extent of human ingenuity.
“ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward,” said Aaron Levie, CEO of Box, a company that markets cloud storage and collaboration tools for businesses.
Google Strategy Lead Corry Wang posted on X about his experience asking the language model to produce a four-paragraph essay comparing theories of nationalism. He was left in disbelief at its quality and speed.
“We’re witnessing the death of the college essay in real-time,” he said. “Solid A- work in ten seconds.”
By challenging students’ work ethic, his post lit a lightbulb in the minds of many. ChatGPT currently holds second place among top productivity apps in the Apple App Store.
At first, many instructors across America tried to use the software itself to determine a paper’s authenticity, but, according to OpenAI’s resource page, the model’s responses vary and it cannot reliably identify its own output. This conflict sparked discussion on how to use the program ethically.
“I feel that using it in scholastic assignments is alright if you keep in mind not to have it do all the work for you,” said computer science sophomore Sean Chin Loy. “I use ChatGPT to brainstorm for projects and general outlines, as well as to summarize large articles. However, I do not use ChatGPT to generate content for submission.”
Similarly, Dr. Pedro Gonzalez, an adjunct communication professor, voiced his support for chat AI assistance in an academic setting, advocating a balance between embracing emerging technologies and maintaining academic integrity.
“We must teach about the tools and AI possibilities, like Copilot from Windows. It is stupid to be afraid of a tool,” he said. “As usual, mediocre minds fear innovation and are fascinated by the bling.”
When polled, almost two-thirds (64 percent) of 111 students said ChatGPT should be allowed in an academic setting. For the curious, The Buccaneer compiled ten common prompts to use with the chatbot for help with assignments:
1. Evaluate and explain the following equation:
2. Identify the bug in this code and explain it:
3. Explain this concept to me/Compare and contrast these concepts:
4. Tell me what the main point of this paragraph is in a few sentences:
5. Summarize this transcription of a YouTube video for me:
6. Find __ research paper(s) and their links on the topic of __:
7. Act as StoryBot (writing assistant and story organizer) and explain this:
8. Review my work and give me feedback on areas I need to improve:
9. Generate a list of titles that I could research on the topic of __:
10. Quiz me on this topic and provide answers:
It may be just a tool, but ChatGPT and similar AI require proper care and handling, much as a surgeon handles a scalpel during a risky operation.