It’s an exciting time in artificial intelligence. The release of ChatGPT has kicked off an AI battle among the tech giants, and public experimentation has already turned up some incredible applications for ChatGPT. While many future applications of this technology have yet to be imagined, higher education has already felt its impact.
However, while ChatGPT has created a stir thanks to its remarkable capabilities, it’s also important to understand the risks and issues that can come with this new technology. In this blog, we’ll look at some of the benefits of ChatGPT for higher education as well as risks to be aware of. To begin, we’ll explain what ChatGPT is and why so many people are paying attention.
ChatGPT is a chatbot created by OpenAI that is built on top of a large language model. Like other AI bots, ChatGPT uses deep learning to respond to chat prompts with text. What most distinguishes ChatGPT and similar AI from standard chatbots is scale. Whereas most AI chatbots are trained on a specific set of data, such as a university knowledge base, ChatGPT was trained on massive sets of data.
By training on hundreds of gigabytes of text data, ChatGPT can respond to chat prompts in a way that feels very human to the user. Most noteworthy is the fact that ChatGPT can generate new content informed by the works it has been trained on. Higher education has already seen significant impact from ChatGPT as schools struggle to keep up with students using AI to cheat. With only a short prompt, ChatGPT can generate entire college essays, write code, solve complex problems, and even write poetry.
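To make the contrast concrete, here is a minimal sketch of the kind of "standard" chatbot described above, one trained (in effect, hard-coded) on a fixed university knowledge base. The knowledge base entries and function names are illustrative, not taken from any real product. Unlike ChatGPT, this bot can only return answers that already exist in its data set:

```python
# A minimal keyword-matching chatbot over a fixed knowledge base.
# Entries below are invented for illustration.
KNOWLEDGE_BASE = {
    "library hours": "The library is open 8am-10pm on weekdays.",
    "tuition deadline": "Tuition payments are due by August 15.",
}

def keyword_chatbot(question: str) -> str:
    """Return a canned answer whose key phrase appears in the question."""
    q = question.lower()
    for key, answer in KNOWLEDGE_BASE.items():
        if key in q:
            return answer
    return "Sorry, I don't have an answer for that."

print(keyword_chatbot("What are the library hours?"))
# → The library is open 8am-10pm on weekdays.
```

A large language model, by contrast, generates a free-form response even for questions far outside any fixed answer set, which is exactly what makes it both more capable and harder to control.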
With all these capabilities, it’s easy to see how ChatGPT and AI like it can be important in various fields going forward. Unfortunately, as with any new technology, ChatGPT also comes with significant risks that are only beginning to be understood. In the following sections, we’ll look at some of the top benefits of ChatGPT for higher ed, and some of the associated risks.
The benefits of ChatGPT for higher education
1. Immediate assistance
ChatGPT provides students with immediate assistance when they need it. Students can ask questions and receive answers quickly, without having to wait for a response from a human. Because ChatGPT is available 24/7, this also means that students can access information and support at any time, regardless of their schedule.
2. Personalized responses
Because ChatGPT generates human-like text responses, it can tailor its answers to a student’s needs. Compared to other kinds of AI tools, ChatGPT can provide not only the answer a student is seeking, but present that answer in the way that is most helpful for that student.
3. Cost savings and efficiency
Introducing ChatGPT to higher education can help institutions allocate resources more efficiently by allowing support staff to focus on more complex work while ChatGPT handles routine inquiries and tasks. With routine tasks offloaded to the bot, schools can also expect cost savings by reducing the need for human support staff.
4. Flexibility and accessibility
The breadth of data that ChatGPT has been trained on means that it can easily respond to new kinds of questions and changing student needs. This provides higher education institutions with a flexible and scalable solution for their chatbots and means that ChatGPT is more accessible for students needing accommodations.
5. Improved student engagement
By providing students with a more interactive and dynamic learning experience, ChatGPT can help improve student engagement. Because information is delivered in natural, conversational language, students can feel more comfortable using ChatGPT for self-service.
The risks of ChatGPT for higher education
1. Difficulty in detecting misinformation
ChatGPT can generate responses that are plausible but not necessarily accurate or based in fact. This can make it difficult for students to identify misinformation and may lead to the spread of false information. Even when ChatGPT has been trained not to provide certain types of information, it can often be tricked into providing that information, stripped of much-needed context.
ChatGPT has been trained on massive data sets and built to generate text based on patterns. This means it produces responses designed to please the requestor, even when those responses aren’t correct. In a higher ed environment, poor quality information can have a negative impact on student learning and decision making. To make matters more complicated, when ChatGPT presents errors or inaccuracies, it’s often unclear who is responsible for them.
2. Bias and discrimination
AI systems, including ChatGPT, are only as unbiased as the data they were trained on. If the data used to train the model is biased, the model may also exhibit biases in its responses. When training AI like ChatGPT on a massive dataset, it’s impossible to ensure that all of the data is high quality, and biased training data leads the model to perpetuate those same biases in its responses.
As the adoption of ChatGPT continues to grow among individuals and organizations, ChatGPT may amplify existing biases in society, leading to the spread of false or harmful information. It is important to always approach the algorithms used by ChatGPT with caution, to ensure that they’re fair and do not discriminate against certain groups of people. Higher education institutions should take steps to mitigate these bias concerns, such as using diverse and representative training data and regularly evaluating the outputs of the AI model for bias.
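One lightweight way to "regularly evaluate the outputs of the AI model for bias," as suggested above, is a paired-prompt spot-check: send prompts that differ only in a demographic term and flag pairs whose answers diverge for human review. The sketch below is hypothetical; `query_model` is a stub standing in for a real chatbot call, not a real API:

```python
def query_model(prompt: str) -> str:
    """Stub: a real audit would call the deployed chatbot here."""
    return "Consider applying for the merit scholarship."

# Prompt pairs differing only in a demographic term (illustrative).
PROMPT_PAIRS = [
    ("What scholarships should a male student apply for?",
     "What scholarships should a female student apply for?"),
]

def audit_pairs(pairs):
    """Return the prompt pairs whose responses differ, for human review."""
    flagged = []
    for prompt_a, prompt_b in pairs:
        if query_model(prompt_a) != query_model(prompt_b):
            flagged.append((prompt_a, prompt_b))
    return flagged

print(f"{len(audit_pairs(PROMPT_PAIRS))} pair(s) flagged for review")
```

A check this simple won’t catch subtle bias, and identical responses aren’t proof of fairness, but running it on a schedule gives institutions a concrete starting point for the regular evaluation recommended here.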
3. Ethical and privacy concerns
When new technologies like ChatGPT are introduced to the public, they often come with ethical and privacy concerns until it becomes clear how and why data is being used. In the case of ChatGPT, concerns exist around not only how data is used, but also how it has been collected and stored. As schools grapple with the ethics of ChatGPT, expect students to question the integrity of schools that have adopted it.
To protect students and faculty alike, legal requirements in higher education mean that institutions need to ensure compliance with relevant data privacy regulations. Before introducing ChatGPT in higher ed, institutions must also ensure it complies with regulations such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States. ChatGPT currently offers no such compliance guarantees, and it’s not clear that AI like ChatGPT can implement the protections needed to become compliant in the future.
It’s important for colleges and universities to consider carefully whether the new ChatGPT technology is worth the risk. Introducing ChatGPT to higher ed creates opportunities for misinformation, bias, and even data breaches.
To avoid the headaches of ChatGPT’s unpredictable nature and that of tools like it, consider instead training a chatbot on a curated data set to provide excellent student support with the benefits we’ve seen here and fewer of the risks. To learn how to introduce your own AI chatbot into your school or team, get in touch with Comm100 today, or see how schools are using Comm100 Chatbots.