The following text is licensed under a CC BY-SA 4.0 International License. You may share it freely, even for commercial purposes, so long as you credit me and link to the license.
Humanity has once again eaten a forbidden fruit: various types of AI have made their way into the hands of the people, with much rejoicing and hatred alike. Every field has been blindsided by the capabilities of AI, especially ChatGPT.
ChatGPT is built on a large language model, which means its purpose is to process language. This applies not only to human languages such as Dutch and Japanese, but to programming languages as well. For me, that meant being able to get help very easily in the language of my choice, which is useful when you are learning both Japanese and JavaScript at the same time.
Just as it has been for other people, ChatGPT has given me both great and terrible results. While it cannot work miracles, it has managed to save me from situations that my ADHD would have otherwise exacerbated.
I have dealt with assignments called “WODs” of three types (homework, practice, and standard). All of these were timed, and with no way to get my usual time extensions, anything that saved time was like water in the desert. However, I was initially reluctant to use AI, especially on the homework. At that point, I had heard plenty of negativity about how it is “lazy” to use AI, so I tried to avoid it. Ironically, the extra human effort I was putting in led to a perfectionism that slowed me down and caused several “Did Not Finish” results. With AI, on the other hand, I often halved the time I would have otherwise taken. In the real world, you cannot let perfect get in the way of good enough, which is why, when it came to time-sensitive tasks, AI was generally quite helpful. I would have stood little chance of passing without it.
I must emphasize that there are still places where I personally did not want to use generative AI, and essays such as this one are among them. In fact, this essay is the first time I have used it in this course, with AI-generated (and human-edited) content starting at the section “Challenges and Opportunities.” On the one hand, a coding problem is mostly about knowing how to make the computer do what the human wants. If a human does not know things such as how to implement their code, what language they are using, or how their current code or language is structured, they will not know what to ask the AI and therefore cannot get good output from it. Even then, explaining the scope of a program is many times harder than explaining the scope of a piece of literature, especially when AI is involved. This means it is much more likely that a human will need to make minor tweaks to otherwise perfectly usable code than in the case of literature.
Of course, human language is quite different from computing language. Your paper will not instantly shred itself when you write “There saying their over they’re,” nor will your PDF fail to export because you typed “La Niño.” You should probably have a spellchecker, and if you are multilingual, AI can save a lot of time on the translation process, but using generative AI to write the original work, or to translate it without further refinement, feels excessive. Human languages, even constructed ones such as Esperanto, serve the purpose of communicating what the writer has to say, be it a warning against flash photography, a judge’s ruling on a court case, or even a work of fiction. In all three of these cases, the opinions a writer expresses live in the words themselves, whereas the practical functionality a coder expresses comes from how the code is arranged and compiled rather than from the text of the code itself. Therefore, while it might be fine to ask an AI for suggestions as you would a proofreader, using AI to write the essay outright would still take away from the writer’s own voice.
At the time of writing, our final project is still underway. With that said, one of my coworkers has already used AI, and is more positive about it than I am. Thankfully, I do not see this leading to conflicts, but I cannot say the same for every team like ours. Specifically, my coworker used AI to generate a logo for the project, which I thought was simply text placed on an image found on Google until I realized the text on the CPU was misspelled and misshapen. It would have looked nicer had he only generated the CPU image and then placed the text on top in MS Paint or something, but we have bigger fish to fry. Both my coworker and I will likely find ourselves using AI more and more (however, I was not interested in GitHub Copilot, because a free trial should not be asking for my credit card info).
While I did technically use ChatGPT to do this, it never occurred to me to do so actively because I was almost always too focused on something else. During the times when I was focused on the class, my attention was violently burned up by things that were worth points, leaving zero energy to come up with a plan to study the concepts. Zero energy is too little even to ask the AI to come up with the study plan for me; I would have needed to gather the list of topics myself, or else it would have been garbage in, garbage out.
In this case, I had a different reason for never using ChatGPT: the humans were right there! When answering questions, it was easier to either ask a classmate or the professor for help or look up the answer in a search engine. That way, context about the problem (and therefore the solution) was implicitly understood, and the conversation went more quickly and smoothly. This also applied to asking smart questions, since while both the humans and the AI benefited from the extra information, the humans still had a better understanding of the context of my problems.
I remember actively doing this once, but I cannot remember when, and as a result I cannot say how helpful it was in general. Remember, if I did not give what little attention I had to the parts that would impact my grade, then I would not be able to focus on the class at all.
However, I would passively do this all the time while using AI (that is, while using it for other explicit purposes) as described below.
This mostly happened while getting code from the AI to use in my projects, where the AI would explain the code it had generated and how it would solve my issue. However, there were times when I was confused about why a certain piece of code did what it was doing, regardless of whether or not it worked as intended. The few times I did request an explanation from the AI, the explanations at least proved useful.
When I used ChatGPT in this class, this was the thing I did with it the most. I turned to it for many reasons: when the documentation of whatever framework or language I was using took far too long to read (which, no thanks to my condition, happened very frequently), when I knew what I wanted to write but also knew it would be time-consuming, or when I had no idea how to even begin. While the assigned reading was important to understanding the material (and therefore how to write the code I needed), there were times when, even after playing the videos at double speed, I could not stay still long enough to process the information and still have time to do the assignment. Thus, I once again bring up the state I was in, where I hyperfocused only on what would give me points, or else I would stand no chance of getting the assignment in on time.
That did happen anyway several times, however. ChatGPT cannot cure my ADHD or even send annoying reminders when I get distracted. Missing the deadline was likelier to happen when the output was not helpful and I had to debug the code myself. Combine this with my attempts to get finishing scores on WODs even though I could have gotten full credit without them, and one gets numerous cases where, with or without AI, doing the work took an excruciatingly long time. It got to the point where it even destroyed the satisfaction one would expect after squashing a particularly nasty bug (e.g. a misplaced semicolon).
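For readers who have not had the pleasure, here is a minimal sketch of the kind of misplaced-semicolon bug I mean; this is an illustrative JavaScript example, not code from an actual WOD:

```javascript
// Illustrative example only, not code from an actual WOD.
// The stray semicolon after the for(...) header ends the loop statement right there,
// leaving the loop with an empty body. The block below then runs exactly once,
// after the loop has already finished, instead of three times.
for (let i = 0; i < 3; i += 1); {
  console.log('This prints once instead of three times');
}
```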
I did not do this at all, with or without AI. The closest I came was when comments that had already been written changed language (e.g. from English to French), because I had asked the AI for help in a language other than English and it translated the comments into that language. (This did not affect the code itself unless the text was something to be displayed to the user.)
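To make concrete what that looked like, here is a hypothetical before-and-after sketch in JavaScript (not the actual assignment code): the function stayed the same, but the comment came back in the language I had asked in.

```javascript
// Hypothetical illustration, not the actual assignment code.
// Comment as originally written in English:
//   Return the average of an array of scores
// Comment as it came back after I asked ChatGPT for help in French:
//   Renvoie la moyenne d'un tableau de notes
function average(scores) {
  return scores.reduce((sum, score) => sum + score, 0) / scores.length;
}

console.log(average([80, 90, 100])); // 90
```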
This was also an area where ChatGPT proved itself useful. A lot of the time, humans were simply unavailable, meaning I would have to run some code by the AI to ask why it did not work. It helped me understand why my code was failing almost every single time. However, the output’s usefulness varied depending on whether or not I had an explicit error message. While I would get the right explanation most of the time without one, I would get that explanation all of the time with a proper message, especially if it was related to an ESLint error. The code provided as a solution was slightly more of a mixed bag, but this was not a problem so long as the explanation of the issue was correct.
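As a concrete illustration, here is the sort of snippet and ESLint output I would paste into ChatGPT. The snippet is hypothetical and assumes a typical ESLint setup; the rule names no-unused-vars and semi are standard ESLint rules, but the code is not from the actual course projects.

```javascript
// Hypothetical snippet, assuming a typical ESLint configuration.
const greeting = 'Hello, world!';

// ESLint would report: 'unusedTotal' is assigned a value but never used. (no-unused-vars)
let unusedTotal = 0;

// ESLint (with the 'semi' rule enabled) would report: Missing semicolon. (semi)
console.log(greeting)
```

Pasting the exact rule name and message alongside the code was what made the AI’s explanation reliable.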
Since Computer Science is actually my minor, whereas my major is Japanese, one would think that I would have used AI to unite the two fields by asking for help in Japanese. While I did do that, I mostly did so in Western European languages. ChatGPT does work well in Japanese, but there were times when I was too pressed for time to convert my thoughts into Japanese structure, as seen with this example:
What was the cause of the error?
エラーの原因は何でしたか?
Also of note: code comments were less likely to be translated when asking for help in Japanese compared to Western European languages.
To be honest, an outsider looking in might say AI negatively impacted my learning. I would disagree, but I understand where that outsider would be coming from, which is why I would instead describe the impact as only slightly positive overall.
Someone who does not have ADHD might confuse my condition with good old laziness, therefore concluding that all I did was copy/paste “stolen” code when I should have watched the screencasts and come up with “original” code as a result. They would likely assume that AI gave me a way to get better grades in the course without being better at the material myself.
It is true that I could have done better by taking some kind of notes on the videos before trying to jump into the assignments. However, that does not change the fact that I did watch the videos (at double speed to keep the dead air from distracting me), yet I still could not retain the information as well as someone without my disability could. I would have had to burn extra time that I also needed not only for my computer science homework but for my Japanese assignments, which still manage to be time-consuming despite their relative ease. I also needed a better schedule to help reduce distractions, which would have given me the study time. However, taking away AI does not take away my disability. While it would have been better for me to study more, I still had points to earn in the present, accommodations or not.
AI is not only involved in computer science, but medical science as well. System76, a Coloradan computer company that exclusively sells Linux machines and offers its own distribution called Pop!_OS, interviewed a company called AI Proteins that uses System76 hardware to do its work.
As the name explicitly states, AI Proteins uses AI to simulate human proteins. The goal, as described in the interview, is to create entirely new molecules that could not be made using traditional methods, cutting a path to safer, more effective medicines and quicker cures for diseases.
Of course, such a new advancement with a new type of technology does come with challenges and risks:
“We don’t fully understand why proteins like to adopt these particular shapes, but we see bonds often have these angles for some reason, so we’d think, ‘let’s just make an energy term that says that they should be in those angles.’ And that was all done primarily on CPU and a combination of poorly understood physics and poorly understood observations mixed together. It wasn’t the most accurate.”
The following paragraphs in this section were generated using ChatGPT, then edited by me. Edits are in square brackets.
[One] challenge involves the risk of dependency on AI tools. While ChatGPT was a valuable resource, there were instances where its assistance became a crutch. Overreliance on AI for code generation and problem-solving could potentially hinder the development of crucial problem-solving and critical thinking skills, essential for a career in software engineering. [Additionally, a]lthough AI can assist in generating code, it falls short in providing comprehensive assessments of code quality. Evaluating the efficiency, scalability, and maintainability of code requires a nuanced understanding that current AI models lack. This limitation poses a challenge in guiding students toward writing not just functional but also high-quality, industry-standard code.
[On the other hand, i]ncorporating AI into software engineering education provides an opportunity to expose students to real-world problem-solving scenarios. AI tools can simulate industry challenges, [such as those faced by the AI Proteins company that System76 interviewed], allowing students to apply theoretical knowledge in practical contexts. This not only enhances their problem-solving skills but also better prepares them for the challenges they will face in professional settings. AI can also play a crucial role in providing continuous feedback and assessment. Automated code reviews, performance analytics, and targeted feedback on common mistakes can help students refine their skills and address weaknesses systematically. This not only streamlines the grading process for instructors but also ensures that students receive timely and constructive feedback to improve their coding practices.
A comparison between traditional and AI-enhanced education can be made in general before narrowing it down to software engineering in particular, and this comparison comes courtesy of Duolingo. The company is chiefly known for its wide array of self-study language courses, from widely spoken languages such as Spanish to less common ones such as Hawaiian. Even before the company applied AI, the self-study approach already offered more freedom in when one could do one’s lessons, as well as more options for learning specific languages for people who might not have been able to study a given language at school.
The company has recently expanded in two ways. The first is by implementing AI into its French and Spanish courses for English speakers. GPT-4 lets the user have conversations either with the mascot Duo, to get clarification on why their response was right or wrong, or with one of the other Duolingo characters, to roleplay conversations that would take place with humans in countries where French and Spanish are spoken. This allows the use of less “textbookish” language compared to previous iterations of the same idea.
What does this have to do with software engineering? That is where Duolingo’s second expansion is relevant: the company is no longer limited to languages. It is applying its current self-study approach, and all the benefits that brings, to math and music.
While Duolingo’s math and music courses may be too new for AI to be implemented into them in the same way it has been for French and Spanish, making that connection is by no means a stretch. (Otherwise, Duolingo would not have expanded beyond languages at all to begin with.) It would be reasonable to bring the combination of AI feedback and Duolingo-style self-study to software engineering. Learning the difference between “tu es” and “vous êtes” is similar to learning the different types of HTML tags, and thus having a conversation in Lille is similar to working on a web project.
As with any change to any kind of routine, there will be both pleasures, as described above, and growing pains, as follows.
One growing pain is the centuries-old cancer known as copyright. The idea that one can “own” (and by extension, “steal”) an infinitely reproducible resource in the same way one owns a house was already nonsensical before the VHS, let alone the internet or especially AI. Worse yet, the specifics vary from country to country in a globalized world. For instance, the UK allows prompters to hold the copyright to AI generations, whereas the US disallows any type of copyright for non-human authors. Even then, the situation worldwide is changing as the art world, legitimately afraid of having its jobs automated yet still deluded by the sinister hands of media giants, demands stronger intellectual monopolies, as if that will save human art or even fix AI’s legitimate problems. For software engineering education, this means needing to learn more about copyright law, potentially internationally, as well as making sure that only permissively licensed or public domain code is used (assuming civil disobedience is not an option).
Another growing pain is the set of very real ethical problems. When AI makes coding easier for everyone, that is not limited to the morally upstanding. Photoshop already lit the match by allowing one to paste the heads of people someone dislikes (including children) onto naked bodies; this is most charitably described as severe defamation. AI threw gas onto the fire by making such images significantly easier to produce, as well as allowing other misuses of someone’s identity. It is now easier than ever to clone someone’s voice and generate videos, allowing things such as convincing fake videos of Joe Biden announcing a draft of American soldiers to fight on the ground in Ukraine. Even if the AI itself is not the issue, something will need to change as AI makes its way into our lives. Ethics will have to be taught alongside AI use, and the clearest possible window into bad ethics must be shown, along with its gruesome consequences.
The intrigue of AI speaks for itself… quite literally; I have asked the AI to conclude my essay for me to perfectly illustrate how it is such a double-edged sword.
“The integration of AI, especially ChatGPT, in my software engineering course has been a journey marked by both challenges and opportunities. From time-sensitive assignments to the final project, AI has played a multifaceted role in shaping my learning experience. Its impact on my ability to comprehend, generate, and apply code has been significant, given the unique context of living with ADHD.
While AI, particularly ChatGPT, has proven to be a valuable asset, it’s essential to recognize its limitations. Overreliance on AI can risk becoming a crutch, hindering the development of critical problem-solving skills. The inability to comprehensively assess code quality is another challenge, emphasizing the need for a nuanced understanding that AI currently lacks.
On the flip side, integrating AI into software engineering education presents a host of opportunities. Exposing students to real-world problem-solving scenarios, simulating industry challenges, and providing continuous feedback are invaluable aspects. Automated code reviews, performance analytics, and targeted feedback contribute not only to streamlined grading processes but also to the systematic improvement of students’ coding practices.
Reflecting on my personal experience, AI has been a time-saving tool, especially when dealing with coding challenges and assignments. However, its application has been selective, with a recognition that certain aspects, such as essay writing, demand a more nuanced human touch.
Looking ahead, the optimization of AI integration in future courses requires a balanced approach. It’s crucial to maintain a focus on skill development, ensuring that students cultivate problem-solving and critical thinking abilities alongside AI assistance. Incorporating AI into practical, real-world applications within the curriculum, as demonstrated by AI Proteins, can enhance the relevance of the learning experience.
Furthermore, a comparative analysis with other fields, such as language learning, suggests that the self-study approach augmented by AI feedback can be a powerful model for software engineering education. The adaptability and flexibility offered by AI align with the evolving nature of the field, providing a dynamic learning environment for students.
As we navigate the future considerations, challenges like copyright issues and ethical concerns must be addressed proactively. Education on AI ethics and a clear understanding of copyright laws, especially in a globalized context, become essential components of the curriculum.
In conclusion, the AI integration in software engineering education is a transformative journey, offering a spectrum of benefits and challenges. A thoughtful and strategic approach, keeping human skill development at its core, will pave the way for an optimized and effective integration of AI in future courses.”