Keating: Do Great Masters Cheat When Using ChatGPT?

February 9, 2024

(This is a draft copy, not for publication or distribution.)

By Barry Keating, Ph.D., an adjunct scholar of the Indiana Policy Review Foundation and Professor Emeritus at the University of Notre Dame.

Johannes Vermeer produced about 35 works that are attributed to him with relative certainty. His most famous painting is arguably Girl with a Pearl Earring (c. 1665). Centuries after his death, Vermeer remains one of the most popular artists ever to have lived. When the National Gallery in London closed because of the pandemic, it opened its collection to online viewers; one of the 20 most viewed works during that period was a Vermeer. It is clear from the small number of his paintings and the care with which they were executed that Vermeer worked slowly and deliberately. There are some, however, who believe Vermeer “cheated.”

The criticism centers on Vermeer’s use, or alleged use, of “enhanced tools.” The only reason any criticism or notice of Vermeer’s work exists today is that he is considered one of the greatest painters of the Dutch Golden Age. Yet, along with other artists such as Hans Holbein and Diego Velázquez, he stands accused of using something other than his trained eye and a set of brushes. These three Masters are accused of using “optics” to achieve the precision that appears in their works (D. Hockney, Secret Knowledge: Rediscovering the Lost Techniques of the Old Masters, expanded edition, 2006).

When Vermeer’s workshop was emptied after his death, it contained no optical devices. However, he did know one of the first lens makers in Holland, and that same individual served as the executor of Vermeer’s estate. This suggests that Vermeer might have learned how to use optics to produce his paintings. Because his works seem to have been completed in the same room, they may have relied on optics that were not easily transportable.

In 2013, American inventor Tim Jenison was the subject of a documentary entitled Tim’s Vermeer. In it, he attempts to duplicate Vermeer’s The Music Lesson, working in a recreation of the artist’s preferred work site. After completing the reproduction, Jenison felt confident that he had demonstrated the techniques employed by Vermeer (“Tim’s Vermeer,” Wikipedia).

The optical tool that Vermeer and others are suspected of using is called a camera obscura. Although Jenison considers various techniques, he initially employed a camera obscura in his Vermeer reproduction. Some art historians dispute the idea that Vermeer’s work is based on the use of such a device, but it is worthwhile to consider the hypothesis.

A camera obscura is a darkened room with a small hole in one wall that allows light to enter and project an image of the scene outside onto the opposite wall. An optical lens may be placed in the hole to sharpen the projected image, and such lenses were in use well before Vermeer was painting in the 17th century. The image could conceivably be projected onto a canvas and used by an artist as a template for an extremely detailed painting.

The question we wish to examine here is, “If indeed he did employ the camera obscura as a tool, was Vermeer cheating?” Is it cheating to use a device of some sort to achieve an effect that others see as pleasing or useful? Should art critics discount Girl with a Pearl Earring as less pleasing if the result was “only” achieved by using an optical tool? Centuries of art critics and art lovers seem to speak with a single voice: Vermeer was a Master regardless of his technique.

Is Artificial Intelligence a Form of Cheating?

Artificial Intelligence (AI) is the development of computer systems to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Consider the use and benefit of the AI built into your automobile.

If you have a car that was purchased within the last ten years, it likely includes a sophisticated AI system that goes by many names depending on the manufacturer. Subaru calls its system EyeSight; Honda has Sensing 360; Toyota has Safety Sense; and so on. Most of these systems are very much alike: a set of data-gathering sensors coupled with a decision-making AI algorithm. Through dual front-facing color stereoscopic cameras (other systems add radar or lidar, a laser-based detection technology), EyeSight is able to “see” and sense objects (cars, motorcycles, people, horses, etc.) in front of the car. The system classifies the objects, estimates their distance, senses their movement, and finally decides whether to take some action; all of this takes place in the blink of an eye. Actions taken by the AI might include visual and auditory warnings to the driver, reduction in engine speed, or application of the brakes.

One of the most common uses of AI systems in automobiles is “intelligent cruise control.” This use of AI allows the driver to set a desired speed while allowing the system to detect vehicles (or people, animals, etc.) in front of the car and make necessary adjustments such as slowing down, speeding up, or stopping altogether. The system constantly classifies any objects in front of the vehicle, assigning them a category based on the system’s training, and then takes appropriate action.

How does the accident-avoidance system classify a situation as one that might result in an accident?  The system collects data, applies a set of rules, classifies the threat, and finally takes appropriate action. If you have ever felt your car brake automatically, you have benefitted from the AI system. 
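The collect-classify-act sequence just described can be caricatured in a few lines of Python. This is purely an illustrative sketch: the function name, inputs, and thresholds below are invented for clarity and bear no relation to any manufacturer’s actual system.

```python
# Purely illustrative: real systems fuse camera/radar data and apply
# trained models; these inputs and thresholds are invented for clarity.
def classify_threat(distance_m, closing_speed_mps):
    """Classify the situation ahead and return an action to take."""
    if distance_m < 10 and closing_speed_mps > 5:
        return "brake"       # imminent collision: apply the brakes
    if distance_m < 30 and closing_speed_mps > 0:
        return "warn"        # closing on an object: alert the driver
    return "no_action"       # nothing threatening detected
```

A real accident-avoidance system replaces such hand-set thresholds with rules learned from training data, but the shape of the decision — sense, classify, act — is the same.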

Are you cheating if you use adaptive cruise control or accident avoidance? Yes, in one sense, you are cheating. When these systems are enabled, you are not in complete control. You have transferred some of the elements of safe driving to the AI system. Many drivers cheat all the time when they drive; their AI systems are always enabled.

Now, consider taking a grade-school student aside and teaching him or her to use ChatGPT, a natural language processing tool driven by AI technology.  This is just one way to enhance the learning process and pique student interest.  

The situation we pose is the following: students have been assigned a project in ecology. Our student chooses to learn and write about jackrabbits that live in the high desert of Idaho, with special attention to how they survive harsh winters and omnipresent predators. There are full-semester college courses on how to prompt ChatGPT to get desired results, but in this example merely one type of prompt is employed, namely the “persona prompt.”

To use this prompt, our grade school student “tells” ChatGPT to take on a persona, in this instance a jackrabbit. The AI tool is instructed to answer all questions as if it were a jackrabbit. The student can then interview the jackrabbit and collect information on any topic within the jackrabbit’s domain: where and when it sleeps, as well as the dangers posed by intense desert cold and by predators. Imagine having a jackrabbit that knows everything you need for a school project and that will answer all your questions.
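A persona prompt of the kind described might read as follows. The wording is hypothetical, composed here for illustration rather than quoted from any actual lesson:

```python
# Hypothetical example of a "persona prompt"; the wording is invented.
persona_prompt = (
    "For the rest of this conversation, answer as if you were a "
    "jackrabbit living in the high desert of Idaho. Stay in character "
    "and answer my questions about where and when you sleep, how you "
    "survive the winter cold, and how you avoid predators."
)
```

The student would paste text of this sort into ChatGPT and then simply begin asking questions.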

Computers use algorithms, sets of rules designed to calculate or assess problematic situations. Suppose that, in addition to ChatGPT, we introduced the student to another AI application called Firefly, an image-generation tool. We ask this application to produce a drawing of a jackrabbit in the Idaho high desert during winter, with snowcapped mountains in the background. Within a few seconds, the AI displays 20 options from which to choose. We then request that Firefly adjust the chosen image so that the jackrabbit is larger and the mountains smaller.

Be aware of the response a student might receive upon submitting his or her ChatGPT- and Firefly-assisted jackrabbit project. Some schools have specifically prohibited any use of ChatGPT, and other schools have had to contend with instances of plagiarism resulting from uncited AI techniques. If you believe that any one of the situations discussed represents cheating (i.e., Vermeer’s alleged use of some form of camera obscura, the use of an automobile accident-avoidance system, or the use of a persona prompt in ChatGPT), then all three tools should be avoided. To clarify the nature of the alleged cheating, however, it is useful to understand how AI works by examining one of the many classification algorithms in use.

Artificial Intelligence at Work in Classification

Assume that we wish to use Artificial Intelligence to classify a set of flowers as belonging to one of three possible varieties. Ultimately, this classification algorithm could be employed to sort any population of these flowers as well as, and even more quickly than, a human. Note that this problem is very much like asking an accident-avoidance system whether or not to apply the brakes; both tasks represent a classification decision.

Our flower example lends itself to one type of AI algorithm, namely classification. A classification algorithm is only one of many types of AI algorithms. You might be inclined to ask “Is there one best classification algorithm that we could employ all the time?” That would be like asking a carpenter “Isn’t there one best carpenter’s tool that could be used all the time?” 

Any carpenter would explain that all the tools on his or her belt are “good” tools for their intended purposes. The hammer is excellent for driving nails, but the measuring tape on the same belt is useless for that task. Even within the domain of “hammers” there are sledgehammers, tack hammers, ball-peen hammers, and so on. AI algorithms work in the same manner: different types of algorithms serve different end goals, and within each type there remain many differences. One characteristic all AI algorithms share is that they are trained on large amounts of data, hence the term “big data.” These algorithms go by the names Artificial Intelligence (AI), data mining, big data, analytics, machine learning, and predictive analytics. Nuances distinguish these sets of algorithms, but they may all be thought of as rough synonyms for AI.

Figure 1 is an attempt to classify 150 plants in the Iris family; there are three possible classes to which each plant could be assigned:

Setosa Iris

Versicolor Iris

Virginica Iris

Can we develop a classification algorithm that could in the future successfully sort all irises into one of three categories based on subtle differences? This very situation gives rise to one of the first and simplest of the classification algorithms: the linear classification algorithm. 

To sort the flowers, we use two “attributes” of each plant: petal length and petal width. If, after measuring, we graph our results, the plot would look like the one presented in Figure 1.

We have identified each of the 150 plants by taking and plotting measurements of the two attributes. The cluster of blue crosses in the lower left represents the Setosas, the green dots in the center of the diagram represent the Versicolors, and the red markers in the upper right are the Virginicas. R. A. Fisher, the noted statistician who created this example, showed that a simple linear classifier (that is, a simple straight line) works very well at classification if the correct attributes are chosen (R. A. Fisher, “The Use of Multiple Measurements in Taxonomic Problems,” Annals of Eugenics, 1936). The attributes chosen in this Iris example truly differentiate the three classes; therefore, the straight-line linear classifier performs well. The linear classifiers (i.e., the straight lines) that perform best would look like those separating the three clusters in Figure 2.

 The straight lines in Figure 2 show the border between the different classes of Iris. With this information we would code a rule that matches the diagram; the beginning of the rule would look like this:
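A minimal sketch of such a rule in Python, assuming petal measurements in centimeters; the boundary values below are illustrative stand-ins, not Fisher’s actual coefficients:

```python
# Illustrative only: the thresholds approximate the separating lines
# in the figures but are not fitted values.
def classify_iris(petal_length_cm, petal_width_cm):
    # Setosa sits well apart in the lower-left cluster.
    if petal_length_cm < 2.5:
        return "Setosa"
    # A second straight line separates Versicolor from Virginica.
    if petal_length_cm + 2.0 * petal_width_cm < 8.5:
        return "Versicolor"
    return "Virginica"
```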

It is this rule that would be applied to sort new instances of flowers; once the rule has been estimated, sorting goes very quickly. Note that the coded rule matching the linear classifiers in the diagram would not perform a perfect sort. Some plants would be misclassified. That is a characteristic of classification models (and of all AI algorithms): they do not predict perfectly. They may, however, predict much better than a human, and in less time.

In his example, Fisher used only two attributes: petal length and petal width. Could we enhance the classification accuracy by using more than two attributes? The likely answer is yes. Could we enhance the accuracy by using a nonlinear classifier? Again, the likely answer is yes. Most classification algorithms have the desirable characteristic of arriving at an answer quickly once the “rules” are estimated, although some AI algorithms work more slowly. More than one algorithm may be used at a time; that combination is called an “ensemble,” and ensembles tend to be very powerful indeed. The classification mechanism in automobile accident-avoidance systems is most likely an ensemble using information gained through multiple algorithms.
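A majority-vote ensemble can be sketched in the same hedged spirit; the component classifiers passed in would, in practice, be separately trained models rather than the stand-ins used here.

```python
from collections import Counter

# Illustrative sketch of a majority-vote ensemble: each classifier
# casts a vote, and the most common label wins.
def ensemble_predict(classifiers, petal_length_cm, petal_width_cm):
    votes = [clf(petal_length_cm, petal_width_cm) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]
```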

All AI algorithms predict something; those predictions arise from the large amount of data that the algorithm has been trained on. The training of an algorithm and its application is another article entirely.

Artificial Intelligence Used as a Tool

AI has the potential to generate applications for both the private and public sectors that could significantly increase our standard of living. For example, some insurance companies presently assign each claim a probability, based on classification attributes, of being fraudulent. The claims with the highest probabilities become candidates for further and deeper scrutiny. Do these AI fraud-detection systems flag fraudulent claims perfectly? No, but they do increase the likelihood that insurance companies catch fraudsters cost-effectively, resulting in lower insurance premiums for the rest of us. The very same tools (most likely a classification algorithm or an ensemble of classification algorithms) could be used by public agencies to identify tax fraud economically.
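The flagging step this paragraph describes — rank claims by estimated fraud probability and pass the riskiest on for human review — is simple once the probabilities exist. A hypothetical sketch, with invented claim IDs and an arbitrary review threshold:

```python
# Illustrative only: in practice the probabilities would come from a
# trained classification model; the threshold here is arbitrary.
def flag_for_review(claims, threshold=0.8):
    """claims: list of (claim_id, fraud_probability) pairs.
    Return the IDs whose probability meets the review threshold."""
    return [claim_id for claim_id, prob in claims if prob >= threshold]
```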

Table 1 shows some of the types of AI applications currently used by private businesses. Interestingly, each of these common uses for AI in the private sector has a potential analog in the public sector.

Much of the internet remains unavailable to ChatGPT; even so, the information that it has access to is enormous. ChatGPT was requested to project 10 reasonable uses for AI in the public sector based on currently available applications in both the private and public sectors. Table 2 lists these potential uses for AI in the public sector as generated by ChatGPT. 

Table 2 (below): A List of Suggestions made by ChatGPT for possible uses of AI in the Public Sector.

1. Data Analysis and Insights

2. Fraud Detection and Prevention

3. Customer Service and Chatbots

4. Public Safety and Security

5. Healthcare Planning and Management

6. Traffic Management and Urban Planning

7. Education and Personalized Learning

8. Cybersecurity

9. Policy Analysis and Simulation

10. Natural Disaster Response and Management

Concerns About and Regulation of Artificial Intelligence

Artificial Intelligence (AI) should be viewed as a supplementary tool; it is one tool among many that may be used appropriately and effectively. But it is a very powerful tool.

Universities are treading very carefully in the use of AI as a learning tool because we are in the initial stages of its use. At the same time, universities seem committed to teaching students about AI and allowing students to learn how to use it to their advantage. According to an article in Notre Dame Magazine, “Artificial Intelligence is the simulation of human intelligence processes by computers using algorithms to break down vast amounts of data. The almost-instantaneous results – text, photos, videos, computer codes, music, and more – look and sound like they were produced by humans.” Further, “The University’s AI policy recommends that professors become familiar with AI tools and take advantage of learning opportunities offered on campus” (M. Fosmoe, “Students, Faculty Cautiously Embrace AI as a Supplemental Learning Tool,” Notre Dame Magazine, Winter 2024, 6-9).

One concern about AI holds little weight: the fear among some that it will replace humans and cause significant unemployment. Centuries of experience in the production of goods and the provision of services have taught us that innovations cause some jobs to disappear but create many new forms of employment. If this were not the case, unemployment would have been rising monotonically since the Industrial Revolution. That has not happened; structural or temporary unemployment does result from new tools being used, but, as night follows day, new forms of employment replace the old.

The question of regulating AI is problematic. The government has chosen in the past to regulate anything that could cause harm to third parties. We regulate automobiles, aircraft, pharmaceuticals, and a host of other productive “tools” because they could be used in harmful ways. Does AI fall into this category of regulation?

Consider just two regulatory issues. First, ownership rights need to be redefined as AI combines material from a variety of sources. Do the legal rights to use those sources belong to the originators of the material or to the AI systems that have transformed them? Second, although AI tools can be developed to protect teens from the harmful effects of social media, some Americans, including members of Congress, are calling for immediate AI regulation; the sense is that social media platforms employ AI algorithms that are addictive and that serve up harmful images and information. Governments will undoubtedly implement regulations to establish cybersecurity standards, prevent malicious use of AI, and address safety risks associated with AI applications.

It is not the case, however, that AI requires regulation to address job displacement and overall economic impact. Long-term job displacement is not a sound justification for AI regulation: in a vibrant economy with a skilled labor force, any job displacement will be temporary and in little need of protection beyond the present social safety net. Unfortunately, the government will also likely consider the effects of AI on competition and monopoly. Given the already widespread use of AI, intense competition and innovation seem more likely to diffuse market power across many sectors of the economy than to concentrate it.

If we don’t explore the uses of AI and learn how to harness its power, we risk falling behind both as individuals and as a nation.   

