
AI Tools for Teaching and Learning

The content below relates to an online course I took through California State University. I have included some of the assignments I completed, reflections on what I found, and links to some of the materials I found especially helpful. 

Module 1: AI Terms

[Image: guinea pig wearing glasses taking notes]

Assignment: If a student asks what Artificial Intelligence means, how would you describe it?

The term ‘artificial intelligence’ can be difficult to strictly define (for oneself or for others) because it can be used to refer both to a variety of things and to a type of property. I’ll break my response into two parts.

1: What kinds of things count as examples of artificial intelligence? The ‘artificial’ component of the term indicates that the things we are referring to are (non-human) computer programs developed by human beings. The ‘intelligence’ component indicates that these computer programs engage in operations that emulate a level of understanding and reasoning capacity typically had by humans. So, examples of artificial intelligence comprise computer programs whose functioning replicates human intelligence. Lofty and impressive as the term sounds, artificial intelligence already exists in our lives across a plethora of spheres, including in tools that we now take for granted. Alexa, Siri, and Google Home are common examples of artificial intelligence, but even seemingly mundane tools such as spell-check and text-to-speech software fit the bill.

2: What does it take for a computer program to have artificial intelligence? Spell-check and autocorrect clearly fit the ‘artificial’ component above, since they are undoubtedly human-developed programs. But in what sense are they ‘intelligent’? And, more pointedly, in what sense are they more intelligent than the calculator app on my phone? That’s a deep and tricky question! The idea is that, in artificial intelligence, the programs in question can ‘reason’ in a way that goes beyond simple ‘if-this-then-that’ instructions. This comes out more clearly when we think of AI programs as being capable of making predictions and, as a result, of making suggestions. My calculator app can spit out simple outputs when given simple inputs. We might think of it as having to be asked a question – ‘what is the result of 2 + 2?’ The calculator can only provide a single answer when asked this direct question. In comparison, autocorrect doesn’t need to be asked a question in the first place – it takes all of your keystrokes as inputs, and its programming guides it in making unprompted suggestions based on what it ‘thinks’ you want to say.

Importantly, ‘artificial intelligence’ is a broad term, and fully understanding its meaning requires that we appreciate the different ways in which AI can manifest. For example, some instances of AI also demonstrate machine learning – which means that the AI can use data inputs to self-regulate and improve its own programming model. To gain a deeper understanding of AI, I encourage you to look into machine learning, deep learning, and generative AI.
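To make the calculator/autocorrect contrast concrete, here is a small, purely illustrative Python sketch (my own toy example, not taken from any real product): a fixed ‘if-this-then-that’ rule that only answers the question it is asked, next to a crude frequency-based suggester that volunteers a next word based on the data it has already seen.

```python
# Illustrative sketch only: a fixed rule vs. a data-driven suggestion.
from collections import Counter


def calculator_add(a: int, b: int) -> int:
    """A fixed input-output rule: it only answers the question it is asked."""
    return a + b


def suggest_next_word(history: list[str], previous_word: str) -> str | None:
    """A toy 'predictive' step: it uses past data (words already typed)
    to volunteer a suggestion without being asked a direct question."""
    followers = [history[i + 1] for i in range(len(history) - 1)
                 if history[i] == previous_word]
    if not followers:
        return None
    # Suggest the word that most often followed `previous_word` so far.
    return Counter(followers).most_common(1)[0][0]


typed_so_far = "see you at the library see you at the cafe see you at".split()
print(calculator_add(2, 2))                    # 4 -- one question, one answer
print(suggest_next_word(typed_so_far, "at"))   # 'the' -- an unprompted guess from data
```

Real predictive-text systems are of course far more sophisticated, but the difference in kind – answering a fixed question versus making an unprompted, data-driven suggestion – is the one I have in mind above.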

Reflection

It took a lot of thought to develop this definition. I wanted the definition to be relevant to my Philosophy, Technology, and Our Future course, so I was particularly careful to accommodate the nuances explored in that course. What was most helpful was the instruction on the distinctions between AI, Generative AI, Deep Learning, and Machine Learning. These definitions have actually helped me in my own investigations in the philosophy of technology.

Credit: Adobe Firefly (image generator), prompt: 'guinea pig wearing glasses taking notes,' ultraviolet effect

Module 2: Generative AI

Assignment: Use generative AI to create one piece of content.

I used this assignment as an opportunity to develop a poster for an event I needed to advertise (the CSUB Philosophy Club Lifeboat Challenge). Initially I experimented with Stability (a free image generator), but I wasn't happy with the results so I tried Adobe's Firefly program (also free). That's where the image at the bottom of the poster comes from.

Reflection

After repeated attempts (e.g., the images used on this page), I still find it surprising how difficult it is to generate an image with this software that is accurate both to the intended content and to reality.

For example, one of many attempts at rendering an image of a guinea pig looking like a detective (intended as the Module 3 image below) resulted in this bizarre five-legged animal.

The course materials on how (as a human) to detect AI-generated images were very helpful, though this example is pretty clear from a brief look. Some helpful resources are here and here.

[Image: guinea pig holding a magnifying glass]
[Image: Lifeboat Challenge poster]

Credit: Adobe Firefly (image generator), prompt: 'guinea pig holding a magnifying glass,' bokeh effect

Credit: Adobe Firefly (image generator), prompt: 'wooden liferaft on ocean'

Module 3: AI Detection Tools

[Image: guinea pig detective]

Assignment: Create AI content and run it through at least two of the AI detectors.

I compared results from two popular (free) AI detectors (GPTZero and DetectGPT) on two different inputs. First, I input original content from one of my research papers and got the following results:

  • DetectGPT replied with 24% AI-generated. GPTZero suggested just 2% AI-generated. A huge difference!

Second, I input the same original content but inserted the following paragraph, generated by ChatGPT with the prompt 'write an essay on Shamik Dasgupta's view on grounding'.

According to Dasgupta, grounding relations are contingent upon context and perspective. What counts as grounding depends on the specific framework or theoretical perspective from which it is examined. This perspective acknowledges the diverse ways in which reality can be understood and the plurality of explanatory frameworks that may be employed.

I got the following results:

  • DetectGPT returned 53% AI-generated, whereas GPTZero returned 1% AI-generated.

The difference between the first and second results (comparing like with like) is small, but it's still worth noting that GPTZero rated the original work as more likely to be AI-generated than the doctored version. (A rough sketch of how one might automate this kind of comparison is included below.)
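For anyone who wants to repeat this comparison more systematically, here is a sketch of how it might be automated. It is only an illustration: I ran the detectors through their web interfaces for the results above, and the endpoint URLs, the x-api-key header, and the ai_probability field below are placeholders I invented rather than the documented APIs of GPTZero or DetectGPT, so check each service's documentation before adapting it.

```python
# Hedged sketch: sending the same text to two AI detectors over hypothetical HTTP APIs.
# The URLs, header, and JSON fields below are placeholders, not the services' real APIs.
import requests

TEXT = "According to Dasgupta, grounding relations are contingent upon context..."

DETECTORS = {
    # name: (hypothetical endpoint, hypothetical field holding the AI-likelihood score)
    "DetectorA": ("https://example.com/detector-a/analyze", "ai_probability"),
    "DetectorB": ("https://example.com/detector-b/analyze", "ai_probability"),
}


def score(endpoint: str, field: str, text: str, api_key: str) -> float:
    """Send the text to one detector and return its reported AI-likelihood (0-1)."""
    resp = requests.post(
        endpoint,
        json={"document": text},
        headers={"x-api-key": api_key},
        timeout=30,
    )
    resp.raise_for_status()
    return float(resp.json()[field])


if __name__ == "__main__":
    for name, (endpoint, field) in DETECTORS.items():
        try:
            pct = score(endpoint, field, TEXT, api_key="YOUR_KEY") * 100
            print(f"{name}: {pct:.0f}% AI-generated")
        except requests.RequestException as err:
            print(f"{name}: request failed ({err})")
```

Automating the calls would also make it easy to run the same text through each detector several times, which would help show how stable (or unstable) the scores are.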

Credit: Adobe Firefly (image generator), prompt: 'guinea pig detective,' ultraviolet effect

Reflection

The content from this module included eye-opening material on the dangers of using AI-detection programs (see here, here, and here). A take-home for me was that AI-detection software is currently no more accurate than humans. I even tried to instruct ChatGPT to circumvent factors that might expose its output as AI-generated with the prompt:

'Write a 500-word essay about Shamik Dasgupta's view on grounding. Add some grammar and spelling mistakes and don't be repetitive'

 

The first half of the response (for brevity) is pasted below. Notice that the 'spelling mistakes' it introduced consist exclusively of changing 'this' to 'dis,' 'that' to 'dat,' and 'the' to 'da.' I'm not sure the machine had any idea how to make the response less repetitive. If anything, it's more repetitive.


'Shamik Dasgupta, a contemporary philosopher, has made huge contributions to da study of metaphysics, especially in his exploration of da concept of grounding. Dasgupta's view offer a fresh perspective on grounding, challenging traditional notions and paving the way for a more dynamic understanding of reality. Dis essay seeks to elucidate Dasgupta's innovative approach to grounding and its implications within metaphysical discourse.
Grounding is a fundamental concept in metaphysics dat pertains to da relationship between entities or facts, where one is said to ground another if it provides the explanation or basis for its existence or nature. Dis notion aims to uncover da underlying structures or principles dat constitute reality, offering a framework for understanding da connections between different aspects of existence.


Dasgupta's perspective on grounding diverges from conventional accounts by emphasizing its dynamic and context-dependent nature. He challenges da notion of grounding as a fixed, hierarchical relation between abstract entities, proposing instead a more fluid and relational understanding.
According to Dasgupta, grounding relations are contingent upon context and perspective. What counts as grounding depends on da specific framework or theoretical perspective from which it is examined. Dis perspective acknowledges the diverse ways in which reality can be understood and da plurality of explanatory frameworks dat may be employed.


Furthermore, Dasgupta advocates for a pluralistic conception of grounding, which recognizes dat there are multiple ways in which facts or entities can be grounded. Dis plurality reflects da complexity of reality and acknowledges dat different perspectives may yield distinct insights into its nature.
Implications and Significance: Dasgupta's perspective on grounding has profound implications for metaphysical inquiry and philosophical discourse. By emphasizing da contextual and perspectival nature of grounding, he encourages a more nuanced approach to understanding reality.
One implication of Dasgupta's view is da recognition of da limitations of any single explanatory framework. Rather than seeking a universal theory of grounding, he suggests dat we should embrace da diversity of perspectives and theories dat contribute to our understanding of reality.'

 

It seems extremely problematic to me that 'spelling mistakes' are interpreted by the AI as being solely represented by altering 'this' to 'dis' and so on (in the way demonstrated above). See this article for more on the issue of inclusion in this area. I worry that there is a cultural and racial bias here and, apart from that, it's just odd that this was the output.

Module 4: AI Policies

Assignment: Craft an AI policy to include in a syllabus

I used the Pepperdine Course Policy Generator as a starting point to develop the AI syllabus policy below:

This policy governs the use of generative artificial intelligence (such as ChatGPT) in this course. 'Generative AI' encompasses any 'software that creates new text, images, computer code, audio, video, and other content' (OpenAI, 2024). You may use generative AI tools on assignments in this course under the strict constraint that they are adequately and properly cited. Any and all AI tools used for your assignments must be thoroughly documented, and detailed credit must be given to the tools themselves. When citing an AI tool, you must do all of the following:

  • give a parenthetical citation in the body of the text,
  • include a corresponding entry in your list of references that includes the name of the tool used, the date it was used, and the prompt you entered, and
  • include a corresponding appendix that presents the unaltered response/output given by the tool.

To learn more about how to cite AI tools, you can look at this website. You may also use this policy (which was created with the use of AI) as an example. Where a tool's output is used directly in the body of the text, it must be put in quotation marks, just as you would do when quoting published literature. As is always the case, a paper that includes heavy quotation is likely to be a weak one. Similarly, just as you should not include simple paraphrases of published literature in your papers, it is imperative that your final paper not include simple paraphrases of the response provided by the tool.

'If you choose to use generative AI tools, please remember that they are typically trained on limited datasets that may be out of date. Additionally, generative AI datasets are trained on pre-existing material, including copyrighted material; therefore, relying on a generative AI tool may result in plagiarism or copyright violations. Finally, keep in mind that the goal of generative AI tools is to produce content that seems to have been produced by a human, not to produce accurate or reliable content; therefore, relying on a generative AI tool may result in your submission of inaccurate content' (OpenAI, 2024).

If you use generative AI tools inappropriately, I will apply the CSUB Code of Academic Integrity as appropriate to your specific case. In addition, you must be wary of unintentional plagiarism or fabrication of data. Be aware that direct reliance on the work of others does a disservice to yourself and will certainly have detrimental consequences in the future. Even outside of the potential consequences of violating University academic integrity policies, passing a course without understanding the course's content will not serve you well as you advance in university or into the job market. Remember that there is a reason you are here and a reason you are taking the courses you are taking - the work you complete and the courses you take are designed to enrich your life and enable you to succeed in your future endeavors.

References: OpenAI (2024). Pepperdine Course Policy Generator.

Appendix: OpenAI (2024), Pepperdine Course Policy Generator, original response:

'Generative artificial intelligence tools—software that creates new text, images, computer code, audio, video, and other content—have become widely available. Well-known examples include ChatGPT for text and DALL•E for images. This policy governs all such tools, including those released during our semester together.

You may use generative AI tools on assignments in this course when I explicitly permit you to do so. If you do use generative AI tools on assignments in this class, you must properly document and credit the tools themselves. Cite the tool you used, following the pattern for computer software given in the specified style guide. Additionally, please cite all places where you used generative AI with the parenthetical (NameofTool/ProgramUsed) and a corresponding entry in the reference list which contains both i) the prompt you provided the tool with and ii) its initial response. It is imperative that your final paper not include direct quotations from or simple paraphrases of the response provided by the tool.

If you choose to use generative AI tools, please remember that they are typically trained on limited datasets that may be out of date. Additionally, generative AI datasets are trained on pre-existing material, including copyrighted material; therefore, relying on a generative AI tool may result in plagiarism or copyright violations. Finally, keep in mind that the goal of generative AI tools is to produce content that seems to have been produced by a human, not to produce accurate or reliable content; therefore, relying on a generative AI tool may result in your submission of inaccurate content. It is your responsibility—not the tool's—to assure the quality, integrity, and accuracy of work you submit in any college course.

If you use generative AI tools to complete assignments in this course, in ways that I have not explicitly authorized, I will apply the CSUB Code of Academic Integrity as appropriate to your specific case. In addition, you must be wary of unintentional plagiarism or fabrication of data. Be aware that direct reliance on the work of others does a disservice to yourself and will certainly have detrimental consequences in the future. Even outside of the potential consequences of violating University academic integrity policies, passing a course without understanding the course's content will not serve you well as you advance in university or into the job market. Remember that there is a reason you are here and a reason you are taking the courses you are taking - the work you complete and the courses you take are designed to enrich your life and enable you to succeed in your future endeavors.'

Reflection

The initial policy generator proved to be extremely helpful - more helpful than I expected. I am still examining my own psychology to work out why this is the case. The final product changed substantially from the generator's initial response (included in the appendix to the full policy above). I feel as though this gives me invaluable insight into the motivations my students might have to use AI while working on assignments for my classes. I already adhered to the position that educators should work toward incorporating and embracing these tools as part of the learning process, but this exercise gave me a more visceral sense of how these tools might ease the burden of academic work. My aim was never to lift material from the AI tool and replace my own thinking; rather, using it gave me a starting point and, more importantly, helped me to realize what I wanted to add and what I wanted to omit. I am now more strongly committed to my belief that it is my responsibility to educate students on the responsible use of AI tools, not only because I sympathize with the many motivators students may have for over-using AI (see here), but also because I have an active understanding of how valuable they can be as a positive tool in proper, conscientious education (see here and here for excellent strategies for using AI as part of a course).


Final Reflection

As a result of this course, I have developed a more nuanced understanding of what AI is, how to use generative AI, the limitations of software that detects the use of generative AI, and how to develop a thoughtful AI policy for my classes. I am especially pleased with the AI policy I developed through this course and with my improved understanding of how AI tools might be used beneficially as part of the learning process. I also appreciate the resources I have gathered, which will help guide how I develop my own interaction with AI in the future, as well as how I guide my students in a way that respects the integrity of academic practices while embracing the power of these new tools made available through AI.

