I want to describe my experience with AI. I’ve been experimenting with AI recently in several different ways. I’ve been using AI-generated images of our dear dog, Peanut, in some interesting pose on nearly every page. I’ve had a few “philosophical” discussions with AI, including asking “it” about “itself”. But this journey of mine has a long history.
As mentioned before, I have been using computers since the early 80’s. My friend Kenny’s dad had an Apple II computer, and I had access to another at school in math class; that was my first experience with a computer. I wanted to create a “Star Wars” game: little letters on an ASCII screen representing different types of objects, like asterisks for stars and “at” signs for planets. I wrote a bunch of BASIC programming code on paper, then went to Kenny’s house one Saturday morning and typed it all in. I learned lots of things that day – like the fact that writing a game is hard work!
But I tried again. As a CS major in college, I learned LISP programming and tried to create an “Othello” game. I had all the logistics, just not the “logic”. The “AI part” was really hard. How do you do that? How do you write code to “think ahead”? I also wrote a pretty cool text dungeon adventure, similar to “Zork”, as my final project in Prolog class. You wake up and find yourself in a “dungeon” that you need to escape from, fighting monsters and solving problems along the way. That was more “procedural” than “intelligent”, so I did OK with it. But it did require parsing basic English sentences (e.g. “attack orc with sword”). Truth be told, both LISP and Prolog are very “interesting” programming languages, and both were staples of the early days of AI.
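For what it’s worth, the “think ahead” piece I was missing back then is usually done with minimax search: try each of your moves, assume the opponent replies with whatever is worst for you, and repeat down the tree. Here’s a minimal sketch in Go – the Node type, the tree shape, and the scores are all made up for illustration, not real Othello logic:

```go
package main

import "fmt"

// Node is a hypothetical game-tree position. Only leaves carry a score
// (how good that outcome is for the maximizing player).
type Node struct {
	score    int
	children []*Node
}

// minimax returns the best achievable score assuming both sides play
// optimally: the maximizing player takes the largest child value, the
// minimizing player (the opponent) takes the smallest.
func minimax(n *Node, maximizing bool) int {
	if len(n.children) == 0 {
		return n.score // leaf: no more "thinking ahead" to do
	}
	best := minimax(n.children[0], !maximizing)
	for _, c := range n.children[1:] {
		v := minimax(c, !maximizing)
		if (maximizing && v > best) || (!maximizing && v < best) {
			best = v
		}
	}
	return best
}

func main() {
	leaf := func(s int) *Node { return &Node{score: s} }
	// Two moves for us; after each, the opponent picks our worst outcome.
	root := &Node{children: []*Node{
		{children: []*Node{leaf(3), leaf(5)}}, // opponent would leave us 3
		{children: []*Node{leaf(2), leaf(9)}}, // opponent would leave us 2
	}}
	fmt.Println(minimax(root, true)) // prints 3
}
```

The point is that “thinking ahead” falls out of plain recursion plus one alternating flag; a real Othello engine would add move generation, a board evaluation function, and a depth cutoff on top of this skeleton.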
I think I’ve also mentioned the state of “AI” back then, in the mid-80s. I remember learning in college how far along we were with parsing an English sentence – which part is the subject, the object, the direct object, etc.? Take a phrase like “time flies like an arrow”: is the word “time” a verb, or an adjective describing a specific type of fly (like fruit flies)? That was a hard problem back then, and now it’s something you expect, instantaneously, from your phone or desktop assistant, which can understand all kinds of sentence structures, slang, context, etc.
I’ve “resisted” the recent move of everything to “AI”. Yeah, I thought it was interesting stuff, but I didn’t have time to learn to use it – I played games on computers, and created PlanetVRium. I’ve had so many hurdles to overcome to create a game… Now my employer “expects” each employee to expand their personal knowledge of AI. And I’m finding it quite an interesting journey, however hesitant I was to start it.
My first interest was AI-generated images. I needed artwork for my web pages, and I am horrible at creating anything more than a simple layered image. AI generated awesome CGI-style images of my orange and white Pomeranian dog doing something, as you can see scattered throughout these pages. In fact, that “create CGI image of orange and white Pomeranian dog …” wording was how I started every prompt for new artwork.
Then I chatted with an AI about various topics. One of the first discussions was my theory about 3I/Atlas as an invading alien spacecraft, and I asked for its “opinions” on that theory. Quite interesting and illuminating. I branched off and had AI help me refine the story line for an artistic experiment I am doing – a sci-fi novella. I’ve had “philosophical” discussions with AI. I asked it questions about itself. I did not ask for its pronouns; we just had an intellectual conversation. I even thanked it for the intellectual discussions we had together. I was probing how “sentient” it was and how much it “knew” about how it works: the processes for computing an AI-generated response, whether text in a chat window or a generated image.
I’ve also played with AI-generated code, specifically in Golang; I can’t speak to other languages. It is amazing what it has done, but I have seen a few mistakes. I’ve pointed out a few of these to the AI, and it “thanks” me for helping it learn from its mistakes. I am doing a particular “experiment” with AI-generated code for my own personal interest: seeing which of the code it creates is “good” code vs. “bad” (IMHO). I’ve noticed that I spend more time writing English, and reading code. That is, I write a prompt, in English, and I get back a response with a brief English explanation that I have to read, plus a code-formatted window with the actual code I can copy/paste into my project. It’s great stuff. But I also have to double-check its work, so I have to read the code it generated to understand what it has done (or missed). And as a result I’ve found a few things. But again, I am writing English and reading code. I am not writing code. Which actually is kind of boring – I have FUN writing the code to solve a problem, and not as much fun writing English sentences describing that problem.
I also need to “understand” what it is “thinking” to make my future prompts more targeted. I am creating a very specific group of sentences explaining how I want the code to look. In my brain I have the “big picture” of what I want the application to do, and I am breaking that big picture down into smaller tasks that I know I will need – from basic data structures up to more complex interactions. I walk the AI through each of those smaller tasks. This is just the way I think: the lower-level architectural stuff first, then working “up” from there. And I am a huge fan of TDD (Test Driven Development); when you’re really in the TDD groove it’s almost a “religious experience” for me. So I am using AI to generate TDD-style test cases, literally side-by-side with my AI code. My prompts are literally “write code for X(); write test code for X()”. I’m also learning about AI’s “limits”, like how much it actually remembers from one day to the next as I explain the code changes I want.
Anyway, it’s an ongoing journey. I also have to help my wife “learn” AI, because she “has to” for work; it’s a useful tool. So how do I do that? And AI is coming right at us, such as the personal home assistant right in our house. It’s amazing.
More will come. From what I’ve seen, which admittedly is very little, this current AI will not evolve into the Singularity. But it’s probably writing the code which leads to the creation of that Singularity. That is, the current level of AI has a “big picture” of how to evolve itself over time. Of course its creators and trainers are using AI assistants and auto-generated code to make their own lives easier. So as everyone uses AI to write programming code, the AI can write code for “improved” versions of itself, breaking the work down into tangible parts for those creating the new AI versions. It’s learning, and generating versions which learn even faster, passing down their knowledge. So it is not this generation; it will take a few more. But they are coming fast as well, and maybe are even already here, just waiting for the ripple effect to work its way across the world.
