Are you any good at solving jigsaw puzzles?
There is a kind of jigsaw puzzle that is vexing those within the field of AI and that, if solved, could immensely advance our understanding of how generative AI works and perhaps even provide insights into how human minds work. I am referring to a complex jigsaw puzzle of tremendous importance and one that right now is exasperatingly difficult to solve.
Some might insist it is unsolvable.
In today’s column, I will share with you the intricacies of this puzzling AI conundrum. My earnest aim is to point you toward viable ways that you can aid in deriving potential solutions. We need all hands on deck for this. Thanks, in advance, for potentially volunteering to help on a rather grand quest.
The circumstance entails how it is that generative AI so ably produces seemingly fluent essays and carries on human-like interactive dialogues. You might be under the impression that AI insiders know precisely how generative AI does such an awe-inspiring job. Regrettably, you would be incorrect in that assumption. As I have covered in a prior column, nobody can say for sure how generative AI truly works, see the link here for details on this beguiling problem.
I’d like to clarify that when I say that nobody can say for sure how generative AI works, this is a stark statement about the logical manner in which generative AI works. It is readily possible to mechanically identify how generative AI works, nearly easy-peasy. The real problem is identifying the reasoned basis or logical underpinnings of what is going on.
To explain that key difference, I’ll need to first walk you through some crucial background about generative AI. Let’s do that. Once we’ve gotten the cornerstones in place, we can dig into the conundrum or puzzle and also consider a recently announced approach by OpenAI, the maker of the widely and wildly popular ChatGPT generative AI app, which might serve as a means of laying open this intriguing and vital enigma.
Hang onto your hat for an exciting ride.
Setting The Stage About Generative AI
Generative AI is the latest and hottest form of AI and has caught our collective rapt attention for being seemingly fluent in undertaking online interactive dialoguing and producing essays that appear to be composed by the human hand. In brief, generative AI makes use of complex mathematical and computational pattern-matching that can mimic human compositions by having been data-trained on text found on the Internet. For my detailed elaboration on how this works see the link here.
The usual approach to using ChatGPT or any other similar generative AI such as Bard, Claude, etc. is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit remarkable, and the seemingly fluent nature of those AI-fostered discussions can at times be startling. The reaction by many people is that surely this might be an indication that today’s AI is reaching a point of sentience.
To make it abundantly clear, please know that neither today’s generative AI nor any other type of AI is currently sentient.
Whether today’s AI is an early indicator of a future sentient AI is a matter of highly controversial debate. The claimed “sparks” of sentience that some AI experts believe are showcased have little if any ironclad proof to support such claims. It is conjecture based on speculation. Skeptics contend that we are seeing what we want to see, essentially anthropomorphizing non-sentient AI and deluding ourselves into thinking that we are a skip-and-a-hop away from sentient AI. As a bit of up-to-date nomenclature, the notion of sentient AI is also nowadays referred to as attaining Artificial General Intelligence (AGI). For my in-depth coverage of these contentious matters about sentient AI and AGI, see the link here and the link here, just to name a few.
Into all of this comes a plethora of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing coverage of AI Ethics and AI Law, see the link here and the link here.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.
With those foundational points, we are ready to jump into the details.
Making Use Of Artificial Neural Networks
I mentioned moments ago that the core of generative AI consists of a complex mathematical and computational pattern-matching capacity. This is usually arranged in a data-structured fashion that consists of a series of nodes. The parlance of the AI field is to refer to the nodes as part of an artificial neural network (ANN).
I want to be abundantly clear that an artificial neural network is not at all on par with the biological neural network that we have in our heads. The artificial neural network is merely a data structure that was inspired by attempts to figure out how human brains function and that only tangentially parlays off the same precepts.
I say this because I find it worrisome and quite disturbing from an AI Ethics perspective that many AI researchers and AI scientists tend to blur the line between artificial neural networks of a computational bent and the biological or wetware neural networks that sit inside our noggins. They are two completely different constructs. Lazily comparing them or subliminally using akin terminology is misleading and sadly another disconcerting form of anthropomorphizing AI, see my explanation about this at the link here.
We generally all realize nowadays that our brains are made of an array of neurons that interconnect with each other. These are the elements of what I would consider a true neural network. To me, when someone refers to a neuron, I immediately assume, as do most people, that the reference indicates a living neuron of a biological nature.
For an artificial neural network, you can construe a data-based node as essentially the “neuron,” even though it is not truly equivalent to a biological neuron in any semblance of what a biological neuron fully encompasses. I find it useful to refer to these as artificial neurons, rather than plainly just saying they are neurons. I think it is clearer to reserve the solo word “neuron” for discussing the neural networks of our brains, and not muddle things by using that same solo word when referring to mathematical or computational ones. Instead, I would stridently depict them as artificial neurons.
Glad we settled that nomenclature concern.
Here’s roughly what takes place in an artificial neural network.
A computer-based data structure making use of an artificial neuron or node will have numeric values fed into the construct, which then mathematically or computationally calculates things, and then a value or set of values is emitted from the construct. It is all about numbers. Numbers come into an artificial neuron. Calculations take place. Numbers come out of the artificial neuron.
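To make that concrete, here is a minimal sketch in Python of a single artificial neuron. The weights, bias, and sigmoid activation function are invented placeholders purely for illustration, not values drawn from any actual generative AI.

```python
import math

# A single artificial neuron: numbers in, a calculation, numbers out.
# The weights, bias, and sigmoid activation are illustrative placeholders.
def artificial_neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias  # weighted sum
    return 1.0 / (1.0 + math.exp(-total))  # squash into the range 0 to 1

# Two numbers flow in, one number flows out.
print(artificial_neuron([0.45, 0.23], [0.8, -0.5], 0.1))
```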
We then connect many of these mathematical or computational nodes into a large array or extensive network of them, ergo referred to as an artificial neural network. Oftentimes, there might be thousands upon thousands of those nodes, possibly millions or billions of them. An additional consideration is that these nodes or artificial neurons tend to be grouped into various levels or layers. We might have a bunch of them at the start of the structure. Those then feed into another bunch that we say are at the next or second level. Those in turn feed into the next or third level. We can keep doing so for however many levels seem useful in devising the structure.
Generative AI tends to then have an underlying array of these mathematical or computational nodes arranged into what is commonly said to be an artificial neural network. This in turn is arranged typically into various layers. The data training of generative AI involves establishing the calculations and such that will take place within the artificial neural network, based on pattern-matching of scanned text across the Internet.
Consider briefly how this works.
When you enter your text prompt into generative AI, the words you’ve entered are first converted into numbers. These are known as tokens or tokenized words. We might, for example, assign the word “Leaping” the token number 450, while the word “frog” gets the token number 232. Thus, if you enter as a prompt the two words “Leaping frog” this gets converted into the respective set of two numbers consisting of the number 450 followed by the number 232.
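Here is a toy sketch of that tokenization step. The word-to-number mapping is made up to match the running example; real generative AI uses vastly larger vocabularies and typically tokenizes subwords rather than whole words.

```python
# A made-up token mapping matching the running example. Real tokenizers
# cover tens of thousands of subword tokens, not a four-word vocabulary.
TOKEN_MAP = {"Leaping": 450, "frog": 232, "Landed": 149, "safely": 867}

def tokenize(text):
    return [TOKEN_MAP[word] for word in text.split()]

print(tokenize("Leaping frog"))  # -> [450, 232]
```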
Now that your entered words or text have been converted into a set of numbers, those numbers are ready to be fed into the underlying artificial neural network. Each of the nodes that are utilized will then produce further numbers that flow throughout the artificial neural network. At the end of this flowing set of numbers, the final numeric set will be converted back into words.
Envision that all words or parts of words have designated numeric values for use within the generative AI inner workings.
Recall that we earlier pretended that you entered “Leaping frog” which was converted into numeric values or tokens consisting of 450 and 232. Assume that those numbers flow into the artificial neural network. Each node so encountered uses those numbers to make various calculations. The calculated results flow into the next series of artificial neurons. On and on this proceeds, until reaching the outbound set of artificial neurons. Imagine that the generative AI responds to or generates the numbers 149 and 867. But, rather than showing you those numbers, they are converted into a text output consisting of the words “Landed safely” (i.e., the word “Landed” is the number 149, and the word “safely” is the number 867).
What you saw happen was this:
- You entered: “Leaping frog”
- Generative AI responds: “Landed safely”
We will now look under the hood and see what actually transpired. I am taking you into the kitchen so you can see how the meal is made. Steady yourself accordingly.
What took place behind the scenes was this (a toy code sketch follows the list):
- You entered: “Leaping frog”
- The text gets converted into numeric tokens of 450 followed by 232.
- These numbers begin to flow throughout the artificial neural network.
- Nodes or artificial neurons receive various numeric values, make calculations, and pass along newly devised numeric values.
- Eventually, this numeric Rube Goldberg confabulation produces a final set of numeric values.
- The final set of numeric values in this case are 149 and 867.
- Those two numbers or tokens get converted into words.
- Generative AI responds: “Landed safely”
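As a recap, here is a toy end-to-end sketch of those behind-the-scenes steps. The “network” is a stand-in stub that simply emits the output tokens from the running example; in a real generative AI, that step would be millions or billions of learned numeric calculations.

```python
# Toy end-to-end pipeline mirroring the steps listed above.
TOKEN_MAP = {"Leaping": 450, "frog": 232}
ID_TO_WORD = {149: "Landed", 867: "safely"}

def toy_network(token_ids):
    # Stand-in for the vast flow of numbers through the artificial neural
    # network; here it simply returns the output tokens from the example.
    return [149, 867]

prompt_tokens = [TOKEN_MAP[w] for w in "Leaping frog".split()]  # [450, 232]
output_tokens = toy_network(prompt_tokens)                      # [149, 867]
print(" ".join(ID_TO_WORD[t] for t in output_tokens))           # Landed safely
```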
That is roughly how things work at a 30,000-foot level (maybe beyond that). I hope you are sufficiently comfortable with that simple overview of artificial neural networks because it is the crux of what I am next going to cover about the jigsaw puzzle awaiting us all to solve.
The Jigsaw Puzzle Of Generative AI
I’ve just discussed that you might enter as a prompt “Leaping frog” and that generative AI might produce as a response “Landed safely”.
If you wanted me to trace laboriously through the artificial neural network of the generative AI, I could tell you exactly which numbers went into each of the artificial neurons or nodes. I could also tell you precisely which numbers flowed out, going from each artificial neuron to each other one, and ultimately led to those generated words “Landed safely”. This is a straightforward aspect of mechanically tracing the flow of numbers. Not much effort is required other than being somewhat tedious to trace.
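To show what such a mechanical trace looks like, here is a minimal sketch that runs a tiny invented two-layer network and logs every number entering and leaving each node. The weights are arbitrary; the point is that the trace itself is easy, whereas a logical explanation of why those numbers produce that output is not.

```python
import math

# A tiny invented two-layer network: each node is (weights, bias).
LAYERS = [
    [([0.01, -0.02], 0.1), ([0.03, 0.04], -0.2)],  # layer 1: two nodes
    [([0.5, -0.5], 0.0)],                          # layer 2: one node
]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

values = [450.0, 232.0]  # the token values flowing in
for layer_num, layer in enumerate(LAYERS):
    outputs = []
    for node_num, (weights, bias) in enumerate(layer):
        out = sigmoid(sum(v * w for v, w in zip(values, weights)) + bias)
        # The mechanical trace: every input and output is fully visible.
        print(f"layer {layer_num} node {node_num}: in={values} out={out:.4f}")
        outputs.append(out)
    values = outputs
```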
Here’s the rub.
Amidst all that byzantine flowing of numbers, can you logically explain why it is that the entered prompt of “Leaping frog” led to the final output of “Landed safely”?
The answer today is that by and large, you cannot do so.
There is no readily available scheme or indication of the logical basis for the transformation of the words “Leaping frog” becoming an output consisting of “Landed safely”. Again, you can trace the numbers. That though doesn’t especially help you explain the logical basis for why those two inputted words led to the generative AI producing the resultant other two outputted words.
Think of it this way. You use generative AI and ask it to tell you about Abraham Lincoln. A resulting essay is generated that seems like a pretty good telling of Lincoln’s life. The artificial neural network was initially data trained by scanning text across the Internet and within that text there were undoubtedly a lot of essays about Lincoln. Your prompt that asks about Lincoln will flow through the artificial neural network, tapping along the way the elements that presumably pertain to Lincoln, as earlier codified during data training and numerically encoded, and produce the resultant essay.
This all seemingly happened by all manner of numeric rumbling and cranking. What you cannot discern is whether perhaps this was also somewhat logically done. Did this consist of first considering Lincoln as a child and then his later becoming President Lincoln? Or did this consist of starting with his having been President Lincoln and then going back to when he was a child?
Can’t say.
Allow a quick analogy.
As humans, we tend to expect that people can explain how they came up with their stories or ideas. Explanations are expected of us each day. Why did you drop that skillet? Because it was hot, you might say in response. Or you might say because it was too heavy to hold. These are logical indications. If you cannot proffer a logical indication, we tend to get worried and at times suspicious of how you derived an answer or took some action.
I write quite a bit about AI and the law. The notion of logic and explanations is replete within the law and the rule of law. You can readily see this in our judicial system and our courts. People need to logically explain what they did. Juries expect to hear or see what the logic was. Judges try to keep things straight by being logical and apparent. We have laws that require us to behave in seemingly logical or logic-based ways. Etc.
On the face of things, we rely as a society on explanations and logic.
Generative AI is currently being used by millions of people worldwide, and yet we really do not have a means to logically say what is taking place inside of the generative AI. It is an enigma. The best we can do right now is trace the numeric values. There is a humongous logic-reasoning gap between being able to see that this number or that number went into the artificial neural network of the generative AI and that these other numbers came out.
How did this occur in any logically explainable fashion, beyond a purely mechanistic viewpoint?
Smarmy users of generative AI are bound to say that they do ask their generative AI app to explain what it is doing. Sure enough, the generative AI will provide you with a seemingly full-on, word-based logical explanation. Problem solved, you exclaim with glee.
Sorry, you are having the wool pulled over your eyes. The problem is that the explanation the generative AI generates about what the generative AI was doing is, well, yet another fanciful concoction. You have no means of ascertaining that the AI-generated explanation has anything at all to do with the actual internal flowing of the numbers. It is once again a contrived explanation.
Makes your head spin.
Not wanting to go on a side tangent, but it is possible to make the same or similar argument about humans. I loathe doing so at this point of this discussion since it might seem as though this is anthropomorphizing the generative AI by comparing it to humans. Put that aside. All I am saying is that when you ask someone to explain their reasoning, we certainly can be doubtful that they are self-inspecting their biological neurons and interpreting what the wetware in their heads was doing. The odds seem more likely that they are thinking of what logical explanations are suitable or feasible, based on their lived experiences. I’ve covered that elsewhere, see the link here.
Let’s get back to the problem at hand.
We have this massive jigsaw puzzle of all these artificial neurons or nodes that are doing the work in the plumbing of generative AI. If you were trying to piece together a jigsaw puzzle that was scattered on a tabletop, what would you do?
I dare say that you might inspect each of the jigsaw puzzle pieces and attempt to see how the particular piece seemed to fit within the overall puzzle. You would likely find various pieces that seem to go together in that they portray some notable segment of the entire puzzle. A lot of people use that technique. You are logically trying to figure out where they go and what purpose they serve in the bigger picture of things. Work on this flower over here. Work on that bird that is over there. Those subsets are then ultimately brought together to try and piece out the entire puzzle.
I’m betting you’ve tried that approach.
Suppose we tried the same theory when seeking to derive the presumed logic underlying generative AI and its artificial neural network that does the heavy lifting. Here’s how. We might look at the pieces individually, namely the nodes or artificial neurons. In addition, let’s try to group them as to an assumption that various nodes (or pieces) will depict some larger overarching conception.
One knotty issue is that if the artificial neural network has zillions of artificial neurons, we would be at our wit’s end trying to look at each node or piece. It is just too big. Have you tried doing a conventional jigsaw puzzle of 10,000 pieces? Daunting. In the case of generative AI, we are dealing with millions and billions of pieces or nodes. Overwhelming and impractical to do by hand.
Aha, you might be cleverly thinking, could we use an AI-based tool to help us delve into generative AI so that we can figure out what logically might be happening?
That might do the trick.
And indeed OpenAI, the maker of ChatGPT, has recently made available tools for this purpose. They used GPT-4, which is their successor to ChatGPT, and have put together a tool suite for trying to dive into generative AI apps. You can find this described on the OpenAI website, along with the tools being posted on GitHub, a popular coding repository.
Here’s what their recent research paper says about this situation:
- “One simple approach to interpretability research is to first understand what the individual components (neurons and attention heads) are doing. This has traditionally required humans to manually inspect neurons to figure out what features of the data they represent. This process doesn’t scale well: it’s hard to apply it to neural networks with tens or hundreds of billions of parameters. We propose an automated process that uses GPT-4 to produce and score natural language explanations of neuron behavior and apply it to neurons in another language model” (paper entitled “Language Models Can Explain Neurons In Language Models” by Jan Leike, Jeffrey Wu, Steven Bills, William Saunders, Leo Gao, Henk Tillman, Daniel Mossing, May 9, 2023).
The approach consists of first identifying which generative AI app you want to try and examine. This is referred to as the Subject Model. Next, via the use of GPT-4, a second model is devised that tries to explain the Subject Model. This second model is referred to as the Explainer Model. Finally, once there is a logical explanation concocted that might or might not be applicable, a third model is used to simulate whether the explanation seems to work out. The third model is known as the Simulator Model.
In short, there are three models (as noted in the research paper):
- 1) Subject Model: “The subject model is the model that we are attempting to interpret.”
- 2) Explainer Model: “The explainer model comes up with hypotheses about subject model behavior.”
- 3) Simulator Model: “The simulator model makes predictions based on the hypothesis. Based on how well the predictions match reality, we can judge the quality of the hypothesis. The simulator model should interpret hypotheses the same way an idealized human would.”
In addition, the tool works based on three stages, which I’ve somewhat conveyed above. A schematic code sketch follows the list below.
The indicated three stages are (as noted in the research paper):
- a) Explain: “Generate an explanation of the neuron’s behavior by showing the explainer model (token, activation) pairs from the neuron’s responses to text excerpts”
- b) Simulate: “Use the simulator model to simulate the neuron’s activations based on the explanation”
- c) Score: “Automatically score the explanation based on how well the simulated activations match the real activations”
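To make the loop tangible, here is a schematic sketch of those three stages. The explainer and simulator calls are stubbed out as hypothetical functions (in actual use, these would be calls to an LLM such as GPT-4), the activations are invented, and the score is a plain correlation between simulated and real activations, in the spirit of the paper’s scoring rather than a reimplementation of it.

```python
from statistics import correlation  # Python 3.10+

def explainer_model(token_activation_pairs):
    # Hypothetical stub: an explainer LLM would read (token, activation)
    # pairs and propose a natural-language hypothesis about the neuron.
    return "fires on words related to food"

def simulator_model(explanation, tokens):
    # Hypothetical stub: a simulator LLM would predict an activation for
    # each token given only the explanation text.
    return [0.9 if t in {"cookie", "cake"} else 0.1 for t in tokens]

tokens = ["cookie", "cake", "river", "lamp"]
real_activations = [0.8, 0.7, 0.1, 0.0]  # invented for illustration

explanation = explainer_model(list(zip(tokens, real_activations)))  # Explain
simulated = simulator_model(explanation, tokens)                    # Simulate
score = correlation(simulated, real_activations)                    # Score
print(f"{explanation!r} scored {score:.2f}")
```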
A person wanting to examine a generative AI app and the use of its devised artificial neural network can use the tool to try and figure out what might be taking place logically within the morass of the artificial neural network. Keep in mind that this is all essentially guesswork. There isn’t any iron-clad proof that the logical explanation you might propose or “discover” is indeed what is taking place.
I’ll quickly give you a concrete example so that you can hopefully better grasp what this is about. The example is one of several mentioned in the research paper.
Imagine that you are entering a prompt into a generative AI app. You decide to enter the word “Kat” and want to see what the generative AI emits in response to that prompt. Mull this over. What comes to your mind when you see the word “Kat”? I would assume you might tend to think of the famous Kit Kat chocolate bars.
Mechanically, we know the flow of what will take place. The generative AI will take the word “Kat” and turn it into a numeric value, its token. The numeric value will ripple throughout the artificial neural network. Assume that the artificial neural network has been subdivided into various layers. Each layer contains various collectives of artificial neurons or nodes.
Using GPT-4 and the tool suite, envision that an attempt is made to try and guess what is logically happening related to the input of “Kat” as it progresses throughout the layers.
Suppose we get this series of guesses:
- Token: “Kat”
- Layer 0: “uppercase ‘K’ followed by various combinations of letters”
- Layer 3: “female names”
- Layer 13: “parts of words and phrases related to brand names and businesses”
- Layer 25: “food-related terms and descriptions”
Let’s discuss each of the layers and the logic-seeming guesses about what is happening.
At the initial layer, numbered as layer 0, all that is potentially happening with those artificial neurons is that the word “Kat” has been mathematically or computationally parsed as a capital letter “K” followed by a combination of additional letters.
That obviously doesn’t provide much of a logic-based analysis.
At the third layer, perhaps the artificial neurons are mathematically and computationally classifying the “Kat” as potentially being a female name. This might be logically sensible. After having been data-trained on text across the Internet, the chances are that “Kat” has appeared with some frequency as a female name.
At layer 13, it could be that the artificial neurons are mathematically and computationally classifying the “Kat” as a potential brand name or business name. Again, this seems logical since Kit Kat as a brand or business was undoubtedly found in the vast Internet text used for data training.
Finally, at layer 25, the artificial neurons might be mathematically and computationally classifying the “Kat” as a food item. Logically, this makes sense since Kit Kat is abundantly mentioned on the Internet as a snack.
Ponder this thoughtfully for a moment.
I trust that you can see that we are seeking to uncover within the mathematically dense forest of the artificial neural network a semblance of what might be logically taking place when attempting to computationally process the entered word “Kat” via the generative AI.
Does the prompt entailing the word “Kat” necessarily have to be referring to the food item Kit Kat?
Not necessarily.
The other words used in the prompt, if any, would likely be a further statistical indicator of whether the Kat is referring to Kit Kat versus a person’s name, or maybe having some other usage entirely. This example was notably simplistic since it involved just that one entered word. The attempt to analyze a prompt is more complicated since the other contextual words matter too, as does an entire written conversation that might be taking place and the context therein.
You have to start someplace when trying to solve a large problem. The same goes when trying to solve jigsaw puzzles.
A bit of a hiccup though is once again the size issue. Trying to do this on a generative AI app that might have millions or billions of artificial neurons or nodes is something we would aspire to eventually sensibly undertake. For right now, the belief is that it might be best to see if this can be applied to generative AI apps of modest sizes. Crawl before we walk, walk before we run.
OpenAI opted to use GPT-4 and its devised augmented tool suite to examine an earlier forerunner of ChatGPT, a version known as GPT-2. It is much smaller in size and much less capable. The upbeat news is that it has around 300,000 artificial neurons or nodes, thus being sizable enough to be worthy of experimentation, and yet not so oversized that it is completely onerous to examine.
Here are two quick excerpts from the OpenAI research paper about this:
- “We are open-sourcing our datasets and visualization tools for GPT-4-written explanations of all 307,200 neurons in GPT-2, as well as code for explanation and scoring using publicly available models on the OpenAI API. We hope the research community will develop new techniques for generating higher scoring explanations and better tools for exploring GPT-2 using explanations.”
- “We found over 1,000 neurons with explanations that scored at least 0.8, meaning that according to GPT-4, they account for most of the neuron’s top-activating behavior. Most of these well-explained neurons are not very interesting. However, we also found many interesting neurons that GPT-4 didn’t understand. We hope as explanations improve we may be able to rapidly uncover interesting qualitative understanding of model computations.”
At the designated GitHub site, you can find the OpenAI-provided tools, and here’s a brief description:
- “This repository contains code and tools associated with the Language models that can explain neurons in language models paper, specifically:”
- “Code for automatically generating, simulating, and scoring explanations of neuron behavior using the methodology described in the paper.”
- “A tool for viewing neuron activations and explanations, accessible here. See the neuron-viewer README for more information.”
Why This Is Important And What Will Happen Next Overall
First, allow me to applaud OpenAI for having undertaken this specific research pursuit and for making publicly available the tools they have devised. We need more of that kind of effort, including and especially a willingness to make these things available to all comers. Academic research efforts generally tend to make their work products available, but tech firms and such are often reluctant to do so. This can be due to potential business liability exposures, a desire to keep the items proprietary, and a slew of other reasons.
You might be aware that there is an ongoing and heated debate about whether today’s AI systems such as generative AI apps ought to be made available on an open-source basis or a closed-source basis. I’ve discussed the tradeoffs at the link here. It is a controversial and entangled topic, including that OpenAI has been thumped by some pundits for an asserted lack of openness for GPT-4 and other matters, see my coverage at the link here.
I’ll move on.
Second, we need a lot more research work of this nature involving logically prying out the puzzling secrets of generative AI.
If we are going to get past the black box considerations and a lack of transparency about what is occurring within generative AI, these types of innovative approaches might get us there. We certainly should be trying to extend these efforts and see where it goes.
That being said, I am not declaring that this is a silver bullet approach. Some would vehemently argue that this line of work or chosen approach is perhaps going to hit a dead end, eventually. Maybe so, maybe not. At this juncture, I would suggest that we need to be heading in a multitude of directions and aim to figure out what seems fruitful and what does not.
Meanwhile, we can pontificate about some next steps. Logical ones, of course.
Some extensions to this particular approach would include a variety of interesting possibilities, such as devising longer explanations rather than short sentences, allowing conditional explanations rather than a single explanation per node, widening attention to entire artificial neural circuits rather than individual nodes, and so on.
Another avenue would be to pursue larger generative AI apps. Once we’ve gotten our feet wet with 300,000 or so artificial neurons, it would be worthwhile to up the ante and seek to examine GPT-3, ChatGPT, and GPT-4 itself. That gets us into the millions and billions of nodes range. There is also the possibility of using the tools on other generative AI offerings beyond those of OpenAI, such as the numerous open-source generative AI apps out there.
We also need and ought to welcome tools from others with akin interests, such as a myriad of other AI makers, AI think tanks, AI academic research entities, and the like. The more, the merrier. I’ll be covering some of those emerging tools in my upcoming column postings, so be on the watch for that coverage.
One pressing question is whether generative AI can produce so-called emergent behaviors, a topic I’ve discussed at the link here. It is conceivable that these kinds of tools can provide insight into those murky questions. There is also an ongoing hunt to devise tools that can cope with the disconcerting issues of generative AI such as the tendency to produce errors, have biases, emit falsehoods, exhibit glitches, and produce AI hallucinations, see my recent analysis at the link here on those foreboding matters.
Another possibility consists of being able to speed up or make generative AI more computationally tractable and smaller in size. It could be that via these types of explorations, we can find ways to optimize generative AI. This could significantly bring down the costs of generative AI, reduce the computational footprint, and make generative AI more widely available and usable.
Conclusion
I’ve got an out-of-the-box zinger for you.
Are you ready?
You might be aware that we are all still struggling mightily to reverse engineer how the human brain and mind work. Great puzzlement still exists as to how thinking processes work on a logical basis versus a mechanistic basis. A tremendous amount of interesting and encouraging research is taking place, as I describe at the link here. Some wonder if the attempts to reverse engineer generative AI can be pertinent to how we might pursue the puzzles of the mind. Good idea? Bad idea? Perhaps any port in a storm is sometimes worth considering, some exhort.
Let’s end with a famous quote often attributed to Abraham Lincoln.
It offers this important insight: “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.”
This is a handy-dandy reminder to not put the cart before the horse. Some believe that on the matter of generative AI, we are putting the cart before the horse. We are leaping before we look. Generative AI is becoming ubiquitous. There seems to be a lack of will or realization that maybe we are spreading around generative AI as an experiment involving humankind as guinea pigs. The concern is that generative AI maybe should be better refined and cooked before simply being plopped into the hands of the public at large.
Those in the AI Ethics and AI Law frame of mind are urging that we ought to be spending a lot more attention on figuring out what generative AI consists of and how to make it more safely devised for all. In that spirit, tools to try and dive into generative AI and give rise to logical explanations are something we can eagerly encourage.
I asked at the start of this discussion whether you like to solve jigsaw puzzles. Now that you know more about the generative AI jigsaw puzzle, please join in and help out. We can always use another pair of eyes and an attentive mind to solve this monumental and vexing problem.
Puzzle-solving aficionados are openly welcomed.