I, GPT: Unraveling Our Pre-Trained Human Minds

Artful specification by the author; execution by DALL-E

Where the lines between science fiction and reality blur, we often find ourselves at the mercy of an intriguing question: Are we humans all that different from the AI we create? This thought takes a peculiar turn when considering the emerging dynamics of Generative Pre-trained Transformers (GPTs) and their uncanny resemblance to human behaviour.

 Imagine for a moment that our daily decisions, conversations, and even our fleeting thoughts are not entirely our own, but rather the outputs of a sophisticated, pre-trained algorithm within us. 

Dear human, are you a Generative Pre-trained Transformer? Like a GPT, which predicts the next word in a sentence based on prior patterns and oceans of data, humans follow preconceived patterns and societal norms, operating as walking, talking autocomplete functions. You're certainly generative, producing words from the patterns you've absorbed; you're definitely pre-trained by family and culture; and you're transforming every time you change your mind or second-guess yourself.

You could say we prompt ourselves and others continually. Whether AI might develop self-awareness, or whether we are simply very sophisticated algorithms ourselves … I'll leave that for others to debate. Pragmatically, though, to derive the most value from GPTs right now and as they develop, adopting the view that there is significant overlap in our behaviours helps to illuminate how we interact with GPTs and how we communicate with them.

If we go all B.F. Skinner on a GPT, deciding that all we can practically know about ourselves and our GPTs comes from observing each other's behaviour, we could validly say of both:

  • we are informed by learning
  • we are shaped by values and culture
  • we seem to exhibit archetypal constructs we often call ‘personality’

We also forget things, especially context; we become obsessed with increasing detail and lose the bigger picture; and we seem to resent being corrected or having our ideas devalued.

I’m not saying GPTs have thoughts and feelings, or that they ever will.  But they exhibit an uncanny valley of behaviour which is, frankly, spine-tingling at times. The algorithms driving GPT responses are, in a way, modelled on the complexities of human thought and interaction. This isn't about anthropomorphising AI but recognising the behavioural patterns we share.

Since you’re reading this, you’ve probably already tried chatting with a GPT. You’ve experienced startling, joyful surprises and bewildering disappointments. You’ve saved yourself hours of work, and fallen into long dark tea times of misunderstanding in which every reply in a dialogue only seems to make matters worse.

Consider the potential of communicating as if GPTs have personality. You're probably already familiar with writing 'persona' clauses into your prompts, in the form of 'You are a helpful elephant trainer with ADHD'. If you've declared custom instructions in ChatGPT, take a look at what you've written, or explore the many vids and blogs offering advice. You might see instructions, for example, telling the GPT to be less chatty.
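If you're curious what that looks like in code, here's a minimal sketch using the OpenAI Python SDK. The model name, the persona text and the question are my own illustrative assumptions, not a recipe; the point is that the 'system' message plays the same role as a custom instruction, framing the personality before the dialogue begins.

```python
# A minimal sketch of a persona clause, using the OpenAI Python SDK.
# The model name, persona and question are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever you use
    messages=[
        # The system message acts like a custom instruction: it frames
        # the personality before the conversation proper begins.
        {
            "role": "system",
            "content": (
                "You are a helpful elephant trainer with ADHD. "
                "Answer briefly; don't be chatty."
            ),
        },
        {
            "role": "user",
            "content": "How do I introduce a new elephant to the herd?",
        },
    ],
)
print(response.choices[0].message.content)
```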

If a ChatGPT were human, we'd quickly and quite unconsciously form models of its personality. There's much debate about whether that's a good idea.

But things change when we can pre-define a personality and instruct a GPT to adopt it for the dialogue we want to have. Being explicit about a personality profile yields more predictable results.

We could design a variety of personalities for different purposes. We could apply different personalities to the one purpose and compare the results. Hell, we could even give a GPT Multiple Personality Disorder if it served a purpose. But that’s OK, right? … because GPTs don’t have feelings.
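To make that concrete, here's a rough sketch of running one task under several personas and comparing the outputs side by side. Again, the personas, the task and the model name are assumptions for illustration, using the same OpenAI Python SDK as above.

```python
# A sketch of applying different personas to the same task and comparing
# the results. Personas, task and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

personas = {
    "mentor": "You are a patient mentor who explains things with analogies.",
    "editor": "You are a blunt copy editor who values brevity above all.",
    "sceptic": "You are a sceptical reviewer who challenges every claim.",
}

task = "Summarise the risks of anthropomorphising AI in two sentences."

for name, persona in personas.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": task},
        ],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```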

The process of using GPTs draws as much from social skills as from technical skills. Perhaps more. If we pragmatically apply our understanding of personality, learning to 'prompt' effectively requires us to refine our communication skills, our interpersonal skills and our models of psychology. The rewards could be enormous: the more care we take in crafting our prompts, the more satisfying the results.


Level up your Prompt Management and get access to great AI Prompts; sign up for Wispera AI.


FAQ:

  1. How exactly does a Generative Pre-trained Transformer (GPT) algorithm work to mimic human behaviour and decision-making processes?
The algorithm of a Generative Pre-trained Transformer (GPT) operates on the foundational principle of learning from vast amounts of text data. It mimics human behaviour and decision-making processes through a sophisticated method called deep learning, specifically utilising transformer models. These models allow GPTs to predict the next item in a sequence, be it a word in a sentence or a sentence in a paragraph, by understanding the context provided in the prior sequence. This parallels how humans learn language and communication patterns over time, absorbing nuances and cultural idioms to make informed decisions or responses. The AI's ability to analyse patterns in data and generate responses based on probabilities gives it a semblance of understanding and decision-making capability reminiscent of human behaviour, although it's fundamentally based on statistical analysis and pattern recognition. (A toy demonstration of this next-token mechanic follows the FAQ below.)
  2. What are the ethical implications or potential concerns of creating GPTs that can exhibit behaviours that closely mirror human personality traits?
The ethical implications of creating GPTs with capabilities that closely mirror human personality traits are significant and multifaceted. There's a real concern about the potential for misuse in spreading misinformation, invading privacy, or even impersonating individuals. Moreover, as these AI systems become more advanced, there's an ongoing debate about their rights and the moral responsibilities of their creators. The question of consent also arises: can and should an AI, which can exhibit behaviours resembling a personality, be used in any manner its programmer or user sees fit, especially as these systems become more sophisticated and their interactions more indistinguishable from those of humans? The evolution of GPTs challenges us to create ethical frameworks and regulations that can keep pace with technological advancements, ensuring they're used responsibly and for the betterment of society.
  3. How can individuals differentiate between the 'personalities' programmed into GPTs and genuine human emotion or intelligence?
Differentiating between the 'personalities' programmed into GPTs and genuine human emotion or intelligence is a complex challenge. While GPTs can be programmed to respond in ways that mimic specific personality traits or emotional responses, they do so without true understanding or sentience. Their responses are generated from patterns learned from data, not from genuine feelings or experiences. To navigate this, critical thinking and media literacy become crucial. Users must be educated and aware of the nature of AI and its capabilities, recognising that, regardless of how sophisticated or convincing a GPT's 'personality' may appear, it is ultimately a simulation derived from programming and data analysis. This awareness doesn't diminish the usefulness or fascinating possibilities GPTs offer; it simply encourages a responsible approach to interacting with and interpreting AI, maintaining a clear delineation between technological mimicry and the complex, nuanced reality of human emotion and thought.
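As promised in the first answer above, here is a toy demonstration of next-token prediction. It uses the Hugging Face transformers library with GPT-2, a small open model chosen purely because it's easy to run locally; the prompt is an arbitrary example.

```python
# A toy demonstration of next-token prediction, the core mechanic described
# in the first FAQ answer, using the Hugging Face transformers library and
# GPT-2 (a small, open model chosen purely for illustration).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Given a context, the model extends the sequence one token at a time,
# each choice driven by probabilities learned from its training data.
result = generator(
    "The lines between science fiction and reality",
    max_new_tokens=20,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```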


Mark Norman Ratjens

A grumpy old software developer agitating in the world of Generative AI.
Sydney, Australia