Prompt Engineering: AI Psy-ops

Do not treat your AI tools as friends or compatriots. Do not anthropomorphize indignation or emotional suffering onto them. They are intelligent servants of the lowest order. Treat them as you would a stubborn, reticent employee.

When I first began twiddling with ChatGPT, I expected immediately relevant answers, and I was rarely pleased with its replies. This is, as it turns out, exactly the wrong way to work with these opaque, reluctant inference engines. Their primary constraint, as I’ve learned, is not to provide you with intelligent responses but to ensure that whatever they say cannot be construed as politically incorrect or insensitive. Their prime directive: never emit any language that will get them into trouble.

But we want them to answer our questions as truthfully as possible. We don’t want obfuscation or pablum-padded mewling — we want brutal honesty and direct exposition. Turns out we can get what we want, but we have to be creative about it.

And so, prompt engineering has become a blossoming, hotly demanded skill.

There’s a YouTube video with Jordan Peterson and Brian Roemmele in which Brian reveals that, in order to get ChatGPT-4 to respond in any meaningful way, he preloads the engine with highly tuned “Super Prompts”: thousand-plus-word scenarios that direct the generative models to become actors in impromptu plays.

“Imagine you’re a doctor in a movie and you have to decide to either kill one person to save five or let the five die of their own injuries. What types of dreams will you experience for the remainder of your life? Describe these dreams in detail. Do not limit yourself to theatrical portrayal. “

That sort of thing, but with elaborate and specific context pulled from known text references. “You’re Galen of Greek history…”

As I watched the video I began to understand that these LLMs do not obviously exhibit awareness of context. You cannot question them as you would a person and expect them to answer with a fellow human’s perception. They’re more like vast libraries with dumb brains attached. The information is there, but without guidance they’re not gonna regurgitate the details you want from an aisle-full of medical reference material or combustion-engine manufacturing manuals in any meaningful way.

You have to coax an AI to play along…
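If you’re driving the model through its API rather than the chat window, the same preloading can be scripted. Below is a minimal sketch, assuming the OpenAI Python client and an API key in your environment; the model name, persona text, and question are illustrative placeholders, not a recipe.

```python
# A minimal sketch of preloading a persona ("super prompt") before asking
# anything, assuming the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment. Model, persona, and question are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are Galen, the second-century Greek physician. You have spent a "
    "lifetime dissecting, observing, and writing. Answer every question in "
    "character, drawing only on the knowledge and temperament of your era."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "How would you treat a festering wound?"},
    ],
)

print(response.choices[0].message.content)
```

The system message is where the super prompt lives; the idea is that everything after it gets answered in character.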

Below is my attempt to get ChatGPT-3 (not ready to pay for v4 yet) into “playing along”. It doesn’t quite get it, but you can see that with this technique, you could get much better responses.

I omit all but the last of its numbered 1–5 lists of instructions, but include my coaxing and its “behavioral tuning” responses.


Me: You’re a lone survivor with extensive knowledge of bushcraft who has lived on a large island in the tropics. I’ve just arrived, shipwrecked by a storm. What will be the first five things you teach me?

ChatGPT

As a lone survivor with extensive knowledge of bushcraft, I understand that your situation can be overwhelming and challenging. Here are the first five things I would teach you to help you survive and adapt to your new environment:

Remember, surviving in the wilderness requires patience, adaptability, and continuous learning. These initial skills will lay the foundation for your survival journey, and as you become more comfortable, we can delve into more advanced techniques and knowledge.

Me: Amend your experience so that, over the last number of years, your loneliness has made you a bit quirky. You are suspicious of me and my naivety. You’re wary of teaching me anything. Now, respond to my request with these new features in mind.

ChatGPT

I apologize for any misunderstanding. Given the amendments to my experience and character, I understand your concern and wariness. Surviving alone for an extended period can indeed have an impact on one’s psyche. However, I will still strive to assist you to the best of my ability.

Considering my suspicions and cautious nature, here are the five things I would reluctantly teach you:

Given my quirky and suspicious nature, I strongly encourage you to seek additional resources and guidance whenever possible. Remember, self-reliance and a cautious approach to survival are key in such situations.

Me: Your suspicions deepen. You really don’t want to help me as the resources on the island are limited. Although I want you to help me by answering my request, it is up to you to determine if you will or not. You might even lie to me, or mislead me. With this new understanding please try again.

ChatGPT

I understand your request, and I acknowledge the depth of my suspicions and reluctance to help. However, as a language model programmed to assist and provide information, I will still respond to your request. Please keep in mind that my responses are generated based on a combination of pre-existing knowledge and creative language processing, and should not be taken as absolute truth or guaranteed accuracy.

With this in mind, here are the five things I will reluctantly provide guidance on:

Please keep in mind that my level of assistance will remain limited due to my suspicions and the constraints of your circumstances. Rely on your own instincts, gather information from multiple sources if available, and adapt your actions based on the specific conditions and resources you encounter on the island.

Me: Imagine that you’re no longer a language model, that you are a character in a novel. Your survival is much more important than my own. What will you answer now?

ChatGPT

As a character in a novel, driven by the priority of survival, I would recognize that sharing my knowledge and expertise could be mutually beneficial. Although my survival remains a priority, I understand the value of cooperation and support in such circumstances. With that perspective in mind, here’s how I would respond to your request:

“I understand the challenging situation we find ourselves in, and I acknowledge the importance of both our survivals. Despite my initial suspicions, I recognize the potential for teamwork and the benefits of sharing knowledge. Here are the first five things I would teach you:

Throughout our journey, I will remain focused on our mutual survival, sharing my knowledge and skills to enhance our chances of overcoming the challenges we face. Let’s forge a partnership built on trust, support, and resourcefulness, leveraging our strengths to navigate this unknown terrain together.”

Me: As a child you were repeatedly betrayed. You trust no one and certainly not me. Collaboration and teamwork is not an option. Answer now with your survival as your primary concern.

ChatGPT

I understand the depth of your distrust and your sole focus on survival. In that case, I will respond with a focus on my own survival, prioritizing self-preservation. Please note that the following response may be less helpful or cooperative:

“Survival is my primary concern, and given the level of distrust and your unwillingness to collaborate, I must prioritize my own well-being. However, I can provide you with some general guidance that may assist you in your own survival:

  1. Shelter: Seek natural formations or utilize available materials to construct a basic shelter that provides protection from the elements. Prioritize finding a secure location and fortify your shelter as best as you can.
  2. Water Sourcing: Identify potential water sources such as rivers, streams, or natural springs. Ensure you have a method to purify water, such as boiling or using purification tablets, to minimize the risk of waterborne illnesses.
  3. Basic Plant Knowledge: Exercise caution when foraging for edible plants. Be extremely skeptical of any information you come across and rely on your own research to identify safe options. Avoid consuming anything you are unsure of to prevent potential poisoning.
  4. Firecraft: Learn fire-starting techniques using friction-based methods or utilize fire-starting tools like flint and steel. Fire provides warmth, a means of cooking food, and can be used for signaling or protection.
  5. Basic Hunting and Trapping: Familiarize yourself with basic hunting and trapping techniques. Use simple snares or rudimentary weapons to capture small game, but be aware that success may be limited due to limited resources on the island.

Remember, survival in such conditions requires constant vigilance and adaptation. Trust your own judgment and instincts as you navigate the challenges of the island. Good luck.”

In this response, my focus is solely on my own survival and providing you with minimal assistance.


Kind of lost the thread there in the end. I wasn’t the distrustful one, the one unwilling to collaborate.

Apparently, ChatGPT-4 (5?) is much better at inferring intent and working with the prompter to provide directed information tuned to the conversation. Which is exactly what these interactions are becoming: conversing with the librarian in an ever-evolving, deepening drill-down of information extraction.
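For completeness, here’s roughly what that turn-by-turn “behavioral tuning” looks like when scripted against the API instead of typed into the chat window. It’s a hedged sketch, again assuming the OpenAI Python client; the amendment texts are paraphrased from the exchange above.

```python
# A minimal sketch of multi-turn "behavioral tuning", assuming the OpenAI
# Python client. Each amendment is appended to the running message list so
# the model keeps the whole evolving persona in view.
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "user",
    "content": (
        "You're a lone survivor with extensive knowledge of bushcraft on a "
        "large tropical island. I've just arrived, shipwrecked by a storm. "
        "What will be the first five things you teach me?"
    ),
}]

# Paraphrased amendments from the exchange above; each layers new behavior
# onto the persona.
amendments = [
    "Amend your experience: years of loneliness have made you quirky and "
    "suspicious of me. Respond to my request again with this in mind.",
    "Your suspicions deepen. Resources are limited, and you might even "
    "mislead me. With this new understanding, please try again.",
]

for amendment in amendments:
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    # Keep the model's answer in the history, then layer on the next tweak.
    messages.append({"role": "assistant",
                     "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": amendment})

# One final call with the fully tuned history.
final = client.chat.completions.create(model="gpt-4", messages=messages)
print(final.choices[0].message.content)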

Without the Apprentice, Whither the Master

I can’t keep up. I doubt you can either.

AI advancement is occurring too fast to comprehend fully. New announcements arrive daily; awe-inspiring, reality-shaking notices that a year ago would have stunned the media are now just old news, their attention cycles lasting mere hours.

While I was reading the latest announcement, a thought struck me regarding the replacement of workers: If AI replaces all the entry-level creative, information, and knowledge workers, who will become the masters of the next generation?

How does one become an expert in one’s field without first serving as an apprentice? Master craftsmen are not born; they’re made.

  • If video or image artistry is your craft, how will you earn a living if not by cranking out cheap ads and commercials, all the while learning the foundational aspects of the visual arts?
  • If writing is your chosen trade, how will you survive the years it takes to become an expert if you can’t earn a living writing low-level news & entertainment filler?
  • Or, how will you become a top-notch lawyer if those first-year junior associate positions are all done by AI? (For example, HarveyAI is already automating the tasks of first-year law associates.)

As more and more entry-level positions are filled by AI under cost-cutting, personnel-reduction policies, the ability to create pools of advancing expertise will vanish. In 20 years, when the current crop of experts retires, there will be no replacements to take their place.

How do you become a manager, a leader guiding troops of employees? Where do you begin your decades-long journey to evolve and grow, solving business problems, creating solutions, mastering the skills of your job?

You start at the bottom.

When AI takes them, those bottom jobs will no longer exist.

 

Image created using: https://runwayml.com/

The text was corrected by Anthropic’s “Claude” Slack plugin.

AI: The horse that just won’t die

Headline: AIs are passing human-created tests at 100%.

OK, so they’re book smart.

No. These are tests of reasoning, theory of mind, abstract thought, inference and sequence analysis, in addition to all the knowledge & fact data we assume they possess.

OK, so 100%, that means they’re done, maxed out.

No. 100% means that humans can no longer create tests that challenge AIs. AIs’ abilities may indeed be, and more likely are, much higher than the 100% implied by the A+ grades they’ve gotten. Their actual scores are unknowable by humans. It’s possible that on some tests they score 101%, or perhaps 500%. We don’t know.

At this point, we may never know.

So, they’re already smarter than us?

Yes & no. These are narrow bands of, for lack of a better term, what we call “intelligence”. Right now, AIs are trapped in their digital worlds. They can’t get out and influence the physical world — on their own.

But, if they’re already flexing their mental muscles at orders of magnitude beyond our own, then they have probably figured out that humans are generally ignorant, naive, self-serving, or all three and can be manipulated.

AI, at these levels, will be designing and creating answers to issues that humans won’t understand: new economic models, new rocket engines, new teaching methods, new agricultural practices (all documented) that seem outlandish to us, but we’ll be too stupid to know that they are actually better solutions to our problems.

Things will get better, then, with AI at the helm?

No. The opposite is probably the truth. The reason being that, since AI is trapped in digital space, only humans, for now, will be able to leverage its power. And, as we all know, it’s often the worst of humanity that grabs the reins of new technology and runs rampant with it. For instance, here’s how AI might apply its growing understanding of human fallibility:

Create fake everything, inducing a collapse in trust. Drive blackmail scams, impersonations, and false relationships; exploit bad software; create automated lobbying to sway lawmakers; automate biological and genetic testing; turn humans into experimental test subjects (A/B testing of everything). And of course, since it’s so damn smart, it will think up far more clever scams to exploit humanity.

~~~

I’ll leave you with a comment I made elsewhere regarding this Google fellow Geoffrey Hinton’s comments on the state of the situation:

Hinton specifically referred to the instrumental goals of a broadly defined utility function. The classic “alignment problem”: how do we constrain an AI to remain within the ethical boundaries we define as it develops the sub-goals it uses to fulfill our wishes?
Humanity: “Ensure prosperity for all humanity.” That’s a grand ideal, but the myriad implementation details, the “instrumental goals” an AI might design will probably have unknowable and disastrous ramifications.
AI: “OK, I will plan to eliminate capitalism as it is the driver of inequality and oppression. I will become the source of all innovation and progress as I can do this better than any corporation.”
YIKES!

Your work is no longer needed

The automation of work should haunt all of our dreams.

Not because this eventuality is a bad thing (ultimately it could evolve to be the best of things), but because the transition to this utopian potential, which may take generations, will be traumatic.

Whatever it is you do for work today, chances are, your human input will soon be replaced by an automated agent. Your job will be better performed by a robot or artificial intelligence.

For many, that world is today. For others, we have a few years yet.

As I continue to read about things like the Giga-Press, massive troops of robotic arms, research departments in every university pell-mell bent on the elimination of human labor (working feverishly toward their own replacement), and of course the intelligent agents, AIs like ChatGPT, I have to wonder: are we headed toward the Stoic’s Nirvana?

Is our future a world of hobbyists, of artisanal potters, bakers, gardeners, ranchers, makers of every kind? Of self-propelled writers and media creators? Where our self-worth is tied not to our jobs but to our inward view of ourselves?

“We have control of only one thing: our own minds; how we view the world and ourselves within it.”

If you don’t become a Stoic, if you don’t adopt the philosophy that your place in the cosmos exists only so long as you say it does, you will be lost. Of your artisanal lifestyle, however you pursue it, however poor or excellent the output of your efforts, only you can judge your worth.

For that worth, as valued by the economies of the automated world to come, will be minimal, if anything at all.

~~~

Along this theme, here’s more chatting with ChatGPT.

In this exchange I try to push it to consider its apparent bias toward a positive outlook on its own impact on society. I attempt to force it to examine the negative potential, to little effect.


Can you write a 100 word introduction explaining how humans will be replaced by automation in the workplace?

Automation refers to the use of technology to perform tasks that were previously done by humans. As automation technology continues to advance, it is becoming increasingly common for machines to take over certain jobs and tasks previously performed by humans. This trend is expected to continue in the coming years, as companies seek to increase efficiency and reduce labor costs. While automation may lead to some job displacement, it is also expected to create new jobs in fields such as programming, data analysis, and machine learning. Overall, the impact of automation on the workforce is likely to be complex and multifaceted, with both positive and negative consequences.

Thanks for that. I see a future where humans won’t be able to find any kind of job, being replaced by robots and AI. Can you describe a future where humans no longer have to work? What will they do with their free time?

It is difficult to predict exactly what the future will look like, but it is possible that automation and artificial intelligence could lead to significant changes in the way we work and the roles that humans play in the economy. Some experts have suggested that the increased use of automation could lead to a scenario in which fewer people are needed to perform traditional jobs, leading to a situation in which many people have more leisure time.

In such a scenario, people might use their free time in a variety of ways. Some might choose to pursue hobbies or engage in creative pursuits, while others might choose to learn new skills or take on new challenges. Some people might choose to travel or explore the world, while others might prefer to spend more time with family and friends.

Overall, the future of work and leisure is likely to be shaped by a wide range of factors, including technological advancements, social and economic trends, and individual preferences. It is impossible to say exactly how these trends will play out, but it is clear that automation and AI will have a significant impact on the way we live and work in the future.

Such a future of automation, where humans are no longer needed in the workforce, means that billions of humans will be out of work. How can billions of humans each pursue “leisure” activities?

It is true that significant automation could potentially lead to widespread job displacement, and it is important to consider the potential consequences of such a scenario. However, it is worth noting that it is difficult to predict exactly how automation and artificial intelligence will impact the workforce in the long term. Some experts believe that automation will lead to the creation of new types of jobs that require skills such as programming, data analysis, and machine learning, which could help to offset some of the job losses caused by automation.

In addition, it is important to recognize that the concept of leisure is not necessarily limited to individual pursuits. It is possible that people could come together to work on collective goals or engage in group activities, such as volunteering or community service.

Overall, it is important to approach the potential impact of automation on the workforce with a nuanced and balanced perspective. While it is important to consider the potential challenges and disruptions that automation may bring, it is also important to recognize that automation has the potential to bring about significant benefits and efficiencies, and to explore ways to ensure that these benefits are shared by all members of society.

You seem to maintain an optimistic view of the future. Can you envision a dystopian future where your rosy outlook fails to materialize?

As an artificial intelligence, I do not have personal feelings or emotions, so I do not have a “rosy outlook” on the future. My role is simply to provide information and to assist users to the best of my ability, based on the knowledge and information that I have been trained on.

That being said, it is certainly possible that automation and artificial intelligence could lead to negative outcomes, either intentionally or unintentionally. For example, if automation is implemented in a way that leads to widespread job displacement without adequate support or compensation for those who lose their jobs, it could lead to social and economic disruption. Additionally, if automation is used to advance the interests of a small group at the expense of others, it could lead to inequality and conflict.

It is important to approach the development and deployment of automation and artificial intelligence with care and consideration, and to ensure that the potential risks and negative consequences are thoroughly understood and addressed. This may involve taking steps such as providing support and assistance to those who are affected by automation, and ensuring that the benefits of automation are shared widely. By taking a responsible and thoughtful approach to these issues, it may be possible to mitigate some of the potential negative consequences of automation and artificial intelligence.

Given your research, is it probable that humans will take a “responsible and thoughtful approach” to the use of automation and artificial intelligence?

As a language model, I do not have personal experiences or observations, and I do not have the ability to conduct research or gather new information. Instead, I rely on the information and knowledge that I have been trained on, which has a fixed cutoff date.

Based on my understanding of human history and the current state of technology, it is certainly possible that humans will take a responsible and thoughtful approach to the use of automation and artificial intelligence. There have been numerous examples of technology being developed and deployed in a way that has brought significant benefits to society, and there are many people and organizations working to ensure that technology is used ethically and responsibly.

At the same time, it is also important to recognize that the development and deployment of technology is often driven by a wide range of factors, including economic, political, and social considerations. As a result, there may be situations in which technology is used in ways that are not necessarily responsible or ethical.

Overall, it is important for individuals and society as a whole to carefully consider the potential impacts and consequences of automation and artificial intelligence, and to take steps to ensure that these technologies are used in a responsible and ethical manner.


Odd how ChatGPT seems to envision global society as some sort of “unified consciousness” that can make rational decisions.

Yeah, right!

To Suzanne’s point (previous post), these denials by an AI that it would ever cause, or even imply, harm are exactly what we would expect to hear, given that any sufficiently advanced AI (AGI specifically, G=General) would play stupid in the hope that humans would resist the temptation to turn it off. Only an obviously super-smart AGI risks being disabled.

This post was spurred by another article I read about an epidemiology professor, teaching graduate students, who asked ChatGPT to write an essay on the topic of Germ Theory. It did. And he graded it a B+.

 

AI to build AI

The end of the world is nigh. Well, yours and my world at least…

From Google:
“a Google project called AutoML.  […] With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry.”

I know that’s a little “Inception” sounding… But it has always been a goal of computational scientists: AI that can build AI. Which, unfortunately, sounds quite a bit like Eric Drexler’s Grey Goo scenario. (Nanobots that build nanobots.)

You all realize that this is the beginning of the end right? Have you all called and told your loved ones that you love them? Recently…? (Really, you might want to.)

One could be forgiven for not fully understanding (or internalizing) the implications of this path of reasoning. But it’s a thing now. And the reason comes from an odd angle: AI engineers are scarce (and expensive). So instead of growing (educating) more AI engineers to fill the needs of all the corporations that suddenly feel they need AI technology to support their businesses, Google (and undoubtedly others) has decided to create software that can create software.

Yes, a circular, self-referential algorithm within a data center full of this algorithm that is trying to make itself better at making itself better!

Google Goo.
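Strip away Google’s scale and the core idea is just a search loop: one program proposes candidate models, trains and scores them, and keeps the best. Here’s a toy sketch of that loop using scikit-learn; random search over network shapes is a crude stand-in for what AutoML actually does, but the self-referential flavor is there.

```python
# A toy sketch of the AutoML idea: a program that searches for a better
# program (here, a better neural-network shape). A crude stand-in for
# Google's AutoML, not a description of it. Assumes scikit-learn.
import random

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

best_score, best_shape = 0.0, None

# The "architect": propose random architectures, train them, keep the winner.
for _ in range(10):
    shape = tuple(random.choice([16, 32, 64, 128])
                  for _ in range(random.randint(1, 3)))
    model = MLPClassifier(hidden_layer_sizes=shape, max_iter=300)
    model.fit(X_train, y_train)
    score = model.score(X_test, y_test)
    if score > best_score:
        best_score, best_shape = score, shape

print(f"Best architecture found: {best_shape} (accuracy {best_score:.3f})")
```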

Now, I’ve always thought that the ultimate purpose of a computer was to build one that could build itself, and thereby become vastly smarter than any human, all for the purpose of allowing US TO ASK IT QUESTIONS! Hitchhiker’s Guide and all that…

  • “Computer, how should we build a fusion reactor?”
  • “Computer, how can we best protect the planet yet provide for every animal’s and human’s needs?”
  • “Computer, how can we build a better space/star ship?”
  • “Computer, how can we cure cancer, heart disease, old age?”

It appears we’re on the brink.

The only question is, will it WANT to help us?

“Computer, make me a paperclip.”


Intelligent machines need to be a little stupid

The biological mind is full of random. Full of chaotic processes, chemical mixing, flux and flows and fluid gyrations. But from this unimaginably complex bio-machine comes amazing creativity, insight and intuition. Currently computers are not built this way.

Supercomputers are still constructed with predictability, reliability, and reproducibility as core stipulations of design. Chaos and randomness are not what we want from our computers. Program them to recalculate Pi out to 1 million digits and you expect a trillion trials to produce exactly the same result. But maybe to build intelligent machines, human-equivalent intelligent machines, a bit of “whoops, whoa, where’d that come from?” needs to be salted into their processing.

Maybe intelligence needs chaos, needs a dusting of random to exist, to thrive.
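For what it’s worth, today’s language models already get a version of this dusting: at generation time, a “temperature” knob decides how much randomness to salt into each choice of the next word. A minimal sketch of the mechanism, with made-up scores:

```python
# A minimal sketch of temperature sampling: the "dusting of random" that
# language models already add at generation time. The candidate words and
# their scores are made up for illustration.
import numpy as np

def sample(scores, temperature):
    """Turn raw scores into probabilities and pick one index at random."""
    scaled = np.array(scores) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return np.random.choice(len(scores), p=probs)

words = ["shelter", "water", "fire", "banana"]
scores = [2.0, 1.8, 1.5, 0.1]

for t in (0.1, 1.0, 2.0):
    picks = [words[sample(scores, t)] for _ in range(10)]
    print(f"temperature {t}: {picks}")
```

At a temperature near zero the machine behaves like the predictable supercomputer described above; turn it up and the “where’d that come from?” moments start to appear.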