👋 Hi, welcome to Till’s Newsletter, a weekly column all about AI for the AI-avoidant. Let’s learn together.
4 Min Read
Happy Thursday, folks :) In today’s newsletter …
💭 Is prompting replacing search?
📝 A breakdown of a prompt built with tips from Harvard Review.
🤓 How GPT thinks we should prioritize our prompts.
Prompt engineering: Providing AI with the context, direction, and guardrails it needs to get you the most helpful response possible.
Prompt engineering is one of those things that would have been hard to explain to the average Joe a little over a year ago. And by the way, certified average Joe here.
Anyways, that was before ChatGPT was released in November 2022 and the question of how we communicate with the tech around us suddenly became much more relevant to the masses.
Unsurprisingly, Google searches for prompt engineering exploded in early 2023, shortly after ChatGPT’s release made AI widely accessible.
The concept of prompting AI is what makes the technology so revolutionary. It’s taking us one step closer to communicating with technology like we would a flesh-and-blood assistant.
With AI, you don’t have to bother with the generality of search; you can just ask for exactly what you need.
Let’s say you’re trying to prove to your manager that an email marketing campaign you have in mind would really make a difference in sales.
Instead of searching a keyword like “revenue increase” and sifting through every instance of that phrase in a 10-page article on the effectiveness of email marketing campaigns, you could prompt AI to analyze the document: “Please highlight and summarize any and all data showing increased revenue as a direct result of investing in email marketing campaigns.”
From there you could ask for direct quotes, counterarguments made in the study, you name it.
You can have it, as long as you know how to ask for it.
In the future, I’m convinced your success won’t be measured by how much information you can cram into your head, but how well you can communicate what it is you want to know.
That, to me, is what’s at the core of prompt engineering. It’s an ironically liberal-arts-heavy mindset for a technology so many non-technical people feel is out of reach.
Ok - let’s write a prompt to help me review an article about … prompt engineering.
Meta, I know.
Harvard Review published a great piece on prompt engineering tips. We could read it as is, but let’s see if we can get AI to prioritize it for us.
First, I copied the entire article into GPT-4. I could also have downloaded it as a PDF and uploaded it to GPT to analyze, but copying just felt quicker this time around.
Here’s the prompt I used, built with the tips from the Harvard Review article in mind (getting even more meta).
🟣 Who are you, the AI, in this scenario?
🔵 What is it you should focus on?
🟡 What should the output look like?
🟢 What are the parameters?
🟠 If this wasn’t enough info, what else do you need from me?
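If you like seeing the pieces spelled out, here’s a rough sketch in Python of how those five components might be stitched together into one prompt. The build_prompt helper and all the wording inside it are my own illustration, not the exact prompt I sent:

```python
# A hypothetical sketch: none of these names come from the article, they just map
# one-to-one onto the five questions above.

def build_prompt(role, focus, output_format, parameters, clarify):
    """Stitch the five components into a single prompt string."""
    return (
        f"You are {role}.\n"
        f"Focus on: {focus}.\n"
        f"Format the output as: {output_format}.\n"
        f"Parameters: {parameters}.\n"
        f"{clarify}"
    )

prompt = build_prompt(
    role="an editor who prioritizes reading material for a busy newsletter writer",
    focus="the prompt engineering tips in the article pasted below",
    output_format="a ranked list, most useful tip first, with a one-line reason for each",
    parameters="rank all nine tips and keep the whole thing under 200 words",
    clarify="If you need more information from me before ranking, ask before you answer.",
)

print(prompt)  # paste this (plus the article text) into ChatGPT
```

The point isn’t the code, it’s the checklist: role, focus, output format, parameters, and an open invitation for the AI to ask for more.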
The most important thing to remember here is that this is a conversation – you can (and should) follow up.
In this case, I thought the bullet points GPT wrote back to me made the output too chunky. It was exactly what I asked for, but I didn’t want to paste all that text here, so I followed up with, “Please remove all the bullet points and only leave the numbered items. Maintain the same ranking you have here.”
But that didn’t work. The AI just reworded the bullets into paragraph text and kept it in the output. So I followed up again with, “Remove the text, only leave the headlines for each ranking.”
I dropped the “please” here (sorry GPT).
A perfect example of how prompting might be better thought of as a conversation than a single question. It’s rarely perfect the first time around.
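For the curious, here’s a hedged sketch of what that back-and-forth looks like if you script it against the OpenAI Python client instead of typing into the chat window. The model name and the placeholder opening message are assumptions on my part; the two follow-ups are the ones quoted above.

```python
# A sketch of the same conversation scripted with the OpenAI Python client
# (openai >= 1.0). The model name and the opening placeholder are assumptions;
# the two follow-up messages are the ones quoted above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# Turn one: the assembled prompt with the article text pasted in.
messages = [{"role": "user", "content": "<your prompt here, article included>"}]

follow_ups = [
    "Please remove all the bullet points and only leave the numbered items. "
    "Maintain the same ranking you have here.",
    "Remove the text, only leave the headlines for each ranking.",
]

for follow_up in follow_ups:
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    # Keep the model's answer in the history so the next request has context.
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": follow_up})

final = client.chat.completions.create(model="gpt-4", messages=messages)
print(final.choices[0].message.content)
```

Each follow-up only works because the earlier answers stay in the message history, which is exactly what ChatGPT’s chat window gives you for free.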
How GPT ranked the tips:
Most interestingly, GPT ranked the “Act as if …” role-play tip much lower than the Harvard Review article did (6th vs. 2nd place). Granted, the Harvard Review piece wasn’t overtly ranking these 9 tips, but I do think it says something about how AI is perceived vs. what it can actually do.
It’s cool to imagine an AI that could embody the expertise and candor of an expert with a simple “act as if …” prompt, but, as GPT told me when I bugged it about this, trying to role-play with AI doesn’t influence the “specificity and clarity of the information provided.”
Maybe someday, but right now there are more important things to prompt for. First and foremost: be specific.
Quality in = quality out.
That’s all, folks. I suggest you check out r/PromptWizards on Reddit, brought to you by people who care enough to start a subreddit about it.
Till next week!