From the course: Introduction to Prompt Engineering for Generative AI

Taking GPT-3 for a ride

- [Instructor] GPT-3, or Generative Pre-trained Transformer 3, refers to a model, or really an ecosystem of models, built around a roughly 175-billion-parameter model, which is insanely huge, trained on enormous amounts of data. These models vary in size and capability, they're tuned for different tasks, you can play with their configuration, and you can use them in your own applications. Some are optimized for zero-shot learning or few-shot learning, which are two ways of looking at prompt engineering. So let's go ahead and see what all of this means.

If I head over to openai.com/api and sign in, I'll see that I can go to a few places here. There's a quick tutorial you can check out, as well as an overview, documentation, and the playground, but I like this section specifically, the examples section. These are various tasks done by GPT-3, and they're phenomenal examples. I really like this Q&A example, which creates a sort of Q&A bot. I'm going to go ahead and click on that, and it shows me the prompt that they have put together.

Now, here are the various configuration settings. You can control the maximum number of tokens you want it to generate, and this is important because you have a limited number of tokens to work with, and you pay per token. Then you have various other settings. Notably, I want you to take a look at this one, the stop sequence. This is how you separate examples, so to speak, in your prompt. Let's take a look at what this means.

The cool thing is you can open this in the playground, a graphical interface that lets you try out different things. Here, you'll see the first thing the prompt does is seed the model with context about who it's pretending to be, or who it is. It says, "I am a highly intelligent question answering bot," and this is very suggestive; it really sets the ground for who the bot is going to assume it is, so to speak. Of course, we're dealing with mechanics here. This is not a human, but you're suggesting the tone you want it to speak with.

Next you have some examples of questions and answers, and this is really important, because you're giving it a template for how you want it to answer questions. "What is the human life expectancy in the United States?" And it gives an answer that I guess is a fact that OpenAI has decided: it's 78. "Who was the president of the United States in 1955?" Again, a factually accurate response. And this is extremely important, because the more factually accurate answers you provide, the better the bot knows what kind of information you expect it to produce. One interesting example is "What is the square root of a banana?" That's a strange question, and here the answer is "Unknown." So you're telling it that when it encounters a strange question, it shouldn't try to come up with something creative. There are more: "How does a telescope work?" et cetera. Really, this is an interesting example of a prompt engineered to anchor the bot to information deemed factual.

So let's see what happens when you ask it a question. Let's ask it, "What is the capital of the US?" And yep, it answered, "The capital of the US is Washington D.C."
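If you want to drive this same example from your own code rather than the playground, here is a minimal sketch. It assumes the pre-1.0 `openai` Python library and its Completion endpoint; the model name `text-davinci-003` is a placeholder for whichever completion model your account exposes, and the prompt below is a paraphrase of the playground's Q&A preset rather than a verbatim copy.

```python
# Minimal sketch: the Q&A prompt sent through the (pre-1.0) openai Python library.
# Model names and availability change over time; treat "text-davinci-003" as a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

prompt = """I am a highly intelligent question answering bot. If you ask me a question \
that is rooted in truth, I will give you the answer. If you ask me a question that is \
nonsense or has no clear answer, I will respond with "Unknown".

Q: What is the human life expectancy in the United States?
A: Human life expectancy in the United States is 78 years.

Q: Who was the president of the United States in 1955?
A: Dwight D. Eisenhower was president of the United States in 1955.

Q: What is the square root of a banana?
A: Unknown

Q: What is the capital of the US?
A:"""

response = openai.Completion.create(
    model="text-davinci-003",  # placeholder model name
    prompt=prompt,
    temperature=0,             # keep answers deterministic and factual
    max_tokens=100,            # cap the length (and cost) of the completion
    stop=["\n"],               # stop sequence: end the answer at the next newline
)

print(response["choices"][0]["text"].strip())
# Expected: something like "The capital of the US is Washington D.C."
```

The stop sequence plays the same role it does in the playground: it tells the model to stop once it has finished a single answer instead of continuing to invent new Q: and A: pairs on its own.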
Now, I'm going to give it a nonsensical question the way they did. I'll ask it, "What is the capital city of a melon?" and it tells me "Unknown." So with this carefully engineered prompt, you get this ability to really anchor things.

Now, I want to take a look at one more thing. I'm going to delete this and instead ask it to show me probabilities, and this may really bring the point home. I'll click Submit, and it gives me the likelihood that each of these words would come up given the prompt so far. This is one of the ways you can see how this probability distribution ends up producing a really well-put-together sentence.
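The "show probabilities" view has a rough API counterpart as well. As a hedged sketch, again assuming the pre-1.0 `openai` library: the Completion endpoint accepts a `logprobs` parameter (up to 5 alternatives per token) and returns log probabilities for each generated token, which you can convert into the kind of per-word likelihoods the playground displays. The model name and prompt here are placeholders.

```python
# Sketch of retrieving token probabilities via the legacy Completion endpoint.
import math
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

response = openai.Completion.create(
    model="text-davinci-003",                      # placeholder model name
    prompt="Q: What is the capital of the US?\nA:",
    temperature=0,
    max_tokens=20,
    stop=["\n"],
    logprobs=5,                                    # return the top 5 candidate tokens at each step
)

logprobs = response["choices"][0]["logprobs"]
for token, alternatives in zip(logprobs["tokens"], logprobs["top_logprobs"]):
    # Convert log probabilities into rough percentages for each candidate token.
    candidates = {t: f"{math.exp(lp) * 100:.1f}%" for t, lp in alternatives.items()}
    print(f"chose {token!r} from {candidates}")
```

Each printed line shows the token the model actually emitted alongside the most likely alternatives it was choosing between, which is essentially the distribution the playground visualizes.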