Part 2 What You'll Learn in This Class === Elizabeth: [00:00:00] So we're gonna talk a little bit about fine tunes and why this class exists. Like I said, models are becoming more advanced every single quarter. I don't know when you're taking this class; I'm recording this right now at the end of February of 2025. But no matter when an author comes to learn about AI and thinks, maybe I should take a look at this, the biggest thing that I hear from authors is, "I can't get it to write like me." A lot of them will work on prompt generation. They'll take prompts from people, they'll practice them, and they'll realize that they just get frustrated because they can't quite unlock making the model sound exactly like them. And truthfully, if your author voice is more unique than not, you don't sound like the majority of the writing that the AI was trained on, so there can be quite a big mismatch. So what can you do? The answer is a fine tune. You can fine tune select models on OpenAI, Mistral, and Google. Mistral's important: Mistral means that you can fine tune models that will allow you to have not-safe-for-work content. [00:01:00] So if you are a romance writer and are sick of these pedestrian lovemaking scenes, you can actually fine tune Mistral so that it writes your lovemaking scenes or your sex scenes the way you write them. You're not stuck with how AI often likes to just summarize it, you know: they start kissing, they start removing clothes, and all of a sudden everyone's happy. Right now the public cannot fine tune Claude models, however. I say the public because we do know that there are fine-tuning capabilities for people on the enterprise level; it's just not public. By the time you're watching this, it might be public. That's the speed at which this thing moves. If so, we will always update this course to make sure that you have the latest information to fine tune any AI model that allows it.
So if you love Claude, though, you can fine tune a cheaper model to write just like Claude by using outputs from Claude as your synthetic data to feed into the other model, to say, hey, I want you to write like this. And that's really what fine tunes do best. They change the style, the tone, or the output of the LLM [00:02:00] response. So a fine tune can stop some of that nonsense like "little did she know her life would never be the same," if you give it a bunch of chapters as examples where you never have those stupid paragraphs at the end of your chapters that are just bad, cliched writing. Those examples are enough to make that fine-tuned model no longer give you those kinds of trite paragraphs at the end of your chapters that most of us just have to lop off. Fine tunes only change how the AI responds. There are a couple of things that they do not do. They do not help the LLM memorize facts. I'm gonna repeat that, and I'm gonna say it over and over and over again: a fine tune of an LLM does not help it memorize facts. They also don't really work very well with generic prompting. What I mean by that is that when we fine tune a model, we have a very specific prompting style that we already have adopted as authors, one that has gotten us to like 70 to 75% there. And those prompts are what we put into our fine-[00:03:00]tune dataset. And so when we use that fine-tuned model, we wanna use that same style of prompting. So a fine tune, plus your story information (your scene briefs, your beats, your instructions, however it is that you like to prompt the AI to say, hey, write this chapter), plus consistent prompting, is going to equal outputs that need minimal editing. And I think that's the dream, right? The dream is that the AI sits there inert, it doesn't do anything without us, but we come to it and we're able to say, okay, this is how I write. This is how I want you to write.
This is my genius that I'm sharing with you, so that you are reflecting my genius, not some other rando generalized genius that's out there, and so that the outputs I get are really close to me. They're already very similar to something I would write myself. My fine tunes are trained on my Jane Austen fan fiction writing, and if someone was to run one of those fine tunes (and I've tested this with outputs), I can't tell the difference between that output and something I wrote.[00:04:00] It's that good. It's that close to my own writing. And I've had my longtime editor even look at the AI writing, the fine-tune stuff, and she was like, oh girl, I can't tell that that's not you. And she's been editing me for over a decade. Additionally, more and more authors are turning to using AI to keep their readers happy, and it is to keep readers happy; that's why we use AI. What's the number one thing any reader says the second you publish a book? Where's the next one? That's all they care about. And I get it. I'm a reader too. I don't care how my books get to me. I don't care if they use a ghostwriter, if they have two editors, if they had to be kidnapped and stuck in a hotel by their acquisitions editor and forced to finish the book series. I don't care how the book was written, really, as long as the book is good and it continues the story and I'm happy. It's okay that readers are selfish and they just want the next story. That's totally okay. But this is your ticket to writing with AI, but not sounding like you write with AI: sounding like you. This is not something that you'll [00:05:00] find in a lot of different AI writing groups or AI tutorials and stuff on YouTube. And the reason for that is because most industries have no use case for an AI that writes like them. They never had a use case for anything to sound like a consistent voice. They're using AI in an application, usually, of processing data, sending form emails, crunching numbers, doing tasks.
It's really only creatives, or people who are writing nonfiction in a creative way, who need to have AI sound like them. And that's why this is a use case that is not very often taught, or not very often shared: it's a very niche use case, but it is a use case that we have. So what will this fine-tune course provide you? Well, the big thing is it's gonna walk you through everything you need to know about making a fine-tune dataset so you can make your own. Here at the Future Fiction Academy, we're very, very big on: I'm not gonna fish for you, I will teach you how to fish. There's a myriad of reasons for that. We know, we [00:06:00] know that a lot of you are like, can't you just make it easy for me and just do it for me? I could. The problem is, especially when you're talking about a fine tune, if thousands of authors have the same fine tune, it's no different than the main models all writing generically the same. You know, like everybody who writes with o1 getting that phrase "flattery will get you nowhere," because it's one of those catchphrases, cliche phrases, that was overtrained in the model, and so it has a tendency to just blurt that out all the time. So a fine tune is best when it's yours. We're gonna provide you lots of great examples, and we're gonna show you how to change those examples. But at the end of the day, a fine tune should be as unique to you as your prompting is, if not more so. We're also going to give you access to easy software called Dyno Trainer, which will also be available for everybody else to use as well. So if you don't need the full education on fine tunes because you already know how to make them, we're providing that tool for the community. It's gonna make it a snap for you to organize your data and [00:07:00] format it properly into JSONL. Now, there are two file formats that we'll be talking about a lot in this class, and those are JSON and JSONL. What's the difference?
JSONL literally means JSON Lines, and all it does is take a JSON dataset and put each record on its own single line. It is really ugly. It's really hard to do by hand. I mean, it can be done by hand, but you have to make sure you get every single comma and every single quotation mark. It's just very tedious to do. JSON is the file format that Dyno Trainer actually takes, and in that JSON file we have extra categorizations to make it easier for you, the human, to manage your datasets. What do I mean by that? We're gonna get deeper into this, so if this is like, Elizabeth, you're in the weeds, I have no idea what you're talking about: hold on. A JSON file just takes a bunch of categories and then gives a value for each one. So for example, you could have [00:08:00] a JSON category that was "eye color" and it would say "blue," for a character profile. In this case, we have JSON files that will have a conversation header, and you could say chapter one, with the pen name you're training for. Those headers and things like that won't be accepted by the LLMs; you can't take those headers into your JSONL file. So we use JSON to make it easier for authors to work with their dataset. Dyno Trainer takes that JSON and turns it into the JSONL that you can take to OpenAI and Mistral, or it turns it into the CSV file that you can take to Google. Also, the fine-tune course will give you nine datasets to get started with, so that you can modify them or test them as is. So here's what we're going to cover over the next 10 modules. Intro to fine tuning: you're here, that's this class right now. Hi. What is a dataset? Your first dataset: I'll be back with you for that one, which is going to help you make better outlines, 'cause outlines are a really easy way to see [00:09:00] immediately, oh, that's not great, that is good.
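Before we move on: to make that JSON-versus-JSONL difference concrete, here's a minimal sketch in Python. The field names ("conversation_header", "prompt", "completion") are illustrative only, not Dyno Trainer's actual schema; the point is just that the human-only header fields get stripped and each remaining record ends up on one line, which is all JSONL is.

```python
import json

# A hypothetical working dataset: the "conversation_header" field exists
# only to help the human organize the data, and must not reach the LLM.
# (Field names here are illustrative, not an actual tool's schema.)
records = [
    {
        "conversation_header": "Chapter 1 - Pen Name: E. West",
        "prompt": "Write the opening scene from Anne's point of view.",
        "completion": "Anne pressed her palm against the cold window...",
    },
    {
        "conversation_header": "Chapter 2 - Pen Name: E. West",
        "prompt": "Write the carriage scene, first person, present tense.",
        "completion": "The wheels jolt over every rut in the lane...",
    },
]

def to_jsonl(records, drop_keys=("conversation_header",)):
    """Drop the human-only fields and serialize one record per line."""
    lines = []
    for rec in records:
        cleaned = {k: v for k, v in rec.items() if k not in drop_keys}
        lines.append(json.dumps(cleaned))  # one compact JSON object per line
    return "\n".join(lines)

print(to_jsonl(records))
```

Every comma and quotation mark the speaker mentions is handled by `json.dumps`, which is exactly why you want a tool doing this instead of your own hands.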
So we're going to start learning how to make datasets with data that's very simple to validate and say, hey, that's good, or no, that's not so good. Consistent outputs: scene briefs. If you like to write with beats, you'll definitely like that module, because it's going to show you how to take that next step into scene briefs. Write Like Me: Steph Pajonas will be here. She was one of the first people to ever figure out how to make fine tunes write like her, because she has a very distinct voice that is first person, present tense. It was amazing. Her fine tune is actually the example I used when I shared it on the forum, and people couldn't tell which one was human and which one was AI. And that was back in December of 2023. New advances in fine tuning: I will be back, because these things are changing every couple of months. Even in the span of us recording this course material and putting it all together, I highly expect we will suddenly get more models we can fine tune, or new possibilities. For [00:10:00] example, Google only allows you to fine tune on one model right now. I have a feeling that's gonna change in the very near future. Conversational AI dataset: this is one of those new advances, actually, that we'll talk about. The old fine tunes were only prompt and response: your prompt, which was system and user, and then the assistant part, which is what you were demonstrating you would like the AI to say to you. So the first datasets were always just, here's how I wanna prompt you and here's how I want you to respond. Now we have the ability to be like, here's the system prompt, here's my prompt, here's the response I want from you, here's my follow-up question, here's the response I want from you, here's my follow-up question. This is a huge advance.
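To picture what one of those multi-turn training examples looks like, here's a sketch in the chat-messages shape that OpenAI's fine-tuning endpoint accepts (the story content itself is invented for illustration). The whole conversation, system prompt, your turns, and the assistant turns you're demonstrating, gets serialized as a single line in the JSONL training file.

```python
import json

# One multi-turn training example: a system prompt plus alternating
# user/assistant turns. The assistant turns are the responses you are
# demonstrating you WANT. (Content is made up for illustration.)
example = {
    "messages": [
        {"role": "system", "content": "You are a fiction co-writer. Match the author's voice."},
        {"role": "user", "content": "Brainstorm three complications for the ball scene."},
        {"role": "assistant", "content": "1. Anne's letter is found. 2. The musicians quit. 3. Rain floods the lane."},
        {"role": "user", "content": "Expand the first one into a paragraph."},
        {"role": "assistant", "content": "The letter surfaces between the second and third dance, passed hand to hand..."},
    ]
}

jsonl_line = json.dumps(example)  # exactly one line, no pretty-printing
print(jsonl_line)
```

Because you demonstrate the follow-up turns too, the fine-tuned model learns how much detail to give at each step of a back-and-forth, not just on the first reply.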
That means if you're someone who routinely is using AI for brainstorming or other applications where you're going back and forth, you can actually control how specific the AI is in its response, how much detail it's regularly giving you. You're demonstrating for it the kinds of [00:11:00] answers that you want it to give, instead of perhaps some of those answers that you've been getting in the past with just the base model (that's what we call it when it's not fine tuned, the base model), where sometimes it's a good response and sometimes you have to rerun it. A conversational AI dataset can prevent you from having to rerun it, so you don't have to waste those tokens. Conversational AI is gonna be Stacy, by the way. Genre-specific fine-tune datasets: Steph will be back to show you how to make a dataset that is specific to one genre, so that then you have a fine-tuned model that's really good at writing that genre. I'll be back to show you Direct Preference Optimization, and how to make GPT-4o write like o1, which I'll also be using to basically combine a lot of different things, the Write Like Me and everything like that. DPO is the newest form of dataset in fine tunes, and in that one you give a prompt, you give a good example, and you give a bad example; then another prompt, another good example, another bad example. That is also going to be the way that we fine tune the reasoning models, so your o1, [00:12:00] your o3. That's the new innovation for those kinds of models. They're not available for the public to fine tune yet, but I guarantee that's coming in the next six months. Then finally, the final module will be the three of us taking the datasets that we've made in the previous modules and swapping them. So we're each individually gonna show you how we would take a dataset from some other author and change it for our needs, using it as a structure and then making our own.
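One more quick sketch before we wrap up: the DPO shape I described a moment ago, a prompt paired with a response you prefer and a response you don't. The field names below follow the common prompt/chosen/rejected convention used by open-source training libraries like Hugging Face TRL; treat the exact schema as something to check against your provider's current docs, since each one names these fields slightly differently.

```python
import json

# One DPO training example: the same prompt, with the response you
# prefer ("chosen") and the response you don't ("rejected").
# (Field names follow the common prompt/chosen/rejected convention;
# verify your provider's exact schema before uploading.)
dpo_example = {
    "prompt": "Write the final paragraph of the chapter.",
    "chosen": "Anne folded the letter once, twice, and fed it to the fire.",
    "rejected": "Little did she know, her life would never be the same.",
}

print(json.dumps(dpo_example))
```

Notice the rejected example is exactly the kind of trite chapter-ending paragraph discussed earlier: DPO lets you explicitly train against it instead of only training toward what you like.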
So you'll get three more datasets in that 10th module, showing you how to modify a dataset and make it yours inside of Dyno Trainer. Okay, let's learn more about fine tunes.