Part 2 What You'll Learn in This Class === Elizabeth: [00:00:00] So we're gonna talk a little bit about fine-tunes and why this class. Like I said, models are becoming more advanced every single quarter. I don't know when you're taking this class; I'm recording this right now at the end of February of 2025. But no matter when an author comes to learn about AI and thinks, maybe I should take a look at this, the biggest thing that I hear from authors is: I can't get it to write like me. A lot of them will work on prompt generation. They'll take prompts from people, they'll practice them, and they'll realize that they just get frustrated because they can't quite unlock making the model sound exactly like them. And truthfully, if your author voice is more unique than not, you don't sound like the majority of the writing that the AI was trained on, so there can be quite a big mismatch. So what can you do? The answer is a fine-tune. You can fine-tune select models on OpenAI, Mistral, and Google. Mistral's important: Mistral means that you can fine-tune models that will allow you to have not-safe-for-work content. [00:01:00] So if you are a romance writer and are sick of these pedestrian lovemaking scenes, you can actually fine-tune Mistral so that it writes your lovemaking scenes, or your sex scenes, the way you write them. You're not stuck with how AI often likes to just summarize it, you know: they start kissing, they start removing clothes, and all of a sudden everyone's happy. Right now the public cannot fine-tune Claude models, however. I say the public because we do know that there are fine-tuning capabilities for people at the enterprise level; it's just not public. By the time you're watching this, it might be public. That's the speed at which this thing moves. If you love Claude, though, you can fine-tune a cheaper model to write just like Claude by using outputs from Claude as your synthetic data to feed into the other model, to say, hey, I want you to write like this. And that's really what fine-tunes do best. They change the style, the tone, or the output of the LLM response. So a fine-tune can stop some of that nonsense like "little did she know her life would never be the same" if you give [00:02:00] it a bunch of chapters as examples where you never have those stupid paragraphs at the end of your chapters that are just bad, clichéd writing. Those examples are enough to make that fine-tuned model no longer give you those kinds of trite paragraphs at the end of your chapters that most of us just have to lop off. Fine-tunes only change how the AI responds. There are a couple of things that they do not do. They do not help the LLM memorize facts. I'm gonna repeat that, and I'm gonna say it over and over and over again: a fine-tune of an LLM does not help it memorize facts. They also don't really work very well with generic prompting. What I mean by that is that when we fine-tune a model, we have a very specific prompting style that we have already adopted as authors, one that has gotten us to like 70 to 75% there. And those prompts are what we put into our fine-tune dataset. And so when we use that fine-tuned model, we wanna use that same style of prompting. So: a fine-tune, [00:03:00] plus your story information, your scene briefs, your beats, your instructions, however it is that you like to prompt the AI to say, hey, write this chapter.
Consistent prompting with a fine-tune is going to equal outputs that need minimal editing. And I think that's the dream, right? The dream is that the AI sits there inert, it doesn't do anything without us, but we come to it and we're able to say: okay, this is how I write, this is how I want you to write, this is my genius that I'm sharing with you, so that you are reflecting my genius, not some other rando generalized genius that's out there. And so the outputs I get are really close to me; they're already very similar to something I would write. My fine-tunes are on my Jane Austen fan fiction writing. If someone were to run that fine-tune, and I've tested this with outputs, I can't tell the difference between that output and whether I wrote it or not. It's that good. It's that close to my own writing. And I've had my longtime editor even look at the AI writing, the fine-tune stuff, and she was like, oh [00:04:00] girl, I can't tell that that's not you. And she's been editing me for over a decade. Additionally, more and more authors are turning to using AI to keep their readers happy, and it is to keep readers happy; that's why we use AI. What's the number one thing any reader says the second you publish a book? Where's the next one? That's all they care about. And I get it, I'm a reader too. I don't care how my books get to me. I don't care if they use a ghostwriter, if they have two editors, if they had to be kidnapped and stuck in a hotel by their acquisitions editor and forced to finish the book series. I don't care how the book was written, really, as long as the book is good and it continues the story and I'm happy. It's okay that readers are selfish and they just want the next story. That's totally okay. But this is your ticket to writing with AI without sounding like you write with AI; it sounds like you. This is not something that you'll find in a lot of different AI writing groups or AI tutorials and stuff on YouTube. And the reason for that is because most industries have [00:05:00] no use case for an AI that writes like them. They never had a use case for anything to sound like a consistent voice. They're usually using AI in applications like processing data, sending form emails, crunching numbers, doing tasks. It's really only creatives, or people who are writing nonfiction in a creative way, who need to have AI sound like them. And that's why this is a use case that is not very often taught or shared: it's a very niche use case, but it is a use case that we have. So what will this fine-tune course provide you? Well, the big thing is it's gonna walk you through everything you need to know about making a fine-tune dataset so you can make your own. Here at the Future Fiction Academy, we're very, very big on "I'm not gonna fish for you, I will teach you how to fish." There's a myriad of reasons for that. We know that a lot of you are like, can't you just make it easy for me and just do it for me? I could. The problem is, especially when you're [00:06:00] talking about a fine-tune, if thousands of authors have the same fine-tune, it's no different than the main models all writing generically the same. You know, like everybody having that phrase, if you write with o1, that says "flattery will get you nowhere," because it's one of those catchphrases, clichéd phrases, that was overtrained in the model, and so it has a tendency to just blurt that out all the time.
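To make that idea of "your prompts plus your chapters" concrete, here is a minimal sketch of what a single training example can look like in OpenAI's chat fine-tuning JSONL format. The system text, scene brief, beats, and chapter text are placeholders, not the course's actual templates; the point is that the user message in your dataset uses the same prompting style you plan to use when you run the fine-tuned model.

```python
import json

# One training example in OpenAI's chat fine-tuning JSONL format: a single JSON object
# holding a "messages" list. The strings below are placeholders, not real course prompts.
example = {
    "messages": [
        {"role": "system", "content": "You are a ghostwriter who writes in my voice."},
        {"role": "user", "content": (
            "Story info: <your story information>\n"
            "Scene brief: <your scene brief>\n"
            "Beats: <your beats>\n"
            "Write this chapter."
        )},
        {"role": "assistant", "content": "<the chapter exactly as you wrote it>"},
    ]
}

# A JSONL dataset is simply one of these objects per line, one line per chapter example.
with open("my_voice_dataset.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```

Each provider expects its own variation on this layout, which is part of why the class walks you through building the dataset rather than handing you a finished file.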
But at the end of the day, a fine-tune should be as unique to you as your prompting is, if not more so. We're also going to give you access to easy software called Dyno Trainer, which will also be available for everybody else to use as well. So if you don't need the full education on fine-tunes and you already know how to make one, we're providing that tool for the community. It's gonna make it a snap for you to organize your data and format it properly into JSONL. Now, there are two file formats that we'll be talking about a lot in this class, and those are JSON and JSONL. What's the difference? JSONL literally means JSON Lines, and all it [00:07:00] does is let you have a JSON dataset where every training example, with all its paragraphs of text, sits on a single line. It is really ugly. It's really hard to do by hand. I mean, it can be done by hand, but you have to make sure you get every single comma and every single quotation mark; it's just very tedious to do. JSON is the format that Dyno Trainer actually takes, and in that JSON file we have extra categorizations to make it easier for you, the human, to manage your datasets. What I mean by that is each JSON file, and we're gonna get deeper into this, so if this is like, Elizabeth, you're in the weeds, I have no idea what you're talking about, hold on. A JSON file just takes a bunch of categories and then gives a value for each of them. So for example, for a character profile you could have a JSON category like eye color with a value like blue. In this case, we have JSON files that will have a conversation header, where you could say chapter one along with what pen name you're training for. Those [00:08:00] headers and things like that won't be accepted by the LLMs; you can't take those headers into your JSONL file. So we use JSON to make it easier for authors to work with their dataset. Dyno Trainer takes that JSON and turns it into the JSONL that you can take to OpenAI and Mistral, or it turns it into the CSV file that you can take to Google.
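To illustrate the kind of conversion just described, here is a minimal sketch under assumed names: the fields "conversation_header", "pen_name", and "messages" are hypothetical, not Dyno Trainer's actual schema. The idea is simply that the human-friendly JSON carries extra labels for your own organization, and only the messages survive into the JSONL you upload.

```python
import json

# Hypothetical sketch of the JSON-to-JSONL step described above. The field names here
# are invented for illustration; they are not Dyno Trainer's real schema.
def json_to_jsonl(json_path: str, jsonl_path: str) -> None:
    with open(json_path, encoding="utf-8") as f:
        dataset = json.load(f)  # e.g. a list of labeled conversations

    with open(jsonl_path, "w", encoding="utf-8") as out:
        for item in dataset:
            # Human-only labels such as "conversation_header" ("Chapter 1") and "pen_name"
            # get dropped; only the messages are written out, one JSON object per line.
            out.write(json.dumps({"messages": item["messages"]}) + "\n")

# Example usage (hypothetical file names):
# json_to_jsonl("my_pen_name_dataset.json", "my_pen_name_dataset.jsonl")
```

Okay, let's learn more about fine-tunes.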