Non-Lugubrious AI Text Tool Use for SEO With Type.ai Founder Stew Fortier

January 10, 2025 00:37:32
The Unscripted SEO Interview Podcast

Show Notes

Jeremy Rivera and Stew Fortier discuss the evolution and future of AI writing tools, focusing on the product Type. They explore the selection of AI models, diverse use cases, and the integration of SEO strategies. The discussion also touches on enhancing writing styles, the role of images, and advanced writing capabilities that meet specific standards. They also cover specific SEO tasks worth adding to your to-do list.

In this conversation, Stew Fortier and Jeremy Rivera delve into the evolving landscape of AI in content creation, discussing the balance between quality and quantity, the limitations of language models, and the societal implications of AI on employment.

They emphasize the importance of understanding AI's capabilities and limitations, advocating for a more informed and cautious approach to using these tools in SEO and content strategy.


Episode Transcript

[00:00:00] Speaker A: Hello, I'm Jeremy Rivera with Unscripted SEO. I'm here with Stew Fortier and we are going to talk about AI, because that's what Stew do. So why don't you give yourself an introduction, Stew, and we'll jump right into the deep end of the pool. [00:00:17] Speaker B: Great. You set it up perfectly. I do AI things, as you know, as an early supporter of Type. I'm Stew Fortier, co-founder and CEO of Type. We're an AI writing product. Maybe the simplest way to explain it: we're trying to build Google Docs from the future, a really, truly great, all-in-one writing tool that has all the cool, powerful AI capabilities for writing and editing, plus just the good old-fashioned doc editor functionality you need for a great writing experience. And Jeremy is an early adopter, so we owe our early traction here to folks like Jeremy willing to take a bet on us early. [00:00:54] Speaker A: It was an early adoption thing. I was like, you know what? I'm really tired of Google Docs not being able to understand what I wanted to be an H2 and making, like, half the document H2 every single time. Like, come on, it's not that hard. I want that. I'm highlighting that. Why is it making the whole paragraph an H2? And then of course, the iteration. You know, I've used a lot of different AI tools. I'm testing a lot of different AI models. So I appreciate that you're using both Claude and GPT as options. What led to that decision? And are there other models you're going to utilize, or are those just the two that have the most convenient APIs? Are they really that different? Let's start with those questions, and I'll stop asking questions and let you answer. [00:01:45] Speaker B: Well, it's a great question because it gets at, I think, our mission or goal or just purpose, really, of building Type, the product, which is that for us, first and foremost, our motivator is building a great and useful writing experience.
And then, secondary from that, is selecting the technology and the AI and the models that we think are best suited for that job. And in the case of GPT versus Claude, these things actually are weirdly, subtly different, but different enough to be important. Claude can have these really great strengths with style and warmth, and sounds a little bit more human and inviting. So in writing where that's really important, we think that Claude is the right tool for the job, and that's why you have the option in Type to switch to it. And then for harder writing tasks, or maybe there's more complicated reasoning, or maybe you're just writing something more technical in nature, more logical, something that depends on really airtight logic, GPT models tend to be a little bit stronger. And so for us that was part of the impetus: let's give folks the right tool for the job. And one reason, you clued into it correctly, is that those happen to be the models with really great developer experiences. But we've also found, trying other models, you know, Llama 3 will come out and it appears to be beating all these benchmarks, but when you actually go to use it, it just flubs on the simplest things and it's actually really frustrating. And even though it cleared some, not arbitrary, but some machine learning benchmarks, the user experience is actually kind of subpar. So we've strayed away from them for those reasons as well. [00:03:20] Speaker A: Makes sense. So what type of writing were you thinking would be done through this? And what are some of the most interesting projects you've seen being created with Type that surprised you? [00:03:35] Speaker B: I'll start with the first question, which is, what were our expectations when we started Type? What would people do with it and in it?
And our thesis, still, and definitely at the beginning, is that writing tools tend to be pretty horizontal: both a poet and a lawyer can use Google Docs, because writing interfaces lend themselves to a lot of different types of writing, given the right tool. And so we thought about that as well with Type. If we build the interface in the right way, you can do quite a range of writing in it. The second trend was that these language models are actually general purpose as well. They know a lot about writing fiction, but they also know a lot about processing podcast transcripts and doing something with those. So maybe you could build an interface that podcasters and novelists and lawyers all use. Long way of saying that's kind of proven out. We really do have a wide range of customers and use cases. [00:04:33] Speaker A: Yeah. [00:04:34] Speaker B: The surprising one that has insane usage is people writing extremely steamy romance novels. This was not at all on my radar. But these are the power users of the power users. It's incredible. So maybe we should just pivot and do that. [00:04:53] Speaker A: That's spicy. Spicy, definitely. So there's continual development on the GPT side of new tricks and things it can do. What are three things that are on that roadmap already, where you're planning to do X or Y and have a solid timeline? And what are two things that you've looked at that are potentially getting added? [00:05:23] Speaker B: Yeah. If I were to really oversimplify our roadmap, if you will, the overarching theme is: let's go really deep on writing-specific problems and all the various frustrations that come up when you go to write something. Can we build a solution for each of those, so that together the writing experience as a whole feels really powerful and delightful?
And the current limitations of Type, I think, are the pretty obvious ones, which is that it's like having the most forgetful co-worker ever. It doesn't retain a lot of knowledge about you. As we were just talking about before this, it doesn't know your style as well as it could. So that's what we're working on now: some features to help folks bring in their knowledge from the outside world, use that in Type to help inform some of the generations it does, and to really deepen its ability to understand your style and preferences. And that's not necessarily one thing; it's kind of a combination of the next three or four things. But that's definitely, I think, what's next. [00:06:27] Speaker A: Yeah, it would be amazing to have a whole knowledge library where I could digest all of my knowledge docs for my SaaS and all the support tickets with different answers, and then use that to help shortcut creating blog content. You know, help me ideate based off of the biggest support tickets, or problem articles that my salespeople can use, and kind of connect those A and B uses based off of a whole lexicon of information. Because I think, with the bigger idea of LLMs, we're definitely probably going to see that happen as companies look at their own GPTs and at how to curate their own search experience. So it's a bit of a fracture from the SEO perspective, where search has almost always meant Google, but now search also means LLM-based search. We've got the Bing index and its Prometheus integration bringing information first to the first version of Bing Chat and Bing AI, which was then renamed Copilot. But now that same API interface is available to GPT itself. I think Perplexity is also connected now to the Bing index, and I think a few others are starting to connect that way. Not to Google.
Google hasn't made its index available to anybody else, which is an interesting play. It kind of makes sense, but it also makes sense for Bing to be willing to do that. Is there an integration you're looking at, or eyeballing, to use that index connection to help SEO writers check out competitors' popular blogs, or find blog post titles based off of what competitors didn't talk about? [00:08:19] Speaker B: Yes. As a more horizontal tool, this is always a really difficult balance to strike. To be a little bit more specific, I think there's a world in which Type actually doesn't have the best ideation or SEO strategy experience. Maybe Ahrefs plus AI builds something really spectacular for the strategy part of SEO, for really finding the gaps and where you should go. But then, when you're ready to actually go create that content, Type is going to be the superior authoring experience for whoever's responsible for actually making it. And, as we were talking about a little before this call, a component of authoring good search content is definitely the ability to cite high-quality external sources, find great links to reference, and reference your own internal content. So those sorts of capabilities that you need as you author something will be necessary. We have less of an eye on being maybe the best strategy tool. [00:09:24] Speaker A: Right. [00:09:24] Speaker B: If we can also get that right, that's fantastic. But probably we just focus and go really deep on the authoring piece of SEO work. [00:09:32] Speaker A: Yeah, that makes sense.
It's more a question of: do you think in the next six months you'll have that capability? Because you have a chat interface, you've got the Type interface on the left which can, you know, help you spin up new text or generate drafts or whatever. But what do you think, as far as time frame, on getting access to the index in that chat function? [00:09:57] Speaker B: I'm sure I'll eat my words here, but I feel very good it would be a this-year thing, you know, knock on wood. I'm not even confident enough to give a weeks-or-months forecast, but I'll just say that we've kind of been waiting for this to be a more solved problem. There have been pioneers like Perplexity and, like you mentioned, Copilot, which of course is made by the same company as Bing, and ChatGPT search. Finally, I think, folks have zeroed in on it; presumably Anthropic has something coming. Google's no longer the only show in town. And so we're not pioneers on this front per se, but I think it's just getting mature enough to where we can now implement it in a writing-focused product like Type. So my spidey sense is this is a this-year type of time range where this could actually be integrated pretty well into the product, and we don't have to go reinvent the wheel. [00:10:53] Speaker A: So does the tailor have tattered clothes, or do you eat your own dog food at your organization? Do you use Type for your internal SEO process, and what types of campaigns are you doing to address your own search volume? [00:11:14] Speaker B: I'm so glad you asked that, because I kind of love the answer, which is that we had a customer reach out, an SEO writer who had discovered Type on their own. They were using it for some client work.
This person then went onto our blog and was just like, hey, you've got some good stuff on here, but I have some ideas that I think could really elevate some of what you're doing on the blog. Let me do some writing for you. Let's just do a few posts, see if it's a fit, blah, blah, blah. Pitched us on helping us build out our blog content and had a really good sense of where we could take things theme-wise. Anyway, so we actually hired a human who uses Type to write blog posts on the Type blog. [00:11:59] Speaker A: Got it. [00:12:00] Speaker B: We're dogfooding it in two ways. One, it's a customer who's now writing a lot of this content. Two, they're using Type in their process. But three, before that I was doing most of the work myself, and Type was absolutely part of that process. And of course, as you dogfood it, you see all the gaps. You see, oh my God, okay, here's what we have today. This is pretty solid, but most of what you see is what's yet to be built. [00:12:23] Speaker A: What are some unusual styles? Because when you are creating an article, you can tell it, write in some feng shui, you know, some haiku style, on this input. What are some unusual styles that you've ended up adding to prompts to get something unique? [00:12:40] Speaker B: One of the first blog posts I wrote on the Type blog kind of alluded to this. One thing I've noticed: if you spend a little bit of time with these language models, even if, like me, you don't have a background in machine learning, you get the gist of how they work, which is that they're trying to predict the most likely next set of tokens, of letters and words, in a sequence.
And therefore, in any given interaction, they're trying to give you the most likely response given the context of your conversation and your instructions. [00:13:15] Speaker A: Yeah. [00:13:16] Speaker B: And when it comes to writing, one consequence, and this is becoming a more solved problem, is that AI writing on its face can feel pretty bland, and that's why you end up prompting your way out of it. [00:13:28] Speaker A: Yeah. [00:13:28] Speaker B: And something I notice is just these small qualifier words, like "include a surprising statistic" instead of just "include a statistic," or "give me a counterintuitive way I could frame this idea," or just "make this more interesting." Literally, you can just say that. And I've been quite pleased with how much impact that can have on the overall output. Just these qualifiers: give me an interesting introduction, give me a surprising statistic, give me a counterintuitive framing. [00:14:01] Speaker A: Give me a lugubrious entry for this. [00:14:05] Speaker B: Well, I'll first have to ask what that means, because that's an SAT word if I've ever heard one. [00:14:10] Speaker A: It's creepy, crawly, evil. "Yes, your lugubriousness." That's perfect, from Hercules. [00:14:18] Speaker B: I mean, that's exactly what I'm talking about, though. That's somewhere in the training data on these models, but it's probably not seen a lot. So if you use it, it'll probably bring up less likely, slightly more surprising responses. [00:14:31] Speaker A: Have you put any thought into, because there are times where I've come up with a list of phrases, like "don't ever use in conclusion," or "skip the intro," or "cut the intro," or "make every tenth sentence much, much longer, almost a run-on sentence."
Have you thought about making little personas and calling them, like, Jeffrey? He has a little personality; he avoids these things. And then maybe have a custom one where you can put in those negatives of don't do this, do this, so you can kind of have an internal writer that you can turn to, and they'll help you write your stuff in that style. [00:15:21] Speaker B: There are kind of two ways we've thought about that problem and solving for it. One is what you just said: developing over time some personas, some modes, some personalities that you can switch into. All right, I'm in Jeffrey mode. I'm in Hemingway mode. I'm in whatever mode, even just SEO-best-practices mode, where it has all this knowledge about that. I think that's one very likely outcome; that's what a lot of other products have done. A harder-to-pin-down version of that, which I think could work if done well, is to give the AI enough examples of what you consider to be good writing in this context, and have it retain that knowledge so it's always primed with it. So when you go to write, it takes your prompting burden down quite a bit. It almost feels like it has a spidey sense, because it's seen enough examples of how you like to do this that you don't even have to manually guide it anymore. It just kind of knows: this is how Jeremy does his podcast descriptions. It's harder to pin down, but I like it in spirit, because there could be an interface that's even simpler, where as you use it more, you notice: I don't have to give it that instruction anymore. It just knows, based on the previous feedback I gave it, not to do that. And it creates this more generalized personality that's very adaptive to what you do, if that makes sense.
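The persona idea the two of them describe here boils down to a reusable bundle of do/don't style rules that gets prepended to every request as a system prompt. Here's a minimal sketch in Python; the `Persona` class, the name Jeffrey, and the specific rules are illustrative assumptions, not Type's actual implementation:

```python
# Sketch of the "persona" idea: a named bundle of positive and negative
# style rules that can be rendered as a system prompt, so you stop
# re-typing the same instructions on every request.
from dataclasses import dataclass, field


@dataclass
class Persona:
    name: str
    dos: list = field(default_factory=list)    # positive style instructions
    donts: list = field(default_factory=list)  # negative constraints

    def system_prompt(self) -> str:
        # Render the rules as a system prompt for a chat-style model.
        lines = [f"You are {self.name}, a writing assistant."]
        lines += [f"- Always: {rule}" for rule in self.dos]
        lines += [f"- Never: {rule}" for rule in self.donts]
        return "\n".join(lines)


# A hypothetical persona using the kinds of rules mentioned in the conversation.
jeffrey = Persona(
    name="Jeffrey",
    dos=["include a surprising statistic", "vary sentence length"],
    donts=['open with "In conclusion"', "pad the introduction"],
)

prompt = jeffrey.system_prompt()
```

In practice, the returned string would be passed as the system message to whichever model API is in use, with the user's actual writing request following it.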
[00:16:45] Speaker A: Like, it's the difference between personalization over time versus specialization. [00:16:52] Speaker B: Yeah, totally. Instead of having to go and select each tool for the job, it's gradually getting to know you in a more general sense. [00:17:02] Speaker A: Is Type going to have images? [00:17:05] Speaker B: Yeah, this is another one where it's like, all right, we're not going to be pioneers in images. Will someone else develop something good enough to where we can sprinkle a little bit of our magic on it, integrate it into the product and make it great? Because images are not the centrally hard thing about the writing process, but they are a part of it that a lot of writers have to deal with. So we've kind of been sitting a little bit on the sidelines. Okay, can Midjourney get a little bit better? We were talking about its ability to embed text into images. Can AI get better at character recognition, that sort of thing? It's a long-winded way of saying yes, but I think we're just waiting for it to mature a little bit before we feel like we can actually deliver a really good experience. [00:17:47] Speaker A: So Stanford actually came up with an AI-powered tool. They call it STORM, and it taps into citations and uses multiple agents in multiple layers: you give it a topic and it'll give you a plausible, citable, formatted document in a scientific document format. Obviously that's layering these agent ideas to do multiple different tasks, as far as researching and citing studied information. Is that somewhere you think you want to get an API? Or maybe that's something, as a model, you want to break down? Because there are certainly additional use cases for different types of writing that have very strict parameters as far as style, format, and export expectations. So what do you think about that?
[00:18:42] Speaker B: It falls into a similar theme. Our differentiation, or whatever you'd like to call it, is really largely about the interface and workflow. Can we build a good writing interface and a good editing workflow, so that as the underlying models become more capable, as they even have agent-style capabilities like you're saying, whether that's something like searching the web or something much more complicated like you're describing, reviewing and structuring and refining a document to really fit a certain protocol or shared standard, those are things that hopefully our customers and users just get to benefit from as the models get more capable. But the reason you come to Type to do those things is because, when that new structure for your document is proposed, we have the best way to review and accept edits, to give feedback on what you want to change. The whole workflow is there from end to end. Even if maybe we're a little bit slower to get the best AI on the market, you find yourself thinking, well, it's saving me a lot of time to just be able to do it here, so I'll wait till they've added that and I'll do it in Type. [00:19:51] Speaker A: Let's get philosophical here. There's a legendary snake, the ouroboros, which eats its own tail. Now, we know that the only pure database of truly human-created content on the Internet is now pre-AI, because even in like 2013, 2014 there were sites I was using; it was Jarvis before they got threatened with lawsuits from Marvel and became Jasper. So even then it wasn't all human-written. But more sites are supplementing their truly human-crafted content with machine-generated or machine-assisted content. Some very large companies that I know of have done content creation at massive scale using tools like yours.
Obviously, you know, LinkedIn has AI-augmented comments. Say what you will about the output of that, but the reality is that more and more of the marketing side of things is going to be supplemented or, in some places, replaced entirely with content where three-quarters of it is direct GPT or Claude output. What's your view on the degradation of content, as we'll no longer have models trained purely on human content, and as content contamination in the models leads to a diminishing of the quality of the content these models themselves can generate? [00:21:32] Speaker B: Yes, completely. One way I think about it is that we're currently in a race of sorts between the algorithms that blindly consume everything on the Internet and learn from it, and the algorithms that have some ability to understand the underlying quality of that content and determine its usefulness. An example would be social media. Whatever is in your feed, on pretty much any platform, has gone through some sort of algorithmic processing to determine: is this something Jeremy's likely to find interesting or not? Is it above a certain interestingness bar? There's a lot of crap you're not seeing, I'll put it that way, that the algorithm has deemed just too uninteresting and unhelpful, rightly or wrongly, and it's done its best to surface stuff it thinks we'll find interesting, or at least engage with. Obviously the incentives can in reality be a little bit more cynical. So I would say there is a gray area in which having more AI-generated content in the training data doesn't necessarily, inherently put the whole effort at risk, or inherently dilute the collective intelligence or knowledge that we'll be able to accrue over time.
But that is only so far as we counterweight it with good judgment systems for what stuff we should be paying extra attention to. When Google searches, when the Bing index returns a page for some research you're doing, how did it determine that was the right one? We have to get both things right, I guess, is what I'm trying to allude to. [00:23:05] Speaker A: So you work a lot with LLMs, large language models. It's not Data from Star Trek. It's not truly intelligent. In fact, in a lot of ways it's quite dumb. How are you seeing people's understanding of what these tools are actually doing behind the scenes impacting how they're being used? [00:23:29] Speaker B: Oh yeah. I guess there's good and bad. The bad thing, the thing that I still notice quite a bit, is an underappreciation for how frequently hallucinations occur. People actually default to trusting these models to an almost, well, not shocking extent, in that it shouldn't be shocking that something that's more convenient to use is more compelling to use. If we can type something into ChatGPT and get a very personalized response that seemingly is good advice or seemingly solves our problem, who isn't going to feel tempted to use that? I notice this when I'm doing research or working with LLMs on something that I feel like I know pretty well. [00:24:10] Speaker A: Yeah. [00:24:10] Speaker B: How often its advice or guidance or very plausible-sounding suggestions are actually mediocre at best and dead wrong at worst. [00:24:19] Speaker A: Yeah. [00:24:20] Speaker B: And I think people are just trusting these things, and I do it too. [00:24:23] Speaker A: Yeah. [00:24:24] Speaker B: Whether it's right or not. I don't think people are fact-checking. The models are getting better, for sure, but it's still concerning. [00:24:32] Speaker A: I call it my father, the silver-tongued fox, effect.
Like my dad, you could ask him, hey man, what's happening in Europe right now? And he'd be like, well, you know, in the 14th century the Mongol invasion came in and totally changed the face of Serbia, and that's why Slobodan Milosevic has come to power. And you're like, wow. And then a year later I'm in history class and it's like, what? The Mongols never went into Serbia, or what happened there was totally different from what he said. It just sounded really good, really confident. So I think "make it till you fake it" is what GPT stands for, what these LLMs stand for. They're just predicting what the next word is. I think Stephen Wolfram's article on how LLMs work is foundational for anybody using these tools. Actually, it should be a primer in grade school now; as things progress forward, it should be foundational for kids to understand that this isn't general AI. This is a specific trained system that's getting better with each iteration in some respects, but it is based off of math, not off of accuracy. So I think the types of hallucinations are a little less blatant. [00:25:58] Speaker B: Yeah. [00:25:59] Speaker A: But they're still very prevalent. Like, hey, what are some articles by a New York author on the subject of taxi shootings in the 1970s? Nobody wrote about that. But you asked GPT. [00:26:10] Speaker B: Yes, it'll give you. It's going to. [00:26:12] Speaker A: Give you an answer. You asked it to give you an answer. So yeah, I think the blind credulity and kind of unaware usage of the tool is scary. [00:26:25] Speaker B: Yeah. And the optimist in me is like, okay, with your dad as an example, and I think we can all think of our own version of that, humans also have bugs. We can be incredibly compelling in certain ways but totally wrong sometimes.
And sometimes both those things happen at once, and often at large scale. And so part of me is like, all right, the optimist case is we actually get the average LLM to be above human-level reasoning, or just marginally more factual than the average human or even subject matter experts. In the same way that if you can get a self-driving car to be safer than the average human driver, you'd actually want more of those on the roads, even if there are accidents, which is horrible. But we can't just never roll it out, because humans also make mistakes. I think it's an unbelievably hard problem, but if we can crack it, that could be great. Right? That's really powerful. [00:27:23] Speaker A: So what you're saying is that the best approach is to have a salt lick next to our computer so we take everything with a grain of salt when we're using these models. Maybe that could be your conference gift. Take it with a grain of salt: here's your salt shaker, your Type AI salt shaker. [00:27:44] Speaker B: Absolutely, yes. [00:27:46] Speaker A: Take the results with a grain of salt. Because there are conversations I've had, I talked to Nick Jain of IdeaScale, and he's talking about how in 10 years truly generative AI is on the horizon, and the implications of that for writers, for the creative community. I think there are a lot of people in the Philippines who are very unhappy because they were getting a lot of writing work and now they're not. Or maybe they are kind of happy, because a lot of their writing work they're now doing with GPT, and their English sure has improved. But, you know, American marketing systems have offshored and outsourced before, and now this is kind of onshoring. It's not that we're going to be hiring as many people to do the writing, because you're able to do a lot more of it.
What do you think is the ultimate impact of that, societally? It used to be, oh, you need a good white-collar job, but now you can do the work of 10 writers with one tool, Type, you're hoping. Being part of the problem and the solution at the same time, how do you think that impacts the market going forward? [00:29:04] Speaker B: Of course, the honest answer is I have absolutely no clue, and we're all winging it. But here's something I've been hearing reasonably often from other founders I know. Six months ago it was, hey, I just let go of my assistant in the Philippines and now I'm just using ChatGPT or Type to do whatever. More recently it's, hey, I'm looking for a writer I can hire who's using AI to be more productive, because I need someone to manage all the AI workflows for me. And they're essentially hiring back that person, but now they're levered up with AI tools. And I think what happens often with technology: there's a promise of it automating things away and reducing our working hours, and you no longer have to do X. But of course, if it's now easier for everyone in the market to do X, then you're going to get pushed to do Y. And the question is what Y is going to be. I'm not sure if it's that we're writing a lot more content overall, or a lot better content. But there's always a frontier of work: when the friction gets removed at this level, okay, now we're playing up here. And what are those tasks or jobs to be done going to be? So I feel pretty good that people are going to find a lot of unmet needs, and we're going to need people to solve them. But the shape of what those become is not always super clear from the outset.
[00:30:30] Speaker A: Yeah, I think it's going to be an interesting impact, because you probably will end up with some aspect of the super-specialist evolving, like the organic, human-certified content. Hey, we get a monitor to prove and stamp that every letter was actually written by a human and not by a GPT. That's going to be like organic steak in the store versus the general crops, which are, like, genetically modified content. [00:31:09] Speaker B: That's right. [00:31:10] Speaker A: I think that side of it is going to come. But there's also maybe an argument for downscaling, like not needing a four-year college degree in order to work at a marketing company. Societally that might be a good thing, because it opens so many doors to people who have innovation, drive, and creative reasoning capability, but can't afford to go to a four-year college to get that marketing position. That opens up those pathways to being part of these white-collar jobs and the job ecosystem without such a high barrier. Because, you know, I didn't do more than a year at community college, and I had a friend who went to school and paid for the full degree, and by the time he came out I was managing the department and I hired him. [00:32:05] Speaker B: Exactly. [00:32:06] Speaker A: So not all of these systems work out societally the way they were meant to. And maybe technology is playing a role in leveling some playing fields. [00:32:17] Speaker B: Yeah, I would even add to that with this notion that if you give an artist access to Midjourney or something, they make better images than someone without that skill set, even though we can all make these images now.
And what I think is interesting, and maybe somewhat what you're alluding to here, is if I think of using AI to help you write software or something, there are a lot of would-be entrepreneurs who can't go do the Stanford computer science degree for whatever reason: the cost, the barrier to entry, they don't live in California, whatever. But they have enough drive and interest, they have good ideas, they see opportunity. And now they are augmented, if you will. Their ability to see a hole in the market becomes really valuable. It gets wasted if they can't hire an engineer today, but now maybe they have a tool that can get them just a notch further than they could have gone on their own. And that's certainly the world I want to live in. Obviously it's a really delicate balance, because who knows to what extent it replaces work that is more zero-sum. [00:33:25] Speaker A: Yeah. [00:33:27] Speaker B: But yeah, I'd like to think that the folks who really have the desire to contribute or to do something will be able to do so to a greater extent, without some of the hang-ups we have today. [00:33:40] Speaker A: That makes sense. So, turning to wrap it up, I like to make things super actionable, something the SEOs listening to this podcast can do after this call. It's Unscripted SEO, so they're probably doing SEO day in, day out. What's something that you recommend they do? You can be self-promotional with Type: an unusual use case, or a method or practice you think they should adopt that's going to make an impact today. [00:34:09] Speaker B: For anyone who is already pretty deep down the AI rabbit hole, this may sound terribly obvious, but it's taken me a while to fully appreciate it.
So these tasks that many of these language models fail at today, that you've maybe tried, helping you come up with a strategy for your content, helping you maybe even do keyword research or something. It often fails not because the model isn't smart enough. It often fails because it does not have the context it needs, which you have in your head, or you have sitting in your inbox, but it has not been exposed to. And I did this the other day doing some SEO keyword strategy thinking for us. I think it was a conversation with o1 or something in ChatGPT. I straight up pasted in our competitors' websites, I pasted in the SERP results for certain keywords, I pasted in our current Search Console data, and I was kind of shocked at how much that impacted the quality of the advice and the recommendations I got. It went from a borderline useless conversation to a couple of insights that I was actually willing to act on. That's a very long-winded way of saying: just give these models all you've got on a problem. [00:35:23] Speaker A: Yeah. [00:35:23] Speaker B: And I think you'll be pretty surprised at how much better they can get. [00:35:27] Speaker A: Yeah. Give it some more food. I know there's an interface where you can paste in your source document. Is there a limit in Type, like how many characters, how much can I paste in there to use as a knowledge-base reference for what I'm generating? [00:35:45] Speaker B: For the most part our limits are pretty similar to what the models have out of the box. So with our Generate Draft feature, I think it's around 100,000 words all in, between all your attachments. [00:35:57] Speaker A: That's a lot of words. [00:35:58] Speaker B: I know, it's a lot of words. Our chat is a little bit more variable for various reasons, and the actual documents can't be more than 10,000 words.
I think that's more for the editing interface and whatnot. But we do try to take as much advantage as we can of these context windows. [00:36:13] Speaker A: Right. [00:36:15] Speaker B: Because very often more is better, as long as it's more reasonably useful stuff. [00:36:21] Speaker A: Yeah. Okay, so fill up your context window to get better results from your LLM usage. [00:36:30] Speaker B: That would be it in a sound bite. [00:36:32] Speaker A: Dope. Thank you so much for your time, Stu. I can use words better than GPT. [00:36:41] Speaker B: Absolutely. [00:36:42] Speaker A: Where can people find you and your product? Anything you want to plug, are you going to be at a conference, or where should people connect with you? [00:36:53] Speaker B: Yeah, honestly, if you're the person on your team, at an agency, or solo as a freelancer, who does a ton of writing, definitely check out Type and see what you think. Type.ai is the domain. We're built for the folks who do the writing. And then me, you can catch all my shower thoughts on Twitter, or X now. But yeah, I refuse to switch over, so I do occasionally tweet with variable usefulness. You can find me on Twitter, or X.com, under my handle, which is my name. [00:37:28] Speaker A: Thanks so much for your time. Bye. [00:37:30] Speaker B: Absolutely. Thanks.
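Editor's note: the "give these models all you've got" workflow Stew describes can be sketched as a small script that bundles competitor copy, SERP snippets, and Search Console rows into one context-rich prompt, with a rough word-count guard based on the ballpark limits mentioned in the conversation (roughly 100,000 words across attachments, 10,000 per document). All data, function names, and limits below are illustrative assumptions for the sketch, not a documented Type.ai or model API.

```python
# Sketch of the context-stuffing workflow: instead of asking a bare
# strategy question, assemble competitor pages, SERP results, and
# Search Console exports into one prompt. Placeholder data throughout;
# the word limits are conversational ballpark figures, not real quotas.

PER_DOC_WORD_LIMIT = 10_000   # rough per-document figure from the episode
TOTAL_WORD_LIMIT = 100_000    # rough all-attachments figure from the episode

def word_count(text: str) -> int:
    return len(text.split())

def build_seo_prompt(question, competitor_pages, serp_results, gsc_rows):
    """Assemble one context-rich prompt from raw SEO inputs."""
    sections = [
        "QUESTION:\n" + question,
        "COMPETITOR PAGES:\n" + "\n---\n".join(competitor_pages),
        "SERP RESULTS:\n" + "\n".join(serp_results),
        "SEARCH CONSOLE DATA (query,clicks,impressions,position):\n"
        + "\n".join(gsc_rows),
    ]
    prompt = "\n\n".join(sections)
    # Fail loudly rather than silently truncating past the rough limit.
    if word_count(prompt) > TOTAL_WORD_LIMIT:
        raise ValueError("context exceeds the rough total word limit")
    return prompt

prompt = build_seo_prompt(
    question="Which keywords should we prioritize next quarter?",
    competitor_pages=["Competitor A landing page copy...",
                      "Competitor B pricing page copy..."],
    serp_results=["1. competitor-a.com - AI writing tool",
                  "2. competitor-b.io - Docs with built-in AI"],
    gsc_rows=["ai writing tool,120,9800,7.2",
              "ai doc editor,45,3100,12.4"],
)
```

The assembled `prompt` would then go to whichever chat model you use; as Stew notes, the quality gain comes from the pasted-in context, not from any particular model.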
