Episode Transcript
[00:00:01] Speaker A: Hello, I'm Jeremy Rivera, your unscripted podcast host. I'm here with Nick Jain, and last time that I interviewed Nick, last year, he was with a company called IdeaScale, but now he's with Content Hurricane. So why don't you give yourself an introduction, a little bit of your past, leading into your current role, which is a little bit of a different playground.
[00:00:23] Speaker B: Perfect. Thanks so much again for having me on the show. So, my quick background: I spent about 10 years on Wall Street as a professional investor, and for the last five or six years I have been a professional CEO, running a $100 million revenue trucking company and a small shoe company, and for the last three or four years I was the CEO of IdeaScale, which is the largest innovation software company in the world.
I recently left there about a month ago, actually. So this is pretty new: starting my own company for the first time, called Content Hurricane. And what Content Hurricane does is it's basically a really, really simple but powerful blog writer for SEO purposes. So think of it almost like firing up ChatGPT and asking it to write a blog article on apples or a war or history, but doing that in a much, much more thoughtful way that actually improves the quality of the output you get. Instead of you asking ChatGPT 10 different times, make this better, make this better, make this better, we've done that all for you in a way that should be pretty impactful to most people's businesses.
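He doesn't spell out how that automated "make this better" loop is built, but a minimal sketch of the general idea, assuming a hypothetical generate() helper that wraps whatever LLM you happen to call, might look like this:

```python
# A minimal sketch of the "make this better" loop described above.
# generate() is a hypothetical helper wrapping whatever LLM API you use;
# this is not Content Hurricane's actual code.

def generate(prompt: str) -> str:
    raise NotImplementedError("wrap your LLM API call here")

def write_article(topic: str, passes: int = 3) -> str:
    # First pass: a plain draft, like asking ChatGPT once.
    draft = generate(f"Write a blog article about {topic}.")
    for _ in range(passes):
        # Each pass plays the role of a human typing "make this better".
        draft = generate(
            "Improve the following draft: make it more specific, "
            "easier to read, and free of filler.\n\n" + draft
        )
    return draft
```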
[00:01:26] Speaker A: I'm curious what the internal agent framework is that you have built to try to improve the baseline output of that AI content. Because I use Claude, I've used GPT, I've used Perplexity, with varied results as far as outcomes, and I never recommend, you know, directly copying and pasting or directly publishing that content.
So what are the steps that you've taken to reconsider that, or what are the pluses and minuses that you've thrown into the mix to come up with your USP?
[00:02:10] Speaker B: Sure. So for now we're using Gemini, but that's not actually where our magic lies in terms of the AI underlying us.
So as you said, there are kind of three different approaches you can use in terms of using AI to generate content, whether that be blog content or video content. You can go right to Gemini or ChatGPT or Claude and say, write me a blog article. It'll spit out something, it kind of sounds generic, and then you say, well, change this, change this, or you'll go in and manually edit it. That's method one. Method two, for transparency, is the AI-powered content writers or content publishers out there; some of the early entrants in the space were Jasper or Copy AI, who are our direct competitors. And then there's Content Hurricane, and I think there are two specific things that we've tried to do differently. Number one is on the user experience side, we basically dumbed it down. So there are like three buttons, and you can get your entire workflow done in about 30 seconds or a minute, a minute thirty. We've intentionally taken away a bunch of the knobs and the settings, like what tone do you want, what color; we've ripped all that out intentionally just to make it faster, so the end user is not worried about what settings or what knobs to change. The second thing we've changed is really the quality of the output, which you commented about. So we've targeted it, and I'll give examples of this in a second, so that you literally don't even have to copy and paste whatever it spits out; it spits it right into your WordPress site or your Webflow site or your Shopify site. And number one, you don't have to copy and paste it. Number two, it should be high enough quality that you don't even have to edit it. And the quality we measure on two dimensions. Number one, what is the actual text there, right? What does it say? Is it interesting? Is it unique? Does it meet Google's EEAT standards from a search engine perspective? But number two, does it also look pretty? You could have the greatest essay ever written, but it's really hard to read because it's badly bulleted or in funny fonts.
So we have made it high quality both in terms of presentation and in terms of substance, in a way that we feel confident you don't actually have to go edit it at all.
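He doesn't go into the publishing integration itself, but pushing a finished article straight into WordPress, for example, can be done with the standard WordPress REST API. A rough sketch, where the site URL and credentials are placeholders rather than anything from Content Hurricane:

```python
# Rough sketch of pushing a finished article into WordPress via its REST API.
# The site URL, username, and application password below are placeholders.
import requests

def publish_to_wordpress(title: str, html_body: str) -> int:
    resp = requests.post(
        "https://example.com/wp-json/wp/v2/posts",   # standard WP REST endpoint
        auth=("editor-user", "app-password"),        # WordPress application password
        json={"title": title, "content": html_body, "status": "publish"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # ID of the newly created post
```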
[00:04:12] Speaker A: Interesting. So what are some of the structural considerations, as far as SEO, that have fallen into that workflow? Like, what are the components that you have considered as additions? Or, if it's a kitchen recipe, what's your flour, what's your sugar, what are the spices you're putting into this thing so it pops out, ta-da, as a fully done article that's actually going to be helpful?
[00:04:42] Speaker B: Sure. And I'll give one practical example before I answer your question. I trust our software enough that it is the only real marketing effort we're using. So if you go look at contenthurricane.com's own blog, about 90% of the content there is 100% generated with AI, with me not even touching it at all. So the actual article, the meta optimization, all of that was done with our software in like one click or three clicks.
So to answer your question, I'm going to answer in a slightly oblique way because I don't want to give away our full secret sauce, but I'll talk about some of the ingredients that didn't go into the recipe. There are two things we tried not to do in the recipe that I think a lot of SEO folks screw up. Number one, we intentionally did not design the AI to do keyword stuffing. There is nowhere in our prompt engineering or our workflow or the code that goes into it that says stuff in these eight keywords, or go find long-tail keywords, all the typical stuff that SEO engineers try to figure out. That's not part of our recipe at all; it's not on our, you know, ingredient list. The second thing is, rather than thinking about SEO as a set of technical skills or things that you want to do, we thought about what Google's algorithm, or any of the search engine algorithms, is trying to achieve. And what they're trying to achieve is identifying really high quality content that is useful to the end user. And you know, one of the battles that search engine optimization folks get into is, okay, Google made this algorithmic change, so I'm going to react to it to try and hack Google's algorithm. But that's a very short-term way of thinking, because Google updates their algorithm every few weeks or every few months.
What we said is, look, let's not try to hack Google's SEO algorithm, because it's going to keep fighting us the entire way. We are just going to try to philosophically solve the problem that Google's algorithm is trying to solve, which is just: create awesome content.
So the cool recipe, or the secret ingredient, and again it is kind of a little bit of a secret, is we said let's just try to make awesome content that Google will recognize as awesome, rather than worrying about all the SEO hacks that a lot of people focus on.
[00:06:55] Speaker A: Interesting. So I have a lot of technical SEO friends. I was talking about vector embeddings with my friend from Right Thing agency, Michael McDougald.
Is that type of approach, understanding content in terms of machine learning, part of what you tried to capture or utilize in this? Or are those more advanced factors something that informed your process at all?
[00:07:28] Speaker B: Well, the opposite. So I want to be clear, let me back up. For people who are doing technical SEO optimization, that strategy does win, but it is a short-term strategy, because every time you figure out how to hack Google's algorithm, or exploit a weakness in Google's algorithm, they'll come back in a few months and revise it, realizing that, hey, all the SEO folks are taking advantage of this weakness and manipulating it, so we will update our algorithm. Those hacks or technical SEO approaches do work.
Number one, it does work, but it works in a short-term way, and we didn't want to go the short-term route, right? We didn't want to be fighting Google all the time, because we think that's a losing battle. They have a thousand times my IQ points and a billion times my budget.
The second thing is the specific tool that you mentioned. Vector embeddings are a way of understanding relationships between phrases or words. It's a very technical way of looking at things from a machine learning perspective. It does work; again, it is an SEO hack. But at some point, either a month from now or a year from now, Google will say, hey, people are manipulating the vector embeddings in Gemini's or ChatGPT's vector space and we're going to fix that; they'll patch that gap. So we did not do that. Instead, again, we really went back to this philosophical approach of, what is Google trying to do? Or what is Bing trying to do? Or whatever you search with, right? You could be searching for a file on your Windows or Mac computer. What is that search engine trying to give you? It's trying to give you something that is useful to you as the end user, whether you're trying to find a picture of your kids or a great article on the history of China.
And so, how do I put it, there are zero technical SEO hacks in our software workflow. Right? Literally, if you were to go through our code, you would find no, you know, discussion of vector embeddings or keyword stuffing or H1 optimization or include the key phrase six times in your article, all the stuff that a lot of technical SEO folks do, because we feel those are short-term advantages that will go away at Google's next algorithmic update, and we don't want to be beholden to Google all the time.
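For readers who haven't met the term: a vector embedding turns a piece of text into a list of numbers so that related texts land near each other, and "nearness" is usually measured with cosine similarity. A toy illustration, with made-up vectors standing in for what a real embedding model would produce:

```python
# Toy illustration of vector embeddings: related phrases get nearby vectors,
# and cosine similarity measures how closely two vectors point the same way.
# The three vectors below are made up; a real embedding model would produce them.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

history_of_china = [0.8, 0.1, 0.3]   # pretend embedding of "history of China"
chinese_dynasties = [0.7, 0.2, 0.4]  # a closely related phrase
applesauce_recipe = [0.1, 0.9, 0.0]  # an unrelated phrase

print(cosine_similarity(history_of_china, chinese_dynasties))  # high: related
print(cosine_similarity(history_of_china, applesauce_recipe))  # low: unrelated
```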
[00:09:39] Speaker A: All right, so then, let's talk about, in theory or in practice, what are the hallmarks of great content and how do those come forward in...
Yeah, let's just leave it at that. What are the hallmarks of great content?
[00:09:56] Speaker B: Okay, I'll try to answer in three ways. Number one is what Google tells you. They say that they want great content to have four things. It's E-E-A-T, and I always forget the acronym; it's experience, expertise, authority, and trust.
And there's a page on Google's website where they talk about what that means. That's number one. You want to have articles that show real human experience, expertise, authority, and trust.
Number two is Google has said they don't really care philosophically whether content was generated by AI or human beings or a combination of the two, as long as it meets those EEAT standards.
And then the third is a qualitative assessment of how do human beings, the consumers of all this information, actually consume information.
And there are three things a human being needs for information to be consumed effectively. Number one, it actually needs to be domain relevant. Okay, so if I'm searching for the history of China, you should not serve me a recipe for applesauce, right? You want close proximity, and that's kind of where some of the vector-space hacking comes in, both practically and technically.
So it needs to be close to the domain of information that I'm looking for.
Number two is it needs to be presented in a way that's informationally digestible for me. So if I start writing to you in Shakespearean English, you could probably understand it, but it would require effort for your brain to process. That's called cognitive load. You want to decrease the amount of cognitive load, but not so much that I'm speaking to you as if you were a three-year-old, because then you feel that my article is not as useful, not as intelligent, not as authoritative, right? So you've got to dumb it down, but not too much.
And number three, and this goes to psychology rather than cognitive load, you actually have to serve it in a way that's attractive. Think of a fancy restaurant, right? They may give you mediocre food, but they dress it up really beautifully, so you still have a good experience.
And so practically, when you are creating great content, whether that be written content, video, or music, you need to hit all three of those things, right? Number one, make the content relevant: if they're hungry, serve them food, not books. Number two, make it easy for them to intellectually or cognitively digest it; lower the cognitive load. And number three, the presentation, the psychological element: you have to make it pretty. And I think it's a delicate balance to achieve all three. Well, the second and third are the delicate balance.
The first one, you just have to make useful content, and that is a matter of real, valuable information and informational proximity to the searcher's desires.
[00:12:39] Speaker A: So here's an AI LLM conundrum.
How do you create real world expertise in content that is created by an LLM?
[00:12:54] Speaker B: Well, I'll give three answers.
The simple question is: can an LLM have a new experience, much less any experience at all?
You could argue no, LLMs can't have experiences, but that's not actually true. LLMs are not just regurgitating information; I mean, historically they were, but that's not actually how they operate today.
Number two is, can you actually encourage an LLM to have a new or novel experience? The answer is yes. You can write an AI today that goes out and runs a marketing campaign, and just like you would when running a marketing campaign, it learns from it and can tell an anecdote or story about it. AI can do that today. They can actually do things, right? They can run robots, they can write marketing campaigns, and they can provide novel insight from that experience.
And then the third is a little bit more pedantic, but it's true. Remember, an LLM does contain the sum total of substantively all of human knowledge. So they do have knowledge. And if you are thoughtful, either as a prompt engineer, or just someone typing into ChatGPT, or someone using a fancy AI tool like ours, you can be thoughtful about extracting all of that information that is stuck within the LLM's brain in a way that is useful and new, that no one else has said, right? Using a silly example, an LLM can write a novel. That novel is coming from bits and pieces of Shakespeare and Arthur Conan Doyle and Lao Tzu, but it is still a novel permutation of information that already existed within its knowledge base.
[00:14:26] Speaker A: That's an interesting challenge, or concept to wrap your head around. And it does go back to something that we perhaps take at surface value: Google's statement that they want expertise, experience, authority, and trust.
But remember that those are subjective, qualitative gauges that are applied through the quality rater guidelines, which then get fed into the algorithm and reward people. But there's no biometric, blood-pulse, fingerprint-proofing keyboard that you type into, you know, that writes onto the blockchain and proves that this was written by a human.
Right? So that's one: there isn't a mandate that it is written by a human. And two, any formulation that Google is using algorithmically, you know, uses machine learning to understand the relevance of the content that it's consuming.
It's looking for specific hallmarks of content, or structures and forms, that signify or are used to display your expertise. In other words, even if you are a human, you can write content that doesn't have EEAT in mind at all. And you could then flip it around: you could say that a non-human could write content with factors that mimic or meet the criteria of what is expected out of a well-written, authoritative, well-cited piece. Right?
[00:16:30] Speaker B: So I've got a funny example that lays your point out. There was a study somebody did about three or four years ago that looked at 100,000 essays written by U.S. college students, native English speakers, right, and took the median essay written by people who were in college at that point.
Compare that to a very, very simple prompt given to an LLM, and the LLM was better than like 95% of essays written by Americans who were in college, which is already the top 25% most educated people in America.
So from the point of view of somebody reading an essay or reading a piece of content, and it could be watching a piece of content too, the LLM is already better at producing more expert and easier-to-read information than the vast majority of people.
And that's really important, right? From Google's business point of view, they don't really care whether it's AI or human generated. They're just trying to serve you, Jeremy, or me, Nick, content that we want to consume.
That is their business goal, right? And if I love AI-generated content because it's easier to read and has more information, then that's what they're going to serve me. I don't really care as an end consumer whether it's coming from Jeremy, whether it's coming from Jessica, or whether it's coming from Joe Bot.
[00:17:45] Speaker A: Interesting.
So let's take it to some specific use case examples and explore this concept. Let's say that you're a, you know, a car dealership in New York, you know, Queens or wherever.
What is it about an article that could be prompted up that you're going to be concerned with AI spitting out?
[00:18:15] Speaker B: Could you rephrase the question? I didn't understand what you were asking.
[00:18:18] Speaker A: So I'm taking it to a specific use case, like brainstorming out, from your perspective, what are the concerns that a marketer for, say, a car dealer would need to overcome to have trust in just using an AI program to generate part of its content marketing strategy?
[00:18:45] Speaker B: Okay, I would say there are really only two questions you need to ask yourself. Number one is: is the AI going to, on average, make more frequent or worse mistakes than a human being, remembering that human beings are fallible? Will the AI do worse than your intern or your 20-year marketer, on average, more frequently, or in a more severe manner? And if the answer is no, then that's great: your AI will make fewer errors. And you can actually find a lot of academic studies showing that AI, on average, makes fewer and less severe errors than human beings.
So that's question number one: what happens in a downside case? Is it going to screw up more badly than a human being? And usually the answer is no. The second is: what do I gain by using the AI? And you typically gain two things. Number one, it's often cheaper, or gives more leverage to your existing marketing professional or team. And number two, it is faster.
So let's use a practical example from auto dealerships. Car tariffs went into effect about a week ago, right? And, what is it, car tariffs were announced about two or three months ago. Imagine you went through a normal content production cycle, where someone saw the news that, hey, there are going to be car tariffs on imported cars.
You see the news a week late, and then you go to your marketing team and say, hey, we need to sell some cars with a big blowout sale before these tariffs go into effect, so we can sell cars 20% cheaper. That's going to go through a three- or four-week content cycle with production, writing, drafting, graphics, etc., and it may only launch a month before the tariffs come into effect. Whereas with an AI, any AI really, you can get that done, you know, by end of day today. That speed is really important, even if you're forgoing some quality, and usually you're not actually forgoing quality either.
[00:20:37] Speaker A: So my last question, and this kind of came from a conversation with Matt Brooks of SEOT: you know, one of the challenges he has had with content scaling, or with addressing it using these tools, is fact checking and/or hallucinations. So, coming into this new, at some point we were going to talk about hallucinations. What is your hallucination-checking process in your LLMs that should make us feel confident that we can punch the button to start and publish without a human fact check in between?
[00:21:17] Speaker B: Sure.
I'll answer in three ways. Number one is that human beings hallucinate too. I philosophically believe that it is inappropriate to expect an AI to be perfect unless you can also hold a human being to that same standard, and you never expect your marketing person to be perfect, or your CEO to be perfect, right? They make spelling errors, they hit send at the wrong time. So that's number one: we have to have a fair basis for comparison. Not whether the AI is perfect, but rather whether it is at least as good as the human being in its error rate.
So human beings hallucinate: they get drunk, they get high, they make typos, they copy and paste the wrong thing. AI does the same thing on occasion. Number two is that there are actually some very simple techniques for this. And this is not something we've done brilliantly; we just copied it, because people have solved the notion of hallucination. You just ask a second AI to go fact-check the first AI.
So we do that. It's not brilliant, candidly. Hallucinations were a problem two years ago; somebody figured out how to solve this about 18 months ago. Anyone who is building an AI tool can very easily solve hallucinations if they spend five minutes sticking in a second fact-checking robot.
So we do that, but it's not brilliant, it's not our secret sauce. Anybody can do that; it costs you nothing to do it. If you're not doing it, then you basically don't know what you're doing in building AI these days.
And then the third is actually kind of at the front end. There's something that, well, Google calls temperature; different AIs call it different things. Temperature is basically how much you want the AI to hallucinate. So there's actually a parameter, and you can't see this when you go into ChatGPT or Gemini and type your questions, but if you're actually doing it in code, there's a little knob called temperature where you can say, do I want my AI to hallucinate more or less? And when you ask it to hallucinate more, it actually becomes more creative, but then it makes more stuff up. And when you ask it to hallucinate less, it'll be super factual, but it'll be very, very boring. So there's a bit of an art to deciding how much the temperature knob should be turned up.
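He doesn't share Content Hurricane's implementation, but the two ideas in this answer, a second model pass that fact-checks the first, and a low temperature on that checking pass so it stays literal rather than creative, can be sketched with an OpenAI-style chat API. The model name and prompts here are illustrative only:

```python
# Sketch of the two ideas above: a second fact-checking pass, run at a low
# temperature so the checker stays literal rather than creative.
# The model name and prompts are illustrative, not Content Hurricane's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, temperature: float) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content

# Pass 1: the writer, at a higher temperature for more creative prose.
draft = ask("Write a short blog post about car tariffs for an auto dealership.",
            temperature=0.9)

# Pass 2: the fact-checker, at a low temperature, asked to flag shaky claims.
review = ask("List any factual claims in the following post that look wrong or "
             "unsupported, and suggest corrections:\n\n" + draft,
             temperature=0.1)
print(review)
```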
[00:23:24] Speaker A: Awesome. Well, what is the last thing that you would want somebody to know before they pull the trigger on dropping a prompt into Content Hurricane for their next blog post?
[00:23:43] Speaker B: So I'll make two quick comments. One is, regardless of whether you're using our tool or a peer tool or even a direct competitor's tool, the fair basis for comparison should not be whether AIs are perfect, but rather whether they are doing better than a human being. And remember that human beings are fallible, myself included. That's number one.
Number two, specific to Content Hurricane: what we've tried to do, from a design philosophy perspective, is make it stupid simple. You literally have to click the next button three or four times and type in five words, and everything else is taken care of for you, in terms of quality checking, fact checking, writing, optimizing it, and publishing it.
So given that it takes two minutes for somebody to try our tool, like go try it out. If you don't like it, just don't publish the content.
And by the way, it's free for the first five articles entirely with no credit card. So there's really no downside to trying it out.
[00:24:41] Speaker A: All right, five free articles. Love it.
I'm going to go try it out.
Give a shout-out to a social media platform: if people have questions about LLMs, about content creation, where are you hanging out these days?
[00:24:56] Speaker B: Sure. So, the two best ways to get in touch with me or us. If you want to get in touch with the company, the best way is just to email the company; the email address is nickjontenthurricane.com. Our Twitter account is actually a lot of practical, modern SEO stuff, really based off of our blog, which I think is actually quite good. And number three, if you want to get in touch with me personally, separate from Content Hurricane, if you want to chat about LLMs and neural networks and stuff like that, reach out to me on LinkedIn. I'm perfectly glad and excited to chat with people about all sorts of nerdy stuff that is totally separate from the world of marketing.
[00:25:31] Speaker A: Awesome. Thanks so much for your time.
[00:25:33] Speaker B: Thank you for having me again, Jeremy.