Why & When You Should Use Claude 3 Over ChatGPT

The AI Advantage
6 Mar 2024 · 16:59

Summary

TL;DR: This comprehensive video compares GPT-4 with Claude 3, a new large language model by Anthropic hailed as a potential 'GPT-4 killer'. After extensive testing, the verdict on whether to switch from GPT-4 to Claude 3 is nuanced and depends on the use case. Despite lacking many of ChatGPT's features, Claude 3 excels in certain areas, particularly thanks to its foundational model and image-processing capabilities. The video examines both models' performance across a range of tasks, from content creation to prompt engineering, highlighting Claude 3's strengths in idea generation and working with images. The conclusion suggests using both models in a complementary way, reflecting the continuous evolution of AI tools.

Takeaways

  • 😀 Claude 3 is a new large language model that has been compared to GPT-4, with some claiming it surpasses GPT-4 in both benchmarks and practical applications.
  • 👍 Claude 3's foundational model is robust for specific use cases, though it lacks many of ChatGPT's features.
  • 📚 The reviewer tested Claude 3 extensively across a variety of everyday use cases, focusing on content creation and idea generation.
  • 💁‍💼 A dedicated website allows users to test Claude 3 for free, offering direct comparison with GPT-4 outputs.
  • 💰 Claude 3 is priced at $20 a month, with limited availability in Europe and gated access to the best model.
  • 📈 Claude 3 boasts a 200k context window, significantly larger than ChatGPT's, enhancing its ability to handle extensive information.
  • 📸 The model shows superior performance on image-related tasks, suggesting tighter multimodal integration than GPT-4.
  • 📗 For niche use cases like prompt engineering and prompt improvement, Claude 3 offers more detailed and actionable outputs.
  • 💬 Some traditional tasks, such as simple math word problems or palindromes, tripped up Claude 3, indicating room for improvement.
  • 👤 Persona modeling and role-playing tasks are notably restricted in Claude 3, which aims for a safer, more controlled AI experience.
  • 📝 In creative writing and content creation, the reviewer found Claude 3's output similar or slightly inferior to GPT-4's, preferring GPT-4's director-like guidance.
  • 🔥 The reviewer plans to use both Claude 3 and GPT-4 going forward, leveraging each model's strengths for different tasks, especially valuing Claude 3 for image inputs and brainstorming.

Q & A

  • What is Claude 3, and how does it compare to GPT-4?

    -Claude 3 is a large language model positioned as a GPT-4 competitor. According to its creator, Anthropic, it surpasses GPT-4 in benchmarks, and various online sources claim it does so in practical applications as well. While it lacks some of ChatGPT's features, its foundational model excels in certain use cases.

  • How did the reviewer assess Claude 3?

    -The reviewer devoted extensive time to evaluating Claude 3 across various standard and niche use cases, especially daily tasks like content creation and idea generation, to determine its effectiveness compared to GPT-4.

  • Where can Claude 3 be used for free?

    -Claude 3 can be used for free at chat.lmsys.org, which lets users experience its capabilities directly and compare them with GPT-4 through a side-by-side comparison feature.

  • What are some of Claude 3's key features and limitations compared to ChatGPT?

    -Claude 3 is praised for its large context window and powerful model. However, it lacks many of ChatGPT's features, such as a code interpreter, image generation, voice input/output, and plugin actions. Despite these limitations, its core text generation capabilities are highly regarded.

  • How does Claude 3 perform with image-based prompts compared to GPT-4?

    -Claude 3 demonstrates superior performance on image-based prompts, suggesting it integrates vision and language more effectively than GPT-4. This makes it particularly appealing for tasks requiring image interpretation.

  • In which use cases does Claude 3 excel over GPT-4?

    -Claude 3 excels in idea generation and brainstorming, prompt improvement, and making effective use of its large context window. It also outperforms GPT-4 at interpreting complex images and generating detailed, actionable prompts for specific professions.

  • Did the reviewer encounter any failures with Claude 3?

    -Yes. For example, Claude 3 incorrectly answered a simple word problem about how many books remain in a room, showing that while powerful, it is not infallible.

  • What are the drawbacks of Claude 3's approach to ethical AI and role playing?

    -Claude 3's strict stance on ethical AI and jailbreak prevention limits persona modeling and role playing, restricting its ability to engage in certain types of interactive or creative tasks.

  • How does Claude 3's content creation capability compare to GPT-4's?

    -The reviewer felt that Claude 3's content creation capabilities might be slightly inferior to GPT-4's, noting in particular that GPT-4 tends to take more responsibility in directing content creation.

  • What is the reviewer's overall conclusion on Claude 3 vs. GPT-4?

    -The reviewer plans to use both Claude 3 and GPT-4 going forward, acknowledging that Claude 3 is superior in certain areas, particularly image inputs and specific use cases like idea generation and prompt improvement.

Outlines

00:00

🤖 Introduction to Claude 3: A Potential GPT-4 Competitor

The video introduces Claude 3, a new large language model presented by Anthropic as a competitor to GPT-4, touting superior performance in benchmarks and practical applications. The creator shares their comprehensive testing experience, evaluating Claude 3's foundational model across various use cases while noting its lack of features compared to ChatGPT. A direct comparison between Claude 3 and GPT-4 is made, highlighting the significance of usability, consumer preferences, and practical performance over mere benchmark wins. The creator also mentions testing the model for content creation, idea generation, and its performance in handling images and custom instructions, emphasizing Claude 3's potential as a powerful tool for specific tasks despite its current feature limitations.

05:00

🛠 Testing Claude 3: Niche Use Cases and Direct Comparison

This section dives deeper into testing Claude 3 against GPT-4 for niche and day-to-day use cases. The creator emphasizes content creation and idea generation as primary uses, detailing their approach to evaluating the models. Claude 3's superior handling of image inputs and its effective use in generating video ideas based on the creator's YouTube channel analytics are highlighted. However, comparisons with GPT-4 reveal mixed results in creative suggestion quality, with Claude 3 excelling in some areas while GPT-4 underperforms in understanding context and producing relevant content recommendations. The creator also explores Claude 3's pricing and accessibility, noting its advantage in image processing and its limitations in persona modeling and role-playing capabilities.

10:01

🔍 Deep Dive into Use Cases: Where Claude 3 Shines and Falls Short

The creator provides an in-depth look at specific use cases where Claude 3 either excels or underperforms compared to GPT-4. They discuss the model's effectiveness in prompt engineering, image prompt generation, and handling specific tasks like enhancing creativity and improving prompts based on context. While Claude 3 shows promise in certain areas, such as prompt engineering and handling images, it struggles with basic reasoning tasks and persona modeling due to its strict ethical guidelines. This section also touches on the limitations imposed by Claude 3's safety features, which restrict certain types of interactions, and compares its performance in creative writing, suggesting that while it's useful for idea generation, it may not surpass GPT-4 in content creation.

15:03

🤷‍♂️ Concluding Thoughts: Claude 3's Place in the AI Landscape

The video concludes with the creator's reflections on Claude 3's capabilities and its comparison to GPT-4 across a range of practical applications. While acknowledging Claude 3's strengths in certain areas, especially handling images and prompt improvement, the creator remains undecided on its overall superiority. They plan to continue testing both models for different use cases, particularly valuing Claude 3 for tasks involving image inputs. The creator anticipates updates from OpenAI in response to Claude 3's emergence and invites the audience to share their experiences and preferences between the two models. The emphasis is on the evolving landscape of AI tools and the importance of finding the right tool for specific tasks.

Keywords

💡Claude 3

Claude 3 is introduced as a competing large language model to GPT-4, touted for its superior performance in benchmarks and practical applications per claims by its creators and various online discussions. The reference to Claude 3 in the video script underscores a shift in the landscape of AI language models, highlighting a potential challenge to the dominance of GPT-4. The comparison aims to evaluate Claude 3's utility across different use cases, particularly emphasizing its foundational model's strength in certain scenarios despite lacking some of ChatGPT's features.

💡Anthropic

Anthropic is the company behind Claude 3, responsible for the development and promotion of the model. In the context of the video script, Anthropic's involvement is a signifier of credibility and innovation in AI research. The mention serves to anchor Claude 3's claims of superiority over GPT-4 in both benchmarks and real-world applications, suggesting that its backing by a notable entity in AI could be a reason to pay attention to its capabilities.

💡Usability

Usability, in the context of this video script, refers to the practicality and effectiveness of a language model when applied to everyday tasks and professional workflows. It's a critical aspect of the comparison between Claude 3 and GPT-4, emphasizing not just theoretical superiority but also the impact on real-world applications. The discussion around usability touches on elements like speed, pricing, output quality, and the ability to handle different types of content, including images.

💡Benchmarks

Benchmarks are standardized tests used to evaluate the performance of AI models across various tasks and metrics. In the script, benchmarks are cited as evidence of Claude 3's capabilities surpassing those of GPT-4, according to claims. However, the narrative also conveys skepticism about benchmarks' ability to fully capture a model's practical efficacy, suggesting that real-world testing and personal experimentation are more indicative of a model's value to users.

💡Context window

The context window refers to the amount of information a language model can consider at one time when generating text. The script mentions Claude 3 having a 200k context window, contrasting with GPT-4's 32k (within ChatGPT) and 128k (via API), highlighting the potential for Claude 3 to maintain coherence over longer conversations or more complex queries. This capability is portrayed as a significant advantage, especially for tasks requiring detailed and extensive background information.
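The "needle in a haystack" retrieval test discussed in the video can be sketched in a few lines. This is a minimal illustration, not any lab's official harness: `ask_model` is a hypothetical stand-in for whichever chat-API call you are testing, and a real evaluation sweeps many depths and context lengths.

```python
# Minimal sketch of a "needle in a haystack" retrieval test.
# A short "needle" sentence is buried at a chosen depth inside filler
# text, and the model is asked to retrieve it.

def build_haystack(filler: str, needle: str, depth: float, n_chunks: int = 1000) -> str:
    """Bury `needle` at a relative `depth` (0.0 = start, 1.0 = end) in filler text."""
    chunks = [filler] * n_chunks
    chunks.insert(int(depth * n_chunks), needle)
    return "\n".join(chunks)

def run_test(ask_model, depth: float) -> bool:
    """Return True if the model's answer contains the hidden passphrase."""
    needle = "The secret passphrase is: purple-elephant-42."
    doc = build_haystack("The quick brown fox jumps over the lazy dog.", needle, depth)
    answer = ask_model(doc + "\n\nWhat is the secret passphrase?")
    return "purple-elephant-42" in answer

# A trivially perfect "model" that greps its own input, just to exercise the harness:
perfect = lambda prompt: ("purple-elephant-42"
                          if "purple-elephant-42" in prompt else "unknown")
print(run_test(perfect, depth=0.5))
```

Plotting the pass/fail result over a grid of depths and context lengths produces the retrieval heatmap the video refers to.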

💡Image use cases

Image use cases involve the ability of a language model to interpret and generate content based on visual inputs. In the script, Claude 3's performance in handling images is spotlighted, with its apparently superior capability to accurately describe and respond to images compared to GPT-4. This distinction is important for tasks where visual context is crucial, showcasing Claude 3's advanced multimodal integration as a standout feature.
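For readers who want to try image inputs programmatically, here is a rough sketch of how an image plus a question is packaged for Anthropic's Messages API. The file name, question, and model name are placeholders, and the exact payload shape should be checked against the current API documentation:

```python
import base64

def image_message(image_path: str, question: str, media_type: str = "image/png") -> list:
    """Build a Claude-style multimodal message: one image block plus a text block."""
    with open(image_path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf-8")
    return [{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": media_type, "data": data}},
            {"type": "text", "text": question},
        ],
    }]

# With the official SDK (requires an API key), the call would look roughly like:
# import anthropic
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="claude-3-opus-20240229",
#     max_tokens=1024,
#     messages=image_message("snowmen.png", "Describe this image in detail."),
# )
# print(reply.content[0].text)
```

The same payload-building pattern works whether the question is a one-off ("describe this screenshot") or part of an automation, which is exactly the setting where the video argues description accuracy matters most.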

💡Prompt engineering

Prompt engineering is the skillful crafting of queries to elicit the best possible response from a language model. The video script delves into how Claude 3 and GPT-4 handle complex prompts, particularly for generating creative and task-specific outputs. Claude 3 is noted for its superiority in prompt engineering applications, suggesting that it offers more nuanced and relevant responses, making it valuable for power users seeking to optimize their interactions with AI for productivity and creativity.
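The prompt-improvement workflow the video describes — feeding an existing prompt plus usage context back to a model and asking for a sharper version — can be sketched as a simple meta-prompt builder. The wording and the `[VARIABLE]` convention here are illustrative assumptions, not the creator's actual template:

```python
# Sketch of a prompt-improvement workflow: wrap a base prompt and the
# user's specific context in a meta-prompt, then send that meta-prompt to
# whichever model you are evaluating and compare the rewrites.

def improvement_meta_prompt(base_prompt: str, context: str) -> str:
    return (
        "You are a prompt engineer. Improve the prompt below so it is more "
        "detailed and actionable for the given context. Preserve every "
        "[VARIABLE] placeholder exactly as written.\n\n"
        f"Context: {context}\n\n"
        f"Prompt to improve:\n{base_prompt}"
    )

meta = improvement_meta_prompt(
    "Write a growth experiment plan for [PRODUCT] targeting [AUDIENCE].",
    "A solo YouTube creator covering generative-AI tools.",
)
print(meta)
# `meta` would then be sent to Claude 3 and to GPT-4, and their rewrites compared
# on detail, actionability, and whether the placeholders survived intact.
```

Checking that the placeholders survive the rewrite is the "preserves the variables" criterion the reviewer uses when judging the two models' outputs.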

💡Persona modeling

Persona modeling refers to the practice of instructing AI models to adopt specific characters or roles to tailor responses. The script discusses how Claude 3 restricts persona modeling to prevent misuse, contrasting with GPT-4's more flexible approach. This limitation is significant for users who rely on persona-based prompts for generating content or simulating interactions, highlighting a trade-off between safety measures and functional versatility in Claude 3's design.

💡Ethical AI

Ethical AI is a principle that guides the development and application of AI technologies to ensure they are used responsibly and for the benefit of society. In the video script, Claude 3 is positioned as an 'ethical AI', emphasizing its creators' focus on safety and reliability for enterprise applications. This designation affects the model's functionality, such as restrictions on persona modeling, illustrating the balance between innovation and ethical considerations in AI development.

💡Content creation

Content creation in the script refers to the use of language models to generate articles, scripts, and other forms of written content. The comparison between Claude 3 and GPT-4 assesses their capabilities in supporting creative workflows and generating high-quality outputs. While Claude 3 is lauded for its idea generation and prompt engineering, its effectiveness in producing ready-to-use content is still under scrutiny, suggesting that each model may have unique strengths in the content creation process.

Highlights

Claude 3 is a new large language model that challenges GPT-4, boasting superior benchmarks and practical usability according to its developer, Anthropic, and various internet sources.

The foundational model of Claude 3 excels in specific use cases despite lacking most of ChatGPT's extended features.

Claude 3 is compared directly to GPT-4, the current leader in large language models, for its exceptional performance and usability.

Claude 3 Opus, the flagship model, is accessible for testing via chat.lmsys.org, offering direct comparison with GPT-4.

Claude 3 boasts a 200k context window, significantly surpassing GPT-4's and providing enhanced information retrieval.

The user interface of Claude 3 is intuitive, with a clean design for easy interaction, but lacks the advanced features found in ChatGPT.

In tests, Claude 3 matched or exceeded GPT-4's performance in generating high-quality content for basic prompts.

Claude 3's ability to handle images and provide detailed, accurate descriptions surpasses that of GPT-4, especially in complex scenarios.

When generating creative content, Claude 3's output is very similar to GPT-4's, although it may slightly lag in content creation tasks.

Claude 3's larger token output capacity allows it to generate more content in a single response than GPT-4.

The model shows a significant improvement in prompt engineering tasks, providing more detailed and actionable responses.

Claude 3 demonstrates superior performance in incorporating and analyzing images within prompts, making it a go-to choice for image-related inquiries.

Some advanced use cases, like persona modeling, are restricted in Claude 3 due to its strict adherence to ethical AI practices and jailbreak prevention.

In tests involving logic and simple math, Claude 3 occasionally misunderstood the context, suggesting room for improvement in understanding natural-language intricacies.

The review concludes that while Claude 3 is not a comprehensive replacement for GPT-4 due to certain limitations, it excels in specific areas and is a valuable addition to the toolkit for users relying heavily on visual context or seeking high-quality, detailed prompt responses.

Transcripts

00:00

Claude 3: another large language model that claims to be better than GPT-4 in benchmarks, according to Anthropic, but also in practice, according to a lot of the internet. I really wanted to take my time before releasing this video, because the question today is: should you drop GPT-4 for Claude 3? The short answer is, well, it depends on what you're doing — but probably. It lacks most of ChatGPT's features, but the foundational model is really good for certain use cases. I haven't really left my apartment since release; I tested this in every way that I could conceive, and here's what I learned. Claude 3 by Anthropic: the GPT-4 killer? So look, first things first, I think it makes sense that a lot of people compare this to GPT-4. It is the king in the category of large language models, and it has been at that spot ever since release for a reason: it's just really damn good. Although a lot of alternatives like open-source models or Gemini have come out, none of them have really dethroned GPT-4 in terms of usability and consumer preferences. But I think this might have changed now.

00:57

So here's the plan: first I'll give you a quick rundown of everything you need to know, really the key points for you as a user — what specs does this have, what matters in terms of usability — and then I want to dive right into use cases, because what I did is try this on all the ways that I use large language models on a day-to-day basis. There are a lot of niche use cases, a lot of fancy workflows or specific automations I have, but those are not the day-to-day use cases. Things like content creation assistance or idea generation — those are the things I use it for all the time, and that's what I care about. So that's what we'll be looking at here today, and I'll give my honest take on whether I'll be using this over GPT-4 or not, and if yes, why.

01:33

But before we even talk about the specs, let me show you a site where you can actually use it for free. If you head on over to chat.lmsys.org, you'll be able to go to direct chat and pick Claude 3 Opus — that is their new flagship model. They released multiple models; you can check out all the details. This video is not going to be a summary of the blog post that they released, although it does contain a lot of great information. Like, yay, it wins all the benchmarks, fantastic — we know that; so do many other models, but in practice they're not better. So I, as a power user, kind of stopped even looking at that. Good, it won all the benchmarks, great, let's move on. What matters to me is retrieval, pricing, speed, and the quality of the outputs.

02:14

Basically it's priced at $20 a month, but this website allows you to use it for free. Sometimes it's a bit overloaded, but hey, it's free — you can go ahead and test it out. If you go to the Arena's side-by-side mode, you can actually compare it to GPT-4: run a prompt and you get both outputs, GPT-4 included, and this is free, which is kind of wild. They have VC funding and they basically want to create a leaderboard for chatbots, which they're successfully doing — it's one of the best ways to evaluate different models. It just updates every two to three weeks, so this leaderboard is not updated yet. But basically this is a way for me, as somebody sitting in Europe, to use this without a VPN. And that's the next point: if you want to use Claude 3, it's not available in Europe, and the best model, Opus, is gated behind a $20 paywall.

02:54

So those are some of the most important points as a user — except for the fact that it has a 200k context window. Now, if you're using GPT-4 today inside of ChatGPT, you have a 32k context window, but it retrieves all the information within it wonderfully. If you use the 128k context window of the GPT-4 API, it's not so perfect anymore; sometimes the info in the middle just gets lost. As you might know, this gets tested with a benchmark called "needle in a haystack", where they basically hide a little line inside a very, very long document that maxes out the context, and then prompt the model to retrieve that piece of information. This graph really matters because it visualizes how well the model retrieves the hidden piece of information. In other words, we have a very large context window that actually works, with a model that is extremely powerful. This looks very promising across all dimensions.

03:41

And this is the interface: it's nice and intuitive. You have your history down here, you can start new chats, attach PDFs or images. Now I do have to say, if you're using this web interface and compare it to ChatGPT, it lacks pretty much everything ChatGPT has outside of text generation: there's no code interpreter, no image generation, no voice input or output, no plugins (aka actions), no custom instructions, and you can't edit the messages you sent previously. But the core of this product is the answers it gives, so let's talk about that.

04:08

How does it do? Well, let me tell you, it does really well. On many of the super basic prompts, like "write me an essay" or "research this topic", it performs pretty much equally to GPT-4. And by the way, everything I'm about to say here is purely subjective — this is all the perspective of a power user who spends pretty much all his time experimenting with these tools and then teaching other people what I find. But I've got to say, at the base level it just seemed identical. Then if you go a little deeper and start expanding the context — and if you're watching this channel, you'll know that the more context you provide in the prompt, the more you can expect from the output; it will be custom-tailored and more relevant — you're going to get incredible results.

04:45

I want to start with one use case that really blew me away. I'll just show you this little conversation that I had with it, and this one really impressed me — this is incredible. The prompt is super basic. I like to do this a lot, and this is how I taught it in the course and previous YouTube videos: you can use super easy prompts if you pair them with your custom instructions. So I have my own set of custom instructions that I crafted for myself over time down here, and then I include this super easy prompt, but as a third point of context I include a screenshot of my most recent 12 YouTube videos. It basically takes the context from my custom instructions and the image, which is very rich in data — there are view numbers, titles, all the thumbnails — and then all I practically need is a simple prompt like this.

05:26

And here's the deal: this result I got — I agree with most of these. These are fantastic video ideas, all of them. It proposes various shows, and when I look at these, I just have the feeling of "yes" — and again, this is more of a feeling than anything else, but then picking videos is more of a feeling than anything else. I mean, you can look at data to inform that decision, but at the end of the day it's like, "yep, this would make sense, I want to create this", and in this case my feeling just tells me these are all incredibly spot-on. I mean, look at this: a ChatGPT memory series diving into how the model builds up context and memory during a conversation, demonstrating multi-step interactions — absolutely. That might not be the packaging for the video, but it's a great concept I would like to do. Hands-on tutorials and prompt engineering — I have a whole library of those videos; you can check out a playlist on the channel. Comparison of ChatGPT with other large language models — that's what we're doing right now. AI tools of the week — that's my Friday show. All of these are relevant.

06:16

But if you run the same thing inside of ChatGPT — the only difference being that I have the instructions inside the custom instructions, as I usually do — when I look at ChatGPT's ideas, they're all okay, but I would say maybe two or three of them are something I would actually want to create. I mean, it makes sense: yeah, AI ethics and governance, create content that traces the history of AI — this might be interesting, but it's not what we're doing on this channel. We're focused on what's happening today and what you can use today, not on the history of AI. These are all relevant topics, but they're not relevant to me, and I did provide it with a lot of context: I gave it 12 videos that I just created. You know, if somebody showed me that this is their YouTube channel and asked me what kind of videos they should keep creating, I don't think I would recommend reviewing AI startup pitches or creating content around the history of AI. Again, this is all fantastic stuff, but it's just not what I do; it's not the context that I provided. The custom instructions clearly say we have a focus on generative AI, specifically ChatGPT and related technologies. Why does this give me recommendations like this? I don't know, but there's a reason I kind of gave up on some of these use cases: the results were just never that good. Claude nailed this — these are great.

07:25

Okay, so that's one use case, but if you go deeper, the one thing that I really found is that it's just so good at taking in images. It really just feels different when you work with images. I guess if you want a quantitative way of expressing that, you can look at the benchmarks on vision capabilities and how Opus outperforms GPT-4V, but the best way I can describe it is this: in GPT-4 it feels like they have a large language model and a vision model, and they just plug them into each other and let them work together, and that is great. But Claude, just like Gemini, performs differently because it's multimodal from the ground up — and that is literally the case. I mean, if you're using GPT-4 vision through the API and not through ChatGPT, they're two different API endpoints. So just from a practical point of view, this really blew me away, and all the other image use cases were better too.

08:08

When I compared the models on complex images — for example this one that I found on Reddit — Claude described it perfectly, not a single mistake as far as I can tell. But ChatGPT actually went ahead and said that the left snowman is wearing a green hat with a red band and a small holly decoration — okay, fair enough — and a blue scarf. My man, the left snowman is not wearing a blue scarf here. This is a minor thing, so fair enough, you know, who cares? Well, if you use this stuff for work and inside your automations, you do care. You're not going to be looking over the shoulder of the language model on every generation; you just want it to work. So when you're working with images, it's just clear that Claude wins, from everything that I've seen so far.

08:44

And this is the one that I tested extensively, because I love prompting with images. It's so simple — it's the simplest way of putting in a lot of context. When I want to just get something done, I don't spend 90 minutes engineering the prompt; I do that for tasks that repeat. If it's just a quick one-time prompt, I just throw an image at it, and that's the context I provide, along with some custom instructions, and maybe I extend the prompt by two or three sentences. I use these models to keep me efficient: I want to be fast on my feet, I want an assistant, a coworker that works together with me, and for that I use images a lot — and Claude is just better at that. But not to get stuck on this point, let's move on.

09:18

Here's another use case that is very important to me. If you've been following the channel, you know that we have a free newsletter, and you get this massive ChatGPT resource with it. In it, my personal favorite part is the prompt generators: for 10 different professions you get 10 prompt generators, and you get to customize those for yourself. Or we sell this big product where we pre-generated a thousand of them, so you have no work left. So this is one of them, and what it basically does is, based on the custom instructions at the bottom, generate pretty universal prompt formulas. This one in particular is for a growth hacker, and I tested it rigorously in both models. I have a lot of experience with this prompt, and I keep using it over and over again in different variations, based on the custom instructions, to find new use cases for AI. This is really my favorite answer when people ask me, "how do you find new stuff for ChatGPT to do?": run this prompt, customize the custom instructions at the bottom, and it spits out what you can do today, because that's how the prompt is designed.

10:09

I ran this many times, and what I found is that it performs equally well. To me this is a prompt where I've seen the output hundreds of times, so I feel like I can be quite objective: I don't care whether I use Claude 3 or GPT-4 in this instance; both work super well. One note: the GPT-4 output is limited, so when I run it in GPT-4 it gives me around 22 prompts, depending on their length. It's no big deal — I just prompt "continue" or press the button that keeps it generating — but Claude has a larger token output, which is nice to have.

10:37

But here's an interesting point where Claude actually does differentiate itself. I have this workflow where I take one of these prompts and improve on it based on the specific context I'm using it in. And look, this is the result of that workflow: a prompt that is a bit more fleshed out. This would be the ChatGPT generation, and this would be the Claude generation. Now, I'm not going to go into all the details — that's not the point of this video — but I do prefer this version: it is more detailed, it's more actionable, and it preserves the variables more effectively, which is what I want based on my input. I found this to be consistent across improving multiple prompts and multiple prompt-generation workflows. So my conclusion is: if you're using a large language model for prompt engineering, Claude is actually significantly better. Good to know, right?

11:17

So then I went ahead and tested it for image prompt generation — we also covered this on the channel, and I gave you the prompt for it. You can create these incredible photorealistic images, because all I do in this prompt is say, at the end, "a cat with a hat", and then it fleshes it out and gives you rich detail, which allows you to easily customize it. Turns out there's no difference whatsoever between the ChatGPT prompt and the Claude 3 prompt, as you can see here — the first one is ChatGPT, the second one is Claude; essentially the same thing. So there it doesn't matter. But if you're generating prompts for large language models, I did find that it does matter. Now look, this might depend on your workflow and your prompting, but I'm just trying to compare apples to apples. I've been developing some of these prompts for quite a while, and I've been surprised by how many perform better in Claude right off the bat.

11:58

Now look, not to hype it up too much, there were actually a few use cases where it completely failed. For example, here's another prompt that I found on Reddit — very simple: "Sam has 50 books in his room. He reads five of them. How many books are left in his room?" Well, Claude 3 seems to think it's 45 books, but he just read the books, he didn't remove them — they're still in the room.