Episode Transcript
[00:00:00] Speaker A: This was the test query that I was trying to do: find me a PDF document on... and I gave the exact host name where it should find it, with the description, "diagram of an example," blah, blah, blah. Right.
So nothing could find this.
It was just like, oh yeah, there's no PDF on this page. Nothing would find it. And then I enabled this LLM Optimizer edge optimization.
And then one hour later I asked ChatGPT, I asked Grok, and it gave me the exact URL to the PDF itself.
[00:00:34] Speaker B: An hour. Just took an hour.
[00:00:36] Speaker A: An hour.
[00:00:37] Speaker B: That's wild.
[00:00:43] Speaker A: Today we're going to be taking up Adobe LLM Optimizer, which is a brand new product, a brand new category actually, for Adobe. We did some very interesting experiments which yielded some very interesting results. And we're going to give you the real data and show you some real graphs, first couching it in why this is a problem that needs to be solved and what Adobe is doing about it. That being said, let's get right into it.
[00:01:04] Speaker C: Well guys, today on the Arbor Digital Experience podcast, we're going to talk LLM Optimizer, or LLM optimization, with Tad Reeves, our principal architect, and Frank Townsend, our Edge Delivery developer and designer. This is episode 28. Go ahead and take us away, Tad. Tell us about LLM optimization and why it matters to people.
[00:01:31] Speaker A: Well, in order to really... okay, let's frame that first for anybody who's listening and doesn't necessarily know what an LLM is. Because theoretically, anybody listening to this podcast should know what an LLM is.
But a shocking number of people probably couldn't even define it. They know that there's such a thing called ChatGPT.
But what does GPT even stand for?
So let's start out there: what are these things, and what are you optimizing for? Because there's been AI out and about for basically the entirety of the Internet.
But it wasn't something that you had to optimize for specifically. The difference here: the first of these large language models only came about in, what, 2017, with that big scientific paper, "Attention Is All You Need." That brought about this whole idea of a transformer that could take long bits of text and actually give you some meaningful output at the end. Google has been giving you summaries and snippets and things like that for quite some time, but in the past you were typing in a search query that would lead you to an index of pages, and then Google would send you off to a page. They weren't actually trying to summarize and composite an answer to a question inline. You weren't trying to say, "show me how to update my AEM Dispatcher configurations" and have it composite a bunch of text that would actually show you how to do that, and then have you check the math afterwards: can I trust this computer?
Or is this not from a trustworthy source? Is this going to go and bork everything?
So that's the world that we're living in right now. In this case, a large language model is where you're taking a massive amount of text from someplace and using an AI, such as Copilot or ChatGPT or Gemini or Grok or Perplexity or any of these ones that are out there, to basically generate a result that gives you the answer that you want.
And how do you optimize for that? This is a brand new world, really. That's the question we're going to try to dive into: what is the tool, or the set of tools, that Adobe has on offer for this? Which are actually exciting to talk about.
[00:04:23] Speaker C: And why you need to optimize for it, too. I'm sure we'll get there.
[00:04:25] Speaker A: Yeah, exactly. But what even is the problem? What are you trying to solve?
The biggest one right there: are you just trying to send somebody to a page, or are you trying to answer their question?
So in the past you had Google Search Console that could tell you, good, well, here are the keywords that people are typing in. You don't have that anymore. There's no Copilot Search Console. There's no Perplexity Search Console that tells you what phrases people are typing in and whether you're being well represented in that. So that's already a problem: you have no idea what people are typing.
So that's the problem that Adobe has been trying to solve in this case.
[00:05:17] Speaker B: And it's really the visibility of it too, because the root of it is that a human is going to see a page completely differently than an LLM is.
[00:05:27] Speaker A: Right? Well, so there's that. A human's going to see a page differently. There's also:
what do they see, and what do they get out of it? Because let's say you type something like: okay, good, I want a new fork for my bike. What's a good cross-country fork for a mountain bike, and what's a good price that I should pay for that?
So if I type that into Grok or Perplexity or something like that, then it's going to spit out an answer that it composites from a whole mess of posts. It might talk nicely about this RockShox SID that I've got on there right now. Or it might say, oh, you should be getting something else; you should have a Manitou or a Fox or something like that, because nobody does RockShox anymore, or whatever. Right? So how do you know whether the sentiment being fed out of one of those LLMs is good or bad about your brand?
Right? So these are all... because when I go and actually purchase, I might be purchasing in my bike shop, right? So the purchase may not happen online.
So RockShox may never see my traffic.
It may be entirely off-site that all this traffic is happening. So you're also trying to measure: is your brand being portrayed nicely?
[00:06:47] Speaker B: Right.
[00:06:47] Speaker C: Or portrayed at all?
[00:06:49] Speaker A: Yeah. Or portrayed at all. Right. Are you even showing up? Is the content that you prepare, which is the official data about your brand, even appearing in the results? Is it being used to generate the result, or is all the stuff that your marketing team types just totally going off into the wind and not even being picked up at all?
[00:07:15] Speaker C: It's. It's a wild new world and it's really going to matter for anyone selling anything on the Internet.
[00:07:22] Speaker A: Yeah, it is, it is.
So why don't I just get into it? This is going to be a little bit of a different podcast than I usually do, because I think I'm going to be a bit more screen-share heavy. I have some interesting experiments that I've been doing recently, because there are two ends of this that Adobe's been working on, like Frank was mentioning. There's: how do you keep track of this, and how do you see what people are typing? How do you measure something when there's no official backend feed that you're getting, no equivalent of Google Search Console or Bing Webmaster Tools? There's no official version of that. So how do you get data about what people are doing? And that in and of itself is a very interesting problem to solve.
And then how do you optimize for what the LLMs are doing so that you can portray yourself better? So that's. So there's two different prompts.
There's the analysis, and then there's the doing, and Adobe's been working on both ends of it. And this is all super new, too. It's so new that this wasn't even a thing.
It wasn't even a product. It wasn't even in planning at Adobe Summit last year.
So if you're wondering, didn't I see this at Adobe Summit? No, it wasn't even on the drawing board as of Adobe Summit last year. So this is super, super new.
And it could totally change by the time we get to Adobe Summit in a month. So we'll see.
But here, I'm actually going to show you real-life stuff, because we've got our blog plugged into this.
And, uh, yeah, this is going to be fun.
Oh, give me a second. Let me pull this up for y'all. All right, so.
All right, so this is Adobe LLM Optimizer. This is plugged into our blog right here.
So what's the first interesting thing to know about this? A lot of Adobe's tools are designed only to be used if you are also running everything on Adobe Experience Manager. A lot of it is designed to be all one thing; you've got to be all-in on Adobe or you're not using the tools. That is not where they're going with this. They designed LLM Optimizer to be usable elsewhere. Yes, it's a first-class citizen on Adobe gear, but it's going to be mostly fully functional on almost any other CMS also. So if you're running stuff on Sitecore, if you're running stuff on WordPress, if you've got other blogs and other sites on other things, then it's totally going to be operational.
So there's a couple of things. There are a lot of fancy reports and a lot of things that I can show you, but I want to first show you what data you can get and what you can put into this that you can then start charting and looking at afterwards. Like I said, because you've got no Search Console and no Webmaster Tools, where are you going to get data on what people are doing with an LLM? Say you want to know what people are typing into Google AI Mode that might possibly be surfacing your content, whether or not they click through.
So one way to do it, and right now one of the only ways to do it, is to prompt the LLM yourself and basically just say: all right, good, I'm going to start typing prompts. And it also matters where you type the prompt from. Are you in Washington, D.C., or Beijing, or London, or Athens? You might get a completely different response.
And some of these LLMs work really well with English and not so well with other languages.
That's because the tokenization is totally different.
As an example, German is a really, really hard language for a lot of this stuff, because of all their compound words; they've got words where one compound word, this big, is all one thing, whereas in English you can break those apart into separate words. So different languages produce completely different results.
So you can prompt these different LLMs. You have to go and prompt Perplexity, and prompt Google AI Mode, and prompt Gemini, which gives you different responses than AI Mode, and prompt Grok, and so forth, and just see: what happens when you do this? But then you have to say: good, well, what do I ask it?
So there are a bunch of different ways that you can feed this thing with prompts.
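A simple way to picture the setup Tad is describing is a prompt-tracking matrix: every tracked prompt gets checked against every assistant, from every region, since the same prompt can return completely different answers per engine and locale. The engine and region lists below are illustrative assumptions, not LLM Optimizer's actual internals.

```javascript
// Hypothetical sketch of a prompt-tracking matrix. Each combination is one
// scheduled check: "did our site get cited when this engine, in this region,
// was asked this prompt?"
const engines = ["chatgpt", "gemini", "perplexity", "grok"];
const regions = ["us", "de", "in"];

function buildPromptMatrix(prompts, engines, regions) {
  const runs = [];
  for (const prompt of prompts) {
    for (const engine of engines) {
      for (const region of regions) {
        // "cited" gets filled in after each run: did our site appear?
        runs.push({ prompt, engine, region, cited: null });
      }
    }
  }
  return runs;
}

const runs = buildPromptMatrix(
  ["How can I monitor AEM performance with New Relic?"],
  engines,
  regions
);
// 1 prompt × 4 engines × 3 regions = 12 scheduled checks
```

The combinatorics is the point: even a modest prompt list multiplies quickly once you track several engines and several regions per prompt.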
[00:12:04] Speaker C: So we're talking about the testing process you've gone through to basically have. Well, no, not just testing.
[00:12:10] Speaker A: This is the LLM Optimizer setup process, actually setting this up so it can start giving you some interesting data.
So the first thing that you can do in this case is connect this to Google Search Console.
So it can start getting your top search terms out of Google Search Console. Then you might say: oh, people are asking "how do I access the admin API," according to this. Good, well, that's something that we could tell it.
We could say this would be a good question to prompt the LLMs with: "How can I monitor AEM performance with New Relic?" Well, we've written a blog post about that.
So it's obviously coming up out of our Search Console. With each one of these things, you can tell LLM Optimizer: good, I want to add this to the things that it would potentially ask. So Google Search Console is one place where you can start grabbing some example things to feed this tool with.
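As a rough sketch of that Search Console step: take your top query rows (query plus impressions, the shape the Search Console API's search analytics response uses) and turn keyword-style queries into question-style prompts worth tracking. The impression threshold and the phrasing template here are made-up assumptions, not how LLM Optimizer actually generates its suggestions.

```javascript
// Hypothetical sketch: turn Search Console query rows into candidate prompts.
// Queries that already read as questions pass through; bare keywords get
// wrapped in a question template.
function suggestPrompts(rows, minImpressions = 100) {
  return rows
    .filter((r) => r.impressions >= minImpressions)
    .map((r) => {
      const q = r.query.trim();
      return /^(how|what|why|can|is|where)\b/i.test(q)
        ? (q.endsWith("?") ? q : q + "?")
        : `What should I know about ${q}?`;
    });
}

const prompts = suggestPrompts([
  { query: "how do i access the aem admin api", impressions: 250 },
  { query: "aem dispatcher configuration", impressions: 500 },
  { query: "some one-off query", impressions: 3 }, // below threshold, dropped
]);
// → ["how do i access the aem admin api?",
//    "What should I know about aem dispatcher configuration?"]
```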
You can also add some things manually. And as this thing starts to learn and grab things, it might suggest other things; see these little magic wands? Like, "how is it possible to buy Adobe stuff?" We've got all of our blog posts translated, so it already knows people are searching stuff in French, and searching stuff in German, and so forth.
These are all suggested prompts that we could potentially add in here. You can see some of these other prompts are...
[00:14:02] Speaker C: When you add a prompt in here, you're just tracking against that prompt.
[00:14:05] Speaker A: What was that?
[00:14:07] Speaker C: When you add a prompt here, you're just tracking against that prompt.
[00:14:10] Speaker A: That's right.
[00:14:11] Speaker C: Get data on it.
[00:14:12] Speaker A: That's right. So once you add the prompt... these ones here that show a person are ones where I went and said: I want you to ask LLMs this, and I want to see if I'm coming up, because I'm assuming I'm going to come up. Like, "what's the difference between AEM 6.5 and 6.6?" as an example. Our site should come up here, because I'm the only one who ever called it AEM 6.6.
So if you search AEM 6.6, we're the only site that comes up, because there's no such thing as AEM 6.6. Well, there is.
The artifact in the back end is AEM 6.6. But I digress. We're the only ones that have publicly been harping on that; it's really 6.5 LTS.
So once you add all this stuff in there, you can also tell it: hey, I want to add a prompt with a specific word or phrase that I want to prompt the LLM with. And where do I want to ask it from? Maybe I want to ask from India and just see what I'm getting. If I say, "how can I buy Adobe training?", you might get a completely separate response in India than you would get in the United States.
And that's correct, because some companies have completely different product offerings in other countries. So it would make sense that your prompts would give different responses.
So once you got all that in there, so you've got stuff coming in from Search Console, you have stuff that you've put in manually, you have stuff that it's generated out of all that.
So you have another place that you can get data, which is your CDN. LLM Optimizer supports a whole mess of different CDNs. So if you've got, let's say, Akamai on the front end, or your own Fastly CDN, or CloudFront or Azure Front Door or something like that, there are configuration examples from Adobe for how to plug LLM Optimizer into any one of those. We were using our own CDN and had this all plugged into that, so we had that configuration. Now we're using Adobe's managed CDN on this. And that has some nice pieces to it, because it's also got an auto-optimization deployment, which is a super fun thing that I'm going to talk about in a second, but I don't want to get ahead of myself.
[00:16:56] Speaker B: So you can bring your own key, or you can just use the managed stuff.
[00:16:59] Speaker A: You can bring your own stuff.
So basically this works with almost any CDN setup. Because the idea is that you want to know what people are typing, and you want to know whether or not people are arriving.
So: did they click through? Because in some cases you've got a click-through signal, like a little query string that comes on the end of the URL that says, oh yeah, I clicked through from ChatGPT.
But you also want to know: are the crawlers from those LLMs coming to your front door? There's a really good explanation of this from Adobe Developers Live that Cedric did, which we're going to link, that talks about how LLMs work. When you first type in a question, let's just say, "what's the difference between AEM 6.5 and 6.6?", an LLM may not have that data to hand. That may be an off-the-rails question that nobody asks, right? So it may not have enough already-processed data to answer that question straight up. So it needs to do what's called a fan-out search, or fan-out query, afterwards, where it says: okay, I don't know the answer to this, I'm going to go search the back end and see what I can get. And maybe I don't have immediate results, so I need to start pulling down pages that answer that question so that I can feed that text back out to my user, right?
So if an LLM, let's say Google AI Mode, says, "I don't know the difference between AEM 6.5 and 6.6, I'm going to go find a blog post that answers this," then the crawler goes to the back end, finds our blog post, pulls it back up and says: good, these guys kind of summarized it, let me crunch that around. So it answers the user's question. And then ideally we've got a reference on that. Ideally, if it's one of the ones that actually does put in a citation of where this came from, the user can click through and ask: was this cited correctly? Did they draw the correct answer from this? Because sometimes these LLMs screw that up all the time; they draw a completely incorrect conclusion from the text that was there.
So that's where you need the CDN data also, because the CDN data is going to give you the fan-out query type stuff, where the crawler is coming in and asking questions, and then whether their user clicked through.
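To make the CDN side concrete, here's a minimal sketch of that classification: split log entries into LLM-crawler hits (the fan-out queries reaching your front door) versus human click-throughs from an assistant's citation. The user-agent substrings match crawlers the major vendors publicly document (GPTBot, OAI-SearchBot, PerplexityBot, ClaudeBot), but the referrer list and the log-entry shape are assumptions for illustration.

```javascript
// Sketch: classify CDN log entries. Crawler hits mean the fan-out queries
// are reaching you; referrer hits mean a human clicked through a citation.
const CRAWLER_UAS = ["GPTBot", "OAI-SearchBot", "PerplexityBot", "ClaudeBot"];
const LLM_REFERRERS = ["chatgpt.com", "perplexity.ai", "gemini.google.com"];

function classifyHit(entry) {
  if (CRAWLER_UAS.some((ua) => entry.userAgent.includes(ua))) {
    return "llm-crawler";
  }
  if (LLM_REFERRERS.some((host) => (entry.referrer || "").includes(host))) {
    return "llm-clickthrough";
  }
  return "other";
}

classifyHit({ userAgent: "Mozilla/5.0 (compatible; GPTBot/1.2)", referrer: "" });
// → "llm-crawler"
classifyHit({ userAgent: "Mozilla/5.0 (Macintosh)", referrer: "https://chatgpt.com/" });
// → "llm-clickthrough"
```

In practice you'd run something like this over the raw CDN access logs and chart the two series separately, since a crawl without click-throughs and click-throughs without crawls tell you very different things.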
[00:19:38] Speaker C: I'm really curious what people can do to their websites to show up in that fan-out process and that search result, to get queried by LLMs.
[00:19:48] Speaker A: I'm sure it's yes, what's this?
[00:19:51] Speaker C: Right, so that's okay stuff.
[00:19:56] Speaker A: Yes. Okay, so let's get right into that. There's a big difference at this time right now. This may not be the case next year when we're doing the same thing, but at this point in time, LLM crawlers do not tend to execute JavaScript.
All they do is look at the exact text that's on the page, which can be a problem.
So if you take something like, if you take something like this, like we ran into this.
Yeah. Okay, so take something like this page. We've got a page here that has a bunch of different resources for Edge Delivery that we've generated over time. This is fun; it's got some category searching and various things, right? This whole list is generated with JavaScript.
So one should be curious: if you go and write a nice description in here, let's just say any of these descriptions are ones you would really want an LLM to see if somebody asked a question, like: what's the best...
Is there a repository that shows you the boilerplate for doing Universal Editor and Edge Delivery? And let's just say this description really nicely summarized that. You would want an LLM to grab that description and say, oh yeah, I found something for you, and here's a link. You might want that text. This is substantive stuff that an LLM should grab onto.
If the LLM can't see that, then this page is of no value and it will not get cited.
So what ends up happening here. So if we take this, I'm going to just give you the.
I'll put in the Helix URL for this.
The reason I'm going to do. Whoops.
[00:22:21] Speaker B: Are you doing this so you could view the plain version? Is that...
[00:22:24] Speaker A: Well, no, here's the thing you should know. There is a really nifty plugin that Adobe came out with, this AEM... sorry, AI Content Visibility Checker. And what it does is basically let you know what an LLM is seeing versus what a human is seeing. So you might look at all this and say, oh, this is super great, we've got all this cool data, right?
And it's saying, oh well, this is actually only 42% readable; half your content is inaccessible. Like, seriously. So you go and look at the actual facts of it, right?
If you look at this, the only thing that is showing up for an LLM... see all this green on this side? This is all what a human would see, and on this side is what an LLM would see.
[00:23:21] Speaker C: This is a really cool view.
[00:23:22] Speaker A: Nothing.
Yeah, because all this was generated with JavaScript. So the LLM sees absolutely none of it. The only thing the LLM is going to see is the "about us," the author bio at the very bottom. That's the text it sees. When it's saying 42% is visible, it's counting the text of the author bio.
But all of the substantive content of the page is totally missing.
So, so that is a problem. That's a big problem.
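What that visibility checker is doing can be approximated in a few lines: strip out the scripts a non-JS crawler would never run, then strip the remaining tags, and see what text is left. This is a rough, hypothetical illustration of why client-rendered content disappears, not how Adobe's checker actually works.

```javascript
// Sketch: approximate what a non-JS crawler sees in the served HTML.
function llmVisibleText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // crawlers won't run this
    .replace(/<style[\s\S]*?<\/style>/gi, "")
    .replace(/<[^>]+>/g, " ")                   // drop remaining tags
    .replace(/\s+/g, " ")
    .trim();
}

// The list is injected client-side; only the server-rendered footer survives.
const served = `
  <main><div id="list"></div></main>
  <script>document.getElementById("list").innerHTML = "<li>EDS boilerplate repo</li>";</script>
  <footer>About the author</footer>`;
llmVisibleText(served); // → "About the author"
```

Running this against a rendered DOM snapshot versus the raw served HTML gives you roughly the two panes the checker shows side by side.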
[00:23:55] Speaker B: So now, is that because it's being accessed from a browser versus being accessed directly? Or why is that?
[00:24:03] Speaker A: Well, the reason for that is that in order to get this data here, this data is coming out of an index in Edge Delivery. This is all being created client-side.
So what that plugin is showing you is the fact that there is no actual server-generated HTML that creates all this text.
So it's missing.
[00:24:28] Speaker B: Yeah, it's not there.
[00:24:29] Speaker A: So Googlebot might be able to see this.
An LLM bot is not seeing this. It's totally blank. So the result is that when an LLM bot looks at this page, it would just say: there's nothing interesting here for me to cite.
I'm not going to cite any of this. I'm going to move along. I'm not even going to say that you had anything interesting to say.
Which can be a problem if you have something else that is even more substantive. So we actually did a little bit more of an in-depth experiment on this, and I'm going to show you that experiment.
[00:25:16] Speaker C: So I could see this having a massive impact on how websites are designed in general.
[00:25:23] Speaker A: Well, it's huge. Because here's the thing, and I want to get into this question later on in the podcast. In the past, we've had this problem of optimizing for Googlebot, and what nobody wanted to do was get dinged for sending a different page to Googlebot than what humans would see.
That was called cloaking.
So basically, every time Googlebot came, if you said, oh, if your user agent is Googlebot, I'm going to send you this super keyword-rich page...
And then the humans get this other page.
[00:26:19] Speaker C: If you do it that way, right.
[00:26:22] Speaker A: And you would get nuked. If Google found out that you were doing that, you would just get nuked off the face of the earth.
And so it was a huge no-no, because it's a bait and switch.
Right.
But it's a different thing altogether.
If you're trying to feed a bot with data that it needs, then you're not bait-and-switching. If you're just making a page that they can read, then who cares? So there is still, amongst the SEO community, a little bit of nervousness: is this cloaking 2.0? Are LLMs going to get wise to this? But the question is, why would they want to? What would be in it for them?
Nothing really.
Right. Because if you're ChatGPT and you're just trying to answer the question of what's the difference between AEM 6.5 and 6.6...
Yeah. Do you care where it came from? Do you care if it came from the human-readable site or the machine-readable site?
[00:27:31] Speaker C: You just want the accurate and good content. Yeah.
[00:27:34] Speaker B: You want it fast too.
[00:27:37] Speaker A: Yeah. You want it instantly and you want it accurate.
[00:27:40] Speaker B: So no hoops or anything. It just, when it hits your page, it wants to know the answer. It's, it's asking for and that's it.
[00:27:47] Speaker A: That's right. That's right.
So.
So it remains to be seen whether there is some sort of penalty involved, some downside to any system by which you would give a different page or a different set of text, or optimize it at all for an LLM. My gut feeling is that an LLM wants the answer. It doesn't care if it's a slightly different page.
Because you're not gaming anybody by offering the answer. It's the answer; it's just going to cite the answer.
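The kind of fix being hinted at can be sketched as a CDN edge rule: if the user agent is a known LLM crawler, route the request to a pre-rendered, text-complete variant of the same page. The bot list and the path scheme here are assumptions for illustration, not Adobe's actual edge-optimization rule; the key property is that both variants carry the same answer, which is why this differs from cloaking.

```javascript
// Sketch of an edge routing rule: LLM bots get a pre-rendered static variant
// of the same page, so the content they see matches what humans see after
// JavaScript runs.
const LLM_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot"];

function selectVariant(path, userAgent) {
  const isLlmBot = LLM_BOTS.some((bot) => userAgent.includes(bot));
  // Same content either way; the bot just gets it as static HTML.
  return isLlmBot ? `/prerendered${path}` : path;
}

selectVariant("/blog/meetup", "Mozilla/5.0 (compatible; GPTBot/1.2)");
// → "/prerendered/blog/meetup"
selectVariant("/blog/meetup", "Mozilla/5.0 (Macintosh)");
// → "/blog/meetup"
```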
[00:28:24] Speaker C: And just like traditional SEO... so ChatGPT gets to decide how it rates content, or how ChatGPT goes and finds the content.
Right. Or what it wants to rank against, or promote as good content for ChatGPT to read.
So just like Google became the beast of traditional SEO, where their rules set what ranked, there's going to be a whole new rule set for each one of these models that you have to optimize for as a website producer, as a content producer, right?
[00:28:58] Speaker A: And they've got ratings, right? So it's like, was the answer good or did it suck?
So it's user-judged.
That's right.
[00:29:08] Speaker C: User-steered, like it always is.
And that's what LLM Optimizer, LLMO, or "Elmo," really does, right? It gives you the view into what's actually happening, so you can steer your content or steer your strategy to rank for it.
[00:29:25] Speaker A: I still am going to call it Elmo. They can't stop me.
It's so much more pronounceable.
[00:29:30] Speaker B: It's so much better.
[00:29:33] Speaker A: I've got kids.
Elmo's got a special place in my heart.
So here, let me show you this experiment, because what I tried to do is formulate an experiment that was a little bit more like a real-life type of problem.
What would be a real-life example of a search that somebody would do, a query, an answer that somebody would try to get out of an LLM, that might be answered in something where you've got the answer wound up in some kind of JavaScript that's not necessarily nicely visible already? Right.
So here is the example that I came up with. One moment, please.
All right. So we did a meetup a couple weeks back in Cary. As a part of that meetup, what we did is we shared some...
Whoops, here, let me get the real-deal version.
So as a part of this meetup, we shared a bunch of PDFs, and those PDFs have a bunch of system architecture diagrams that aren't available any other place.
These are nice and detailed; the diagrams themselves have text in them.
And so this is substantive data that you would want if you asked a question like: can you show me the system architecture diagram that was shared in the meetup?
Or: can you excerpt some of the main slides of Laurel's presentation, where she presented advanced search for document authoring, or something like that? So you're asking a question, and you need the data that comes from these slides. Okay, so these slides right here. What we did, the easiest way, and this is a super common solution for this: we took all these slides and we're storing them in Adobe Experience Manager Assets.
In AEM Assets, you upload a PDF, and AEM Assets creates the thumbnails and stuff for you. So all I had to do in this case: I made a little Edge Delivery block that says, look at this folder in the AEM DAM, dump me out all the PDFs, and show me the thumbnail for each one. So I've got a little edge worker that goes and throws that in there and, bam, puts it on the page.
Here's the problem: that activity is all done in JavaScript, all client-side. So if I was to do the same thing on this, right? So if we go back to that...
[00:32:43] Speaker B: Well, so references.
[00:32:45] Speaker A: Yeah, so that AI content visibility bit, I'm not going to be able to do that as easily now, because we've actually already solved it.
But what you should know is that in that same visibility checker, this was blank. It was just blank. Just like that other page was coming up blank, this was coming up blank. I can't show you that now, because all of this is generated using a rule that we put in the Adobe managed CDN. If I tried to show you outside of the CDN, it's not going to show you anything; it's going to be busted anyway.
So, so, but here is the, Here is the, Here is the journey that I went on. I was like, okay, so, so I Posted this blog post and I said good.
Now I'm going to do all the normal regular white hat SEO stuff where you.
So you post the blog post, then you post it on LinkedIn and Reddit and Twitter and Blue sky and blah blah, blah, blah, blah. So I've got a link to it from Experience League, I've got a link to it from my personal blog. So I've got a bunch of inbound links. There's plenty of different ways that all the bots should be able to find this page, right? So and the day all the bots find on the page pages crawled just fine. I update this blog all the time, so it just, it gets crawled all the time by googlebot. Anyway, it's got, just got, it's got decent SEO. So, so it's all primed and then I say okay, good. Now I, I'm going to do a specific search and see this text right here, this text that talks about this architecture diagram.
This text comes out of the DAM, out of the AEM DAM. So this text is.
[00:34:30] Speaker B: Doesn't exist yet.
[00:34:31] Speaker A: Yeah, doesn't exist on the page? Yeah, this text like if you look on the page.
So here we say, okay. For anybody who hasn't seen this yet, if you're new to Edge Delivery, this is a fun trick.
You have the Edge Delivery sidekick. Even if you don't have access to the site, you can see how the site was constructed with Edge Delivery. If I right-click, View Document Source, you see all the blocks that we used to make this site in Edge Delivery. At the very bottom here, you see this is a PDF display block; in this case, I call it dam display. It shows you the path in the DAM where this is coming out of, and it's displaying as a PDF, as opposed to images, which is this other one here.
This is the only HTML that's on the page. Googlebot knows how to execute some of the JavaScript; in some cases it didn't seem to do this well, but in some cases it does.
None of the LLMs could find this, because this is the only HTML that's there. So I did this exact search, right?
Show me a diagram of an example Edge Delivery DA AEM Cloud Service implementation. I typed that exact text in, I even put it in quotes, so it was, give me this hard, exact text: where is a diagram that shows this? None of the LLMs could find it. Copilot couldn't find it. Grok, ChatGPT, Gemini: nobody, nothing could find it, right?
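That kind of invisibility is easy to reproduce yourself: fetch the page the way a crawler that skips JavaScript would, and check whether the text you're searching for appears in the raw HTML at all. A minimal sketch (the helper names are ours, and the URL and phrase are just examples):

```javascript
// Does a phrase appear in the HTML as served, before any JavaScript runs?
// This approximates what an LLM crawler that skips JS execution can see.
function visibleInRawHtml(rawHtml, phrase) {
  // Strip tags so markup boundaries don't cause false negatives.
  const text = rawHtml.replace(/<[^>]*>/g, ' ');
  return text.toLowerCase().includes(phrase.toLowerCase());
}

// Example driver (Node 18+ has a global fetch).
async function checkPage(url, phrase) {
  const rawHtml = await (await fetch(url)).text();
  return visibleInRawHtml(rawHtml, phrase);
}
```

If the phrase only shows up after client-side rendering, this returns false, which is exactly the situation the dam display block was in.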
So I went and made a blog post and said, hey, I'm having a hard time finding this. Eventually, with enough things pointing at it, some of the LLMs could say, well, we can tell based on external references that there might be PDFs on this page.
I'm like, good, well, give me the link to the PDF. It couldn't.
So it was, it was totally invisible
[00:36:34] Speaker B: because the JavaScript, the JavaScript, the JavaScript renders it when it's loaded. So it's like it's not.
[00:36:39] Speaker A: Right?
[00:36:40] Speaker B: It's not going to see anything. It's more resources for the LLM to do that. Like in my head.
[00:36:46] Speaker A: Yeah, but that's the whole problem. If you're an LLM and you need to do these fetches in the background, where somebody says, show me blah, blah, blah, you need to be able to answer the question now, right? You don't have time to go execute JavaScript on all of these pages. It's so resource-intensive, right?
[00:37:10] Speaker B: I mean you've seen Lighthouse scores on people's sites. Like sometimes it'll take seconds for the page to even load and it's like
[00:37:17] Speaker A: Right, but that's how long it takes the page to load for one user. Just think of the resources you would have to have if you're ChatGPT, right?
In order to rip this data off the page, if it's not rendered server side, you need to fire up like 50 in-memory, server-side Chromium browsers in real time, execute all the pages you're doing fan-out searches on, and then analyze the results of that.
It's just like if you
[00:37:56] Speaker C: did that with your home computer, it would be slow even if you have a good bit of RAM and all. If you did 50 pages at once, it would be, right?
[00:38:03] Speaker A: And then you're doing that at scale, for every user.
I mean, in the future I can imagine that might be a thing, but right now, that amount of compute, you might as well just tell it to start generating videos. The compute you would need for that is just insane. Right? So the obvious answer is to stick something in the middle that gives them the
[00:38:34] Speaker B: rendered JavaScript.
[00:38:36] Speaker A: Exactly. So that's what this is. That's LLM Optimizer. In your CDN setup, you can say, okay, good, enable routing of AI bot traffic; that's the thing we enabled in this case. And it's very specific on the Adobe end about what they route, because they don't want to get you in trouble on the cloaking side of things. They're not sending Googlebot off to any specialized nonsense. But for the ChatGPT, Gemini, Grok, Copilot bots? Yeah, absolutely.
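Conceptually, that kind of CDN rule boils down to a user-agent check. The sketch below uses commonly published AI crawler user-agent tokens; it is a hedged illustration of the idea, not Adobe's actual rule set:

```javascript
// Sketch of user-agent routing for AI bots. These patterns are commonly
// published crawler UA tokens; the real CDN rule is Adobe's, not this.
const AI_BOT_PATTERNS = [
  /GPTBot/i,          // OpenAI training crawler
  /ChatGPT-User/i,    // OpenAI on-demand fetches
  /OAI-SearchBot/i,   // OpenAI search
  /PerplexityBot/i,
  /ClaudeBot/i,
];

// Googlebot is deliberately excluded: serving it different markup than
// human visitors is what gets a site flagged for cloaking.
function shouldServePrerendered(userAgent) {
  return AI_BOT_PATTERNS.some((pattern) => pattern.test(userAgent));
}
```

A matching request would be routed to the pre-rendered copy of the page; everything else gets the normal origin response.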
So then, under Opportunities here, what it'll allow you to do in this case is say, all right, I want to recover content visibility.
This is a bit where it'll allow you to say, hey, I want to.
I just told it to do everything.
But at first, when I was doing this, I said, okay, why don't you just do this one page that I'm trying to handle.
Take whatever the page was that was not showing, and let's show that page.
What this basically does is route to kind of a sidecar. I wanted to have a diagram here for you, but I don't have one ready. But you know how I like my spaghetti diagrams.
Basically, it's a sidecar infrastructure that optimizes a page: it fetches a full copy of the page via the front door and pre-bakes it, pre-hydrates all the content, and makes that ready to be served out to an LLM that's asking what's on this page.
Grab all the links, go get all that stuff, right?
And in some cases it can also do some other fancy stuff, like generating LLM-friendly summaries: this page has blah, blah, blah on it. So if an LLM is quickly trying to figure out, what does this page have?
So it pre-hydrates a bunch of that stuff onto there.
That's the theory, anyway. Whether or not it works is what I was trying to resolve with this experiment, right? Because people say lots of fancy things, and I don't believe it until I watch it go.
But what happened was: I enabled this, and one hour later the same exact searches that had been generating absolutely nothing started generating results.
Here, I've got a version of this. This is my personal blog, where I took folks through all the things I was trying to do here. But this was the test query I was trying: find me a PDF document on, and I gave the exact hostname where it should find it, with the description, diagram of an example, blah, blah, blah. Right?
So nothing could find this.
It was just like, oh yeah, there's no PDF on this page. Nothing would find it. And then I enabled this Edge Optimization in LLM Optimizer, and one hour later I asked ChatGPT, I asked Grok, and it gave me the exact URL to the PDF itself.
[00:42:19] Speaker B: An hour. Just took an hour.
[00:42:21] Speaker A: An hour.
[00:42:22] Speaker B: That's wild.
[00:42:23] Speaker A: Yeah.
And then I was getting citations, and then I could do other, more substantive queries. This is obviously a really contrived query; nobody's going to really type that exact text in. But something like: let's say you didn't have time to attend our meetup, can you give me a summary? There's a PDF on this page.
There are PDFs of both of the presentations that Tad and Laurel gave. Give me a summarization from the slides.
Now it will actually go and open up the slides and do it.
What's funny is that Gemini respects the fact that it still says Confidential on the slides. It says, oh, by the way, I read the slides, and I can tell you a little bit about them, but they say confidential, so I can't actually summarize them. I'm like, oh, you have such good manners.
[00:43:18] Speaker B: Yeah, I would not have expected that.
[00:43:21] Speaker A: I know. I was like, what a well-mannered robot, actually following the rules. How about that? Not summarizing it if it says Confidential at the bottom.
But anyway, that was the gist. It's tech that works right now. And so
[00:43:41] Speaker C: I think the value is there too for that tool and that solution. Is there an added cost or added resource cost to do that for that page, or what's the impact on that end?
[00:43:55] Speaker A: Yeah, yeah. So we're Adobe partners, so we're not really supposed to talk about how much stuff costs, but they can't stop me.
The way this is being offered right now: for anybody who's got an AEM as a Cloud Service subscription, if you're running an AEM as a Cloud Service site, they've already enabled LLM Optimizer on your site. You basically just have to log into it. You go to LLM Optimizer now, log in, and they'll give you one site, your main site. You set up your site, you can add people to it so your team can access the reports and so forth, and they'll let you use up to 200 prompts to start working on your site.
You can get some pretty meaningful data from 200 prompts, because it can start to composite a really nice picture of what's going on with your site. I didn't really show what that looks like in terms of those reports, but I can show you real quick. In most of the presentations you see, Adobe likes to show their test content.
So because they don't have necessarily the okay to show brand stuff,
[00:45:24] Speaker B: we can,
[00:45:25] Speaker A: we can, I don't care.
So this is real, live data.
This is one interesting thing right here. 100% citation readability.
This was sitting at something like 13% beforehand. Actually, no, it wasn't that bad. Sorry. It was like 47% before I enabled the Edge Optimization, because we pretty liberally use a bunch of these JS bits that build lists of things, lists of related articles, stuff like that.
[00:46:03] Speaker B: I mean, you know, it's something you want to do. You want to do that still.
[00:46:07] Speaker A: Like you can just go and just start slapping stuff on pages and stuff. It's so fast. So you want to use that power.
But unfortunately the robots don't love that.
But now they do. Now they do. Now it's, now it's 100% readable.
So it'll tell you things like what people's sentiment is when they start searching and finding things on your site. Right now, nobody's saying anything negative about us.
Please, nobody use this as an opportunity. Um, but we try to be nice.
We're nice even to our competitors. But it'll tell you things like how much referral traffic you're getting, stuff like that. This is just the summary view. This particular one is Google AI mode, and it'll show you up to eight weeks, I just found out. I forget exactly how to do it, but if you go and search manually, you can apparently do up to eight weeks. That's as far as it goes right now.
[00:47:12] Speaker B: I think they were talking about expanding that, weren't they? I think I.
[00:47:15] Speaker A: They were talking about it. It may be something they can do in the future. But you can also say, if you want specifically what paid ChatGPT is doing, that may give you different data. So again, that's all different data.
And then stuff like, okay, for your brand.
There we go. So how, how often is your brand mentioned next to other competing brands?
It thinks that Adobe is a competing brand.
Wow.
The week before, we were getting more mentions than Adobe.
Imagine that.
But then it'll give you data for the individual prompts that you've got. What this thing does is it goes out daily and executes all the prompts in your library. It'll go through and just sit there and pound away. And if they're happening in France and India and so on, it'll re-execute them in every single one of those locales.
That's the reason Adobe has to put a limit on how many prompts it'll do: it's actually executing all these prompts on your behalf every single day. You couldn't just throw thousands of prompts in there, because then Adobe would have to run and store the results of thousands of prompts every single day.
So the free version lets you do up to 200 of these prompts, and then they'll sell you the ability to do more, thousands if you need to. Which would make sense if you're a brand with a lot to cover, some big sporting goods company, L.L. Bean, whoever.
[00:49:06] Speaker C: Like selling bike forks.
[00:49:08] Speaker A: Yeah, exactly. If you're, if you're somebody with a lot of SKUs, you have a lot of different brands and a lot of different products that you would want to make sure that you're showing up for.
So it harkens back, it
[00:49:22] Speaker B: harkens back to when you would do all your SEO stuff on your old website and then just start googling random things related to your website, because you're like, I want to see if it works. Is my SEO working?
[00:49:34] Speaker A: Yeah, but this is like having a robot do all that for you.
[00:49:37] Speaker B: Exactly.
[00:49:38] Speaker A: Really fun.
But then it'll give you data: when it goes and prompts, how should I configure an AEM health check, blah, blah, blah, are you visible? Are you getting cited? Do you appear in the citations, or do you not appear at all?
Because this is one very interesting data point, and this again came from Cedric's talk last fall: Adobe found that if your site's response times are more than three seconds, you're never going to show up in the citations for most of these places. If those backend fan-out searches take more than three seconds to pull your site, it's just going to say, to heck with you, you're not showing up, because I can't get a response in a decent amount of time.
[00:50:35] Speaker B: That's the same as humans, Joe.
[00:50:38] Speaker C: Sorry, I don't know if you know the answer to this, but I know Adobe recently acquired Semrush.
Is this using their stuff?
[00:50:46] Speaker A: This is very interesting too, because in their Sites Optimizer product, you were able to use Ahrefs as a data source.
Which, as everybody knows, is like the total mortal enemy of Semrush.
So we'll see. As far as I know, that whole acquisition hasn't even been finalized yet, so these products haven't.
[00:51:11] Speaker C: This looks very similar to that.
[00:51:13] Speaker A: Yeah, these products haven't gone and converged yet. But obviously this is an area where Adobe's putting a lot of attention.
But then other things, like referral traffic, are something we're only at the very beginnings of figuring out how to measure. Because the end state that a lot of folks are assuming this LLM world is headed toward is that, if you've got an action that needs to be taken, you're basically going to be exposing web components to the ChatGPTs and Geminis and Perplexities and Copilots of the world. So if somebody says, hey, can you sign me up for the innovation conference that's happening from such-and-such a company, it'll just expose a component that you can sign yourself up with: do you want me to autofill your details? Yes. Okay, good. Go. You never even went to the actual website of that company; you did the entire interaction inside a component inside ChatGPT or whatever. Or similarly, I need new cross-country tires for my mountain bike, I need Maxxis Ardent 2.4s, find me the best price and buy them. All of that's going to happen inside the browser. So referral traffic in this case may end up being not just a referral to your website; it may be, is your web component getting picked up, and stuff like that.
So as these tools evolve in what they expose, this is also going to change. I'll bet there are new graphs in here that I haven't seen, because it's changing; every other week there's something new in here.
So.
But yeah, how often are you getting hit from LLMs, and which one are we talking about? Because the actual traffic you're getting from ChatGPT is going to be different from Copilot, et cetera.
So yeah, but again, this is super evolving.
And to me, the biggest wow moment was exposing the fact that a lot of our content wasn't even showing, a lot of the stuff I'm putting painstaking work into. Y'all know how much I love doing my diagrams, and if nobody can see my diagrams, I'm going to be bent out of shape.
So the fact that nobody could see my diagrams, that they were gone, not visible, and that with a simple optimization you could put them into play, is super nice.
[00:54:11] Speaker B: The fact that you can also pick between each of the models is awesome, because depending on what your business is, you can almost say: if I'm a data company or something, people are probably querying Perplexity more than ChatGPT, because it's more statistics-oriented or whatever.
[00:54:32] Speaker A: Right?
Yep. So this is going to evolve. But this is a cool tool. It's available right now.
And again, if you're already an Adobe customer and you already have things on Cloud Service, then you already have access, and you may not even know it. This is a tool you can start leveraging instantly.
And it is better and nicer if you're using the Adobe CDN.
[00:55:03] Speaker B: I.
[00:55:04] Speaker A: It's weird to be a CDN fanboy. It is kind of weird, but I am kind of a CDN fanboy. So I like the way they're going with this.
As an infrastructure guy, you can do a lot of fun things like this that would have been really kind of scary to implement previously.
[00:55:24] Speaker C: So I did want to mention there, if you're a company looking to set this up, we can help you do it.
We are arguably industry leaders in this, and this is the team you'd be working with. So call us, email us, look us up on ChatGPT.
[00:55:46] Speaker A: Well, the other thing, and I guess one of the last bits I want to say, is that I've got to put in a shameless plug for Adobe Summit. If you haven't signed up already, go.
But if it's too late and you can't get the budget for it, a lot of Summit you can stream online.
And there is so much in flux in terms of what products you should have your eye on that this is not the time to be out of the loop, and
[00:56:20] Speaker B: stuff's moving so fast with AI and everything. It's only going to get faster.
[00:56:24] Speaker A: Yep, exactly. How to market in this age is something that everybody needs to know. And I feel like, with Adobe, some of these products are need-to-know.
[00:56:32] Speaker C: Adobe's going to be putting lines in the sand on which products to use and the best practices for those products, where right now it's still kind of up in the air. At Summit, they're going to say it, they're going to announce it.
[00:56:44] Speaker A: That's right. That's right. But, yeah, this is the time to.
To be aware of what all these things are and can do.
[00:56:53] Speaker C: Agreed. Look forward to seeing you guys there.
[00:56:55] Speaker A: Yes, indeed. Oh, yeah.
All right, thanks, everybody, for listening.
[00:57:00] Speaker C: See you next time.