Let’s Talk About...
Discourse in the SEO fandom is really heating up, along two notable axes:
@ruthburr #MozCon
Should SEOs be “chasing the algorithm”? In other words, when Google updates one of its many, many algorithms, is it a good use of our time to try to figure out what’s changed?
And, more recently (and arguably more heatedly): are Google’s Quality Rater Guidelines worth paying attention to? Or are they a bunch of smoke and mirrors on Google’s part?
I don’t find this conversation very interesting, TBH! But I get it – SEOs are busy, and it’s really frustrating to see advice being put out there that you know isn’t going to help people do SEO. It hurts to watch busy people waste their time. What both of these conversations are missing is where these activities might fit into a larger framework or approach to SEO. How do we know where to spend our time?
Real Company Stuff
Wil Reynolds
https://www.slideshare.net/wilreynolds/do-real-company-stuff-mozcon-2012-version
People in the “that stuff is a waste of time” camp often point to Real Company Stuff – the idea that, as SEOs, we should be focusing more on
doing great marketing at a big-picture level and less on things like making tweaks to title tags and anchor text. I don’t disagree with any of that…
WHEN THE $50M AD CAMPAIGN
POINTS TO A MICROSITE
…but it’s entirely possible to do these things in a way that does not create SEO success, which is a wasted opportunity. RCS is not an excuse not to learn SEO fundamentals or technical SEO.
If we focus solely on Google, we lose sight of our actual customer. But if we ignore Google to focus solely on the customer, we’re missing out on opportunities to connect with that customer online. It’s both! It has to be both.
Icons made by Freepik from Flaticon
My personal framework for approaching changes in our industry is: we are humans, creating things for humans, with a machine serving as the
intermediary between the two.
How do you “just create good content”? You figure out what information people need to complete their task and provide it. Where do you get that information? User testing is useful, but expensive – and you don’t always have access to your clients’ users.
Source: Sparktoro.com
Thanks to their giant, scary, problematic market share, Google has more data than anyone on how people use search engines to find
information.
Source: Learning Semantic Textual Similarity from Conversations
They also have more data than anyone on how people talk about things (entities, topics, and the connections between them) using written
language.
Google wants humans to find what they’re looking for when they search, which means Google spends a lot of time and money on helping their
machine try to understand what people are looking for when they search.
What do we mean when we say “Google”? Google is a COMPANY – humans making and communicating decisions about (among other things) search.
Source: Multi-stage query processing system and method for use with tokenspace repository
And Google is also the ALGORITHM – or, more accurately, a whole bunch of algorithms running consecutively and concurrently. At its very, very simplest, an algorithm takes inputs (information), processes them, and then returns some sort of output.
INPUTS OUTPUT
Link volume
Crawlability
Keyword use
Metadata
As SEOs, we tend to focus on optimizing algorithmic inputs – because that’s what we can control. For success, we tend to focus on the output,
i.e. the SERP. Hence the constant SEO chatter over whether or not something is a “ranking factor.”
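At its simplest, that input-to-output flow can be sketched in a few lines. The signal names come from the slide; the weights and scoring function are invented purely for illustration – real ranking systems use thousands of signals and learned weights:

```python
# Toy illustration only: the weights below are made up, not Google's.
def rank(pages):
    weights = {"link_volume": 0.4, "crawlability": 0.2,
               "keyword_use": 0.3, "metadata": 0.1}

    def score(page):
        # Process the inputs: a simple weighted sum of signal values.
        return sum(weights[signal] * page.get(signal, 0.0)
                   for signal in weights)

    # The output is the SERP: pages ordered by score, best first.
    return sorted(pages, key=score, reverse=True)

pages = [
    {"url": "/a", "link_volume": 0.9, "keyword_use": 0.2},
    {"url": "/b", "link_volume": 0.3, "keyword_use": 0.9, "metadata": 1.0},
]
serp = rank(pages)
```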
But Google is focused on results – do people find what they want? The SERP is only a means to that end. We should be looking at Google’s outputs not as an end in and of themselves, but as information about how to achieve those results.
One way to understand Google is to leverage Google’s huge body of user data to understand the human-readable quality signals that Google (as both a company and an algorithm) has deduced are important, and then to create that quality while optimizing machine-readable equivalents or proxies of those signals.
How can I demonstrate
quality in ways that are both
human-readable and
machine-readable?
Humans
and
Robots
Don’t think the
same way
What is
this page
about?
Topic
modeling
TF-IDF
Keyword use
Synonyms,
stems, close
variants
On-Page SEO for 2019
Britney Muller
https://moz.com/blog/on-page-seo-2019
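As a concrete (and very simplified) illustration of one concept from the list above, here's a from-scratch TF-IDF sketch – the corpus and scoring are toy examples, not how Google actually models topics:

```python
import math
from collections import Counter

# Minimal TF-IDF sketch: shows why a rare, on-topic term ("crm")
# outweighs a term that appears everywhere ("software").
def tf_idf(term, doc, corpus):
    tf = Counter(doc)[term] / len(doc)          # term frequency in this doc
    df = sum(1 for d in corpus if term in d)    # docs containing the term
    idf = math.log(len(corpus) / (1 + df)) + 1  # smoothed inverse doc freq
    return tf * idf

docs = [
    "crm software pricing crm features".split(),
    "email marketing software pricing".split(),
    "project management software".split(),
]
```

Here `tf_idf("crm", docs[0], docs)` scores higher than `tf_idf("software", docs[0], docs)`, even though both appear in the first document.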
Is this a
good page
about this
topic?
Related topics/
supplemental
content
Natural Language
Processing
Links
https://twitter.com/dannysullivan/status/1044276380098158593
“Fits your industry and business processes
Deploy a solution that delivers built-in best
practices specific to your industry, the most
flexible customer data model on the market
today, and a highly configurable, tightly
integrated platform, ensuring that solutions
will be fast to implement.”
Keyword: customer
relationship
management
Rank: 19
Understanding how well Google’s NLP AI can understand your content gives you insight into whether or not that content is sending strong
“good page on this topic” signals. This is great because humans are actually terrible at creating readable content that’s clearly about what they
think it’s about.
Clear
Concise
Accurate
To-the-point
Avoids jargon
Covers subtopics
Fortunately, optimizing for machine-readability has a lot in common with optimizing for human-readability.
https://cloud.google.com/natural-language/
Google’s NLP API extracts entities and assigns each one a “salience score” between 0 and 1, based on how relevant it has determined that entity is to the piece of content as a whole. In the example above, you can see that the NLP API has successfully identified “customer relationship management” as the main entity of the page, but without much confidence. That’s probably part of why this page ranks on page 2.
On-Page SEO for NLP
Justin Briggs
https://www.briggsby.com/on-page-seo-for-nlp
Can this
site be
trusted?
Ownership signals
HTTPS
Topic and author
authority
Positive link
signals
Icons made by Freepik from Flaticon
Link valuation is an example of a machine-readable equivalent of a human information-gathering experience. How would a human decide whether or not someone was credible, using social signals? Lots of people vouching for someone is better than just one person; if all those people are related, we trust that less than if unrelated strangers all say the same thing; and an expert’s opinion carries more weight than some rando’s.
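Those human heuristics translate naturally into a machine-readable scoring rule. This sketch is purely illustrative – the weights and discounts are made up, not Google's:

```python
# Toy credibility score mirroring the human heuristics: more voters is
# better, related voters are discounted, experts count extra.
def link_value(links):
    seen_owners = set()
    score = 0.0
    for link in links:
        weight = 3.0 if link["expert"] else 1.0   # expert opinion carries more
        if link["owner"] in seen_owners:
            weight *= 0.2                          # related sources: discounted
        seen_owners.add(link["owner"])
        score += weight
    return score

independent = [{"owner": o, "expert": False} for o in ("a", "b", "c")]
same_owner = [{"owner": "a", "expert": False}] * 3
```

Three independent non-experts outscore three links from the same owner, and a single expert link matches all three strangers combined.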
All Links are Not Created Equal
Cyrus Shepard
https://moz.com/blog/20-illustrations-on-search-engines-valuation-of-links
Is this site
pleasant
to use?
Page speed
Crawlability
Information
architecture
Engagement
metrics
What’s interesting with Lighthouse is that we can see Google trying to evolve its understanding of page load time to more closely mimic the way a human would experience it – this is why metrics like time to interactive have become important alongside total page load time. The machine-readable and human-readable signals are the same.
How to Run Lighthouse Reports
at Scale
James McNulty
https://www.upbuild.io/blog/lighthouse-reports-multiple-pages/
Quality
Rater
Guidelines
For humans, by
humans
The QR Guidelines were written by humans, for humans. They don’t give insight into the algorithm(s), but they DO give insight into the quality signals Google is looking for, and they’re useful through that lens.
Source: How to Build Your Own Search Ranking Algorithm with Machine Learning
Feedback from Quality Raters is likely used in two ways: to beta-test changes to the algorithm and verify that returned results are relevant, and to create a data set of High Quality, Needs Met pages that Google uses as a training set for machine learning.
“[The guidelines] don’t tell you how the
algorithm is ranking results, but they…
show what the algorithm should do. ”
Ben Gomes
Google VP of Search Engineering
https://www.cnbc.com/2018/09/17/google-tests-changes-to-its-search-algorithm-how-search-works.html
The big topic coming out of the QR Guidelines lately is E-A-T. SEOs are spending a lot of time arguing about whether or not E-A-T is useful or important for SEO.
Is E-A-T a ranking factor?
No.
How expert
is this
content?
7!
E-A-T isn’t a ranking factor for the same reason “brand” isn’t a ranking factor – because things like “expertise” aren’t a single, quantifiable
quality that an entity does or doesn’t have. They’re made up of dozens, perhaps even hundreds, of discrete smaller signals.
Is E-A-T a ranking factor?
No. I don’t care.
You don’t know
what you’re
talking about!
I don’t care if E-A-T is a “ranking factor” or not, because that’s not how that information is intended to be used. E-A-T is something humans at Google have said to other humans. E-A-T and its attendant signals are human-readable ways of thinking about the question, “How do we tell if someone knows what they’re talking about?”
Marie Haynes is going to talk more about optimizing specifically for E-A-T later in the conference, so I’m not going to go into it here. What I’m interested in when I read about E-A-T is:
Which of these signals is
machine-readable?
“High E-A-T medical advice should be
written or produced by people or
organizations with appropriate medical
expertise or accreditation…”
I know how I would go about figuring out whether someone had appropriate medical expertise – where might a machine look for that same information online? This is an area in which actually shelling out for real experts to create your expert content pays off; the difference shows up not only in the content itself but also in the expertise signals you can build online.
“…[and] written or produced in a
professional style and should be
edited, reviewed, and updated on a
regular basis.”
These signals are easy: Google can find all of that on the page. Great! We can optimize for that.
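One machine-readable way to surface “edited, reviewed, and updated on a regular basis” is structured data. Here's a sketch using schema.org Article markup with explicit dates – the values are placeholders, and you should check the properties you use against schema.org's documentation:

```python
import json

# Sketch: generate JSON-LD Article markup that makes publication and
# update dates explicit to machines. All values are placeholders.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "High-quality medical advice",
    "author": {"@type": "Person", "name": "Dr. Example"},  # hypothetical
    "datePublished": "2019-01-15",
    "dateModified": "2019-06-20",   # "updated on a regular basis"
}
json_ld = ('<script type="application/ld+json">'
           + json.dumps(article, indent=2) + "</script>")
```

Dropping that `<script>` tag into the page head gives a machine the same freshness information a human would infer from a visible “Last updated” line.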
How might a machine find
the same information in
another way?
Which of these signals can
we boost? What can we make
easier for a machine to find
and read?
2.6.4 How to Search for
Reputation Information
Did you know the QR Guidelines contain step-by-step instructions for conducting a reputation audit? And all of these considerations are things a machine can do as well. It might not go about it the same way a human would, but that’s fine, because I’m a human, and this is a great way for me to look for ways to boost reputation signals.
Which of these signals is
easiest to game or fake, and
therefore less valuable?
People are constantly trying to “game the system” – the challenge for search engines is to create a set of machine-readable inputs sophisticated enough to separate the wheat from the chaff.
The easier a signal is to game, the less value it’s likely to pass on its own. It’s likely that certain signals, like anchor text, are only valuable in context with other strong quality signals.
So take another look through the QR Guidelines and ask yourself: how would a machine look for these same signals?
https://twitter.com/jenstar
If you follow one person on the topic of QR Guidelines, make it Jennifer Slegg. She posts a detailed walkthrough every time an update is made.
Algorithm
Updates
For humans,
using
machines
What does it mean to “chase the algorithm”? If you chase it too much, it’s going to end up chasing you.
Keywords in title
tags are now
0.82% more
important!
Scrutiny of algorithm updates tends to over-focus on optimizing for inputs. If you do that, you’re focusing on the wrong information. But that
doesn’t mean that there’s nothing to be learned from algo updates.
You shouldn’t be solely reactive to algorithmic changes, but when they happen, it’s worth trying to understand what Google’s going for.
What can this tell me about
changes in my target market?
With that in mind, here’s what I look at in the wake of an update:
https://moz.com/blog/how-often-does-google-update-its-algorithm
First of all, Google updates multiple times per day. You may not even notice most of the changes – but if a change drastically impacts your site, it doesn’t really matter whether the rest of the index was affected. Information on updates is mostly useful at scale, and within your niche.
First question: how did this impact the sites I work on?
Page-Specific or Sitewide?
Is traffic impacted sitewide? If so, it’s likely that Google has devalued something you were benefiting from. If it’s specific pages, you’re probably not being “penalized.”
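As a rough triage sketch (not an official methodology), you can compare per-page organic traffic before and after an update to see whether a drop is sitewide or page-specific. The thresholds here are arbitrary assumptions – tune them for your own sites:

```python
# Compare per-page traffic before/after an update. If most pages
# dropped past the threshold, treat the hit as sitewide; if only a
# few did, it's page-specific. Thresholds are arbitrary.
def classify_impact(before, after, drop_threshold=0.2):
    dropped = [page for page in before
               if after.get(page, 0) < before[page] * (1 - drop_threshold)]
    if not dropped:
        return "no significant impact", dropped
    share = len(dropped) / len(before)
    return ("sitewide" if share > 0.5 else "page-specific"), dropped

# Hypothetical sessions-per-page counts around an update.
before = {"/a": 1000, "/b": 800, "/c": 600}
after = {"/a": 950, "/b": 790, "/c": 200}
```

With these numbers only `/c` lost more than 20% of its traffic, so the verdict is "page-specific" – the cue to look at that page's SERP rather than assume a sitewide devaluation.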
Source: https://blog.searchmetrics.com/us/google-june-2019-core-update/
Most of the time it’s not that you lost rankings - it’s that someone else gained them. What does the SERP say? Note that it is not always
possible to regain rankings or traffic for a keyword you’ve lost.
Sometimes what’s changed is the kind of page, or kind of information, that Google has determined users are looking for.
“Sometimes what users expect evolves
and...the way that we try to determine
relevance, they evolve as well.”
GWC Hangout: June 14, 2019
John Mueller
https://www.youtube.com/watch?v=x0qLJRivdmY
Understanding not the what, but the why of a given algorithmic pattern or change can give you insight into how your target audiences are using search to make decisions.
What do these changes say
about what my customers
are looking for?
How can I provide these
things for my users in ways
that Google can see and
understand?
There’s a Better Way to Classify
Search Intent
Kane Jamison
https://www.contentharmony.com/blog/classifying-search-intent/
How to Scrape Google SERPs to
Optimize for Search Intent
Rory Truesdale
https://www.searchenginejournal.com/scrape-google-serp-custom-extractions/267211/
When an algorithm is focused on improving Google's results quality in a certain respect or fixing an existing flaw, it's almost certainly not the
last time they'll try to improve there. An algo update around content quality probably means they're going to continue iterating on that – look at
what happened with Panda.
Source: https://blog.searchmetrics.com/us/google-core-algorithm-update-march-2019/
Even health sites that weren’t hit by the March update should have taken it as a wake-up call to start ramping up quality signals. Don’t ever
assume that changes in your niche are over.
Does Google look at [X thing]
as a ranking factor when
evaluating pages?
If it’s a machine-readable proxy to a human-readable quality signal, they’ve probably at least tried it. If it’s easy to game, they may have devalued
it or still be working on how to make it less gameable. Optimizing for human-readable quality and then making it machine-readable means you
can care less about “ranking factors” – worst case scenario, you’ve still improved things for humans.
Optimizing just for the algorithm is too reactive – optimizing just for humans is too inefficient. It has to be both.
Homework
Yes, I know it’s
only Day 1
Re-read the Quality Rater
Guidelines with machine-
readable quality signals in
mind
Consider: What is your site
currently benefiting from that
Google is likely to be trying to
discount or filter out?
Write clearer, simpler, (better)
more machine-readable
content
Improve topic salience using
Google’s NLP API
Examine SERPs to better
understand (and meet) intent
Create expert content and
promote your experts online
Conduct a reputation audit
Stop fighting about this stuff,
it’s boring

More Related Content

Human -> Machine -> Human: Ruth Burr Reedy MozCon 2019

  • 1. Let’s Talk About... Discourse in the SEO fandom is really heating up, along two notable axes:
  • 2. @ruthburr #MozConShould SEOs be “chasing the algorithm”? In other words, when Google updates one of its many, many algorithms, is it a good use of our time to try to figure out what’s changed?
  • 3. @ruthburr #MozConAnd, more recently (and arguably more heatedly): are Google’s Quality Rater Guidelines worth paying attention to? Or are they a bunch of smoke and mirrors on Google’s part?
  • 4. @ruthburr #MozConI don’t find this conversation very interesting TBH! But I get it – SEOs are busy, and it’s really frustrating to see advice being put out there that you know isn’t going to help people do SEO. It hurts to watch busy people waste their time. I think what both these conversations are missing is where these activities might fit in to a larger framework or approach to SEO. How do we know where to spend our time?
  • 5. Real Company Stuff Wil Reynolds @ruthburr #MozCon https://www.slideshare.net/wilreynolds/do-real-company-stuff-mozcon-2012-version People in the “that stuff is a waste of time” camp often point to Real Company Stuff – the idea that, as SEOs, we should be focusing more on doing great marketing at a big-picture level and less on things like making tweaks to title tags and anchor text. I don’t disagree with any of that…
  • 6. WHEN THE $50M AD CAMPAIGN POINTS TO A MICROSITE @ruthburr #MozCon…but it’s entirely possible to do these things in a way that does not create SEO success, which is a wasted opportunity. RCS is not an excuse not to learn SEO fundamentals or technical SEO.
  • 7. @ruthburr #MozConIf we focus solely on Google, we lose sight of our actual customer. But if we ignore Google to focus solely on the customer, we’re missing out on opportunities to connect with that customer online. It’s both! It has to be both.
  • 8. @ruthburr #MozCon Icons made by Freepik from Flaticon My personal framework for approaching changes in our industry is: we are humans, creating things for humans, with a machine serving as the intermediary between the two.
  • 9. @ruthburr #MozConHow do you “just create good content”? You figure out what information people need to complete their task and provide it. Where do you get that information? User testing is useful, but expensive – and you don’t always have access to your clients’ users.
  • 10. @ruthburr #MozCon Source: Sparktoro.com Thanks to their giant, scary, problematic market share, Google has more data than anyone on how people use search engines to find information.
  • 11. @ruthburr #MozCon Source: Learning Semantic Textual Similarity from Conversations They also have more data than anyone on how people talk about things (entities, topics, and the connections between them) using written language.
  • 12. @ruthburr #MozConParamount Pictures 1993 Google wants humans to find what they’re looking for when they search, which means Google spends a lot of time and money on helping their machine try to understand what people are looking for when they search.
  • 13. @ruthburr #MozConWhat do we mean when we say “Google”? Google is a COMPANY – humans making and communicating decisions about (among other things) search
  • 14. @ruthburr #MozCon Source: Multi-stage query processing system and method for use with tokenspace repository And Google is also the ALGORITHM – or, more accurately, a whole bunch of algorithms running consecutively and concurrently. At its very, very, simplest, an algorithm takes inputs (information), processes them, and then returns some sort of output.
  • 15. INPUTS OUTPUT Link volume Crawlability Keyword use Metadata As SEOs, we tend to focus on optimizing algorithmic inputs – because that’s what we can control. For success, we tend to focus on the output, i.e. the SERP. Hence the constant SEO chatter over whether or not something is a “ranking factor.”
  • 16. @ruthburr #MozConBut Google is focused on results - do people find what they want? The SERP is only a means to that end. We should be looking at Google’s outputs not as an end in and of themselves, but as information of how to achieve those results.
  • 17. @ruthburr #MozCon One way to understand Google is to leverage Google’s huge body of user data to understand the human-readable quality signals that Google (as both a company and an algorithm) have deduced are important, and then to create that quality, while optimizing machine-readable equivalents or proxies of those signals.
  • 18. @ruthburr #MozCon How can I demonstrate quality in ways that are both human-readable and machine-readable?
  • 22. On-Page SEO for 2019 Britney Muller @ruthburr #MozConhttps://moz.com/blog/on-page-seo-2019
  • 23. Is this a good page about this topic? @ruthburr #MozCon
  • 27. “Fits your industry and business processes Deploy a solution that delivers built-in best practices specific to your industry, the most flexible customer data model on the market today, and a highly configurable, tightly integrated platform, ensuring that solutions will be fast to implement.” @ruthburr #MozCon Keyword: customer relationship management Rank: 19 Understanding how well Google’s NLP AI can understand your content gives you insight into whether or not that content is sending strong “good page on this topic” signals. This is great because humans are actually terrible at creating readable content that’s clearly about what they think it’s about.
  • 28. @ruthburr #MozCon Clear Concise Accurate To-the-point Avoids jargon Covers subtopics Fortunately, optimizing for machine-readability has a lot in common with optimizing for human-readability.
  • 30. @ruthburr #MozCon Google’s NLP API extracts entities and assigns them a “salience score” between 0 and 1 based on how relevant to the piece of content as a whole they’ve determined each entity is. In the example above, you can see that the NLP API has successfully found “customer relationship management” to be the entity of the page, but without much confidence. Probably part of why this page ranks on page 2.
  • 31. On-Page SEO for NLP Justin Briggs @ruthburr #MozConhttps://www.briggsby.com/on-page-seo-for-nlp
  • 33. Ownership signals HTTPS Topic and author authority Positive link signals @ruthburr #MozCon
  • 34. @ruthburr #MozCon > < > Icons made by Freepik from Flaticon Link valuation is an example of a machine-readable equivalent of a human information-gathering experience. How would a human decide whether or not someone was credible, using social signals? Lots of people saying it is better than just one person; if all those people are related, we trust that less than if unrelated strangers all say the same thing; and an expert’s opinion carries more weight than some rando’s.
  • 35. All Links are Not Created Equal Cyrus Shepard @ruthburr #MozConhttps://moz.com/blog/20-illustrations-on-search- engines-valuation-of-links
  • 36. Is this site pleasant to use? @ruthburr #MozCon
  • 38. @ruthburr #MozConWhat’s interesting with Lighthouse is that we can see Google trying to evolve its understanding of page load time to more closely mimic the way a human would experience it - this is why metrics like time to interactive have become important alongside total page load time. The machine- readable and human-readable signals are the same.
  • 39. How to Run Lighthouse Reports at Scale James McNulty @ruthburr #MozConhttps://www.upbuild.io/blog/lighthouse-reports- multiple-pages/
  • 40. Quality Rater Guidelines For humans, by humans @ruthburr #MozConThe QR Guidelines were written by humans, for humans. They do not give insight into the algorithm(s), but they DO give insight into the quality signals Google is intending to send, and are useful through that lens
  • 41. @ruthburr #MozCon Source: How to Build Your Own Search Ranking Algorithm with Machine Learning Feedback from Quality Raters is likely used in two ways: to beta test changes in the algorithm and verify that returned results are relevant, and to create a data set of High Quality, Needs Met pages that Google use as a training set for machine learning.
  • 42. @ruthburr #MozCon “[The guidelines] don’t tell you how the algorithm is ranking results, but they… show what the algorithm should do. ” Ben Gomes Google VP of Search Engineering https://www.cnbc.com/2018/09/17/google-tests- changes-to-its-search-algorithm-how-search- works.html
  • 43. @ruthburr #MozConThe big topic coming out of the QR Guidelines lately is E-A-T. SEOs are spending a lot of time arguing about whether or not E-A-T is useful or important for SEO.
  • 44. @ruthburr #MozCon Is E-A-T a ranking factor? No.
  • 45. @ruthburr #MozCon How expert is this content? 7! E-A-T isn’t a ranking factor for the same reason “brand” isn’t a ranking factor – because things like “expertise” aren’t a single, quantifiable quality that an entity does or doesn’t have. They’re made up of dozens, perhaps even hundreds, of discrete smaller signals.
  • 46. @ruthburr #MozCon Is E-A-T a ranking factor? No. I don’t care.
  • 47. @ruthburr #MozCon You don’t know what you’re talking about! I don’t care if E-A-T is a “ranking factor” or not because that’s not how that information is intended to be used. E-A-T is something humans at Google have said to other humans. E-A-T and its attendent signals are human-readable ways of thinking about the question, “How do we tell if someone knows what they’re talking about?”
  • 48. @ruthburr #MozConMarie Haynes is going to talk more about optimizing specifically for E-A-T later in the conference, so I’m not going to go into it here, but what I’m interested in when I read about E-A-T is:
  • 49. @ruthburr #MozCon Which of these signals is machine-readable?
  • 50. @ruthburr #MozCon “High E-A-T medical advice should be written or produced by people or organizations with appropriate medical expertise or accreditation…” I know how I would go about figuring out if someone had appropriate medical expertise – where might a machine look for that same information online? This is an area in which actually shelling out for real experts to create your expert content pays off; the difference is clear in the content itself but also in the expertise signals that you can build online.
  • 51. @ruthburr #MozCon “..[and] written or produced in a professional style and should be edited, reviewed, and updated on a regular basis.” These signals are easy, Google can find all of that on page. Great! We can optimize for that.
  • 52. @ruthburr #MozCon How might a machine find the same information in another way?
  • 53. @ruthburr #MozCon Which of these signals can we boost? What can we make easier for a machine to find and read?
  • 54. 2.6.4 How to Search for Reputation Information @ruthburr #MozConDid you know the QR Guidelines contain step-by-step instructions for conducting a link reputation audit? And all of these considerations are things a machine can do as well. They might not go about it the same way a human would, but that’s fine, because I’m a human, and this is a great way for me to look for ways to boost reputation signals.
  • 55. @ruthburr #MozCon Which of these signals is easiest to game or fake, and therefore less valuable?
  • 56. @ruthburr #MozConPeople are constantly trying to “game the system” - the challenge for search engines is to create a sophisticated enough set of machine- readable inputs that will separate the wheat from the chaff
  • 57. @ruthburr #MozConThe easier a signal is to game, the less value it’s likely to pass on its own. It’s likely that certain signals like anchor text are only valuable in context with other strong quality signals.
  • 58. @ruthburr #MozConCarolco Pictures 1991So take another look through the QR guidelines and ask yourself: how would a machine look for these same signals?
  • 59. @ruthburr #MozCon https://twitter.com/jenstar If you follow one person on the topic of QR Guidelines, make it Jennifer Slegg. She posts a detailed walkthrough every time an update is made.
  • 61. @ruthburr #MozCon 61 @ruthburr #MozConCarolco Pictures 1991What does it mean to ”chase the algorithm”? If you chase it too much, it’s going to end up chasing you.
  • 62. @ruthburr #MozCon Keywords in title tags are now 0.82% more important! Scrutiny of algorithm updates tends to over-focus on optimizing for inputs. If you do that, you’re focusing on the wrong information. But that doesn’t mean that there’s nothing to be learned from algo updates.
  • 63. @ruthburr #MozCon@ruthburr #MozConYou shouldn’t be solely reactive to algorithmic changes, but when they happen, it’s worth trying to understand what Google’s going for.
  • 64. @ruthburr #MozCon What can this tell me about changes in my target market?
  • 65. @ruthburr #MozConCarolco Pictures 1991 With that in mind, here’s what I look at in the wake of an update:
  • 66. @ruthburr #MozCon https://moz.com/blog/how-often-does-google-update-its-algorithm First of all, Google updates multiple times per day. You may not even notice most of the changes – but if a change drastically impacts your site, it doesn’t really matter if the rest of the index is affected. Information on updates is mostly useful at scale, and within your niche.
  • 67. @ruthburr #MozConFirst question: how did this impact the sites I work on?
  • 68. @ruthburr #MozCon Page-Specific or Sitewide? Is traffic impacted sitewide? If so, it’s likely that Google has devalued something you were benefitting from. If it’s specific pages, you’re probably not being “penalized.”
  • 69. @ruthburr #MozCon Source: https://blog.searchmetrics.com/us/google-june-2019-core-update/ Most of the time it’s not that you lost rankings - it’s that someone else gained them. What does the SERP say? Note that it is not always possible to regain rankings or traffic for a keyword you’ve lost.
  • 70. @ruthburr #MozConSometimes it’s that the kind of page, or kind of information, that Google has determined users are looking for has changed.
  • 71. @ruthburr #MozCon “Sometimes what users expect evolves and...the way that we try to determine relevance, they evolve as well.” GWC Hangout: June 14, 2019 John Mueller https://www.youtube.com/watch?v=x0qLJRivdmY
  • 72. @ruthburr #MozConUnderstanding not the what, but the why of a given algorithmic pattern or change can give you insight into how your target audiences are using search to make decisions.
  • 73. @ruthburr #MozCon What do these changes say about what my customers are looking for?
  • 74. @ruthburr #MozCon How can I provide these things for my users in ways that Google can see and understand?
  • 75. There’s a Better Way to Classify Search Intent Kane Jamison @ruthburr #MozConhttps://www.contentharmony.com/blog/classifyin g-search-intent/
  • 76. How to Scrape Google SERPs to Optimize for Search Intent Rory Truesdale @ruthburr #MozConhttps://www.searchenginejournal.com/scrape- google-serp-custom-extractions/267211/
  • 77. @ruthburr #MozCon Carolco Pictures 1991 When an algorithm is focused on improving Google’s results quality in a certain respect or fixing an existing flaw, it’s almost certainly not the last time they’ll try to improve there. An algo update around content quality probably means they’re going to continue iterating on that – look at what happened with Panda.
  • 78. @ruthburr #MozCon Source: https://blog.searchmetrics.com/us/google-core-algorithm-update-march-2019/ Even health sites that weren’t hit by the March update should have taken it as a wake-up call to start ramping up quality signals. Don’t ever assume that changes in your niche are over.
  • 79. @ruthburr #MozCon Does Google look at [X thing] as a ranking factor when evaluating pages? If it’s a machine-readable proxy to a human-readable quality signal, they’ve probably at least tried it. If it’s easy to game, they may have devalued it or still be working on how to make it less gameable. Optimizing for human-readable quality and then making it machine-readable means you can worry less about “ranking factors” – worst-case scenario, you’ve still improved things for humans.
  • 80. @ruthburr #MozCon Optimizing just for the algorithm is too reactive – optimizing just for humans is too inefficient. It has to be both.
  • 81. Homework (Yes, I know it’s only Day 1) @ruthburr #MozCon
  • 82. @ruthburr #MozCon Re-read the Quality Rater Guidelines with machine- readable quality signals in mind
  • 83. @ruthburr #MozCon Consider: What is your site currently benefiting from that Google is likely to be trying to discount or filter out?
  • 84. @ruthburr #MozCon Write clearer, simpler (better!), more machine-readable content
  • 85. @ruthburr #MozCon Improve topic salience using Google’s NLP API
  • 86. @ruthburr #MozCon Examine SERPs to better understand (and meet) intent
  • 87. @ruthburr #MozCon Create expert content and promote your experts online
  • 88. @ruthburr #MozCon Conduct a reputation audit
  • 89. @ruthburr #MozCon Stop fighting about this stuff, it’s boring
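The homework items on machine-readable content and topic salience can be made concrete. Google’s Cloud Natural Language API returns a salience score (0–1) for each entity it detects in a document; the sketch below is a deliberately crude stand-in for that idea – a toy heuristic scoring candidate entities by frequency and how early they first appear, not Google’s actual model – just to show what “how central is this entity to the page?” looks like as a number.

```python
def toy_salience(text, entities):
    """Score candidate entities by frequency and how early they first
    appear, normalized to sum to 1. A toy heuristic for illustration --
    Google's Natural Language API computes salience with a far more
    sophisticated model."""
    lower = text.lower()
    scores = {}
    for entity in entities:
        e = entity.lower()
        count = lower.count(e)
        if count == 0:
            scores[entity] = 0.0
            continue
        # Entities mentioned earlier get a modest boost (0.0 = opens the text).
        first = lower.find(e) / max(len(lower), 1)
        scores[entity] = count * (1.0 - 0.5 * first)
    total = sum(scores.values()) or 1.0
    return {e: round(s / total, 3) for e, s in scores.items()}

text = ("Technical SEO starts with crawlability. Crawlability problems "
        "hide good content from Google, so fix crawlability first.")
print(toy_salience(text, ["crawlability", "content", "Google"]))
```

If you run the real API on your own pages, the check is the same: does the entity you want the page to be about actually come out with the highest salience? If not, that’s a rewrite signal.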

Editor's Notes

  1. The discourse.
  2. SEOs talk a lot about following algo updates...
  3. ...and whether paying attention to the QR Guidelines is worthwhile
  4. I don’t find this conversation very interesting TBH! But I get it – SEOs are busy, and it’s really frustrating to see advice being put out there that you know isn’t going to help people do SEO. It hurts to watch busy people waste their time. I think what both these conversations are missing is where these activities might fit in to a larger framework or approach to SEO. How do we know where to spend our time?
  5. People in the “ignore that” camp often focus on Real Company Shit
  6. I don’t disagree with this at all - but it’s entirely possible to do these things in a way that does not create SEO success, which is a wasted opportunity. RCS is not an excuse not to learn SEO fundamentals or technical SEO.
  7. It’s both! We can’t optimize for just humans or just Google. It has to be both.
  8. My personal framework for approaching changes in our industry is: we are humans, creating things for humans, with a machine serving as the intermediary between the two.
  9. How do you “just create good content”? You figure out what information people need to complete their task and provide it. Where do you get that information? User testing is useful, but expensive – and you don’t always have access to your clients’ users.
  10. Google has more data than anyone on how people use search engines to find information
  11. They also have more data than anyone on how people talk about things, i.e., entities and topics, using written language.
  12. Google wants humans to find what they're looking for when they search, which means Google spends a lot of time trying to understand what people are looking for when they search.
  13. There is Google the COMPANY: Humans making and communicating decisions about what’s important to the company
  14. And there is Google the MACHINE: algorithms on top of algorithms
  15. SEO tactics often come down to optimizing algorithmic inputs and tracking algorithmic outputs - hence the constant discussion over whether or not something is a “ranking factor.”
  16. But Google is focused on results - do people find what they want? The SERP is only a means to that end. We should be looking at Google’s outputs not as an end in and of themselves, but as information about how to achieve those results.
  17. One way to understand Google is to leverage Google’s huge body of user data to understand the human-readable quality signals that Google (as both a company and an algorithm) has deduced are important, and then to create that quality while optimizing machine-readable equivalents or proxies of those signals.
  18. The question then becomes: how can I demonstrate quality in a machine-readable way?
  19. A human might ask, “What is this page about?”
  20. How does a robot answer “What is this page about?” -> Keyword use; synonyms, stems, and close variants; topic modeling, TF-IDF (need to add a graphic of a robot here, looking at the factors, or otherwise convey the concept)
  21. “Is this page a good example of a page about this thing?” (need to add a graphic of a human person here, having the thought, or otherwise convey the concept)
  22. “Is this page a good example of a page about this thing?” (need to add a graphic of a robot here, looking at the factors, or otherwise convey the concept)
  23. Natural language processing and neural matching
  24. This is great because humans are terrible at creating content that’s about what they think it’s about!
  25. Fortunately, optimizing for NLP has a lot in common with optimizing for readability.
  26. Natural language processing
  27. Natural language processing
  28. “Can this site be trusted?” (need to add a graphic of a human person here, having the thought, or otherwise convey the concept)
  29. “Can this site be trusted?” - Trust factors, HTTPS, author entity authority (need to add a graphic of a robot here, looking at the factors, or otherwise convey the concept)
  30. Google’s evaluation of link signals is already very similar to how humans decide if information is credible
  31. “Is this site easy to use?” (need to add a graphic of a human person here, having the thought, or otherwise convey the concept)
  32. “Is this site easy to use?” - Page speed, crawlability, UX factors (need to add a graphic of a robot here, looking at the factors, or otherwise convey the concept)
  33. What’s interesting here is that we can see Google trying to evolve its understanding of page load time to more closely mimic the way a human would experience it - this is why metrics like time to interactive have become important alongside total PLT
  34. So now that we’ve reminded ourselves of what we’re trying to do, let’s revisit those two SEO battlegrounds we talked about earlier. The QR Guidelines do not give insight into algorithmic inputs, but they DO give insight into the quality signals Google wants to see - they’re useful through that lens.
  35. Used for two things: to TEST and to TRAIN
  36. E-A-T
  37. Expertise, authoritativeness, and trust are not quantifiable qualities that an entity either does or does not have (need to add graphics here of a human and a robot having this same conversation)
  38. These are human-readable ways of thinking about the question “how do we tell if someone knows what they’re talking about?”
  39. Marie is going to talk more about this later and you should listen to her
  40. What, of those signals, is machine-readable?
  41. How might a machine find the same information in another way?
  42. What, of those signals, can we boost?
  43. What, of those signals, is easiest to game/fake, and therefore likely to be weighted more lightly?
  44. People are constantly trying to “game the system” - the challenge for search engines is to create a sophisticated enough set of machine-readable inputs that will separate the wheat from the chaff
  45. The easier a signal is to game, the less value it’s going to pass on its own - it’s likely that certain signals around e.g. keyword relevance are only valuable in context with quality signals.
  46. So take another look through the QR guidelines and ask –what would a machine do?
  47. If you follow one person w/r/t QRGs make it Jen Slegg
  48. What does it mean to “chase the algorithm”?
  49. The end result should not be optimizing for inputs (keywords in title tags are now 0.82% more important). If you do that, you’re focusing on the wrong information. But that doesn’t mean that there’s nothing to be learned from algo updates.
  50. You shouldn’t be solely reactive to algorithmic changes, but when they happen, it’s worth trying to understand them to validate that you’re on the right track
  51. With that in mind, here’s what I look at in the wake of an algorithmic update:
  52. Information is usually mostly useful at scale, and within your niche. Talk about recent update re: health sites.
  53. How, if at all, did this impact the sites I work on?
  54. Across the board? Check SEO Twitter, WebmasterWorld Forums. What do other impacted sites have in common with mine?
  55. Most of the time it’s not that you lost rankings - it’s that someone else gained them. What does the SERP say? It is not always possible to regain rankings or traffic for a keyword you’ve lost. Sometimes it’s not a matter of a penalty, or of something that was working no longer working.
  56. Sometimes it’s that the kind of page, or the kind of information, that Google has determined users are looking for has changed E.g. “best whatever” almost always returns aggregators over individual whatever providers
  57. Understanding the end goal (not what, but why?) of a given algorithmic pattern or change can give you insight into how your target audiences are using search to make decisions
  58. Ask yourself: What do these things say about what my customers are looking for?
  59. How can I provide those things... in ways that Google can see and understand?
  60. There's a compelling argument to be made that when an algorithm is focused on improving Google's results quality in a certain respect or fixing an existing flaw, it's almost certainly not the last time they'll try to improve there. An algo update around content quality probably means they're going to continue iterating on that.
  61. Like with health sites, it would have behooved anyone to start really ramping up the quality signals on their sites after the March update
  62. Does Google look at (X Thing) as a ranking factor? If it’s a machine-readable proxy to a human-readable quality signal, they’ve probably at least tried it. If it’s easy to game, they may still be working out how to make it less gameable. You should want high organic CTR and good dwell time, because they’re a signal that you’re providing humans with what they want.
  63. That’s the future.
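The “page-specific or sitewide?” triage from slide 68 and the notes above can be sketched as a first-pass script. Assuming you can export per-URL organic sessions for comparable periods before and after an update, the function below classifies the impact by the share of pages with a meaningful drop; the 20% drop threshold and 50% sitewide cutoff are illustrative assumptions, not a Google-endorsed method.

```python
def classify_update_impact(before, after, drop_threshold=0.2, sitewide_share=0.5):
    """Classify an algorithm update's impact as 'sitewide', 'page-specific',
    or 'no material drop', based on the share of pages whose organic traffic
    fell by more than drop_threshold. Thresholds are illustrative, not canonical.

    before/after: dicts mapping URL -> organic sessions for comparable periods.
    """
    dropped = []
    for url, b in before.items():
        a = after.get(url, 0)
        if b > 0 and (b - a) / b > drop_threshold:
            dropped.append(url)
    if not dropped:
        return "no material drop", dropped
    share = len(dropped) / max(len(before), 1)
    if share >= sitewide_share:
        return "sitewide", dropped
    return "page-specific", dropped

before = {"/a": 1000, "/b": 800, "/c": 600, "/d": 400}
after  = {"/a": 950,  "/b": 300, "/c": 580, "/d": 390}
label, pages = classify_update_impact(before, after)
print(label, pages)  # → page-specific ['/b']
```

A sitewide result suggests Google devalued something the whole site was benefiting from; a page-specific result points you at the affected SERPs to see who gained and why.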