The platform can set up all of the filters and views in Google Analytics automatically. There's also a free version, so you can build one on your own.
This document provides an overview of 11 advanced uses for the SEO tool Screaming Frog. It begins with an introduction to Screaming Frog and its history of updates. It then discusses using Screaming Frog to crawl tricky sites like those with JavaScript, large sites, or password protection. Other sections cover scheduling crawls, integrating APIs like Google Analytics and Ahrefs, and performing post-crawl analysis of things like pagination, Hreflang, and XML sitemaps. Later sections discuss visualizations, auditing structured data and page speed, and using Screaming Frog for content marketing tasks like scraping news sites. The document concludes with some bonus uses like reviving old Google Search Console reports and scraping SERP features.
A technical tour de force of tips, tricks, and practical guides on how to squeeze every possible bit of performance from your WordPress site. Jono will myth-bust performance issues and security worries; demonstrate the ultimate combination of hosting configurations, plugins, and technical SEO; and give you the tools you need to outperform the competition.
Are you sending mixed messages to Google? There are many different signals that feed into URL selection for search engines, and when these signals aren’t implemented correctly, search engines have to make their own assumptions about your website and what’s important. In this talk, Rachel will share examples where a website’s signals can be ignored or overruled, as well as how to test your site setup. Don’t leave anything to chance – be sure that the most important areas of your site are getting the attention they deserve.
The document summarizes a presentation about location-free local SEO. It discusses categorizing queries as implicitly local, explicitly local, or "near me"; categorizing search results as local pages, national pages, or local businesses; and assessing visibility share across categories. It cautions against creating too many irrelevant local landing pages and provides tips for implementation, including internal linking, on-page copy, and starting small. The key lessons are to analyze search results using the provided framework, focus on useful on-page content instead of filler text, and begin with a limited scope.
With more and more publishers adopting a paywall due to the decline in print circulation and revenue, media brands are looking to subscription models for the future of digital news and journalism. But how do paywalls impact SEO? Learn the technical SEO best practices for integrating a paywall, discover the most common mistakes to avoid, and get strategic insight into how Independent.ie went from 0 to 35k paying subscribers in a year. And think beyond content when it comes to your paywall SEO strategy.
Google Discover is a content recommendation feature on Google mobile apps and Chrome. The document discusses measuring Discover traffic, which can only be reliably tracked in Google Search Console. It also provides tips for optimizing content for Discover, including using clicky headlines, 1200px images, and structured data schemas. While some advice suggests AMP, links, or rapid indexing, the document found clicky headlines, large images, and schema were most important. It also warns that some reported traffic in analytics may actually be from Discover.
The document provides an overview of technical SEO best practices. It discusses on-page SEO elements like titles, meta descriptions, headings, images and URLs. It also covers technical aspects like sitemaps, indexing APIs, robots.txt files, redirects and canonical tags. The document recommends prioritizing content, links and proper indexing as the most important ranking factors. It also lists various tools for SEO audits, monitoring and troubleshooting technical issues.
This document discusses using the Natural Language Toolkit (NLTK) for keyword research and analysis. It provides instructions on installing NLTK and other Python libraries, preparing keyword data, and running scripts to classify and cluster keywords to identify trends and topics. The document demonstrates how to automate aspects of keyword research using NLTK to help analyze large datasets.
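The clustering step described above can be sketched in miniature. The snippet below groups keywords that share a stemmed token; the keyword list is invented, and the naive suffix-stripping stemmer is a dependency-free stand-in for the NLTK `PorterStemmer` workflow the deck actually uses.

```python
from collections import defaultdict

def naive_stem(token: str) -> str:
    # Crude stand-in for NLTK's PorterStemmer: strip a few common suffixes
    for suffix in ("ing", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def cluster_keywords(keywords):
    """Group keywords that share a stemmed token - the core of topic clustering."""
    clusters = defaultdict(list)
    for kw in keywords:
        for stem in {naive_stem(t) for t in kw.lower().split()}:
            clusters[stem].append(kw)
    # Keep only stems shared by two or more keywords
    return {s: kws for s, kws in clusters.items() if len(kws) > 1}

kws = ["buy running shoes", "best running shoe", "shoe store near me"]
print(cluster_keywords(kws))
```

On real datasets you would swap the naive stemmer for `nltk.stem.PorterStemmer` and feed in thousands of keywords; the grouping logic stays the same.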
After focusing in previous years on the future of search technologies, this presentation focuses instead on the future of marketing for search. What do we need to change about our marketing to continue to appeal to a changing Google?
In today’s always-on world our “to do” lists never seem to shrink. Fortunately, when it comes to SEO there are ways to work faster AND better. One sure-fire way to increase efficiency and effectiveness is automation. Join Catalyst’s Paul Shapiro as he discusses specific ways to use automation to deliver better results in less time. You’ll leave with an understanding of how automation technology can simplify technical SEO processes. Audiences will learn how to:
• Leverage SQL databases to automatically collect data from Google Search Console over time
• Automate keyword research with an open-source tool called KNIME
• Use programming concepts, such as regex for data extraction, and work with APIs to enhance your data analysis
• Implement data visualization strategies to quickly recognize critical patterns and trends
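As a small taste of the regex-for-data-extraction idea above, here is a minimal sketch that labels search queries as branded or non-branded — a common first step when automating Search Console analysis. The brand terms (`acme`, `acmeshop`) and the queries are hypothetical placeholders.

```python
import re

# Hypothetical brand terms; swap in your own brand's variants
BRAND_PATTERN = re.compile(r"\b(acme|acmeshop)\b", re.IGNORECASE)

def classify_query(query: str) -> str:
    """Label a search query as branded or non-branded via a regex match."""
    return "branded" if BRAND_PATTERN.search(query) else "non-branded"

for q in ["acme discount code", "best hiking boots", "AcmeShop returns"]:
    print(q, "->", classify_query(q))
```

The same pattern-matching approach extends to extracting product SKUs, locations, or question modifiers from large query exports.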
Craig Campbell provides information on setting up and using private blog networks (PBNs) effectively for search engine optimization. He discusses the risks of low-quality PBNs and emphasizes building unique, natural-looking sites with relevant content, local citations, social profiles and varied hosting/IP addresses. Campbell also introduces several tools for automating social profiles, managing PBNs and monitoring rankings. While PBNs can increase traffic, he warns of manual penalties from Google if sites leave footprints and advises an overall SEO strategy before relying on PBNs.
There is a lot to cover about SEO for large and enterprise websites. In this talk we'll cover primarily the data analysis and the technical SEO side of things. In future presentations we'll look at more.
This document discusses leveraging information architecture for ecommerce SEO. It addresses the importance of category/product listing pages, opening these pages up to Googlebot, and leveraging keyword research. Specifically, it recommends allowing limited access to filters on category pages to improve searchability, developing canonicalization rules, and using keyword data to determine optimal product attributes and filters to structure category page filtering and content. The goal is to improve organic traffic to category pages by better utilizing information architecture principles.
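The canonicalization rule described above might look like this in a category page template — a sketch, assuming a hypothetical `/shoes` category with `color` and `sort` filter parameters:

```html
<!-- On filtered variants such as /shoes?color=red&sort=price,
     point search engines back at the clean category URL -->
<link rel="canonical" href="https://example.com/shoes" />
```

Which filter combinations get their own indexable URLs (and which canonicalize back to the parent category) is exactly the decision the keyword research above is meant to inform.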
Webinar with Craig Smith, Founder and CEO of Trinity Insight, in which I talk about how to get more work done faster with fewer resources to drive the performance of your SEO program and increase traffic.
My deck from SMX London 2019 on merging logfiles with data from GA, GSC and web crawling for better SEO insights.
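A toy version of that merge: joining crawl output with log-file hit counts by URL to surface orphan pages. The URLs and counts below are invented; in practice the inputs would come from your crawler export and parsed server logs, and the GA/GSC joins in the deck follow the same key-by-URL pattern.

```python
def merge_by_url(crawl, log_hits):
    """Join crawl data with log-file hit counts, flagging orphan URLs."""
    urls = set(crawl) | set(log_hits)
    return {
        u: {
            "status": crawl.get(u, {}).get("status"),  # None if never crawled
            "bot_hits": log_hits.get(u, 0),            # 0 if never requested
            "orphan": u not in crawl,                  # in logs, absent from crawl
        }
        for u in urls
    }

crawl = {"/": {"status": 200}, "/old-page": {"status": 301}}
log_hits = {"/": 1200, "/orphan": 40}
merged = merge_by_url(crawl, log_hits)
print(merged["/orphan"])
```

Pages that appear in the logs but not in the crawl (or vice versa) are often the most interesting rows in the merged table.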
At SMXL, I presented a talk about crafting effective, authoritative content by understanding entities. People, places, objects, and ideas have facets. Human users have unique perspectives and their language changes as their relationship to an entity changes. It's time we stop chasing keywords-- a byproduct of search intent-- in favor of strategic entity-based strategy. This deck includes insights into how to access the data behind Google's knowledge graph, use external links to boost the search engine's understanding, and ways to become an authoritative and trusted source.
This document outlines a step-by-step process for SEO professionals to accurately predict organic website traffic. It involves compiling a comprehensive keyword list, categorizing keywords, analyzing competition for each keyword, predicting keyword ranking changes over time, and using historic click-through rates to estimate future traffic from predicted rankings. By tracking keyword rankings and volumes month-to-month, an SEO professional can use this process to forecast 90% of future organic traffic to a website.
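The final step of that process — turning predicted rankings into a traffic estimate via click-through rates — can be sketched as below. The CTR-by-position curve and the keyword volumes are illustrative assumptions; in practice you would derive the curve from your own Search Console data.

```python
# Hypothetical CTR-by-position curve; real curves come from your own GSC data
CTR_BY_RANK = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def forecast_traffic(keywords):
    """Estimate monthly organic clicks from predicted rank and search volume."""
    total = 0.0
    for kw in keywords:
        ctr = CTR_BY_RANK.get(kw["predicted_rank"], 0.02)  # long-tail fallback
        total += kw["volume"] * ctr
    return round(total)

kws = [
    {"keyword": "running shoes", "volume": 10000, "predicted_rank": 3},
    {"keyword": "trail shoes", "volume": 4000, "predicted_rank": 1},
]
print(forecast_traffic(kws))  # 10000*0.10 + 4000*0.28 = 2120
```

Re-running this month over month against updated rank predictions is what produces the rolling forecast the document describes.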
The document discusses keyword research and strategies for understanding user intent. It provides tips for mapping keywords to user journeys and personas to gain deeper insights. Various tools for keyword research are also mentioned, including APIs that can be used to gather additional data without coding. Pivot charts and other visualizations are suggested to analyze keyword opportunities based on metrics like search volume and difficulty.
The document discusses the limits of machine learning algorithms and how experiments can help uncover unknown factors. It describes an experiment where the author built a machine learning model to predict search rankings but found it was overfitting the data by just learning domain hierarchies. Split testing various SEO strategies on real websites showed some conventional techniques like alt text had no effect, while others like adding structured data increased organic traffic. The author argues split testing is important to identify practices that truly impact search rankings.
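One dependency-free way to check whether a split-test uplift like the structured-data result above is more than noise is a permutation test. This sketch (with invented session counts) shuffles the control/variant labels and asks how often a difference as large as the observed one appears by chance; it is an illustration of the general idea, not the author's specific testing setup.

```python
import random

def permutation_test(control, variant, n_iter=5000, seed=42):
    """Two-sided p-value for the difference in means via label shuffling."""
    rng = random.Random(seed)
    observed = sum(variant) / len(variant) - sum(control) / len(control)
    pooled = list(control) + list(variant)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        c, v = pooled[: len(control)], pooled[len(control):]
        diff = sum(v) / len(v) - sum(c) / len(c)
        # Count shuffles at least as extreme as the observed difference
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_iter

# Invented daily organic sessions for control vs. variant page groups
control = [120, 131, 115, 128, 140, 122, 119, 134]
variant = [141, 150, 138, 155, 147, 160, 139, 152]
print(permutation_test(control, variant))
```

A small p-value suggests the uplift is unlikely under the null of no effect; a value near 1.0 means the change did nothing measurable — the "alt text" case in the summary above.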
Mike King examines the state of the SEO industry and explains how an understanding of information retrieval will improve our understanding of Google. This talk debuted at MozCon.
Exploring how you can harness the huge amounts of data available to build an effective, empirically-led SEO strategy using machine learning resources such as natural language processing (NLP). Including useful and practical tips on areas such as topic modelling, categorisation and clustering, so you can get started on using NLP in your own SEO strategy right away.
Showing the complexity of Google's search results, and the lack of understanding we generally have of what works and what doesn't, meaning we need to use a more scientific approach. Finally, a set of lessons and data from split tests we have run.
This presentation describes what Search Analytics is, what value it brings to the table, how it can be used, and what additional functionality and value can be built with search data.
From past experiences with data, Dave knows relying on your gut can be a mistake. Instead, we need to take comfort in the validation of solid data to ensure we’re making profitable decisions. Sharing real client examples, Dave will run through the essential steps: how to decide on a hypothesis, create conditions, and gather data.