40 Deep #SEO Insights for 2023: In 2022, I said to focus on Natural Language Generation, and it happened. In 2023, F-O-C-U-S on "Information Density, Richness, and Unique Added Value" with Microsemantics. I call the collection of these "Information Responsiveness". 1/40 🧵
1. PageRank increases its prominence for weighting sources. Reason: #AI and automation will bloat the web, and the real authority signals will come from PageRank and exogenous factors. Expert-like AI content and real expertise are differentiated by historical consistency.
2. Indexing and relevance thresholds will increase. Reason: A bloated web creates the need for unique value to be added to the web through real-world expertise and organizational signals. Knowledge domain terms, along with #PageRank, will be important for the future of a web source.
3. AI and #automation filters will be created. Reason: Google needs to filter websites that publish 500 articles a day on multiple topics to find non-expert websites. This is already happening.
4. #Google will start to make mistakes in filtering websites that use spam and AI. Reason: The need for AI-generated content filtration forced Google to check and audit "momentum", in other words, content publication frequency. I first used "momentum" in the TA Case Study.
5. Google uses #Author Vectors and Author Recognition. Reason: LLMs use certain language styles and word sequences, leaving a watermark behind. It is easy to tell which websites do not use a real expert for their articles and content.
6. #Microsemantics will be the name of the next game. Reason: Bloat on the web will create bigger web document clusters, and being a representative source will matter more. Thus, micro-differences inside the content will create higher unique value.
7. Custom #LLMs will be rented. Reason: Custom and unique LLMs will be trained and rented to people who try to create 100 websites with 100,000 content items per website. NLP in SEO will show its true monetary value in mid-2023.
8. Advanced Semantic SEO will be a must for every SEO. Reason: 20-year-old websites will lose their rankings to new websites that arrive with 60,000 articles. This creates the need for advanced #Semantics and Linguistics capabilities for SEOs.
9. Cost-of-retrieval will be a base concept for #SEO, as TA is. Reason: TA explains a big portion of how the web works. Information Responsiveness and Cost-of-retrieval will complete it further. I will publish two books covering only these two concepts.
10. Google Keys. Reason: The biggest Google leak since the Quality Rater Guidelines will happen in 2023. I will be involved, but no more information for now; I am not allowed to share more.
Check the slides for the next SEO Insights for 2023. #searchengineoptimization #future #nlp #semantic #chatgpt #ai #content #quality #publishing #trend #seotrend #seo #searchengineoptimisation
This document summarizes how Google search results are evolving to include more semantic data through direct answers, structured snippets, and rich snippets. It provides examples of direct answers being extracted from authoritative sources using natural language queries and intent templates. It also discusses how including structured data like tables, schemas, and markup can help search engines understand and display page content in a more standardized way. While knowledge-based trust is an interesting concept, current search ranking still primarily relies on link analysis and does not consider factual correctness.
The document describes a Python script that can automatically generate new subcategories for an ecommerce website based on clustering product names. It discusses: - Using NLTK to generate n-grams from product names to cluster related products - Filtering the n-grams to keep only those with commercial value by checking for search volume and CPC data - Running the script on a large home improvement site to identify over 1,650 new subcategory opportunities with a total search volume of over 13 million - Sharing the script so others can automate subcategory identification for their own sites to scale up an important SEO tactic.
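The n-gram clustering step described above can be sketched in a few lines. The original script uses NLTK and then filters candidates against search volume and CPC data from a keyword API; this minimal stdlib stand-in only does the clustering part, and the product names and thresholds are illustrative assumptions, not taken from the talk.

```python
from collections import defaultdict


def candidate_subcategories(product_names, n=2, min_products=2):
    """Group products by shared n-grams drawn from their names.

    Every n-gram appearing in at least `min_products` names becomes a
    candidate subcategory. (The original script additionally filters
    candidates by search volume and CPC, which requires an external
    keyword-data API and is omitted here.)
    """
    clusters = defaultdict(set)
    for name in product_names:
        tokens = name.lower().split()
        for i in range(len(tokens) - n + 1):
            gram = " ".join(tokens[i:i + n])
            clusters[gram].add(name)
    return {gram: names for gram, names in clusters.items()
            if len(names) >= min_products}


products = [
    "cordless drill 18v",
    "cordless drill 12v",
    "hammer drill corded",
    "cordless screwdriver set",
]
subcats = candidate_subcategories(products)
# "cordless drill" groups the two cordless drills into one candidate subcategory
```

At site scale, the surviving n-grams would then be checked against keyword data before becoming real subcategory pages.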
A look at Google search-related patents that SEO practitioners may be interested in learning about.
Bill Slawski presented a webinar on analyzing patents related to search engines and SEO. He discussed 12 Google patents covering topics like PageRank, Google's news ranking algorithm, analyzing images to detect brand penetration, and building user location history. The patents described Google's work in building knowledge graphs from web pages, ranking entities in search results, question answering, and determining quality visits to local businesses.
Google's search results now include entities and concepts. Entities refer to people, places, and things, and 20-30% of queries are for named entities. Google uses metadata sources like Freebase to build a taxonomy of entities and their relationships. This supports features like the Knowledge Graph, which provides information panels, and allows querying of nearby entities, which may soon be available in search results.
What percentage of an Inbound marketer's day doesn't involve working with spreadsheets? How much of this work is time-consuming and repetitive? In this interactive session, you will learn how to manipulate Google Sheets to automate common data analysis workflows using Python, a very easy to use programming language.
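The kind of repetitive spreadsheet work the session targets can be sketched as a small aggregation script. The session works against Google Sheets via its API; this sketch reads a plain CSV string instead so it runs without credentials, and the column names (`page`, `clicks`) are illustrative assumptions, not from the talk.

```python
import csv
import io
from collections import defaultdict


def clicks_by_landing_page(csv_text):
    """Sum clicks per landing page from a Search-Console-style export.

    Stands in for a Google Sheets tab: each row has an illustrative
    'page' and 'clicks' column, and totals are aggregated per page.
    """
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["page"]] += int(row["clicks"])
    return dict(totals)


report = """page,clicks
/blog/a,120
/blog/b,80
/blog/a,40
"""
result = clicks_by_landing_page(report)
# result → {'/blog/a': 160, '/blog/b': 80}
```

Swapping the CSV source for a Sheets API client would make the same aggregation run directly on a live spreadsheet.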
This document discusses digital marketing strategies focused on establishing authority through valuable, timeless content. It recommends creating content such as articles, videos, and academic papers on topics that will remain relevant for years in order to establish expertise. Creating a steady stream of high-quality content over time builds an online presence and credibility without major risk of loss, and may lead to job offers, clients, or other opportunities. It provides examples of interactive dashboards and open-source software that gained popularity and users by continuously publishing improvements and documentation, without needing to rely on things like resumes or company profiles.
1) Knowledge graphs are structured databases that represent real-world entities and their relationships to each other. They help search engines like Google understand topics at a deeper level. 2) Entities (topics) are becoming more important than keywords for search engines to understand content. Google's entity understanding can be checked using their natural language processing tool. 3) Semantic SEO techniques like tightly linking topics both internally and to relevant external pages can help improve how search engines understand and represent the entities within a website through their knowledge graphs.
1) Google uses various techniques to extract structured information like entities, relationships, and properties from unstructured text on the web and databases. This extracted information is then used to generate knowledge graphs and provide augmented responses to user queries. 2) One key technique is to identify patterns in which tuples of information are stored in databases, and then extract additional tuples by repeating the process and utilizing the identified patterns. 3) Google also extracts entities from user queries and may generate a knowledge graph to answer questions by providing information about the entities from sources like its own knowledge graph and information extracted from the web.
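The pattern-based tuple extraction described in point 2 can be illustrated with a toy example: a seed pattern such as "X, the capital of Y" yields (X, relation, Y) tuples, which can in turn seed further patterns. The regex and relation name here are illustrative assumptions, not Google's actual extraction patterns.

```python
import re

# Toy seed pattern: "X, the capital of Y" (illustrative, not Google's).
CAPITAL_PATTERN = re.compile(r"([A-Z][a-z]+), the capital of ([A-Z][a-z]+)")


def extract_tuples(text):
    """Extract (entity, relation, entity) tuples matching the seed pattern."""
    return [(city, "capital_of", country)
            for city, country in CAPITAL_PATTERN.findall(text)]


text = ("Paris, the capital of France, hosted the summit; "
        "Ankara, the capital of Turkey, followed.")
tuples = extract_tuples(text)
# tuples → [('Paris', 'capital_of', 'France'), ('Ankara', 'capital_of', 'Turkey')]
```

In the approach the document describes, newly extracted tuples would then be used to discover further phrasings of the same relation across the web.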
This document provides SEO metrics and comparisons for the website hangikredi.com over several time periods between April 2019 and September 2019. It shows substantial increases in key metrics like organic traffic, clicks, impressions, and average position after Google algorithm updates in May, June, July, and September. However, it also shows significant drops in these metrics during a server outage in early August. Overall, the data demonstrates the site's strong SEO performance and organic growth over the 6-month period analyzed.
The document discusses using Python for SEO applications such as data extraction, preparation, analysis, machine learning and deep learning. It provides an agenda and examples of using Python to solve challenging SEO problems from site migrations and traffic losses. Methods demonstrated include pulling data from Google Analytics, storing in DataFrames, regular expression grouping, and training machine learning models on page features to classify page groups and identify losses. Later sections discuss using deep learning with computer vision models to classify web pages from screenshots.
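The regular-expression grouping step mentioned above can be sketched without pandas: URLs are mapped to page groups by pattern, after which traffic losses can be compared per group. The group names and regexes here are illustrative assumptions; the talk's actual page groups are not public.

```python
import re

# Illustrative URL patterns for page-group classification (assumed, not
# from the talk): first matching pattern wins.
PAGE_GROUPS = [
    ("product", re.compile(r"^/product/")),
    ("category", re.compile(r"^/category/")),
    ("blog", re.compile(r"^/blog/")),
]


def classify(path):
    """Assign a URL path to the first page group whose pattern matches."""
    for group, pattern in PAGE_GROUPS:
        if pattern.search(path):
            return group
    return "other"


paths = ["/product/123", "/blog/seo-tips", "/about"]
groups = [classify(p) for p in paths]
# groups → ['product', 'blog', 'other']
```

With pages labeled this way, analytics exports can be aggregated per group to see which templates lost traffic after a migration.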
Whilst passage indexing may seem like a small tweak to search ranking, it is potentially much more symptomatic of the beginning of a fundamental shift in the way that search engines understand unstructured content, determine relevance in natural language, and rank efficiently and effectively. It could also be a means of assessing the overall quality of content and a means of dynamic index pruning. We will look at the landscape, and also provide some takeaways for brands and business owners looking to improve the quality of unstructured content overall in this fast-changing environment.
This document discusses internal linking strategies and techniques. It begins by explaining the benefits of connecting entities within content, rather than just words, and translating those connections into internal links. It then provides an overview of technologies like PageRank, the reasonable surfer algorithm, topical PageRank, chunking, and natural language processing that search engines use to understand contexts and how those ideas can be applied to internal linking at scale. Specific options for approaches to internal linking existing pages are also outlined.
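The PageRank idea the document applies to internal linking can be sketched with plain power iteration over a small internal-link graph. This is the textbook formulation, not Google's production algorithm, and the example site structure is an assumption for illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Textbook power-iteration PageRank over an internal-link graph.

    `links` maps each page to the pages it links to. Dangling pages
    (no outlinks) distribute their rank uniformly across all pages.
    """
    pages = list(links)
    n = len(pages)
    ranks = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            targets = outlinks if outlinks else pages
            share = ranks[page] / len(targets)
            for target in targets:
                new[target] += damping * share
        ranks = new
    return ranks


# A hypothetical three-page site: "/guide" receives internal links from
# both other pages, so it should accumulate the most rank.
site = {
    "/": ["/guide", "/blog"],
    "/guide": ["/"],
    "/blog": ["/guide"],
}
ranks = pagerank(site)
```

Variants like the reasonable surfer and topical PageRank mentioned above refine this by weighting links differently, rather than splitting each page's rank equally among its outlinks.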
This document provides an overview of entity SEO, including: - What an entity is and why entity SEO is important as search engines have evolved from information engines to knowledge engines - How search algorithms like Panda, Penguin, and Hummingbird helped drive this transition by prioritizing high-quality content over low-quality sites - Techniques for entity SEO including entity research, topical maps, schema, internal linking, and case studies - Tools like Google's Knowledge Graph that can help with entity research and understanding how entities are ranked
How to approach SEO in a world where Google has moved from strings and keywords to things, topics, and entities. Dixon Jones is the CEO of InLinks, who have built a proprietary NLP algorithm and Knowledge Graph designed for the SEO industry.
Optimising Site Architecture discusses improving the organization of information on websites. It recommends starting with detailed keyword research to categorize keywords into multiple levels. This informs mapping the site architecture visually using tools like Miro or Lucid App. Optimizing the information architecture benefits both users, making content findable and understandable, and search engines, by ensuring important pages with search volume exist.
My talk from BrightonSEO 2021; focusing on using Google's image category labels (glancing into the Knowledge Graph and Google's image annotation processes) for better topic research and content optimization.
Success story. Marketing plan for the launch of a technology product: the Aprende-e microlearning application. Marketing plan for the launch of an online training app.
National front pages, Think! Mercadotecnia.
This slide presents basic notions from the Marketing course.