Google: Happy to Share YOUR Knowledge Around the Web


By Sam Hollingsworth

Google’s Knowledge Graph—an information base that enhances the search experience with semantic-search information collected from a wide variety of sources—continues to evolve and expand.

So does Bing’s version of its similar search feature, known as Snapshot or Satori. So how might these offerings impact traffic to for-profit websites from which information is being collected? It’s too early to tell but well worth examining.

Most recently, Google’s Knowledge Graph added hotel booking information directly on the SERP (search engine results page), and soon after that added cocktail recipes as well. That’s right: type in your favorite alcoholic beverage and let Knowledge Graph supply you with the recipe.

This is pretty useful, and it is certainly enhancing the search experience for most users. If I’m looking for a basic recipe for a common mixed drink, but I’m not sure which website to click, or which will offer me the recipe without drowning it out with advertisements or inconsistencies, Knowledge Graph’s quick and easy delivery should be a helpful tool.

The same can be said for the new hotel-booking feature. Similar to Google’s “Flights” feature, Google offers top deals on hotels through vendors like Orbitz, Booking.com, and Priceline, right on the SERP for fast and easy price monitoring and booking.

This is convenient but, personally, I opt to only check prices this way and then book the hotel of my choice through the official hotel website.

Assuming most users don’t use the same tactic as I do, we have to consider the increased number of hotel bookings that the latest travel-related Knowledge Graph update will send from Google Search to providers like Orbitz, Booking.com, and Priceline. Not to mention the plethora of Google Ads for hotels once the “View hotels” link is clicked in the Knowledge Graph, which routes your hotel search through Google Hotel Finder.

But what does this mean for the overall search experience, and what does the future look like?

While Google and some of its competition (like Bing) are working to enhance the overall search experience for their users, we also have to consider “what’s in it for them?” And just like many of the Knowledge Graph updates since its inception in May 2012, the answer is clear: more information on the SERP, but also more ads on the SERP and, most importantly, fewer clicks off the SERP, which means more time spent on the SERP, and less time, or no time at all, spent visiting the websites that offer the information.

Yes, we are not yet at the point where this is happening to an extent that drastically affects traffic and conversions for for-profit websites, but it is already happening to the websites that Knowledge Graph depends on the most, like Wikipedia.

Since Knowledge Graph’s U.S. launch in 2012, Wikipedia traffic has decreased in unprecedented fashion after increasing year over year for about a decade, in the English version as well as other languages. Despite these stats, Wikipedia has openly welcomed the use of its data by Google’s Knowledge Graph, Wikipedia’s only real competition in the space, since no direct correlation has been proven yet. But other sites may not be as accepting.

In February 2015, Google announced it would be serving trusted medical advice on its SERP as part of a Knowledge Graph update. It claims the medical information it serves was created “with a team of medical doctors (led by our own Dr. Kapil Parakh, M.D., MPH, Ph.D.) to carefully compile, curate, and review this information. All of the gathered facts represent real-life clinical knowledge from these doctors and high-quality medical sources across the web, and the information has been checked by medical doctors at Google and the Mayo Clinic for accuracy.”

However, the “gathered facts” come from a variety of sources. “First, our algorithms find and analyze health-related information from high-quality sites across the web,” Google says. It will also be interesting to see how traffic data is affected for those high-quality sources, which include: ScienceDirect, Medscape, Nature, Mayo Clinic, and WebMD, as well as a number of government websites like the National Institutes of Health (NIH), National Library of Medicine (NLM), Centers for Disease Control and Prevention (CDC), National Cancer Institute (NCI), Food and Drug Administration (FDA), and ClinicalTrials.gov.

Are these medical websites next to endure a decline in site visits? And just how severe will the decline be, if there is one? And who, or what industry, is next?

While the direction Google is headed with its Knowledge Graph and the overall search experience is obvious, it will be interesting to see how this enhanced search capability plays out for websites and businesses across the Web.

 

Bio:

Sam Hollingsworth is an SEO Manager at Acronym with an emphasis on Content Marketing and Social Media. Originally from Upstate New York, he now resides in Manhattan and enjoys watching his New York Rangers and New York Knicks just a few blocks away from work at the Empire State Building. You may also find Sam watching horse racing at Belmont Park or Aqueduct Racetrack throughout the year, or at Saratoga Race Course near his hometown of Saratoga Springs during the summer. Sam can be reached via Twitter at @SearchMasterGen.

 

What To Do When Your Client Can’t Make It To Your Joint Conference Presentation?


You could panic. You could call it off. Or, you could bundle him up digitally and take him with you. Which is exactly what I did at the recent eMetrics Summit in Boston. And it worked so well that I wanted to share it with you.

Crispin Sheridan heads up search and all things digital at SAP. A long-time client (ten years with Acronym, in fact) and close friend, he had to head to the UK while I was due to head to Boston. So the night before, we got together (yes, on the 65th floor) and recorded what we would have done live at the event.

So, yes, you’ll hear him (and me) making references to being live on stage in Boston. But you’ll also get an insight into exactly what it takes to set up and manage a search center of excellence inside the world’s third largest software company.

Comes in four short parts.

To get the full context of what the session was about, you can see the title and abstract on the eMetrics site http://www.emetrics.org/boston/2014/emetrics-web/#600



Mike is currently CMO & Managing Director at Acronym where he is responsible for directing thought leadership programs and cross platform marketing initiatives, as well as developing new, innovative content marketing campaigns.

Prior to joining Acronym, Mike was global VP, Content, at Incisive Media, publisher of Search Engine Watch and ClickZ, and producer of the SES international conference series. Previously, he worked as a search marketing consultant with a number of international agencies handling global clients such as SAP and Motorola. Recognized as a leading search marketing expert, Mike came online in 1995 and is the author of numerous books and white papers on the subject; he is currently writing his new book, “From Search To Social: Marketing To The Connected Consumer,” to be published by Wiley later in 2014. He is chair of the SES advisory board and in March 2010 was elected to SEMPO’s board of directors.

Welcome to Live On 65!


Meet Danny Sullivan

Live On 65: Danny Sullivan from Acronym on Vimeo.

 

One of the coolest things about working at Acronym is that we’re smack-bang in the center of town on the 65th floor of the iconic Empire State Building. Apart from the great views, it means we’re just so easy to get to when we have friends and colleagues in town.

And because we do get so many visitors dropping by, we thought why not have a little video feature to introduce our guest to the TMN audience. Kind of like a fireside chat, only without the fireside. It’s a totally informal thing.

And we’re delighted that the first guest to join us and launch this little featurette is probably one of the best known names in the search marketing industry. Yes, it’s Danny Sullivan, founding editor of Search Engine Land, founder of SMX Conference & Expo (and of course, Search Engine Watch and SES Conference & Expo prior to that).

But more importantly, for me anyway, he’s my longtime pal. And it turns out that almost exactly 14 years ago I did my first interview with Danny for the book I was writing at the time, and also for a newsletter I had launched (actually, let’s call that a blog!).

So, I thought, why not? Let’s go back 14 years and see what we were talking about then. And then fast forward to see how much of it is still relevant now.

Spoiler alert…. Content… Yes, 14 years later we’re still talking content. And so we should.

 

 Transcript:

Don’t have time to watch the video? No worries, the transcript can be found by clicking the link below for your convenience.

Danny Sullivan is a name long synonymous with the search marketing industry. As the original “search beat” reporter in the industry, Danny has witnessed the change from a cottage industry of back-bedroom search engine optimizers to the enterprise-level global agencies and consultants that make up the industry in 2014.

Fourteen years ago, Mike Grehan, another name synonymous with the search industry, was researching and writing what would become a seminal book on the subject of search engine optimization. As part of his research, he did his first interview with Danny Sullivan almost exactly 14 years ago.

So, as Danny was in town for SMX East, Mike invited him to whoosh up to the 65th floor of the Empire State Building (Acronym Global HQ) to take a trip back in time and discuss the evolution of the search industry from 2000 to where we are now in 2014.

Makes for very interesting reading (and watching!). Read the transcript here…

 

 

Whooshing up to the 65th floor of the Empire State Building for the next edition is BrightEdge CEO Jim Yu.


My SEO Kung Fu Is More Powerful Than Your SEO Kung Fu


By Mike Grehan

So, the headline is my analogy for a conversation I seem to have had so many times. An SEO does some competitive analysis and sees that their efforts seem (to them) to be better than the competitor’s, and yet the competitor seems to have more visibility in search engines. The conclusion they so often arrive at is that there must be something missing in their SEO tactics, or that the competitor must be doing something sneaky.

However, the answer more often than not has to do with the way search engines analyze end user behavior and fold it into the mix. There’s a whole lot more going on under the hood at search engines that can affect what ranks and what doesn’t, and, more to the point, what frequently gets re-ranked as a result of end user intelligence.

Without getting too deep in the weeds, I want to take a little look under the hood to highlight some of the techniques that are pretty much standard in information retrieval terms, but rarely get a mention in SEO circles.

Did Google Just Mess Around With That Query?

Let’s start with the query itself. We imagine that the end user inputs a certain number of keywords and that a search engine then looks for documents that contain those keywords and ranks them accordingly. However, that’s not always the case. Frequently, documents in the corpus are more relevant to a query, even when they don’t contain the specific keywords submitted by the user.

That being the case, by understanding the “intent” behind a keyword or phrase, a search engine can actually expand the initial query. Query expansion techniques are usually based on an analysis of word or term co-occurrence, in either the entire document collection, a large collection of queries, or the top-ranked documents in a result list. This is not at all the same as simply running a general thesaurus check, which has proven to be quite ineffective at search engines.

The key to effective expansion is to choose words that are appropriate for the “context” or topic of the query. A good example: “aquarium” is a good expansion for “tank” in the query “tropical fish tanks.” That means if you’re specifically targeting the term “fish tanks” but a page (resource) talking about “aquariums” proves more popular with end users, that’s the one most likely to be served. Subjective as it is, what matters is the quality of the content end users are happy with, regardless of whether the actual words they typed appear in it.
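
To make that co-occurrence idea concrete, here’s a minimal sketch in Python. It’s a toy illustration only: the document collection is made up and the `expansion_candidates` helper is hypothetical, not how any search engine actually implements expansion. It simply counts which words tend to appear alongside the query terms.

```python
from collections import Counter

# Toy document collection; in practice this could be the whole corpus,
# a large query log, or the top-ranked results for the query.
DOCS = [
    "tropical fish tank setup and aquarium maintenance tips",
    "choosing the right aquarium for tropical fish",
    "fish tank heaters and filters for a home aquarium",
    "army tank museum opening hours",
]

STOPWORDS = {"and", "the", "for", "a", "of", "to"}

def expansion_candidates(query, docs, top_n=3):
    """Suggest expansion terms that frequently co-occur with the query terms."""
    query_terms = set(query.lower().split())
    co_counts = Counter()
    for doc in docs:
        terms = set(doc.lower().split())
        if query_terms & terms:  # only documents that mention the query contribute evidence
            for term in terms - query_terms - STOPWORDS:
                co_counts[term] += 1
    return [term for term, _ in co_counts.most_common(top_n)]

print(expansion_candidates("tropical fish tanks", DOCS))
# "aquarium" tops the list, mirroring the tank/aquarium example above.
```

A real system works at web scale and weights terms far more carefully, but the principle is the same: expansion terms come from observed co-occurrence, not from a thesaurus.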

There are a number of different techniques for query expansion. But how does a search engine know that the expanded query provides more relevant results? The answer is “relevance feedback.”

Implicit data provided by the end user gives huge clues as to what are the most closely associated query terms. Early expansion techniques were focused on expansion of single words, but modern systems use full query terms. What this means is that semantically similar queries can be found by grouping them based on relevant documents that have a common theme, rather than (as already mentioned) the words used in the query.

This rich source of relevance data is then bolstered with click-through data. This means that every query is represented by using the set of pages that are actually clicked on by end users for that query, and then the similarity between that cluster of pages is further calculated.
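
As a rough sketch of that idea (the click data is invented, and a simple Jaccard overlap stands in for whatever similarity measure a real engine uses), each query can be represented by the set of pages users clicked for it, and two queries compared by how much those sets overlap:

```python
# Hypothetical click logs: query -> set of URLs users actually clicked for it.
CLICKS = {
    "foreign currency converter": {"xe.example", "oanda.example", "bank.example"},
    "convert currency":           {"xe.example", "oanda.example"},
    "tropical fish tanks":        {"aquariumguide.example", "petstore.example"},
}

def click_similarity(a, b):
    """Jaccard overlap of two clicked-page sets: 1.0 means identical click behavior."""
    union = CLICKS[a] | CLICKS[b]
    return len(CLICKS[a] & CLICKS[b]) / len(union) if union else 0.0

print(click_similarity("foreign currency converter", "convert currency"))    # ~0.67: similar intent
print(click_similarity("foreign currency converter", "tropical fish tanks")) # 0.0: unrelated
```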

Techniques for relevance feedback are not new; you can trace them back to the early Sixties. However, in this new realm of “big data,” what I have described above (in the most basic way possible to keep it simple) actually provides the “training data” (the identified relevant and non-relevant documents) for “machine learning” at search engines.

What The Heck Is “Machine Learning?”

It’s a subfield of artificial intelligence (AI) concerned with algorithms that allow computers to learn stuff. Keeping it simple, an algorithm is given a set of data and infers information about the properties of the data – and that information allows it to make predictions about other data that it might see in the future.

So, having mentioned click-through data above, let’s dig just a tiny bit deeper into the importance of “implicit end user feedback.”

Whenever an end user enters a query at a search engine and clicks on the link to a result, the search engine takes a record of that click. For a given query, the click-throughs from many users can be combined into a “click-through curve” showing the pattern of clicks for that query. Stereotypical click-through curves show that the number of clicks decreases with rank. Naturally, if you interpret a click-through as a positive preference on behalf of the end user, the shapes of those curves would be as you might expect: higher-ranked results are more likely to be relevant and receive more clicks. Of course, with search engines having access to “user trails” via toolbar and browser data (literally being able to follow you around the web), they now have an even richer seam of data to analyze and match for ranking.
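
Here’s a toy illustration of that aggregation (the click counts are invented): tallying which rank position users clicked for a single query produces exactly the kind of curve described above, with clicks falling away as rank increases.

```python
from collections import Counter

# Hypothetical click log for one query: each entry is the rank position
# (1 = top result) that a user clicked on.
clicked_ranks = [1, 1, 1, 1, 2, 1, 3, 2, 1, 4, 2, 1, 5, 1, 3]

curve = Counter(clicked_ranks)
for rank in sorted(curve):
    # A crude text rendering of the click-through curve.
    print(f"rank {rank}: {'#' * curve[rank]} ({curve[rank]} clicks)")
```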

Learning From Clicks.

Google and other search engines receive a constant stream of data around end user behavior. And this can immediately provide information about how much certain results in the SERPs are preferred over others (users choosing to click on a certain link or choosing not to click on a certain link). It’s no hard task for a search engine such as Google to design a click-tracking network by building an artificial neural network (more specifically, a multilayer perceptron (MLP) network). And this is a prime example of “machine learning” in action. No, I’m not going to explain the inner workings of an artificial neural network. There’s tons of data online if you’re so inclined to go find it.

But I do want to, in a simple fashion, explain how it can be used to better rank (and often re-rank) results to provide the end user with the most relevant results and best experience.

 

“Take no thought of who is right or wrong or who is better than. Be not for or against.”

 

First the search engine builds a network of results around a given query (remember the query expansion process explained earlier) by analyzing end user behavior each time someone enters a query and then chooses a link to click on. For instance, each time someone searches for “foreign currency converter” and clicks on a specific link, and then someone else searches for “convert currency” and clicks on exactly the same link and so on. This strengthens the associations of specific words and phrases to a specific URL. Basically, it means that a specific URL is a good resource for multiple queries on a given topic.
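
Here’s a minimal sketch of that strengthening process. The queries and URLs are made up, and a simple weight table stands in for the neural network’s learned connections; it is an illustration of the idea, not Google’s implementation.

```python
from collections import defaultdict

# weights[(query, url)] grows each time a user issues the query and clicks the URL,
# a stand-in for the strengthened query-to-page associations described above.
weights = defaultdict(float)

def record_click(query, url, reward=1.0):
    weights[(query, url)] += reward

# Differently worded queries keep leading to the same page...
record_click("foreign currency converter", "converter.example/tool")
record_click("convert currency", "converter.example/tool")
record_click("currency conversion", "converter.example/tool")
record_click("convert currency", "randomblog.example/post")

def best_url_for_topic(queries):
    """Pick the URL with the most accumulated weight across a group of related queries."""
    totals = defaultdict(float)
    for (query, url), w in weights.items():
        if query in queries:
            totals[url] += w
    return max(totals, key=totals.get)

print(best_url_for_topic({"foreign currency converter", "convert currency", "currency conversion"}))
# converter.example/tool: a good resource for multiple queries on the topic.
```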

The beauty of this for a search engine is that, after a while the neural network can begin to make reasonable guesses about results for queries it has never seen before, based on the similarity to other queries. I’m going to leave it there as it goes well beyond the scope (or should I say purpose) of this column to continue describing deeper levels of the process.

There are many more ways that a search engine can determine the most relevant results for a specific query. In fact, they learn a huge amount from what are known as “query chains,” which is the process of an end user starting with one query, then reformulating it (taking out some words or adding some words). By monitoring this cognitive process, a search engine can preempt the end user. So the user types in one thing at the beginning of the chain and the search engine delivers the most relevant document that usually comes at the end of the chain.
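
As a crude sketch of how query chains might be mined (the sessions below are invented), you can map the query that starts a chain to the document users usually settle on at the end of it, and serve that document straight away the next time the opening query appears:

```python
from collections import Counter, defaultdict

# Hypothetical search sessions: the user's successive reformulations,
# plus the URL they finally clicked and stayed on.
SESSIONS = [
    (["currency", "currency converter", "foreign currency converter"], "converter.example/tool"),
    (["currency", "exchange rates today"], "rates.example/today"),
    (["currency", "currency converter"], "converter.example/tool"),
]

# For each chain-starting query, count which destination usually ends the chain.
chain_endpoints = defaultdict(Counter)
for chain, final_url in SESSIONS:
    chain_endpoints[chain[0]][final_url] += 1

# The next user who types "currency" can be shown the document that most often
# ends the chain, preempting the reformulations.
print(chain_endpoints["currency"].most_common(1))
# [('converter.example/tool', 2)]
```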

In short, search engines know a lot more about which media are consumed by end users and how, and which is deemed the most relevant (often most popular) result to serve given end user preferences. And it has nothing to do with which result had whatever amount of SEO work on it.

I’ve written a lot over the years about “signals” to search engines, in particular the importance of end user data in ranking mechanisms. In fact, it’s coming up to ten years now (yes, ten years!) since I first wrote about this at ClickZ.

And, on a regular basis, I still see vendor and agency infographics suggesting what the strongest signals are to Google. Yet rarely do you see end user data highlighted as prominently as it should be. Sure, text and links send signals to Google. But if end users don’t click on those links or stay on a page long enough to suggest the content is interesting, what sort of signal does that send? A very strong (and negative) one, I’d say.

So, going back to the headline of this column. Next time you scratch your head, comparing your “SEO Kung Fu” to the other guy, give some extra thought to what search engines know… And unfortunately you don’t.