Search engine optimisation is an incredible field. I am grateful that working in "SEO" has given me the chance to serve as a leader, a coach, a teacher, a planner, a social scientist and an investigator. After nearly 15 years working in the search marketing industry, I still learn something new every day.

Now, for my guilty admission: I've never published an article on SEO. Not a single one. That doesn't feel right, especially considering how much I've learned from folks like Bill Slawski, Kelsey Libert, Richard Baxter, Mike King, Glenn Gabe, Dr. Pete Myers, Jen Slegg, Rand Fishkin and more talented people than I can reasonably name.

In the spirit of giving back to a community that freely shares so much knowledge and data, here are 15 mistakes, lessons and observations that I've experienced over 15 years of search marketing:

1. RELEVANCE CAN DEPEND ON ONE WORD

A website that I managed ranked in the first position for the search "best phone camera" in the summer of 2013. That changed when Nokia announced the "Lumia 1020," a 41-megapixel smartphone. The web swooned and described the phone in superlative terms within minutes of the announcement. All of the news articles (and user searches) that mentioned the Lumia 1020 in the same breath as "best phone camera" trained Google into thinking that the Lumia 1020 was the best phone camera. My #1 ranking became a page-four result.

When we updated our page to acknowledge the 1020's existence, we rocketed back to the first position. This was no fluke. I've seen cases where the absence (or existence) of a single word torched a page's rankings or made its fortunes.

2. THE 304 (NOT MODIFIED) STATUS IS USEFUL ON LARGE SITES

304 is one of the most obscure HTTP status codes. It's a way to tell a crawler that nothing has changed since it last visited a webpage. Small sites needn't bother with this status code, but sites with millions of pages can benefit from using it. In short, it's a tool for managing crawl budget. The trick is deciding what counts as a change to the page. For example, if the links in the right rail change daily, is the page changed? Using the 304 status code effectively is both art and science.
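The mechanics can be sketched in a few lines. This is a minimal, hypothetical handler (not from the original article) that compares a crawler's `If-Modified-Since` header against the time the *main content* last changed, so that volatile modules like a right rail don't defeat the 304:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime, format_datetime

def respond_to_conditional_get(if_modified_since, content_modified_at):
    """Decide between 304 and 200 for a conditional GET.

    `content_modified_at` is assumed to be a timezone-aware datetime that
    reflects meaningful edits to the main content only, not churn in
    sidebars or link modules.
    """
    if if_modified_since:
        try:
            since = parsedate_to_datetime(if_modified_since)
        except (TypeError, ValueError):
            since = None
        if since and content_modified_at <= since:
            # Nothing meaningful changed: spare the crawl budget.
            return 304, {}
    # Content changed (or no validator was sent): serve the full page.
    return 200, {"Last-Modified": format_datetime(content_modified_at, usegmt=True)}
```

The hard part, as noted above, is deciding what feeds `content_modified_at`; the HTTP plumbing itself is simple.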

3. INVISIBLE CHARACTERS CAN CREATE MAYHEM

I once made an innocuous change to a robots.txt file that prevented an entire domain from being crawled. This isn't a horror story, thankfully. A member of my team discovered the issue within minutes (always use the robots.txt Tester in Search Console!). Even so, it took us an hour to find the root of the problem: There was a phantom character in the robots.txt file.

The character was invisible in a browser and invisible in most text editors. We finally discovered the spectral character in a true plain-text editor. Beware: An invisible character in a redirect map or Apache configuration file is just as deadly.
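You don't need a special editor to hunt for these. Here's a quick sketch of a scanner for invisible characters hiding in a config file such as robots.txt; the set of suspect characters below is illustrative, not exhaustive:

```python
# Characters that render as nothing (or as ordinary whitespace) in most
# editors, but can break parsers. Extend this map as needed.
SUSPECTS = {
    "\u200b": "ZERO WIDTH SPACE",
    "\ufeff": "BYTE ORDER MARK",
    "\u00a0": "NO-BREAK SPACE",
    "\u200e": "LEFT-TO-RIGHT MARK",
}

def find_phantom_characters(text):
    """Return (line, column, name) for every suspect character in text."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in SUSPECTS:
                hits.append((lineno, col, SUSPECTS[ch]))
    return hits
```

Running a check like this against robots.txt, redirect maps and server configs before deploying them is cheap insurance.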


4. SERVER ERRORS ARE COSTLY

Nothing puts a chill in my bones like server errors, specifically, the 500, 502 and 504 response codes. (A 503 Service Unavailable response is a legitimate way to handle a page or a site that's temporarily down.)

I've learned the hard way that for every real person or search engine crawler that encounters a 50X error on your site, you should expect to lose five to 10 visits over time. That's because search engines quickly deindex pages with server issues and users may become jaded by the error screen. (I bet the 50X error screens on your site aren't as pretty or as helpful as your 404 page.)

I once worked for a financial media company that had the misfortune to botch a content delivery network migration the evening that Steve Jobs died. A flood of users hit a stark white screen that said nothing except "502." Organic search traffic fell by 40% in the following weeks and it took months to recover to its previous trend. I've seen similar things (and aftermaths) play out enough times to keep me up at night.
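To put a number on incidents like that, you can scan your access logs and apply the rule of thumb above. This is a rough sketch (mine, not the article's) that assumes the common Apache/Nginx combined log format and excludes 503, since the article treats it as a legitimate response:

```python
import re

# In the combined log format, the status code follows the quoted request,
# e.g.: 1.2.3.4 - - [ts] "GET / HTTP/1.1" 502 0
STATUS_RE = re.compile(r'"\s(\d{3})\s')

def estimate_visit_loss(log_lines):
    """Count 50X errors (excluding 503) and estimate lost visits
    using the five-to-ten-visits-per-error heuristic."""
    errors = 0
    for line in log_lines:
        m = STATUS_RE.search(line)
        if m and m.group(1).startswith("5") and m.group(1) != "503":
            errors += 1
    return errors, (errors * 5, errors * 10)
```

The heuristic is directional, not precise, but it turns an abstract "server errors are bad" into a number you can show whoever owns the infrastructure budget.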

5. GOOGLE CAN MAKE BIG MISTAKES

Remember Google's authorship program (which displayed author photos in the search result pages)? I sure do. I had encouraged the staff writers of a small business website to create Google+ accounts so that they could take advantage of the program.


In December 2013, Google announced that they'd show fewer author photos. Shortly thereafter, the small business website's traffic and rankings went haywire. I spent hours of panic-stricken investigation trying to find out why some of the site's articles were completely dropped from Google's results. Then, I discovered the common thread: authorship.

Google may have intended to stop displaying an author's photo, but a bug caused the entire article to disappear from Google's results. Google fixed the bug within a few days and the site's traffic and rankings returned to normal. Whew!

6. BEWARE HAVING MULTIPLE TIMESTAMPS ON THE SAME PAGE

I once saw traffic to an evergreen article drop by 96% even though search demand was steady and the article was recently updated. When I checked the search results page, Google was showing a timestamp from three years earlier. Where did the old date come from? The first comment on the page.

All of the dates on the article were wrapped in the correct schema tags (including the timestamps on comments) and Google shouldn't have been confused, but it was. One of our engineers solved the problem by changing the timestamp format on comments to appear in minutes/hours/days/weeks/years ago. It's OK to have more than one date on a page, but try to make sure that the timestamp of the article is the only date that appears in an ISO 8601 format.
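The engineer's fix can be sketched simply: render comment dates as relative strings so the article's ISO 8601 `datePublished` is the only machine-readable date on the page. This is my own minimal illustration of that idea, not the original implementation:

```python
from datetime import datetime, timezone

def relative_timestamp(then, now):
    """Render a comment date as '5 days ago' instead of an ISO 8601 string,
    so it can't be mistaken for the article's publication date."""
    seconds = int((now - then).total_seconds())
    for unit, size in (("year", 31536000), ("week", 604800),
                       ("day", 86400), ("hour", 3600), ("minute", 60)):
        if seconds >= size:
            n = seconds // size
            return f"{n} {unit}{'s' if n != 1 else ''} ago"
    return "just now"
```

Relative strings are also friendlier for readers of long-lived comment threads, which makes this an easy change to sell to the product team.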


7. WEBSUB CAN GET CONTENT INDEXED IN SECONDS

WebSub (formerly called PubSubHubbub) is a real-time way to let subscribers know that a feed has been updated. It's an old technology that never really caught on, but Google still supports it in 2018. WordPress supports WebSub natively, and that's how I learned how effective it can be.

I've seen Google index a news article within seconds of it being published, even though the site that published it isn't known for breaking news. The URL that Google indexed had an RSS feed tracking parameter on it (Google eventually scrubbed the URL down to its canonical root).
