
Google Search Trick

I discovered yesterday that Google was returning results from all years by default. I changed that so Google now returns results no more than a year old. Use this trick with caution if you are doing deep research, since it filters out older material.

1. Search as you normally would. This screenshot shows the Chrome browser:


2. See the “Tools” option at the far right? Click on that.


3. Then select whatever time period you would like:


4. Google now filters the results accordingly.
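The same filter can be applied directly in the search URL with the `tbs` parameter, an undocumented but widely used Google parameter (`qdr:y` for the past year, `qdr:m` for the past month, `qdr:w` for the past week). A minimal sketch in Python:

```python
from urllib.parse import urlencode

# Build a Google search URL restricted to the past year.
# tbs=qdr:y is the "past year" filter; swap in qdr:m or qdr:w
# for the past month or week.
params = {"q": "colorado blue spruce", "tbs": "qdr:y"}
url = "https://www.google.com/search?" + urlencode(params)
print(url)
```

Bookmarking a URL like this saves clicking through the Tools menu every time.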


Unique Content and Unique Value

When I’m writing newspaper or magazine articles I don’t worry about SEO ideas like unique content and unique value. My research and writing are original, and I have no worry that I will be ranked lower due to repetitive content. But because I often have to rewrite news stories for my Vancouver employer, I should probably learn more about these related subjects.

I’ve touched on unique content before (internal link). Copying a story word for word produces no unique content. Google doesn’t like that. So we rewrite. Substantially rewriting a story produces more unique content simply by using different words than the original story. Run your variation through Copyscape (external link) to see whether it passes. Let’s take an extreme example.

A famous Melville quote goes like this:

“Towards thee I roll, thou all-destroying but unconquering whale; to the last I grapple with thee; from hell’s heart I stab at thee; for hate’s sake I spit my last breath at thee.”

One might rewrite it thus:

“I’m coming after you, Mr Unstoppable Whale, rowing to you in my longboat. I’m going to fight you to the end, chuck my spear into you, and, just because I hate you so much, spit on your watery grave.”

This would achieve a 0% match with the original. You’ve just rewritten a story that Google should now see as unique content. But you’ve added no unique value. Nothing of your own has been added.
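A rough way to see why the rewrite scores as unique is to compare word overlap between the two passages. This is only a toy stand-in for whatever Copyscape actually computes: a Jaccard similarity over word sets, purely for illustration:

```python
import re

def word_set(text):
    # Lowercase and pull out runs of letters (apostrophes kept),
    # giving the set of distinct words in the text.
    return set(re.findall(r"[a-z']+", text.lower()))

melville = ("Towards thee I roll, thou all-destroying but unconquering "
            "whale; to the last I grapple with thee; from hell's heart "
            "I stab at thee; for hate's sake I spit my last breath at thee.")
rewrite = ("I'm coming after you, Mr Unstoppable Whale, rowing to you in "
           "my longboat. I'm going to fight you to the end, chuck my "
           "spear into you, and, just because I hate you so much, spit "
           "on your watery grave.")

a, b = word_set(melville), word_set(rewrite)
jaccard = len(a & b) / len(a | b)  # shared words / total distinct words
print(f"{jaccard:.0%} of the vocabulary is shared")
```

Only a handful of words (whale, spit, hate, and a few function words) survive the rewrite, so the overlap comes out very low, which is roughly what a “0% match” verdict is telling you.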

Rewriting the whale story with unique value would mean more than adding a quote from Wikipedia or a purloined anecdote from a whale expert at the Discovery Channel. Ideally, you would bring in your own original experiences with a whale, along with original images.

This is totally impossible, of course, when you have to rewrite breaking news stories that must be posted immediately. Perhaps the best we can achieve is a rewrite. But how much of a rewrite? We certainly don’t have time for 100% unique. What then? 50% unique? 25%?

I’m still mulling over these ideas. A great video presentation on the topic is at the link below. Check it out to learn about a vexing problem in our information age:





The Mystery of Photos with Google Search

A few years ago Google started including photographs of authors in search results (internal link). You had to set up a profile with Google and then set a link to your website, but that wasn’t hard. Then sometime last year Google announced they were doing away with photos (external link). Well, it looks like they are back:


I always liked the photos and I thought they encouraged more clicks. It seems that content like this blog, when linked to Google Circles or Google+ or whatever they are calling it now (external link), will trigger a photo op. If you are a blogger, I think it is well worth spending a half hour to get your writing linked. And spending a little time finding a good photo.

Update: November 9th, 2015. Photos continue to be used. See the image below, from a search for 3Play Media.



SEO and Disclaimers As Image Files

Google’s search engine likes fresh content but dislikes repetition. To rank higher in search, a website should have regularly updated pages or a frequently updated blog. But what if you have pages that require disclaimers, similar text that will occur over and over throughout your pages or blog?

One solution is to have a single disclaimer page and hope that everyone on your site reads it before they try practicing surgery on themselves. The other solution is to put up the repetitive text as an image file, like what you see below.

There! Google will not read an image and therefore will not penalize you for using the same content on many pages. It’s a somewhat clumsy solution in that an image file will not adjust to screen sizes as easily as text on a web page. (The text will be tiny on tiny screens.) This disclaimer approach, however, will inform your readers without pulling your ranking down.
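If you need to generate disclaimer images in bulk rather than in an image editor, a library like Pillow can render the text to a file. A minimal sketch, assuming Pillow is installed; the wording, image size, and filename are my own placeholders:

```python
from PIL import Image, ImageDraw

disclaimer = ("This article is for general information only and is "
              "not a substitute for professional advice.")

# Render the disclaimer onto a plain white canvas. Because the text
# lives in the pixels, a crawler that doesn't OCR images won't see
# it as duplicated on-page text.
img = Image.new("RGB", (640, 60), "white")
draw = ImageDraw.Draw(img)
draw.text((10, 20), disclaimer, fill="black")
img.save("disclaimer.png")
```

Bear in mind that text inside an image is invisible to screen readers as well as to crawlers, so the accessibility cost of this trick falls on real visitors too.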



Copyscape, Google search, and Unique Pages

I’m learning about Copyscape (external link) and I am confused. I have many questions, but are they the right ones?

Copyscape is a web service that detects plagiarism on the net. It also reports on whether submitted content is unique. First things first.

Let’s say you have a web page that’s been on-line for a while. You want to know if people are copying its content. You enter your URL into Copyscape’s interface and it will return, if you are unlucky, a list of pages that are plagiarizing your writing.

Copyscape’s Premium service goes further. Submit content into its search field and it will tell you whether that writing already exists on web pages, and to what degree. It’s a good checker for a teacher to validate the originality of an essay, or for a website builder to check whether a freelancer has provided their own work. Using methods I don’t completely understand, it will return a percentage rating. “Your content is 32% unique.” Or 7%. Or whatever. To demonstrate that rating, Copyscape will show you the pages where copied work appears and highlight the exact words and phrases it has problems with. What’s my problem? Let me give you a typical example, and, again, I am only slowly understanding this technology.

If you are a dentist in Sacramento, California, you probably already have a website with the usual pages. You have an “About” page, a “Home” page, a “FAQ” and so on. To make your site more appealing to the search engines, you might have some pages on the general practice of dentistry, original content written for your site to add value for your readers. Now it gets tricky.

Your practice has grown and you are expanding to three more cities. Naturally you’d like to port the content you’ve paid for to the new websites you’re building for each new office. Apparently that won’t make the search engines happy. They don’t like to see copied text and will rank the new sites much lower than they should be. Google will run a Copyscape-like search across the web, see what it thinks is plagiarized or copied content, and, whoosh, down goes your ranking.

Despite Google’s supposed super-sophistication, it can’t see that your websites are all run by the same group. Or perhaps it sees that they are but still insists that each page be individual. What it, like Copyscape, considers “unique.” What, then, is “unique”? Is 60% unique good enough for Google? Or does it have to be 90%? Does one have to rewrite every single duplicate page for every website? Can I just rearrange the sentence structure, or do I have to build each page anew? Good questions. All I can find out is that it all depends. Good grief.
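One technique search engines are widely believed to use for near-duplicate detection is shingling: slide a window of n consecutive words over each page and compare the resulting sets. A toy sketch of the idea; the window size, the threshold question, and the sample dentist copy are all my own inventions, not anything Google has published:

```python
def shingles(text, n=5):
    # Every run of n consecutive words in the text, as a set.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

# Two "office" pages that differ only in the city name.
page_a = ("Our Sacramento office offers cleanings crowns and "
          "whitening for the whole family")
page_b = ("Our Fresno office offers cleanings crowns and "
          "whitening for the whole family")

a, b = shingles(page_a), shingles(page_b)
overlap = len(a & b) / len(a | b)
print(f"shingle overlap: {overlap:.0%}")
```

Changing a single word knocks out every shingle that passes through it, yet these two pages still share most of their shingles, which is exactly the pattern that gets ported city-by-city content flagged as duplicate.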

I’ll have more on this in future posts as I struggle to understand it more. It appears that rephrasing and rewording are not good enough. In those cases you are not adding anything to something that has already been written. You are not bringing anything new. And, no, it can’t be just fluff or padding. One forum had this quote, which is pointing me in the direction I will continue investigating. “There is no content on the web, not even peer reviewed articles, that are 100% unique. The uniqueness or the originality of content lies in your ability to add some information or value to what others have done.”


An excellent page on search and the question of uniqueness:

Update: April 6, 2015. Does the word unique mean the same thing to Google and Copyscape? I don’t know. I will be looking into that.


Notes on Statistics

It’s agreed that statistics can be useful, perhaps invaluable, to a website’s success, but I find they raise more questions than they settle. And are the statistics and keyword search results really valid to begin with?

According to Yahoo, 32% of the traffic to one of my sites is from Semalt, a shadowy group thought to be Ukrainian spammers. Great. At another site, Google Analytics says that 29.4% of my traffic comes from people using Brazilian Portuguese as their language. Really? Of course, it is all a fraud. The average time these ‘people’ spend at either site in a session is exactly zero seconds. More likely this is all robot-generated traffic, sent out to every website, for reasons only these web crawler companies know.

I am now trying to find specific IP addresses for these groups so I can use Analytics’ filters to block them. But finding this information at Google Analytics is very difficult; indeed, I find Analytics so hard to use that I am discouraged from using it. Speaking of Google, have you used their Webmaster Tools?
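Analytics’ filters live in its admin UI, but the underlying idea — dropping hits whose referrer belongs to a known spam domain — can be sketched against a raw server log. Semalt is real; the domain list, log format, and log lines below are invented examples:

```python
# Hypothetical referrer-spam blocklist (illustrative, not exhaustive).
SPAM_DOMAINS = {"semalt.com", "buttons-for-website.com"}

def is_spam(log_line):
    # Crude check: does any known spam domain appear anywhere
    # in the log line (typically in the referrer field)?
    return any(domain in log_line for domain in SPAM_DOMAINS)

log = [
    '1.2.3.4 - - [09/Nov/2015] "GET / HTTP/1.1" 200 "http://semalt.com/" "bot"',
    '5.6.7.8 - - [09/Nov/2015] "GET /blog HTTP/1.1" 200 "https://www.google.com/" "Mozilla"',
]

clean = [line for line in log if not is_spam(line)]
print(f"kept {len(clean)} of {len(log)} hits")
```

Filtering by domain rather than by IP address sidesteps the problem of spammers rotating through addresses, though it misses spammers you haven’t listed yet.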

Google’s Webmaster Tools is an adjunct to Google Analytics. You go there for more, and different, information than Analytics provides. I’ve registered three of my sites with it, and I continue to mull it over with worry and wonder. Like, what does it all mean? For example, Tools says the second most used search term for people coming to my site is “hobbess.” I can’t remember ever using that word, whatever it means, and I can’t imagine Google providing a link to my site because of it. Yet there it is. On a more practical note, there are discussions worth having, something that keyword search results can foster.

Question: if the most popular search term used to find my plant site is “Colorado Blue Spruce,” should I be forever penning articles on our spiky mountain friend, based on keyword results alone? I just did a Google search for Colorado Blue Spruce, and my site does not come up within the first four pages. My site is there, somewhere at Google, but obviously buried deep. And with thousands of sites mentioning blue spruces, I can see no profit in doing more writing on the subject, especially since I don’t know why the Blue Spruce traffic was generated to begin with. Hmm. It is perhaps not enough to have the data; one must also be able to interpret it correctly. I continue to work at that.
