

Search Engine Optimization: SEO Book

Chapter 7


Dirty Deeds - Facing The Consequences

Everyone wants to fool the search engines. And the search engines know it. That's why search engine optimization is such a strange business, a hybrid of technology and, I don't know, industrial espionage, perhaps? The search engines don't want you to know exactly how they rank pages, because if you did, you would know exactly how to trick them into giving you top positions. The search engines try to hide their methods as much as they can, but it sometimes becomes apparent what the search engines want, and at that point, people start trying to give it to them in a manner that the search engines regard as manipulative. This chapter discusses the things you should avoid doing, because you risk upsetting the search engines and getting penalized - potentially even getting booted from a search engine for life! You should also be careful about working with search engine marketers who tout these methods, because they can lead you into trouble.

Understanding The Basic Principles of Tricking The Search Engines

Before getting down to the nitty-gritty details of tricking the search engines, I want to focus on two topics: why you need to understand the danger of using dirty tricks, and the overriding principles on which search engine tricks are based.

Should You or Shouldn't You?
Should you use the tricks in this chapter, and if not, why not? You will hear several reasons for not using tricks. The first I'm not going to belabor, because I'm not sure the argument behind this reason is very strong: ethics. You'll hear from many people that the tricks in this chapter are unethical, that those who use them are cheating and are one step on the evolutionary ladder above pond scum. Many people have tried search engine tricks because they have invested a lot of money in Web sites that turn out to be invisible to the search engines. These folks can't afford to abandon their sites and start again. One could argue that doing pretty much anything beyond the basics is cheating. After all, smart Webmasters armed with a little knowledge make these sorts of adjustments, pushing their Web sites up in the ranks above sites that are more appropriate for a particular keyword phrase yet are owned by folks with less knowledge. Ethics aside, the really good reason for avoiding egregious trickery is that it may have the opposite effect and harm your search engine position. And a corollary to that reason is that other, legitimate ways exist to get a good search engine ranking.

How Are Tricks Done?
The idea behind most search engine tricks is simple:
To confuse the search engines into thinking your site is more appropriate for certain keyword phrases than they would otherwise believe, generally by showing the search engines something that the site visitor doesn't see. The search engines want to see what the site visitors see, yet at the same time they know they can't. It will be a long, long time before search engines are able to see and understand the images in a Web page. Right now, they can't even read text in images, although that's something that could be possible soon.

The search engine designers have started with this basic principle:
What we see - with the exception of certain things we are not interested in (images, JavaScript, and so on) and of certain technical issues that are not important to the visitor - should be what the user sees. For various reasons, the searchbots are not terribly sophisticated. They generally don't read JavaScript, Cascading Style Sheets, and so on, because of the complexity of doing so. In theory, they could read these things, but doing so would greatly increase the time and hardware required. So by necessity, they ignore certain components. Here's one other important principle:

The text on the page should be there for the benefit of the site visitor, not the search engines.

Ideally, the search engine designers want Web designers to act as if the search engines don't exist. The search engine designers want their programs to determine which pages are the most relevant for a particular search query. They want you - the Web designer - to focus on creating a site that serves your visitors' needs and let the search engines determine which site is most appropriate for which searcher. What the search engines don't want is for you to show one version of a page to visitors and another version to the search engines because you feel that version is what the search engine will like most.

Do These Tricks Work?
For the moment, all the tricks in this chapter do work, at least in some circumstances for some search engines. This may be a little surprising, given that some of these tricks are very crude and have been known to the search engines for a long time. You'll still find crudely keyword-stuffed pages, and pages with hidden text, sometimes ranking well in the search engines. Some of this search engine spam does filter through. But most major search engines are much better at recognizing the tricks and eliminating those pages. If your competitors are ranking higher than you, it's probably not because of the tricks they played; it's because of the good, solid search engine optimization work you didn't do. These tricks can be dangerous. You may get caught in one of several ways:

· A search engine algorithm may discover your trickery, and your page or your entire site could be dropped from the search engine.
· A competitor might discover what you are doing and report you to the search engines. Google has stated that it prefers to let its algorithms track down cheaters, and uses reports of search engine spamming to tune these algorithms, but Google will take direct action in some cases.
· Your trick may work well for a while, until a major search engine changes its algorithm to discover the trickery… at which point your site's ranking will drop like a rock.

Concrete Shoes, Cyanide, TNT - An Arsenal for Dirty Deeds
Here are some of the search engine tricks that are employed on the Web.

Keyword Stacking and Stuffing
You may run across pages that contain the same word or term - or maybe several words or terms - repeated over and over again, often in hidden areas of the page, though sometimes visible to visitors. This is one of the earliest and crudest forms of a dirty deed, one that the search engines have been aware of for years. You'd think keyword stacking wouldn't work, but the search engines aren't perfect, and sometimes keyword-stacked pages slip through. The terms keyword stacking and keyword stuffing are often used interchangeably, though some people regard keyword stuffing as something a little different - placing inappropriate keywords inside image ALT attributes and in hidden layers.
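Why is stacking so easy to catch? A phrase that accounts for an outsized share of a page's words is a glaring statistical anomaly. Here is a minimal sketch of that idea; the counting method and function name are my own illustration, not any engine's actual algorithm:

```python
from collections import Counter
import re

def keyword_density(text: str, phrase: str) -> float:
    """Rough share of the page's words taken up by repetitions of one phrase.

    A deliberately crude signal: real engines use far more sophisticated
    statistical models, but an abnormally high score is an obvious flag.
    """
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    phrase_words = phrase.lower().split()
    # Approximate whole-phrase repetitions by the scarcest word in the phrase.
    repeats = min(counts[w] for w in phrase_words)
    return repeats * len(phrase_words) / len(words)

stuffed = "emu oil " * 50 + "buy our fine products today"
normal = "we sell emu oil and other natural health products online"
```

On the stuffed page the phrase dominates the word count; on the normal page it is a small fraction, which is exactly the contrast a spam filter looks for.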

Hiding (and Shrinking) Keywords
Another old trick is to hide text. This trick is often combined with keyword stuffing: placing large amounts of text into a page and hiding it from view. If you suspect that someone has hidden text on a page, you can often make it visible by clicking inside text at the top of the page and dragging the mouse to the bottom of the page to highlight everything in between. You can also look in the page's source code. How might a designer make the text disappear? At the bottom of one page's source code, I found this:

<FONT SIZE=7 COLOR="#ffffff"><H6>glucosamine glucosamine glucosamine glucosamine glucosamine emu oil emu oil emu oil kyolic kyolic kyolic wakunaga wakunaga wakunaga</H6></FONT>

Notice the COLOR="#ffffff" piece; ffffff is hexadecimal for the color white. The page background is white, so, abracadabra, the text disappears.
Here are some other tricks used for hiding text:
· Placing the text inside <NOFRAMES> tags. Some designers do this even if the page isn't a frame-definition document.
· Using hidden fields. Sometimes designers hide words in a form's hidden field (<INPUT TYPE="HIDDEN">).
· Using hidden layers. Style sheets can be used to position a text layer underneath the visible layer or outside the browser window.

Because hidden text takes up space, designers often use a very small font size. This is another trick that search engines may look for and penalize.
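The white-on-white variant is simple enough that even a basic filter can flag it by comparing the FONT color to the BODY background color. A toy sketch of that check, assuming old-style HTML attributes like the example above (real filters would also evaluate CSS rules, layers, and computed styles):

```python
import re

def has_invisible_text(html: str) -> bool:
    """Flag FONT text whose color exactly matches the BODY background color.

    A toy version of the kind of check a spam filter might run; it only
    understands bgcolor/color attributes, not style sheets.
    """
    bg = re.search(r'<body[^>]*bgcolor="(#[0-9a-fA-F]{6})"', html, re.I)
    if not bg:
        return False
    background = bg.group(1).lower()
    text_colors = re.findall(r'<font[^>]*color="(#[0-9a-fA-F]{6})"', html, re.I)
    return any(color.lower() == background for color in text_colors)

hidden = '<body bgcolor="#ffffff"><font size=7 color="#ffffff">glucosamine</font></body>'
visible = '<body bgcolor="#ffffff"><font color="#000000">glucosamine</font></body>'
```

The `hidden` page trips the check because text and background are both #ffffff; the `visible` page does not.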

Hiding Links
Some Web designers create links specifically for the search engines to find, but not intended for site visitors. Links can be made to look exactly like all the other text on a page or may even be hidden on punctuation marks - visitors are unlikely to click a link on a period, so the link can be made invisible. Links may be placed in transparent images or invisible layers, in small images, or in <NOFRAMES> tags, or hidden in any of the ways discussed earlier.

Using Unrelated Keywords
This is a crude and perhaps little-used technique: using keywords that you know are being searched upon frequently, yet which have little or nothing to do with the subject of your site. A few years ago, many Web designers thought it was clever to place the word sex in their keywords meta tag or hide it somewhere in the page. This technique is used less frequently these days because these words are so incredibly competitive, and anyway, it's better to focus on keywords that can actually attract the right kind of visitor.

Duplicating Pages and Sites
If content with keywords is good, then twice as much content is better, and three times as much is better still, right? Some site developers have duplicated pages and even entire sites, making virtual photocopies and adding the pages to the site or placing duplicated sites at different domain names. Sometimes called mirror pages or mirror sites, these duplicate pages are intended to help a site gain more than one or two entries in the top positions. If you can create three or four Web sites that rank well, you can dominate the first page of the search results, with from four to eight entries out of the first ten. Some people who use this trick try to modify each page just a little to make it harder for the search engines to recognize duplicates. The search engines, in particular Google, have designed tools to find duplication and will often drop a page from their indexes if they find it's a duplicate of another page at the same site. Duplicate pages found across different sites are generally okay, which is why content syndication can work well, but entire duplicate sites are something the search engines frown on.
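Lightly modifying each copy doesn't help much, because near-duplicate detection compares overlapping word sequences ("shingles") rather than exact page text. The sketch below illustrates the principle; actual engines use far more scalable fingerprinting techniques, so treat this as a conceptual model only:

```python
def shingles(text: str, k: int = 3) -> set:
    """All overlapping k-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(page_a: str, page_b: str) -> float:
    """Jaccard similarity of two pages' shingle sets, from 0.0 to 1.0.

    Two near-duplicate pages share most of their shingles even after
    small edits, so the score stays high; unrelated pages score near 0.
    """
    a, b = shingles(page_a), shingles(page_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

Changing one word in a long page only removes the handful of shingles that contain it, which is why cosmetic edits rarely fool a duplicate filter.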

Page Swapping and Page Jacking
Here are a couple of variations on the duplication theme:

· Page swapping: This is a little-used technique of placing one page at a site and then, after the page has attained a good position, removing it and replacing it with a less-optimized page. One serious problem with this technique is that some search engines now reindex pages very quickly, and it's impossible to know when the search engine will return.
· Page jacking: Some truly unethical search engine marketers have employed the technique of using other people's high-ranking Web pages, in effect stealing pages that perform well for a while. This is known as page jacking.

Doorway and Information Pages
A doorway page is created solely as an entrance from a search engine to your Web site. Doorway pages are sometimes known as gateway pages and ghost pages. The idea is to create overly optimized pages that are picked up and indexed by the search engines, will rank well, and thus channel traffic to the site. Search engines hate doorway pages because they break one of the cardinal rules: They are intended for search engines, not visitors. The sole purpose of a doorway page is to channel people from the search engines to the real Web site.

One man's doorway page is another man's information page - or what some people call affiliate pages or advertising pages. The difference is that the information page is designed for use by the visitor, in such a manner that the search engines will rank it well, whereas the doorway page is designed in such a manner that it's utterly useless to the visitor because it's intended purely for the search engine; in fact, originally doorway pages were stuffed full of keywords, duplicated hundreds of times.

Doorway pages typically don't look like the rest of the site, having been created very quickly or even by some kind of program. Doorway pages are often part of other strategies; the pages used in redirects and cloaking are, in effect, doorway pages.
Where do you draw the line between a doorway page and an information page? Suppose a client of mine creates lots of pages designed for use by the site visitors - pages that, until my client started thinking about search engine optimization, would have been deemed unnecessary. Surely these pages are, by intent, doorway pages, aren't they, even if one could argue that they are useful in some way? Varying degrees of utility exist, and I know people in the business of creating "information" pages that are useful to the visitor in the author's opinion only! Also, a number of search engine optimization companies create doorway pages that they simply call information pages.

Using Redirects and Cloaking
Redirects and cloaking are pretty much the same thing in intent: to show one page to the search engines but a completely different page to the site visitor. Why do people want to do this? Here are a few reasons:
· If a site has been built in a manner that makes it invisible to search engines, cloaking allows the site owner to deliver indexable pages to the search engines while retaining the original site.
· The site may not have much textual content, making it a poor fit for the search engine algorithms. Although search engine designers might argue that this fact means the site isn't a good fit for a search, this argument clearly doesn't stand up to analysis and debate.
· Each search engine prefers something slightly different. As long as the search engines can't agree on what makes a good search match, why should they expect site owners and developers to accept good results in some search engines and bad results in others?

Understanding Redirects
A redirect is the automatic loading of a page without user intervention. You click a link to load a Web page into your browser, and within seconds, the page you loaded disappears, and a new one appears. Designers often create pages that are designed for the search engines - optimized, keyword-rich pages - that redirect visitors to the real Web site, which is not so well optimized. The search engines read the page, but the visitors never really see it. Redirects can be carried out in several ways:
· By using the REFRESH meta tag. But this is an old trick the search engines discovered long ago; most search engines won't index a page that has a REFRESH tag that bounces the visitor to another page in less than ten seconds or so.
· By using JavaScript to automatically grab the next page within a split second.
· By using JavaScript that is tripped by a user action that is almost certain to occur.

You are unlikely to get penalized for using a redirect, but a search engine may ignore the redirect page. That is, if the search engine discovers that a page is redirecting to another page - and to be honest, the search engines often don't find these redirect pages unless they use a REFRESH meta tag - it simply ignores the redirect page and indexes the destination page. Search engines reasonably assume that redirect pages are merely a way station on the route to the real content.
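The REFRESH meta tag is the easiest case for an indexer to spot, because the redirect is declared right in the page source. Here is a rough sketch of the kind of check an indexer might run; the function and regular expression are my own illustration, not any engine's real code:

```python
import re

def refresh_redirect_target(html: str):
    """Return (delay_seconds, url) if the page redirects via a REFRESH
    meta tag, or None if it doesn't.

    An indexer that finds a near-instant refresh can skip the
    way-station page and index the destination page instead.
    """
    match = re.search(
        r'<meta[^>]*http-equiv="refresh"[^>]*content="(\d+)\s*;\s*url=([^"]+)"',
        html, re.I)
    if match is None:
        return None
    return int(match.group(1)), match.group(2)

doorway = '<meta http-equiv="refresh" content="0;url=http://example.com/real.html">'
```

A zero-second delay, as in the `doorway` example, is the classic signature of a page meant only for the search engines.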

Examining Cloaking
Cloaking is a more sophisticated trick than a redirect, and harder for search engines to uncover than a basic REFRESH meta tag redirect. When browsers or searchbots request a Web page, they send information identifying themselves to the site hosting the page - e.g., "I am Version 6.1 of Internet Explorer." A cloaking program on the Web server checks this name against a list of known searchbots. If the device requesting the page isn't listed, the cloaking program tells the Web server to send the regular Web page, the one intended for site visitors. But if the device name is listed in the searchbot list - as it would be for Googlebot, for example - the cloaking program sends a different page, one that the designer feels is better optimized for that particular search engine. Here's how the two page versions differ:
· Pages provided to the search engine: Often much simpler; created in a way to make them easy for the search engines to read; have lots of heavily keyword-laden text that would sound clumsy to a real person.
· Pages presented to visitors: Often much more attractive, graphic-heavy pages, with less text and more complicated structures and navigation systems.
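The decision logic itself is trivially simple, which is partly why the search engines treat cloaking as deliberate deception rather than an accident. A hypothetical sketch of that logic; the bot list and file names are invented for illustration, and this is shown to explain the trick, not to recommend it:

```python
# A short, made-up list of searchbot user-agent fragments.
SEARCHBOT_NAMES = ("googlebot", "slurp", "msnbot")

def page_for(user_agent: str) -> str:
    """Pick which page to serve based on who is asking -- the core of a
    cloaking script. Browsers get the visitor page; known searchbots
    get the keyword-laden page.
    """
    ua = user_agent.lower()
    if any(bot in ua for bot in SEARCHBOT_NAMES):
        return "optimized-for-engines.html"  # heavily keyword-laden version
    return "regular-page.html"              # graphic-heavy visitor version
```

Note that this scheme is also fragile: an engine only has to crawl once with an unlisted user-agent name to see the visitor page and compare the two versions.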

The search engines don't like cloaking, and conservative search engine marketers steer well clear of this technique. Here's how Google defines cloaking:
"The term 'cloaking' is used to describe a Web site that returns altered Web pages to search engines crawling the site."
Well, it's clear that cloaking is cloaking, but wait a minute:

"In other words, the Web server is programmed to return different content to Google than it returns to regular users, usually in an attempt to distort search engine rankings."

Hang on, that's not the same thing:
"This can mislead users about what they will find when they click on a search result. To preserve the accuracy and quality of our search results, Google may permanently ban from our index any sites or site authors that engage in cloaking to distort their search rankings." Notice a few important qualifications: altered pages… usually in an attempt to distort search engine rankings… cloaking to distort their search rankings.

The Ultimate Penalty
Just how much trouble can you get into by breaking the rules? The most likely penalty isn't really a penalty: It's just that your pages won't work well with a search engine's algorithm, so they won't rank well. But it is possible to get the ultimate penalty, to have your entire site booted from the index. One of the dangers, then, of using tricks is that someone might report you, and if the trick is bad enough, you will get the boot.

What do you do if you have been penalized? Suppose your site is dropped from Google. It may not be a penalty - perhaps your site was not available at the point at which Google tried to reach it. But if the site doesn't return to the index after a few weeks, then you may have been penalized. Sometimes sites get penalized due to unintentional mistakes. Perhaps you hired a Web-development team that implemented various tricks without your knowing, or perhaps your company gave you a site to look after long after the tricks were used. Or maybe you purchased a domain name that was penalized due to dirty tricks in the past. Stuff happens: Be truthful and explain the situation to the folks at Google.

If you think your site has been banned, clean away any dirty tricks from your site and then email help@google.com to explain that you fixed your site. Don't expect a rapid reply. Wait a couple of weeks and then try again. Still no reply? Try again after another couple of weeks, and if you still can't get a response, try calling 650-330-0100 and pressing 0 to ask the operator who you can talk to about the problem. You may be given another email address to try, along with a password to put in the Subject line.



All rights reserved by SEOTOPPERS © 2004