What's Important to Search Engines and What's Not
March 21, 2007

I recently had an inquiry from someone who was looking into possible SEO consulting with me. He was in the process of a redesign and wanted to be sure the new site would be as search engine friendly as possible.
The interesting part of the email was this person's misconceptions about what the search engines actually care about. Here's the list from his email, with my comments on each point:
* Little or no Flash.
This is a huge misconception among many who are trying to design search engine friendly sites. You don't have to avoid Flash altogether; you just need to make sure the text and links you care about are also available as plain HTML.
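One common way to keep the important content crawlable when you do use Flash is to put an ordinary HTML version of the text and links inside the object element as fallback content. A rough sketch only; the file name, dimensions, and copy here are made up for illustration:

    <object type="application/x-shockwave-flash" data="/flash/intro.swf" width="600" height="400">
      <param name="movie" value="/flash/intro.swf">
      <!-- Fallback HTML: shown to non-Flash visitors and readable by crawlers -->
      <h2>Widget Services</h2>
      <p>A plain HTML version of the important copy, with crawlable links:</p>
      <a href="/services.html">Our services</a>
    </object>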
* All scripts should be called from external files.
This is a great idea for keeping file size down and making it easy to update your site, but it isn't something the search engines require or reward.
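For what it's worth, the change is trivial to make. A small before-and-after sketch; the file name main.js and the function are just stand-ins:

    <!-- Script embedded in the page (repeated on every page that needs it) -->
    <script type="text/javascript">
      function toggleMenu(id) {
        var el = document.getElementById(id);
        el.style.display = (el.style.display === 'none') ? 'block' : 'none';
      }
    </script>

    <!-- The same code kept once in an external file -->
    <script type="text/javascript" src="/js/main.js"></script>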
* The site should be designed using CSS as extensively as possible.
Another myth. CSS doesn't have any special properties that search engines reward. A CSS-based layout can make for leaner code, but a table-based layout can be crawled and ranked just as well.
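To illustrate, here's the same bit of content marked up with a table-based layout and with a CSS-based layout; the crawler sees the same words either way (the class name is invented):

    <!-- Table-based layout -->
    <table width="760"><tr><td width="560">
      <h1>Blue Widgets</h1>
      <p>We sell blue widgets in every size.</p>
    </td></tr></table>

    <!-- CSS-based layout: leaner markup, identical indexable text -->
    <div class="main-column">
      <h1>Blue Widgets</h1>
      <p>We sell blue widgets in every size.</p>
    </div>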
* The CSS should be called from external files.
Same as calling up scripts in external files — nice to do, but not a search engine factor.
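Again, a small before-and-after sketch; styles.css is a placeholder name:

    <!-- Styles embedded in the head of every page -->
    <style type="text/css">
      #nav { float: left; width: 180px; }
    </style>

    <!-- The same rules pulled from one external file -->
    <link rel="stylesheet" type="text/css" href="/styles.css">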
* No comments should be left in the code.
Why not? I'm not sure where this myth came from, but I suppose if you're stuffing your comment tags full of keywords it could look spammy. Ordinary comments are simply ignored by the search engines, so there's no need to strip them out.
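For anyone unsure what's meant by comments in the code, here's a harmless example; the engines skip right past it, though as a commenter notes below, anyone viewing your source can still read it (the include path is made up):

    <!-- NOTE to the next developer: the sidebar lives in includes/sidebar.html -->
    <p>Ordinary page copy that gets indexed as usual.</p>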
* A large percentage of the code on each page needs to change from page to page.
Nope. You certainly do NOT have to change the code in your pages to avoid a duplicate content penalty. I'm here to tell you that there is no such thing as a search engine penalty for duplicate content.
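To make that concrete, here's the kind of page skeleton in question (the ids and file names are invented). The header, navigation, and footer markup repeat on every page, and only the content block changes, which is perfectly normal:

    <div id="header">Same logo and tagline on every page</div>
    <div id="nav">
      <a href="/">Home</a> <a href="/about.html">About</a> <a href="/contact.html">Contact</a>
    </div>
    <div id="content">
      <h1>About Us</h1>
      <p>The page-specific copy goes here.</p>
    </div>
    <div id="footer">Same copyright notice on every page</div>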
* All picture links should have text links under the pictures.
No reason for that at all. Image links that make use of the image alt attribute work just fine; the alt text essentially acts as the link's anchor text.
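For example, an image link like this one (the file names are hypothetical) gives the engines plenty to work with through its alt text:

    <a href="/widgets.html">
      <img src="/images/widgets-button.gif" alt="Blue widgets for sale" width="120" height="40">
    </a>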
* DO NOT use drop-down or fly-out menus using JavaScript.
This is fairly good advice; however, there are very easy workarounds if you really want that kind of menu (see the sketch below).
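One easy workaround is to build the drop-down from plain nested lists and let CSS handle the showing and hiding, so the links stay as ordinary href attributes a crawler can follow. A rough sketch with made-up URLs:

    <!-- In your stylesheet: hide submenus until their parent item is hovered -->
    <style type="text/css">
      #menu li ul { display: none; }
      #menu li:hover ul { display: block; }
    </style>

    <!-- In the page: the links are ordinary anchors a crawler can follow -->
    <ul id="menu">
      <li><a href="/services.html">Services</a>
        <ul>
          <li><a href="/services/seo.html">SEO Consulting</a></li>
          <li><a href="/services/design.html">Site Design</a></li>
        </ul>
      </li>
    </ul>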
* Must use basic HTML link navigation (textual navigation, no JavaScript links).
Yes and no. JavaScript links are definitely a no-no. But there are plenty of crawler-friendly ways to link besides plain text; image links with good alt text, for instance, work fine.
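The distinction looks like this (openPage is a made-up function name): the first link gives the crawler nothing to follow, while the second and third do:

    <!-- A JavaScript link: no real URL for the crawler to follow -->
    <a href="javascript:openPage('widgets')">Widgets</a>

    <!-- Crawler-friendly alternatives: a text link and an image link with alt text -->
    <a href="/widgets.html">Widgets</a>
    <a href="/widgets.html"><img src="/images/widgets.gif" alt="Widgets"></a>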
* The HTML and CSS must validate.
Why? This has nothing to do with search engines. It's nice to do, though.
* The pages must be static, not dynamically generated.
'Fraid not. Dynamic pages are just as easy to crawl and rank as static ones.
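For context, the distinction people worry about looks something like this (example.com and the parameters are invented). A simple dynamic URL crawls fine; the Google guideline quoted in the comments below is aimed at the messier kind:

    <!-- A typical dynamic URL that crawls and ranks without trouble -->
    <a href="http://www.example.com/products.php?category=widgets">Widgets</a>

    <!-- The kind the guidelines warn about: many parameters plus an id/session value -->
    <a href="http://www.example.com/products.php?cat=3&amp;sort=2&amp;id=8F2A91C4">Widgets</a>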
* The site needs to be browser-compatible and screen-resolution-compatible.
This is another thing that's nice to do for your site visitors, but it has no bearing on how the search engines rank your pages.
Phew! I hope this helped clear up a lot of misconceptions that anyone else may have had.

Comment
(ach, line 2 edited out my code sample): like comment-start The password to the database is googleyeyes comment-end

Comment @ 03/22/07 at 11:25 am
Interesting and good points. However, regarding the image links, if you read the Google Webmaster Guidelines, the recommendation is: "Try to use text instead of images to display important names, content, or links. The Google crawler doesn't recognize text contained in images." I'm not saying image links won't work, but I believe it's a good idea to follow Google's guidelines. Same goes for the validation issue. Google recommends: "Check for broken links and correct HTML."

Comment @ 03/22/07 at 11:48 am
A wonderful post written by someone who's not done their testing homework. Calling files from outside the page and CSS-controlled layouts actually do make a difference. Try making two identical pages, and make them rank for stupid terms. Like your own private SEO contest. Then do one with the CSS and JavaScript etc. all in the code, and one calling them externally. See which one ranks better. : ) Fun little test for you. I don't doubt the difference is small, but it's there.

Comment @ 03/22/07 at 12:06 pm
Comment @ 03/22/07 at 1:30 pm
"Interesting and good points. However, regarding the image links, if you read Google Webmaster Guidelines, the recommendation is: 'Try to use text instead of images to display important names, content, or links. The Google crawler doesn't recognize text contained in images.'"

Who cares what Google says? It's simply not true. Image links are not now, nor have they ever been, a problem. Why they say that is completely beyond me, as there's simply no reason for it. It is definitely true that they can't read the text that's written as an image, but that's what the alt attribute text is for!

1) You say: "Dynamic pages are just as easy to crawl and rank as static pages." Google says: "Don't use &ID= as a parameter in your URLs."

That's correct. The vast majority of dynamic pages don't use that parameter, however. You guys really have to get away from looking at the Google guidelines. That's all they are, "guidelines," not gospel. You never need to read guidelines if you simply use common sense.

Comment @ 04/12/07 at 5:50 am
"You never need to read guidelines if you simply use common sense."

The reason there's so much misinformation out there is that too many people follow common sense.

"I'm here to tell you that there is no such thing as a search engine penalty for duplicate content."

Absolutely not true, according to Adam Lasnik: "In the rare cases in which we perceive that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we'll also make appropriate adjustments in the indexing and ranking of the sites involved. However, we prefer to focus on filtering rather than ranking adjustments ... so in the vast majority of cases, the worst thing that'll befall webmasters is to see the 'less desired' version of a page shown in our index."

Common sense will not lead you to the truth when it comes to search engines. Why? Because Google is a piece of code, and with any piece of code, there are bugs. Things happen that aren't supposed to happen. Cloaking that *should* be detected goes ignored. HTML that should get parsed correctly isn't. Irrelevant .edu parasite-hosted pages that should never rank for "buy vi@gr@" outrank vi@gr@.com. Let's stick to facts, not common sense.
Comment @ 03/22/07 at 4:35 am
This is a GREAT article. Thank you! Some questions/comments:
- The cool thing about CSS is the push toward XHTML. Clean code makes it easier to separate the text from the markup, so that's an SEO benefit.
- Comments: I think this myth originated from programming. People leave comments in code forgetting that HTML and JavaScript comments are published. Like
- Do you have any idea why JavaScript isn't read? That's never made any sense to me. JavaScript is really easy to parse; do you think it's just that they haven't bothered to parse it yet? I wonder because there are so many millions of template websites out there that provide only JavaScript for navigation.
- Valid HTML and valid CSS: same as above, it's easier to parse, so the content is more easily discerned.