Ade Camilleri - Digital Strategist

A lot of marketers nowadays forget about simple, basic SEO and instead develop super complex, powerful SEO campaigns focused on many different elements, except the most fundamental ones. Making sure Google's bots can easily crawl your website, and checking that you have actually submitted your XML sitemaps in Google Search Console, are crucial steps toward making sure your campaign succeeds.

Do a quick search on Google for “SEO tips” and you’ll get over 14 million results. That’s a lot of tips to wade through when trying to figure out the focus of your SEO strategy. And that’s just one search.

Each year, new posts appear listing the “hottest” tips and tricks that are “guaranteed” to work. While many of these tips are great, to really see results you need a solid foundation. In this post, I want to talk about getting back to the basics of SEO and why they are essential to long-term success.

When it comes to optimizing your site for search, the basics are some of the most important, yet often overlooked, aspects of SEO. The recent push of “content is king” has also caused many to forget the essentials and just focus on content distribution.

Here’s the deal: you can post all the content you want, but if your site isn’t optimized, you’re not going to get the rankings you want. So here are a few basics you should cover before ever diving into the more complex elements of search.

Crawler access

If search engine crawlers have a hard time crawling your site, they’ll have a hard time indexing and ranking your pages, too. As a site owner or SEO, your first and most important job is to make sure your site is crawlable. Using a robots.txt file, you can direct web crawlers as they work through your site.

There are certain pages on your site that you probably don’t want the crawlers to index, such as login pages or private directories. You can block files, pages and/or directories by specifying them as “disallowed,” like so:

User-agent: *
Disallow: /cgi-bin/
Disallow: /folder
Disallow: /private.html

You can also block certain crawlers from accessing your site using the following (replace “BadBot” with the actual bot name you’re trying to block):

User-agent: BadBot
Disallow: /

Just be careful when blocking crawlers from your entire site; in fact, don’t do it unless you know for a fact that a particular bot is causing you trouble. Otherwise, you may end up blocking crawlers that should have access to your website, which could interfere with indexing.
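
If you want to double-check what your rules actually allow before moving on, a small sketch like the one below can help. It uses Python’s built-in urllib.robotparser module, and the example.com URLs are placeholders rather than anything from this article:

from urllib.robotparser import RobotFileParser

# Point the parser at your live robots.txt (placeholder domain here).
rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()  # fetches and parses the file over the network

# can_fetch() answers one question: may this user agent crawl this URL?
for url in ("https://www.example.com/",
            "https://www.example.com/private.html",
            "https://www.example.com/folder/page.html"):
    print(url, "->", rp.can_fetch("Googlebot", url))

If a URL you expect Google to index comes back as blocked, revisit your Disallow rules before doing anything else.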

If you are using WordPress, there are a number of plugins that can create and manage this file for you. If you are not using WordPress, you can easily set up a robots.txt file yourself: it’s a plain text file that lives at the root of your domain (for example, https://www.example.com/robots.txt).

After you’ve created your robots.txt file, it’s important to make sure Google can find and crawl all of your pages. To do so, you’ll need to create an XML sitemap and submit it in Google Search Console. The sitemap can be built manually or with third-party tools. (If you have a WordPress site, there are many plugins available that will generate a sitemap for you.)
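
For reference, a bare-bones XML sitemap following the sitemaps.org protocol looks like the snippet below; the URLs and date are placeholders, not values from this article:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2017-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/about/</loc>
  </url>
</urlset>

Once the file is uploaded to your server (typically as /sitemap.xml at the root of your domain), submit its URL in Google Search Console so Google knows where to find it.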


Ade Camilleri is a leading Internet marketer and Data Marketing Analyst for some of the world’s leading companies, with a focus on tourism, public companies, government organisations and iGaming. He is part of the Media Plan group of companies, which provides digital marketing services for iGaming, tourism and Fortune 500 companies.