Sitemap Xml plugin

There are two popular types of sitemaps, and nopCommerce supports both of them. An XML sitemap is a structured file that visitors don't need to see; it gives search engines information about the pages on a site, their relative importance to each other, and how often they are updated. An HTML sitemap is designed for users, to help them find content on the site, and doesn't need to include every subpage. HTML sitemaps help both visitors and search engine bots find pages on the site.
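To illustrate, here is a minimal XML sitemap in the sitemaps.org format that such plugins generate (the URL and values below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2004-09-22</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Only the loc element is required; lastmod, changefreq, and priority are optional hints for crawlers.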

Sitemap Xml plugin - Features
  • Free: Include/Exclude Home page
  • Free: Include/Exclude Product Search page
  • Free: Include/Exclude Contact Us page
  • Free: Include/Exclude News Archive page
  • Free: Include/Exclude Blog page
  • Free: Include/Exclude Boards page
  • Free: "https://ima9ines.com" is included in the sitemap.xml
  • Pro: Exclude Categories
  • Pro: Exclude Products
  • Pro: Exclude Manufacturers
  • Pro: "https://ima9ines.com" is excluded from the sitemap.xml
Sitemap Xml plugin - Installation
  1. Download the Sitemap Xml plugin - download link
  2. Extract the package
  3. Copy the ima9ines.Nop.SitemapXml folder, and then paste it under the Plugins folder
  4. Go to Admin > Configuration > Plugins > Local plugins
  5. On the Local plugins page, click the Reload list of plugins button
  6. Find the ima9ines - Sitemap Xml plugin and click the Install button
  7. After it is installed, click the Edit button, and then enable it
Sitemaps - FAQs

Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site.

Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site.

Sitemap 0.90 is offered under the terms of the Attribution-ShareAlike Creative Commons License and has wide adoption, including support from Google, Yahoo!, and Microsoft.

As with all XML files, any data values (including URLs) must use entity escape codes for the following characters: ampersand (&), single quote ('), double quote ("), less than (<), and greater than (>). You should also make sure that all URLs follow the RFC-3986 standard for URIs, the RFC-3987 standard for IRIs, and the XML standard. If you are using a script to generate your URLs, you can generally URL escape them as part of that script, but you will still need to entity escape them. For instance, the following Python session entity escapes http://www.example.com/view?widget=3&count>2

$ python3
>>> import xml.sax.saxutils
>>> xml.sax.saxutils.escape("http://www.example.com/view?widget=3&count>2")

The resulting URL from the example above is:

http://www.example.com/view?widget=3&amp;count&gt;2

Use W3C Datetime encoding for the lastmod timestamps and all other dates and times in this protocol. For example, 2004-09-22T14:12:14+00:00.

This encoding allows you to omit the time portion of the ISO 8601 format; for example, 2004-09-22 is also valid. However, if your site changes frequently, you are encouraged to include the time portion so crawlers have more complete information about your site.
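A W3C Datetime value in either form can be produced with Python's standard library (a sketch; the variable names are illustrative):

```python
from datetime import datetime, timezone

# Full W3C Datetime with seconds precision, e.g. "2004-09-22T14:12:14+00:00"
lastmod = datetime.now(timezone.utc).isoformat(timespec="seconds")

# Date-only form, also valid for lastmod, e.g. "2004-09-22"
lastmod_date = datetime.now(timezone.utc).date().isoformat()
```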

For static files, this is the actual file update date. You can use the UNIX date command to get this date:

$ date --iso-8601=seconds -u -r /home/foo/www/bar.php
>> 2004-10-26T08:56:39+00:00

For many dynamic URLs, you may be able to easily compute a lastmod date based on when the underlying data was changed or by using some approximation based on periodic updates (if applicable). Using even an approximate date or timestamp can help crawlers avoid crawling URLs that have not changed. This will reduce the bandwidth and CPU requirements for your web servers.

Sitemaps should be no larger than 50MB (52,428,800 bytes) and can contain a maximum of 50,000 URLs. These limits help to ensure that your web server does not get bogged down serving very large files. This means that if your site contains more than 50,000 URLs or your Sitemap is bigger than 50MB, you must create multiple Sitemap files and use a Sitemap index file. You should use a Sitemap index file even if you have a small site but plan on growing beyond 50,000 URLs or a file size of 50MB. A Sitemap index file can include up to 50,000 Sitemaps and must not exceed 50MB (52,428,800 bytes). You can also use gzip to compress your Sitemaps.
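The splitting described above can be sketched in Python; the file names and base URL here are hypothetical, and only the URL-count limit is enforced (a production generator would also check the 50MB size limit):

```python
from xml.sax.saxutils import escape

MAX_URLS = 50_000  # per-sitemap URL limit from the protocol

def build_sitemaps(urls, base="http://www.example.com"):
    """Split urls into <=50,000-URL sitemap documents plus an index
    document. Returns (index_xml, [sitemap_xml, ...])."""
    chunks = [urls[i:i + MAX_URLS] for i in range(0, len(urls), MAX_URLS)]
    sitemaps = []
    for chunk in chunks:
        entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in chunk)
        sitemaps.append(
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{entries}\n</urlset>"
        )
    index_entries = "\n".join(
        f"  <sitemap><loc>{base}/sitemap-{i + 1}.xml</loc></sitemap>"
        for i in range(len(chunks))
    )
    index = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{index_entries}\n</sitemapindex>"
    )
    return index, sitemaps
```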

Does the URL in the Sitemap need to be fully specified? Yes. You need to include the protocol (for instance, http) in your URL. You also need to include a trailing slash in your URL if your web server requires one. For example, http://www.example.com/ is a valid URL for a Sitemap, whereas www.example.com is not.
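A quick programmatic check for this requirement can be sketched with the standard library (a rough heuristic, not an exhaustive validator — it only verifies that a scheme and host are present):

```python
from urllib.parse import urlsplit

def is_valid_sitemap_url(url):
    """Return True if the URL is fully specified: it carries an
    http/https scheme and a host component."""
    parts = urlsplit(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)
```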
