Oracle Data Cloud Crawler is an automated robot that visits web pages to examine and analyze the content. In this sense it is similar to the robots used by search-engine companies.
Oracle Data Cloud Crawler is identified by one of the following user-agents:
Mozilla/5.0 (compatible; GrapeshotCrawler/2.0; +http://www.grapeshot.co.uk/crawler.php)
Mozilla/5.0 (iPhone; CPU iPhone OS 8_3 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12F70 Safari/600.1.4 (compatible; GrapeshotCrawler/2.0; +http://www.grapeshot.co.uk/crawler.php)
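If you want to recognize the crawler in your own server logs or request handlers, a simple substring check on the user-agent is enough, since both strings above contain the "GrapeshotCrawler" token. This is a minimal sketch; the helper name is ours, not part of any Oracle API:

```python
# Detect Oracle Data Cloud Crawler (GrapeshotCrawler) by its
# user-agent token. Works for both the desktop and mobile variants.
def is_grapeshot_crawler(user_agent: str) -> bool:
    return "GrapeshotCrawler" in user_agent

desktop_ua = ("Mozilla/5.0 (compatible; GrapeshotCrawler/2.0; "
              "+http://www.grapeshot.co.uk/crawler.php)")
print(is_grapeshot_crawler(desktop_ua))           # True
print(is_grapeshot_crawler("Mozilla/5.0 (X11)"))  # False
```

Remember that user-agent strings can be spoofed, so combine this with the IP-range check described below.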
Oracle Data Cloud Crawler can be identified by requests coming from Oracle-owned IP address ranges. If you suspect that requests are being spoofed, check the request's IP address against the appropriate RIPE database using a suitable whois tool or lookup service. At the time of writing, the only addresses in use for Oracle Data Cloud Crawler are:
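Once you have the published ranges, a membership check against them is straightforward. The sketch below uses Python's standard `ipaddress` module; the CIDR shown is the RFC 5737 documentation range standing in for Oracle's actual published addresses, which you should substitute in:

```python
import ipaddress

# Placeholder range (RFC 5737 documentation block) -- replace with the
# crawler ranges published by Oracle before using this in production.
CRAWLER_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def is_from_crawler_range(ip: str) -> bool:
    """Return True if the source IP falls inside a known crawler range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CRAWLER_RANGES)

print(is_from_crawler_range("203.0.113.45"))  # True
print(is_from_crawler_range("198.51.100.7"))  # False
```

Checking the source IP this way is more reliable than the user-agent string alone, since the user-agent is trivially spoofable.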
Oracle Data Cloud assists advertisers placing contextual advertising on web pages. In order to do this it is necessary to examine, or crawl, the pages to determine which category or categories are the best matches.
Pages are only visited on demand. If Oracle Data Cloud Crawler has visited your site, it means an ad was recently placed on a page for which the information was either not yet available or needed to be refreshed. For this reason, you will often see a request from Oracle Data Cloud Crawler shortly after a user has visited a page. The crawler systems are engineered to be as friendly as possible: they limit request rates to any specific site and automatically back off if a site is down, slow, or repeatedly returning non-200 (OK) responses.
A significant chain of systems may cause Oracle Data Cloud to analyze your site. Oracle Data Cloud Crawler provides real-time contextual information to a number of Real Time Bidding (RTB) systems, such as Rubicon, AppNexus, and more. These RTB systems are often used by third-party ad server systems as part of their ad serving strategy.
Oracle Data Cloud does not provide a search engine system to anyone, and never makes the crawled contents of your site available by any search or other system. We only analyze your site when an ad has been placed that has caused us to be queried about the context of the page.
You can use robots.txt files to block Oracle Data Cloud Crawler from your site, as shown in the following examples:
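A minimal robots.txt sketch follows. It assumes the crawler honors the "grapeshot" user-agent token; confirm the exact token with Oracle's current documentation before relying on it:

```
# Block Oracle Data Cloud Crawler from the entire site
User-agent: grapeshot
Disallow: /
```

To exclude only part of a site, list the specific paths instead:

```
# Block Oracle Data Cloud Crawler from one directory only
User-agent: grapeshot
Disallow: /private/
```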
We take requests to stop crawling a site, or parts of a site, and any other feedback on our operations seriously, and will act on them promptly and appropriately. As a best practice, we check robots.txt files once a day, so changes to a site's file can take up to 24 hours to take effect. Contact us at email@example.com and we will exclude your site, or otherwise investigate immediately.
If you think your site is being visited in error, or if Oracle Data Cloud Crawler is causing problems on your site, contact Oracle Data Cloud at firstname.lastname@example.org and we will investigate.