Pages are Indexed but not Cached by Google. Why?
-
Here's an example:
I get a 404 error for this:
But a search for qjamba restaurant coupons gives a clear result, as does this:
site:http://www.qjamba.com/restaurants-coupons/ferguson/mo/all
What is going on? How can this page be indexed but not in the Google cache?
I should make clear that the page is not showing up with any kind of error in Webmaster Tools, and Google has been crawling pages just fine. This particular page was fetched by Google yesterday with no problems, and was even crawled again twice today by Google. Yet, no cache.
-
Thanks Travis,
I have discovered that for some users the initial page loads up to 3 times almost immediately, without anything visible happening. It never happens to me or on any of the browser/system combinations I use in remote machine testing - even if I match the setup of someone who is getting the problem - but there is no question it is happening. This was triggering the robot message I was serving. I don't yet know the cause, as the typical culprits don't apply. I relaxed the rule by 1 more load in 30 seconds, which is why you didn't get a message. I'm going to use the computer of someone who gets the problem tomorrow to try and narrow it down.
Agree on the human testing. Thanks for the suggestions.
take care
-
Good to hear you may be getting closer to the root of the problem. Apologies that it took so long to get back to you here. I had 'things'.
I followed the steps and you should be able to determine the outcome. Spoiler Alert: No block, this time.
It's a whole other can of worms, but should you need more human testing on the cheap, you may find Mechanical Turk attractive. One could probably get a couple hundred participants for under a couple hundred dollars, with a task comparable to the one above.
Just a thought...
-
Hi Travis.
Thanks for the info re Screaming Frog.
I didn't whitelist your IP. I just changed the number in my files to something else, and that IP would already be unblocked anyway since a day has passed.
The scenario you gave would be quite rare and wouldn't create a block, because a block requires everything happening in 30 seconds or less (and the session wouldn't expire in that time frame), or the same IP address also trying to crawl my site with a bot in the user agent (your scenario with Screaming Frog). But your experience and Max's are looking more and more like they are commonplace, and I'm the fool who hasn't known that's what's happening, because I can't distinguish between you and a robot (which doesn't keep sessions).
All I need is to verify that I have a sessions problem. Here's what it takes:
1. use a desktop or laptop
2. remove all cookies related to qjamba
3. go to http://www.qjamba.com
4. choose an option (restaurants) and a location (Saint Louis) and click
5. don't do anything else - just close out the tab
6. If your IP address changes, let me know when you do this so I can find it in the logs.
In all my usage it keeps the session between steps 3 and 4. It looks like for you and Max it doesn't do that, which means many of my users would be having the same terrible experience as you, for the 3 months since it has been live. It's a disaster. But I have to first verify that it really is a problem, and unfortunately I have to rely on strangers like you who are experiencing the problem to do that.
If you just do those steps, I promise we'll be done.
Thanks, Ted
-
I can't really argue with log files, in most instances. Unfortunately, I didn't export crawl data. I used to irrationally hoard that stuff, until I woke up one day and realized one of my drives was crammed full of spreadsheets I will never use again.
There may be some 'crawlability' issues beyond the aggressive blocking practices, though I managed to crawl 400+ URIs before timeouts, after I throttled the crawl rate back the next day. Screaming Frog is very impressive, but Googlebot it ain't, even though it performs roughly the same function. Though, given enough RAM, it won't balk at magnitudes greater than the 400 or so URIs. (I've seen... things...) And with default settings, Screaming Frog can easily handle tens of thousands of URIs before it hits its default RAM allocation limit.
It's more than likely worth your while to purchase an annual license at ~$150. That way, you get all the bells and whistles - though there is a stripped-down free version. There are other crawlers out there, but this one is the bee's knees. Plus you can run all kinds of theoretical crawl scenarios.
But moving along to the actual blocking, barring the crawler, I could foresee a number of legit use scenarios that would be comparable to my previous sessions. Planning night out > Pal sends link to site via whatever > Distracted by IM > Lose session in a sea of tabs > Search Google > Find Site > Phone call > Not Again... > Remember domain name > Blocked
Anyway, I just wanted to be sure that my IP isn't whitelisted, just unblocked. I could mess around all night trying to replicate it, without the crawling, just to find I 'could do no wrong'. XD
Otherwise it looks like this thread has become a contention of heuristics. I'm not trying to gang up on you here, but I would err on the side of plenty. Apt competition is difficult to overcome in obscurity. : )
-
Massimiliano, my guess at your path was the most logical conclusion based on the fact that I have 3 records of the URLs you went to on my site, showing that the program didn't keep any session variables between the 3 URLs you came to. You first went to wildwood. Then you went to the home page. This implies that you either did that in a new tab, or you hit the back key, or you modified the URL and removed the wildwood part to go to the home page, as opposed to clicking on something on the page. Telling me I'm wrong at least lets me know I may have a serious problem to fix, but you are mistaken to think that this is a robot problem. It is a PHP session variable problem, apparently, that none of my extensive (hundreds of hours of) testing has ever turned up.
This is a serious problem unrelated to the OP, and about 100 times more important than the OP, that I was hoping to get some help with, because it is very difficult to diagnose without feedback from those having the experience that you had with my site. However, that's my problem and I'll have to deal with it. I don't know if you just don't remember, or aren't telling me because you think it is a robot problem, but if you do happen to recall the steps (or at least tell me it was all done in the same tab, or that you hit the back key), I'd appreciate whatever you can tell me. If I can't solve the problem it probably means I'll have to shut down my website, which I've put more than 4 years of my life into. Seriously.
Thanks for your various other responses though. Take care. Ted
-
You are totally wrong guessing my path. You are going down a tunnel which doesn't have an exit. Personally I think, in this thread, you got some good advice about what you should focus on, so I would stop feeling dismayed and confidently steer away from bad practices. Good luck.
-
Massimiliano,
Can you tell me the steps that led to that error? It looks like you went directly to www.qjamba.com/local-coupons/wildwood/mo/all, then you opened up a separate tab and went to www.qjamba.com, and then either refreshed the home page or opened the home page again in another tab -- all within 30 seconds. That's the only way I have been able to reproduce this, because it looks for 3 searches without any existing session within 30 seconds from the same IP address, the home page wipes out the session and cookies, and those are the URLs the db table shows that you went to, in that order.
Normally a user stays in the same tab, so with the 2nd search there will be a session -- but your IP had no session with any of the searches. And normally you can't go to the home page from a location page. So I'm confused as to what you did, if it wasn't what I wrote above. If you didn't do this, then I'm worried about a serious programming problem having to do with the PHP sessions getting dropped between pages.
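To make that concrete, the check that tripped is conceptually along these lines -- a simplified sketch, not my real code; $pdo here is assumed to be an open database connection and the table/column names are placeholders:

```php
<?php
// Simplified sketch of the block rule: a third session-less search from the same IP
// within 30 seconds triggers the "bot-like activity" block. Not the actual code;
// $pdo is assumed to be an open PDO connection and the table is hypothetical.
session_start();

if (!isset($_SESSION['visitor_id'])) {           // no session carried over from the last page
    $ip  = $_SERVER['REMOTE_ADDR'];
    $now = time();

    // Record this session-less hit, then count recent ones from the same IP.
    $pdo->prepare('INSERT INTO sessionless_hits (ip, hit_time) VALUES (?, ?)')
        ->execute(array($ip, $now));

    $stmt = $pdo->prepare('SELECT COUNT(*) FROM sessionless_hits WHERE ip = ? AND hit_time > ?');
    $stmt->execute(array($ip, $now - 30));

    if ((int) $stmt->fetchColumn() >= 3) {
        // The real check also remembers the block for a day; omitted here.
        exit('This IP has been blocked for today due to activity similar to bots.');
    }
}
```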
I've put a lot of time into this website, and a ton of testing of it too, and it just went live a few months ago, so these kinds of problems are disheartening. Ironically, your experience is almost identical to that of Travis, except that in your case you must have moved a little faster, since you got a different message. But it would REALLY help me to get some feedback from both of you confirming what I wrote, or setting me straight if you did something different.
-
The second example is talking about a new page that never existed before, i.e. new-example.html... So you created a wholly new page on your site. You see that it gets crawled, you go to Google to see if it gets indexed.
Again though, the lower your site's domain authority and trust, the higher the chance of pages getting indexed more slowly, getting de-indexed, and not showing up high in the rankings.
Remember my earlier suggestion video? You're sweating the details and minutiae way too much at the expense of doing what would really move the needle for optimization at your site's stage (getting reputable links from other domains). The same goes for what you're doing in trying to block certain activity on your site. Normal user activity is getting messed up -- see Massimiliano's and Travis's experience.
This is probably the best advice at this stage: instead of spending one more second on this Q/A thread and trying to see how many minutes transpired between your own changes and seeing them in Google, spend that time to go get 10 good links. No need to even thank me. I'll take the silence as your newly enlightened bliss.
-
Geez, I'm so pedantic sometimes. Just need to understand what this means:
<<OR, you make a new page. It gets crawled. **Checking if it's indexed**... no, no, no, no, yes?! That's how long it takes.>>
How do you do the bolded? site:www.site.com/thepage "my content change on the page" ?
And, you did say one can change and not the other yet the page really has been indexed, right?
-
THANK YOU!
-
Thanks for sharing that. I was only kidding above, but obviously it's no joking matter when a user gets blocked like you did.
I just looked and see that it blocks when something/someone clicks 3 times within 30 seconds. EDIT: but that's only if it isn't keeping the session between clicks--see next post
-
Yeup! Indexing time varies. You'll be able to tell the time between crawl and indexation by when Google shows your page version B in its cache after you made changes from A. So if the 'example.html' page is already in Google's index, you'll see this:
You make changes on a page, example.html (version A is now version B).
Google crawls example.html (version B).
You check Google to see if example.html is version A or B in the cache:
no?
no?
no?
no?
yes. That's how long it takes.
OR, you make a new page. It gets crawled. Checking if it's indexed... no, no, no, no, yes?! That's how long it takes.
Again, this time period varies and having a site with excellent domain strength and trust usually makes it a shorter time period. It also tends to influence how many pages Google decides to keep in its index or show to users. Pretty much everything gets better for a site the stronger its domain authority and trust are.
-
First of all, I was just browsing and I got blocked as a bot, see below:
I would remove that cloaking.
Second, understanding your visitors' behavior is one of the most complex tasks; you don't know your users' behavior until you run a lot of tests, surveys, and so on...
-
Well, then I totally agree with you, Ryan, thanks for the answer. With a DA of 1, you are absolutely right.
-
I'm sorry, but once I know they have crawled a page, shouldn't there be a way to know when it has also been indexed? I know I can get them to crawl a page immediately, or nearly so, by fetching it. But I can't tell about the indexing -- are you saying that after they crawl the page, the 'time to indexing the crawled page' can vary by site, and there really is no way to know when it is in the new index? That is, it might show as newly cached but that doesn't mean it has been indexed too, or it can be indexed and not show up in a site:www... search, etc.?
-
Massimiliano, thanks for your input. So you're one of them, huh? Good points. The last thing I want to do is annoy users, yet I also want to track 'real' usage, so there is a conflict. I know it is impossible to block everything I don't want, as there is always another trick to employ... I'll have to think about it more.
Yeah, the cut-and-paste blocking is annoying to anyone who would want to do it. But none of my users should want to do it. My content is in low demand, but I hate to make anything easier for potential competition, and some who might be interested won't know how to scrape. Anyway, thanks for your feedback on that too.
-
Ah, that answer really varies per website. For example, if your site is a major news site, Google's indexation is extremely fast, measured in seconds, not days. Even if you're not a news site, major sites (high domain authority) get crawled and indexed very rapidly. Since you're going to be testing your own changes, you'll learn how long this takes for your particular site.
-
Well, I'm ready to test -- but still not quite sure how, since I don't know how to tell when Google has indexed the new content: sometimes it doesn't get cached, and sometimes it disappears from the site:www... listing. I've read it only takes a couple of days after Google crawls the page, and I can go with that, but I was hoping there is a way to actually 'see' the evidence that it has been indexed.
So, while I've gotten some great input, I am somewhat unsatisfied because I'm not sure how to tell WHEN my content has really been put in the index so that the algorithm is updated for the newly crawled page.
-
Great! Well you have lots of insights here. It sounds like you're ready to test in the near term, and build up the domain in the long term. Good luck!
-
I think there's been a misunderstanding. I'm not writing a bot. I am talking about making programming changes and then submitting them to Google via the fetch tool to see how it affects my ranking as quickly as possible, instead of waiting for the next time Google crawls that page -- which could be weeks. I think the early reply may have given you a different impression. I want to speed up the indexing by fetching in Google the pages and then look to see what the effect is. My whole reason for starting this thread was confusion over knowing how to tell when it was indexed because of unexpected results (by me) with the cache and site:www... on Google.
-
Some of my pages are on Google's page 2 or 3, and a few on page 1, for certain search terms that don't have a lot of competition but that I know SOME people are using (they are in my logs) -- and those pages have virtually no backlinks. I want to boost those on page 2 or 3 to page 1 as quickly as possible, because page 1 is 10x or more better than page 2. Time/cost is an issue here: I can make changes overnight at no cost, as opposed to blogging or paying someone to blog.
Because domain authority and usage take so long to build, it seems worth tweaking/testing NOW to try to boost certain pages from page 2 or 3 to page 1 virtually overnight, as opposed to waiting months on end for usage to kick in. I don't know why Google would penalize me for moving a menu or adding content -- basically for performing on-page SEO -- so it would be nice to figure out which tools (cached pages, site:www..., GWT, GA, or otherwise) to look at to know whether Google has re-indexed the new changes.
Of course, the biggest pages with the most common search terms probably HAVE to have plenty of backlinks and usage to get there, and I know that in the long run that's the way to success overall when there is high competition, but it just seems to me that on-page SEO is potentially very valuable when the competition is slimmer.
-
Hi Massimiliano. I would disagree with myself if I were talking about your site too... ;^) But in this specific case, qjamba.com is a site that needs the fundamental quality of backlinks more than it needs Teddy to write a bot that constantly pings Google in order to try to decipher the incremental on-site changes he's making. I'm speaking to his need to prioritize that aspect of optimization. Copying what an SEO does when creating a nonsensical site with gibberish words in order to test on-page optimization as purely as possible is a bad idea for a normal, public-facing website.
Obviously on-page optimization is important, but again, in this specific example, Teddy isn't even discussing his keyword rankings, rather he was looking to go down an on-site optimization path that might make him more and more frustrated instead of bringing about much more positive results. Cheers!
-
Thanks. There are two main parts to Google "figuring out" your site. One: indexation. That's been solved. We know that you're getting indexed. Two: ranking. Your site being so new and young, it's going to need backlinks and network growth to experience dramatic ranking changes. If your menu isn't causing your site to be indexed poorly and your pages are being counted as unique enough, then you're ok now there as well. The next most important step is getting your domain trust and authority up.
-
Let me say it straight out: all that bot blocking is not a good idea.
I have been there a few times in the past, especially with e-commerce; scraping to compare prices is very common, and I tried blocking scrapers many times. Maybe I am not that good, but in the end I gave up, because the only thing I was able to do was annoy legitimate users and legitimate bots.
I scrape other websites too for price comparison -- tens of websites. Since I don't want to be blocked, I split the requests among different tasks, I add a random delay between each request, I fake header data like the user agent (pretending to be Firefox on a Windows PC), and I cycle through different proxies to continuously change IP address.
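Just to show how little it takes, the request loop is conceptually something like the sketch below -- purely illustrative PHP/cURL, not my actual code; the proxy addresses, user agent string, and URL list are all made up:

```php
<?php
// Illustrative only: rotate proxies, fake a browser user agent, and pause randomly
// between requests so the traffic looks like ordinary visitors. Proxy addresses,
// the UA string, and $urlsToScrape are placeholders.
$proxies   = array('203.0.113.10:8080', '203.0.113.11:8080');
$userAgent = 'Mozilla/5.0 (Windows NT 6.1; rv:35.0) Gecko/20100101 Firefox/35.0';

foreach ($urlsToScrape as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
    curl_setopt($ch, CURLOPT_PROXY, $proxies[array_rand($proxies)]);
    $html = curl_exec($ch);
    curl_close($ch);

    // ... parse prices out of $html ...

    sleep(rand(2, 10));   // random delay between requests
}
```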
So as you can see, it's much harder to block scrapers than it seems.
Nor would I use JS to block cut & paste. I have no data to base my judgement on, but it's annoying for users, it doesn't sound compliant with accessibility, it stinks, and Google usually doesn't like things which stink. Plus, if someone wants to scrape your content, you are not going to block them that way.
-
Ryan, I don't agree. It's true that external factors (in other words, backlinks) nowadays have the biggest impact, but on-page optimization, as far as my little experience tells me, still does affect ranking and is worth working on.
And if we don't keep track of changes on pages and changes in ranking, how can we know what is working and what is not?
Especially since there's no golden rule, and what works for one site doesn't necessarily work for another.
To give an example, I had a page which was ranking in position 1 for a search query with a volume of 50k+ and very high competition. I expanded the content to improve ranking for some additional queries, and it worked: it climbed from the 2nd and 3rd SERP page to the 1st for a couple of those queries (I use Moz Rank Tracker, SEMrush, and ProRankTracker to monitor ranking).
Unfortunately, ranking for the search query with the highest volume moved from position 1 to position 2. I changed the content a little bit to add some keywords, which made sense because it re-balanced the keyword density now that the content was bigger. And in 24 hours it got back to position 1, without damaging the improvement for the other search queries.
In many other cases, I improved ranking on pages without any backlinks, just by improving the content -- and I am talking about business-critical pages with high competition.
So I would say on-page optimization is still worth spending time on; testing the effect of the changes is a must, and monitoring Google ranking fluctuations is a must too.
Of course I am not saying off-page optimization is not important -- it is fundamental; I take that for granted.
-
What a great answer Ryan! Thanks. I'll tell you what my concern is. As a coupon site I know that users don't want a bunch of wording at the beginning of pages. They just want to find the coupon and get it, but from what I've read Google probably would reward the site more if there was beefier wording in it instead of a bunch of listings that are closer in some ways to just a bunch of links, resembling a simple link page. I also have a 'mega-menu' on some of my pages which I think is pretty user friendly but have read that Google might not know for sure if it is part of the page content or not, as some forums I found talk about how their rankings improved when they simplified their menus. Lastly, I have a listing of location links at the top of the page for users to 'drill down' closer to their neighborhood. This is just about the first thing Google sees and may again be confusing to Google as to what the page is all about.
So IF the lack of 'wording content' and the early location of menu-type content are making my site hard to figure out from Google's perspective, I have alternatives, and I thought I could test those against Google ranking. For example, I can enter wording content early on so as to 'beef up' the page so that it isn't just a bunch of coupon offer links. I also could ajax the stuff that is above the 'coupon content' so that Google doesn't read it and get confused, and then put the actual links for Google to read down at the bottom of the page. Both of those would be moves solely to satisfy Google, with no effect on the user. Google isn't perfect, and I don't want to be penalized on ranking as a result of not addressing Google's 'imperfections', as it seems every edge counts and being on page 2 or 3 just isn't quite good enough. I view this as reasonable testing rather than devious manipulation, but of course what matters with Google ranking is what Google thinks.
So in these cases the user response will be neutral -- they generally won't care if I have wording about what is on the page (especially if most requires clicking on 'more info') or am ajaxing the menu information--they again just want to find coupons. But, if Google cares, as I have read they do, then it would be nice to be able to verify that with some simple tests. It may be that my issues are somewhat unique as far as the typical webpage is concerned.
Having said all of that I do think your advice makes a ton of sense as the user is really what it is all about ultimately.
Thanks very much, and I'm giving you a 'good' answer as soon as I hear back!
-
Sorry about your being blocked. A day hadn't passed, due to the timing of your second visit -- sorry. I just changed the IP address in the table, so you aren't blocked now.
Ok, I think I figured out what happened. You first went to the Ferguson page. You may have clicked on something, but the same page was reloaded. Then in a different tab you clicked through to my home page from a Google search results page. Then in a third tab you went directly to my home page. Then you ran Screaming Frog, and the program stopped it without a message, seeing the word 'spider' in the user agent. Then you tried it again, and it recognized that as a stopped bot and gave the message about suspicious activity.
The program wipes out sessions and cookies when a user goes to the home page (it's not even a link anywhere), since that is just a location-choosing page, so when you opened it in a different tab the sessions were wiped out. It had nothing to do with you being in incognito or not having cookies allowed.
Does this sound like what you may have done, and in sequence?
That's what it looks like, and that, if correct, is a huge relief for me, since that is not usual user activity. (Although I may have to reconsider whether it's still a poor approach.)
I don't know what happened with your second visit and the timeout. Curious that you got some 60 pages crawled or so -- I don't suppose you have anything that would tell me the first 3 of those, so I can look into why it timed out? The table isn't keeping the IP on crawling, so I can only look those up by the URL crawled and the time.
-
I'll PM my public IP through Moz. I don't really have any issue with that. Oddly enough, I'm still blocked though.
I thought an okay, though slightly annoying, middle ground would be to give me a chance to prove that I'm not a bot. It seems cases like mine may be few and far between, but it happened.
It turns out that our lovely friends at The Googles just released a new version of reCAPTCHA. It's a one-click-prove-you're-not-a-bot-buddy-okay-i-will-friend-who-you-calling-friend-buddy bot check. (One click - and a user can prove they aren't a bot - without super annoying squiggle interpretation and entry.)
I don't speak fluent developer, but there are PHP code snippets hosted on this GitHub repo. From the documentation, it looks like you can fire the widget when you need to. So if it works like I think it could work, you can have a little breathing room to figure out the possible session problem.
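For what it's worth, the server-side half appears to boil down to something like this -- a rough sketch pieced together from the docs, so verify it against the official PHP example before trusting it; the secret key is obviously a placeholder:

```php
<?php
// Rough sketch of verifying a reCAPTCHA answer server-side via the documented
// siteverify endpoint. Illustrative only; the secret key is a placeholder.
$secret   = 'YOUR_SECRET_KEY';
$response = isset($_POST['g-recaptcha-response']) ? $_POST['g-recaptcha-response'] : '';

$context = stream_context_create(array('http' => array(
    'method'  => 'POST',
    'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
    'content' => http_build_query(array(
        'secret'   => $secret,
        'response' => $response,
        'remoteip' => $_SERVER['REMOTE_ADDR'],
    )),
)));
$result = json_decode(
    file_get_contents('https://www.google.com/recaptcha/api/siteverify', false, $context),
    true
);

if (!empty($result['success'])) {
    // Looks human -- let them through instead of blocking the IP.
} else {
    // Missing or failed CAPTCHA -- show the widget again rather than a hard block.
}
```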
I've also rethought the whole carpenter/mason career path. After much searching on the Yahoos, I think they may require me to go outside. That just isn't going to work.
-
You're welcome, Teddy. Something that goes undermentioned when SEOs run very precise tests on specific page-side changes is that they're typically doing them on completely brand-new domains with nonsense words and phrases, because of the chance that their manipulations might get the site blacklisted. There's no loss to them if that happens, other than unanswered questions. If the site does survive for a bit, maybe they'll learn a few new insights. This level of granular, on-site testing isn't a practical method for legitimate, public-facing sites.
When it comes to sites that serve a business function aside from testing possible granular ranking changes, you're going to be much better served by measuring your changes against your user interaction instead of your search engine rankings. In that vein, design and test for the users on your site, not the search engines. If your site is getting visits but none of your business goals are being met, go nuts on testing those factors. Split test, iterate, and improve things with the focus of better conversions. Dive deep into services like Optimizely and research by the likes of Conversion Rate Experts. Use focus groups and usability testing to see how the minutiae of your changes affects interaction. You can go as extreme as you want to in that regard.
Most importantly, the bulk of search engine ranking strength comes from external factors: the number and variety of sites linking to your site, the quality of sites linking to your site, the trust and high reputation of sites linking to your site, the semantic agreement of sites linking to your site, etc. These factors are going to have many times greater influence in your ranking than your onsite tweaks in most cases. If your site is functional and complies with Google's own guidelines (http://static.googleusercontent.com/media/www.google.com/en//webmasters/docs/search-engine-optimization-starter-guide.pdf) you've covered the bulk of what you need to do on site. Focus instead on off site factors.
The site: search function exists mostly to provide searchers the ability to find a piece of information on a given domain. For example, let's say a reporter wants to cite an NBA stat from NBA.com, they'd use "stat thing" site:nba.com as a search. For users, that's useful in searching specifics, and for Google that makes them look all the better at "categorizing the world's information." Cache demonstrates the amount of information Google has archived and how quickly it's available. Back in the day--story time--Google used to advertise how quickly and how broadly they indexed things. In fact, they still do! If you look at a search result you'll notice a light gray statistic at the very top that says something like, "About 125,000,000 results (0.50 seconds)" for a search about hot dogs for example. This is Google saying, "We're BIG and FAST." The precise details of your site are way down the list to Google's own story of being big and fast.
If you focus your efforts on off-site optimization, linking with other reputable sites, and building your network, you'll be way better served, because you'll be getting referral traffic as well as lift in Google. Cheers!
-
Travis,
First of all, I absolutely appreciate all the time you are taking to address my issues here. Second of all, it IS very tempting to join you and any others here and go build houses or do something else, especially given the last few days... :)
Ok. I'll try to keep it short:
I wasn't thinking you had any bearing on my site going down, but that maybe there was a 'Moz effect'. Hope not.
Re: Chrome Incognito Settings:
I'm really worried now that there is a sessions problem, since anyone with cookies allowed should have the session ID saved between pages -- in which case they would have only 1 entry in my 'user' table, and you had 3 in a short amount of time. That's why it thought you were a robot. I don't know how to duplicate the problem, though, because I've never had it personally, and I use a program that connects to other machines with all kinds of combinations of operating systems, browsers, and computers and have never had this problem with those. It's my problem. I'll have to figure it out somehow. I have many session variables, and it would be a huge overhaul to stop using sessions at this point. If you have any ideas (I'm using PHP) I'm all ears.
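One thing I plan to double-check is that every page sets up the session cookie the same way before any output is sent. A stripped-down illustration of that kind of per-page bootstrap (not my actual code -- the session name and cookie domain here are placeholders):

```php
<?php
// Illustrative only, not my real bootstrap. Output sent before session_start(), or a
// cookie domain/path mismatch between pages (e.g. www vs non-www), are classic causes
// of sessions "dropping", so these are the first things I'll verify on every page.
session_name('QJAMBASESSID');                      // placeholder session name
session_set_cookie_params(0, '/', '.qjamba.com');  // 0 = cookie expires with the browser session
session_start();

if (!isset($_SESSION['visitor_id'])) {
    // A brand-new visitor (or a dropped session) -- this should fire once per visit.
    $_SESSION['visitor_id'] = uniqid('', true);
}
```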
Re: Fun w/ Screaming Frog:
The IP for the 8.5 hour later instance was the same as your first one. Yet if you were spoofing, it shouldn't have said Screaming Frog in the user agent, right? It was in my 'bot-stopped' file as instantly stopped because it was an unexpected bot. So I'm confused, unless perhaps you tried it separately from running with the spoof?
<<Normally Screaming Frog would display notifications, but in this instance the connection just timed out for requested URLs. It didn't appear to be a connectivity issue on my end, so... yeah...>>
Ok.
Re: Fun w/ Scraping and/or Spoofing:
<<I also ran the domain through the Google Page Speed tool for giggles, since it would be traffic from Googlebot. It failed to fetch the resources necessary to run the test.>>
I'll have to check into it. I've run YSlow and GTmetrix without problems. I see you tried to run it on the Ferguson page and the home page. I just ran the Ferguson page in GTmetrix -- which uses both Page Speed Test (Google?) and YSlow -- and it ran OK, although not a great grade.
<<While I'm running off in an almost totally unrelated direction, I thought this was interesting. Apparently Bingbot can be cheeky at times.>>
That is interesting.
I'm worried now most about the session issue, as that may be affecting a lot of my users, and I've assumed multiple entries were from robots, which generally don't keep sessions between page crawls (actually quite a few of the SEO crawlers do -- but Google, Bing, and Yahoo don't). If you are ok with going to my home page without incognito, clicking on a few pages, and letting me know the first part of your IP when you do that, it might really help me. You shouldn't be blocked anymore (it lasts 1 day). But no worries if you're ready to move on.
Sorry, wasn't so short after all. Thanks again. Ted
-
Thanks Ryan. I am new to this, so I appreciate the cautions. Are you saying it is a bad idea to try and run tests on specific page changes (i.e., reducing the size of a menu or adding more content in paragraph form, for example) to see how they affect ranking on given searches? I'm struggling with why that is a problem for Google. The SEO experts who run tests -- to get a sense of what factors are important -- isn't that exactly what they are doing in a lot of cases? I plan on using the Moz tools and have much to learn, but it just seems so logical to test things out like I want to do also.
The GWT index status page shows an increasing number of pages, but no detail on which ones have been indexed. That's why I was looking into the site: command and the cache command in the first place. It seems those exist for a reason, and I thought it was to help us with specific pages. If that isn't it, why do they exist? Other than showing the number of indexed pages, I don't see much value in the index status page. Perhaps there is a good tutorial somewhere that helps people use GWT not just to find crawl errors or see general trends, but to actually implement SEO page changes to improve their site?
I hope this doesn't sound argumentative... I just don't quite understand your perspective on it.
-
Rest assured that I don't scrape/hammer so hard that it would knock your site down for a period. I often throttle it back to 1 thread and two URIs per second. If I forget to configure it, the default is 5 threads at two URIs per second. So yeah, maybe a bit of the Moz effect.
Chrome Incognito Settings:
Just the typical/vanilla/default incognito settings. It should accept cookies, but they generally wouldn't persist after the session ends.
I didn't receive a message regarding cookies prior to the block notification.
On a side note, I don't allow plugins/extensions while using incognito.
Fun w/ Screaming Frog:
It's hard to say if the 8.5 hour later instance was my instance of Screaming Frog. The IP address would probably tell you the traffic came out of San Antonio, if it was mine. I didn't record the IP at the time, but I remember that much about it. Otherwise it's back in the pool.
Normally Screaming Frog would display notifications, but in this instance the connection just timed out for requested URLs. It didn't appear to be a connectivity issue on my end, so... yeah...
Fun w/ Scraping and/or Spoofing:
Screaming Frog will crawl CSS and JS links in source code. I found it a little odd that it didn't.
I also ran the domain through the Google Page Speed tool for giggles, since it would be traffic from Googlebot. It failed to fetch the resources necessary to run the test. Though cached versions of pages seemed to render fine, with the exception of broken images in some cases. Though I think that may have something to do with the lazy load script in indexinit.js, but I didn't do much more than read the code comments there.
In regard to the settings for the crawler, I had it set to allow cookies. The user agent was googlebot, but it wouldn't have come from the typical IPs. Basically just trying to get around the user agent and cookie problem with an IP that hadn't been blocked. You know, quick - dirty - and likely stupid.
Fun w/ Meta Robots Directives:
A few of the pages that had noindex directives appeared to lack genuine content, in line with the purpose of the site. So I left that avenue alone and figured it was intentional. The noarchive directive should prevent a cache link. I was just wondering if one or more somehow made it into the mix, for added zest. Apparently not.
While I'm running off in an almost totally unrelated direction, I thought this was interesting. Apparently Bingbot can be cheeky at times.
Fun w/ The OP:
It looks like Ryan had your answer, and now you have an entirely new potential problem which is interesting. I think I'm just going to take up masonry and carpentry. Feel free to come along if you're interested.
-
I would caution against creating a tool to do what you're describing as you might end up violating Google's terms of use. Instead, use a dedicated tool for monitoring rankings--like Moz's pro feature set--around a specific set of keywords that have value for you instead of each and every page on your site. Chasing after the immediate effect of ranking and changes is akin to trying to precisely unravel Google's ranking algorithm, something Google very much doesn't want you to do. Instead look at past performance of content (analytics, server logs, etc.) and whether or not it improves after changes. The improvement is also subjective. Maybe you get less users and sessions, but much higher conversions...
Within GWT you're going to want to look at Index Status (https://www.google.com/webmasters/tools/index-status) and compare it with the number of pages in your sitemap(s). This most likely isn't going to be an exact match, as Google at times limits how much of a site it caches and indexes based on its own determination of page worthiness (a high percentage of the page is duplicate content, for example). So look for a decent percentage of indexation versus exacting numbers. Also, having pages that perform really well for you indexed and ranking well is more important than 100 that don't.
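If you want a quick number for the sitemap side of that comparison, a couple of lines of PHP (since that's what your site runs on) will do it. This is illustrative only -- it assumes a plain urlset sitemap rather than a sitemap index, and the URL is a placeholder:

```php
<?php
// Count the URL entries in a sitemap, to compare against GWT's Index Status figure.
$sitemap = file_get_contents('http://www.example.com/sitemap.xml');  // placeholder URL
echo substr_count($sitemap, '<loc>') . " URLs listed in the sitemap\n";
```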
Ultimately the more precisely you try to deconstruct Google the more difficult things will be. Take old Ben's advice, "Let go..."
-
Thanks. I will give you a 'good answer' soon, but don't want to close this out yet, since there are some other issues going on with the other response. In addition, I'm trying to figure out the practical way to use all this info.
Here's what I was trying to do: make a program change, fetch with Google, and see in a couple of days if the site has risen in the rankings. It was complicated (for me) by the absence of the fetched page in the cache (even the one you found is not a recent cache), and by the site:www... results sometimes not appearing to have changed at all, even when there was a new cache. How is a person to test the immediate effect on ranking when they make a change, if they can't tell what Google is doing?
I haven't found in GWT a list of my indexed pages, or the dates they were indexed, and certainly not the content that has been indexed, yet your answer above seems to indicate there is more under the hood than what I have found there. Is there really a way in GWT to see if a specific page has been indexed (my site has thousands, so I would need to search for it), and when it was last indexed? I can tell from my db tables which pages were crawled and when, but not whether Google has added the new info into their indexing algorithm.
-
The "site:" operator isn't as precise as what you'll find in Google Webmaster Tools (GWT) and doesn't return each and every page for a site Google has indexed for a few reasons. One reason, Google is preventing people without private access to the site from seeing each and every page it indexes for a site to keep that data from being scraped publicly. In your case, that's good if your competitor is running similar searches like you're doing now in the attempt to copy your site. Instead Google gives you that information privately via GWT.
The same goes for cached pages. The overarching reason is that it's about preventing over exposure publicly both in how Google operates and how a site is constructed. Ultimately you'll have to trust GWT and your own site's server records more than what you can find searching Google as an average user.
-
Thanks for your reply, Ryan.
Well, that was easy. How come some pages have that code and others don't? It seems like a backward way to find the cached version of your own pages too...
Thanks much.
Now, for a twist: I just checked site:http://www.qjamba.com/restaurants-coupons/ferguson/mo/all
and it is no longer there. It was there yesterday. Why would it suddenly disappear? I again checked webmaster tools--not finding any crawling errors or problems for it. So, the question suddenly gets reversed: how now can I have a cached version without a site:www... listing?
-
The type of URL you posted will deliver a 404 from Google because they use a session code after the "cache:" parameter. If you go to their site, run a search, and then select the down arrow to open up the "Cached" and "Similar" menu, you should be able to click through and see their cached result. After doing so, you'll notice some code like ":l8wcNgU5elwJ:" before your URL. With this it works; delete it and you get a 404.
-
Glad you didn't get frustrated. Ok, let's see, there are a number of issues:
Screaming Frog -- I see the entry, and we know why it was stopped yesterday (robots.txt).
Your activity yesterday -- I see 3 entries for you yesterday. I'm very concerned I might have a sessions problem, because if you had cookies enabled my tables should have shown only one entry for you with several searches. If you didn't have cookies enabled, you should have gotten a message saying the site requires cookies. Did you get that message regarding cookies for each page you went to last night? If you didn't get that message, I may have a serious issue to address -- it would really help in this case to know which items you have checked in Chrome for the cookie settings. I used the site this morning incognito without blocking cookies, and my table showed just one entry and no blocking, so it would really help me to know what your settings are for cookies, as I may have to program differently.
On your second visit last night:
First, did you run Screaming Frog on that second visit? Someone did about 8 1/2 hours after your first visit.
Re the scraping with your googlebot spoof: My program shouldn't ever block the IP for Google, so are you sure it got blocked -- did it give you a message? If so, I need to figure that one out too, and it would help to have the IP you grabbed, if possible. As for the CSS and JavaScript, I don't know enough about that -- I don't think I have any hyperlinks in the JavaScript that it would find. I have both internal and external CSS and JS.
RE the robots file and googlebot: It's crawling OK and was verified in their Webmaster Tools, and the format I use is based on what I read. You are right that bad bots ignore robots.txt, but I want to keep a lot of the neutral or even 'good' bots out, so that's why I do it that way. I have htaccess blocking too, although it's never a perfect thing.
RE the Ferguson page OP issue: While I haven't formally submitted the Ferguson page (how'd you like that for a city choice?) to Google via a sitemap (that's a long story), Google has been crawling that page since at least November 15, 2014. I don't know if a cache existed at that time, as caching is something I have only learned about in the last few days, while trying to figure out whether Google indexed some changes I put in before fetching just a few days ago.
Re this: <<could a noarchive directive have made it into the head of one or more pages?>> I'm not sure what a noarchive directive is, so I'd have to say probably not on that one.
Thanks for whatever more help you can provide -- I really hope to solve that session issue if it exists (it has been a concern for several months, but I wasn't sure how to distinguish between a robot, which doesn't keep sessions, and a real user -- like you!), and of course the actual OP issue too!
P.S. My server/site was down this morning and my host didn't know why. Is it a bad idea to put one's website URLs on the Moz forum like I have (i.e., do bad bots crawl the Moz site looking for URLs to torture)?
-
No worries, I'm not frustrated at all.
I usually take my first couple passes at a site in Chrome Incognito. I had sent a request via Screaming Frog. I didn't spoof the user agent, or set it to allow cookies. So that may have been 'suspicious' enough from one IP in a short amount of time. You can easily find the screaming frog user agent in your logs.
Every once in a while I'll manage to be incorrect about something I should have known. The robots.txt file isn't necessarily improperly configured. It's just not how I would have handled it. The googlebot, at least, would ignore the directive since there isn't any path specified. A bad bot doesn't necessarily obey robots.txt directives, so I would only disallow all user agents from the few files and directories I don't want crawled by legit bots. I would then block any bad bots at the server level.
But for some reason I had it in my head that robots.txt worked something like a filter, where the scary wildcard and slash trump previous instructions. So, I was wrong about that - and now I finally deserve my ice cream. How I went this long without knowing otherwise is beyond me. At least a couple productive things came out of it... which is why I'm here.
So while I'm totally screwing up, I figured I would ask when the page was first published/submitted to search engines. So, when did that happen?
Since I'm a glutton for punishment, I also grabbed another IP and proceeded to spoof googlebot. Even though my crawler managed to scrape metadata from 60+ pages before the IP was blocked, it never managed to crawl the CSS or JavaScript. That's a little odd to me.
I also noticed some noindex meta tags, which isn't terrible, but could a noarchive directive have made it into the head of one or more pages? Just thought about that after the fact. Anyway, I think it's time to go back to sleep.
-
Hi Travis,
Thank you for your reply, and I'm sorry for the frustration. I looked at my db tables to try and figure out which one was you and what actions you took, so I could diagnose why it gave you that message. Sounds like I should relax the restrictions on user actions.
I know it would be tempting to ignore me at this point, but it really could help me if I can find your activity in the database records that the program keeps, so I can come up with a better solution. Could you give me part of your IP address so I can try to find your activity in those records and solve the problem that gave you that irritating message? I would really appreciate it, as there actually may be an underlying problem with how it is storing sessions that I've had a real hard time pinpointing, because it is very hard to distinguish real users for whom the program is having a sessions problem from a returning robot that never keeps session cookies.
As for the robots.txt file, I purposefully block all but a handful of crawlers to avoid getting bombarded by crawlers which can slow down my site, and make it difficult to gauge real users. This is nothing new for my site, and Google crawls my site every day. So, I think perhaps you tried using a tool on it that it blocked and I'm sorry for any confusion that caused. Which tool did you use?
I'm somewhat new to robot activity and how to deal with the pesky guys without causing user problems, so am open to suggestions.
While I don't think the problems you had relate directly to the OP, I certainly can see how they would affect anyone trying to address the OP with a tool-oriented or hands-on interaction with my site -- again, sorry for the frustration. Thanks for your feedback.
-
For starters, the robots.txt file is blocking all search engine bots. Secondly, I was just taking a look at the live site and I received a message that stated something like: "This IP has been blocked for today due to activity similar to bots." I had only visited two or three pages and the cached home page.
Suffice it to say, you need to remove the User-agent: * Disallow: / directive from robots.txt and find a better way to handle potentially malicious bots. Otherwise, you're going to have a bad time.
My guess is the robots.txt file was pushed from dev to production and no one edited it. As for the IP blocking script, I'm Paul and that's between y'all. But either fix or remove it. You also don't necessarily want blank/useless robots.txt directives either. Only block those files and directories you need to block.
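For example, something along these lines (the paths here are just placeholders) leaves legitimate crawlers alone and only fences off what actually needs fencing:

```
User-agent: *
Disallow: /admin/
Disallow: /tmp/
Disallow: /private/
```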
Best of luck.
Here's your current robots.txt entries:
```
User-agent: googlebot
Disallow:

User-agent: bingbot
Disallow:

User-agent: rogerbot
Disallow:

User-agent: sitelock
Disallow:

User-agent: Yahoo!
Disallow:

User-agent: msnbot
Disallow:

User-agent: Facebook
Disallow:

User-agent: hubspot
Disallow:

User-agent: metatagrobot
Disallow:

User-agent: *
Disallow: /
```