The other day an ASP.NET developer approached me with a problem he was seeing when interacting with SharePoint. He had code in place for months that called the SharePoint OData web service, and it worked perfectly. However, the data he was pulling had just been moved from one web application to another, and now his service was returning “400 Bad Request” errors on the new URL. Pointing the identical code back to the old location (because you never delete the old data right away – right?) worked perfectly. Both the old and new web applications were in the same farm, and both required Windows authentication in order to see the content – or so we thought.
After some digging around in the icky parts of SharePoint, I noticed that the new web application was configured to allow anonymous access at the web application level because one site collection under a managed path was configured to allow anonymous access. The root site collection did not allow anonymous access, and the URL to the web service was solidly inside that site collection, so it again looked like permissions should not be an issue.
We fired up Fiddler and immediately noticed that accessing the web service in Internet Explorer 1) worked and 2) negotiated a Windows-authenticated session. Firefox would fail with a generic “Request Error” message and would not even attempt to authenticate. When we executed the C# code, we saw that it always accepted the first connection attempt, which was anonymous – the same behavior we saw in Firefox. No amount of fiddling with the code, meddling with CredentialCache settings, and so forth would change that.
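For anyone wondering what “meddling with CredentialCache settings” looks like, here is a sketch of the usual variations one would try. The URL is illustrative, and this is the generic .NET pattern rather than the exact code from the incident – none of these variations made any difference in our case.

using System;
using System.Net;

class CredentialAttempts
{
    static void Main()
    {
        // Illustrative URL – the real service lived under a site collection path
        Uri svcUri = new Uri("http://webapp.company.com/_vti_bin/listdata.svc/");

        // Attempt 1: let WebClient send the current Windows identity
        WebClient client = new WebClient();
        client.UseDefaultCredentials = true;

        // Attempt 2: build an explicit CredentialCache scoped to the service
        // URL for the Negotiate and NTLM schemes
        CredentialCache cache = new CredentialCache();
        cache.Add(svcUri, "Negotiate", CredentialCache.DefaultNetworkCredentials);
        cache.Add(svcUri, "NTLM", CredentialCache.DefaultNetworkCredentials);
        client.Credentials = cache;

        // Attempt 3: the same idea at the HttpWebRequest level
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(svcUri);
        request.Credentials = cache;
        request.PreAuthenticate = true; // reuse the auth header on later requests

        // In our case none of this changed the outcome: the first (anonymous)
        // response was accepted and the service still returned 400 Bad Request.
    }
}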
Here’s where the speculation begins… the web service page itself is under the /_vti_bin/ directory, which is an alias that points to pages that exist outside of the site collection, in the file system of the server – 14/ISAPI in this case. What I suspect was happening was that SharePoint parsed the URL, saw /_vti_bin/, and handed the request off to the physical page rather than processing it through the site collection’s security. The page then loaded and attempted to access the requested list, but since the list itself was secured, the page shrugged and failed with a “400 Bad Request” because it didn’t know what else to do. A 404 would not be proper because the page that was requested was in fact accessible; it was just the underlying data that was not. A “401 Unauthorized” was also not appropriate because, as an anonymous user, the page couldn’t even access enough of the site collection to determine permissions. It was effectively a perfectly functioning door that opened into a brick wall – but only for anonymous users.
The “solution” was to call a page in the site collection that was secured and would thus force the proper authentication handshake to take place, and then use that session to call the web service. It is not optimal, as it requires an additional server hit before calling the service itself, so I would be open to any alternatives that I haven’t yet tried. I tested every combination of security settings and .NET-related objects I could think of and then googled until it cried for mercy – all without any luck.
void CallODataService()
{
    WebClient client = new WebClient();
    client.UseDefaultCredentials = true;

    Uri secPage = new Uri("http://webapp.company.com/Pages/Default.aspx");
    Uri svcUri = new Uri(secPage, "/Region/State/_vti_bin/listdata.svc/Stores?$filter=endswith(Name,'.docx')&$select=Name");

    // get an authenticated session
    client.DownloadString(secPage);

    // now call the service using the authenticated session
    string result = client.DownloadString(svcUri);
}
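One detail worth noting in the code above: because the second argument to the Uri(Uri, String) constructor starts with a slash, it is an absolute-path reference – the base Uri contributes only the scheme and host, and its /Pages/Default.aspx path is discarded. A quick demonstration:

using System;

class UriComposition
{
    static void Main()
    {
        Uri secPage = new Uri("http://webapp.company.com/Pages/Default.aspx");

        // The leading slash makes the relative part absolute-path, so only
        // the scheme and authority of secPage are used in the result.
        Uri svcUri = new Uri(secPage, "/Region/State/_vti_bin/listdata.svc/Stores");

        Console.WriteLine(svcUri);
        // http://webapp.company.com/Region/State/_vti_bin/listdata.svc/Stores
    }
}

This is why the secure page and the service can live at completely different paths under the same web application and still share one base Uri.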
There is another mystery here, though: if I go back into Fiddler and look at the call to the web service that executes after the call to the secure page, it appears to pass almost no information to the server – no cookies, one header, and almost nothing else. Logically, this call should fail outright, yet it works every time. My thought is that the keep-alive time in IIS is holding the connection open for this client, which, if true, might cause an issue in a load-balanced environment. Again, this is just a shot in the dark. I’m also not ruling out the influence of rogue garden gnomes.
Hopefully, even this less-than-ideal solution will at least point anyone else seeing similar behavior toward the real issue and save them a day’s frustration.
David, it looks like:
1st web app uses Windows authentication (and it works)
2nd web app uses Claims auth and it doesn’t.
Please confirm.
In this particular case, it is one web application that happens to allow anonymous access to part of the site. If it were purely anonymous or purely Windows auth, this wouldn’t be a problem.