Tag Archives: Infrastructure

MSDN Azure credits… it’s not for “you”, it’s for “us” …

So I recently found out about a great new benefit that Microsoft are offering to all MSDN subscription owners, called “Windows Azure Benefit for MSDN Subscribers”. You basically get free Azure credits every month, plus discounted pricing.

  • MSDN Ultimate – $150 per month
  • MSDN Premium – $100 per month
  • MSDN Professional – $50 per month

This is also combined with a 25% discount on the charge rate for each machine that you are running, which is fantastic value.

For most SharePoint development and testing teams you will be looking at the top two. Although much more expensive, Premium (around $2.5k per year) and Ultimate (around $4.2k per year) are the MSDN subscriptions which include Office and SharePoint software for development and testing purposes (check out the MSDN edition comparison for more details). There are other options out there, but a lot of development teams will be using MSDN.

Equally, if you are doing SharePoint development then in the Azure world that will typically mean an “Extra Large” VM (8 cores and 14GB RAM). This rolls in at $0.48 per hour of operation, which probably raises another major point… only 14GB of RAM?

With Windows 8 Hyper-V (free), VMware Workstation (around $100) and most contractors running insane dual-SSD 32GB laptops, you gotta wonder: why would I want a 14GB VM in the cloud when I can run a 24GB VM locally? Also.. what happens if I am on a train / airport lounge / plane and can’t access the internet?

Well .. good point ..

$100 a month is great, but Azure VMs are very expensive!
Now, I know a few contractor friends of mine in the industry who have looked at this and decided that it’s not for them.. I am one of them (yes, that’s right.. I’m advocating a new service which I myself am not going to use).

But this is not really for the solo contractor, and certainly not for someone who works all of god’s hours (either doing research, writing books or blog posts, or preparing for conferences and user groups).

Now this is where the average contractor gets off the Azure train. If you are very busy and put in a lot of extra hours, it is not uncommon to run your VM for 12 hours a day plus some conference / user group work at weekends… and this can quickly add up*

* note – I realise not everyone works these kinds of hours.. I personally don’t, as I have a wife and baby daughter at home and generally work a 9-5 day.. but I know some people work longer hours, and I sure put in extra time when prepping for conferences

5 days a week @ 12 hours per day, plus another 12 hours over the weekend = 72 hours per week
72 * $0.48 = $34.56 per week
$34.56 * 52 = $1,797 per year
$1,797 / 12 = $149.76 per month

So if you are rolling with MSDN Premium you are going to be out of pocket, and even if you are lucky enough to be an MVP (and have MSDN Ultimate), or just have deep pockets.. you are still brushing up against the limit and probably watching the clock every week to make sure you don’t go over it.

“You” are not their target audience … “We” are ..
I suppose this really brings me to my core point.. this subscription model is not aimed at the individual developer or contractor. It is aimed at development teams. The place I’m currently at has 5 developers working in three different countries, all running MSDN Premium. This gives them a combined allowance of $500 per month of Azure credits.

Being an office-based development team, it pretty much runs on standard office hours. The development machines only need to be on during office hours (typically 8am – 6pm unless there is a major version launch coming up) and almost certainly don’t need to run at weekends. With a group of users you can also look at consolidating your infrastructure (why not run a shared SQL Server instance so you can drop the spec of each developer VM?). Equally, you probably don’t need to run all of the services all of the time on every development machine (if you aren’t building a search solution then turn Search off!).

With $500 per month to spend they can run 5 XL VMs 9am-5pm every week for free (some weeks you won’t need all 5 machines running.. so turning them off when you aren’t using them helps to pay for those other times when you need to run them for longer!).
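If you want to sanity-check that claim (assuming the same $0.48 per hour rate for an XL VM, and a 9am-5pm, 5-day week), the sums look something like this:

5 VMs @ 8 hours per day, 5 days per week = 200 hours per week
200 * $0.48 = $96 per week
$96 * 52 / 12 = $416 per month

… which sits comfortably inside the combined $500 monthly allowance.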

Even if you do use more horsepower than that.. try putting the figures in front of your IT Manager / Head of Infrastructure… you might be surprised how happy they are to pay for the “extra” over and above those free Azure credits (some months it might cost you an extra $100 or so.. some months you won’t have to pay anything… compare that to other hosting providers and see how much it would cost you!)

How about using it for testing?
One of the other big boons (and possibly the reason I might use a farm like this) is for testing.

It doesn’t really matter how powerful your laptop is, you are never going to be able to build a truly enterprise-scale farm on it (with redundancy in all places and all of the lights and switches turned on). The same credit you get in Azure could be used to model and build massive farms for testing new topologies, load balancing scenarios, or performance and load testing.

Don’t forget, you only pay for the machines while they are turned on, so instead of running 1 XL VM for 20 days a month.. why not create 30 Large VMs and run them for 5 days a month of testing?

Conclusion…
Well, this is a very interesting move from Microsoft .. and stacked up alongside their hosted Team Foundation Server offering this does create a very attractive and extremely low-cost cloud-based development scenario.

It encourages people to stick with MSDN and give Azure a go for development and testing, and I’m sure this will end up leading to many companies taking a much closer look at how Azure works for their production environments as well.

For me? Well.. I might well use it for the next time I do a Kerberos / load testing presentation (the idea of setting up a massive 20-server farm to run for a few days for free sounds pretty cool, and a great learning experience to boot).

If nothing else, I’m tempted to set up a VM which I leave turned off and only use in emergencies (my laptop is broken / stolen, or my VMs are dead for some reason).

Either way .. if you have an MSDN subscription, head over and take a look. You might be surprised how useful it is!

Scaling to 10,000 unique permissions – Part 2 – The Solution

This follows on from my previous post: Part 1 – The Problem

The main requirement was:

  • One SharePoint 2010 site
  • 10,000+ uniquely permissioned objects each with a different user account

In this post we will be discussing the solution which involves programmatically creating unique permissions in a way which will scale for (what should be) well over 10,000 uniquely permissioned items…

Introducing yet another little known SharePoint API call …

This is only possible because of one of the new SharePoint 2010 API calls:

SPRoleAssignmentCollection.AddToCurrentScopeOnly()

https://msdn.microsoft.com/en-us/library/microsoft.sharepoint.sproleassignmentcollection.addtocurrentscopeonly.aspx

This basically adds the specified SPRoleAssignment but does not create any of the Limited Access scopes on the parent objects.

This is pretty straightforward and works in exactly the same way as normal role assignments in SharePoint 2010; we simply use the AddToCurrentScopeOnly() method instead of the Add() method, for example:

// fetch the Principal object which we are granting access to
SPUser user = web.EnsureUser("Domain\\UserAccount");

// create a Role Assignment binding
SPRoleAssignment roleAssignment = new SPRoleAssignment(user);

// apply contribute permissions
roleAssignment.RoleDefinitionBindings.Add(
    web.RoleDefinitions["Contribute"]);

// grant permissions to the list item using the CURRENT SCOPE ONLY
// this ensures that Limited Access scopes are NOT created
// for parent objects (we're going to have to do that bit ourselves!)
item.RoleAssignments.AddToCurrentScopeOnly(roleAssignment);

It is very important to understand that you still need to grant “Limited Access” (it wasn’t put in just for laughs, it does have a purpose). Granting “Limited Access” means that the user has access to core information on parent objects, enabling things like the breadcrumb to be built and the core files needed to render the interface to be retrieved.

This then means it is up to us (the developers) to go back and create each of those in a more efficient way. The problem is .. you can’t assign “Limited Access” programmatically…

What do you mean .. I can’t assign Limited Access??

Well, I don’t really know why they did this, but if you try and assign it programmatically (Limited Access is actually a “Permission Level” in SharePoint) you will get errors (admittedly you can’t do this through the user interface either!).

So, the workaround (again) is to create your own permission level which includes exactly the same permissions that “Limited Access” would have granted. Those permissions are:

  • View Application Pages
  • Browse User Information
  • Use Remote Interfaces
  • Use Client Integration Features
  • Open

You can call this anything you like (I called mine “SP Limited Access”) as long as you know what it means.

The code to do this is as follows:

internal SPRoleDefinition GetLimitedAccessRole(SPWeb web)
{
    string strRoleDefinition = "SP Limited Access";

    // role definitions only exist in webs with unique role definitions
    if (web.HasUniqueRoleDefinitions)
    {
        try
        {
            // try to retrieve the role definition
            return web.RoleDefinitions[strRoleDefinition];
        }
        catch (SPException)
        {
            // SPException means it does not exist

            // create our custom limited access role
            SPRoleDefinition roleDef = new SPRoleDefinition();

            // give it a name and description
            roleDef.Name = strRoleDefinition;
            roleDef.Description = "Identical to standard " +
                "Limited Access rights. " +
                "Used to provide access to parent objects of " +
                "uniquely permissioned content";

            // apply the base permissions required
            roleDef.BasePermissions = SPBasePermissions.ViewFormPages
                | SPBasePermissions.Open
                | SPBasePermissions.BrowseUserInfo
                | SPBasePermissions.UseClientIntegration
                | SPBasePermissions.UseRemoteAPIs;

            // add it to the web
            web.RoleDefinitions.Add(roleDef);
        }

        return web.RoleDefinitions[strRoleDefinition];
    }
    else
    {
        // this web inherits its role definitions, so try the parent web
        return GetLimitedAccessRole(web.ParentWeb);
    }
}

I’ve created my new Limited Access Permission Level .. now what?

One thing does need to be made clear: there is absolutely no point in just re-creating all of the security scopes that SharePoint would have created (you’ll end up with the same mess we were trying to avoid in the first place).

The solution is to create a group to hold all of the “Limited Access” users for that List or Web. It really is up to you whether you use Active Directory security groups or SharePoint groups. I decided to use AD security groups, mainly because I didn’t want to clog up the Site Collection “groups” functionality, and didn’t want idiot Site Collection admins removing the group members (or worse.. the groups themselves!) and breaking the site collection.

Note – I haven’t included the code to create and modify Active Directory security groups here, not least because there are thousands of resources out there showing you how to manage AD groups programmatically; Code Project has a particularly good reference: Howto: (Almost) Everything In Active Directory via C#

You will need to create a group for each parent object which has unique permissions, although in my example it is only really the SPWeb (web site) that we are worried about, as the libraries and folders are well within the security scope threshold.

We have our 20 libraries and our root web site, so in our example we would have to create 21 different AD security groups:

  • One group to store all Limited Access users for the root web site
  • 20 groups to store all Limited Access users for the libraries (one for each library)

Then, following this example, you can use the following code to grant “Limited Access” to one of the libraries (and just rinse and repeat for the other libraries and the root web site):

// fetch the "SP Limited Access" role definition
SPRoleDefinition limitedAccessRole = GetLimitedAccessRole(web);

// get an SPPrincipal object for the AD group we created
SPUser adGroup = web.EnsureUser("Domain\\MyCustomADGroup");

// set the role assignments for this group
SPRoleAssignment roleAssignment = new SPRoleAssignment(adGroup);
roleAssignment.RoleDefinitionBindings.Add(limitedAccessRole);

// grant "Limited Access" to the AD Group for this list
// we only have to do this once! After this we simply
// need to add members to this AD Group every time we
// add users to one of the parent objects!
list.RoleAssignments.AddToCurrentScopeOnly(roleAssignment);

So having done this for all of the parent objects we now have our 21 custom Active Directory groups, each one of which has been granted “Limited Access” to one of the required “parent” objects for our folders.

From here on in it should be smooth sailing. You simply need to make sure that every time you programmatically add a new user to one of the folders, they also get added to the relevant AD groups (so that the “Limited Access” chain is not broken).
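To make that concrete, here is a rough sketch of what that “add a user” routine might look like (assuming the usual Microsoft.SharePoint references; the AD group names are just examples, and AddUserToAdGroup is a hypothetical helper you would implement yourself with System.DirectoryServices, as per the Code Project article mentioned above):

// sketch only - grant a user access to a uniquely permissioned folder AND
// keep the "Limited Access" chain intact by adding them to the relevant AD groups
private void GrantFolderAccess(SPWeb web, SPListItem folderItem, string loginName)
{
    SPUser user = web.EnsureUser(loginName);

    SPRoleAssignment roleAssignment = new SPRoleAssignment(user);
    roleAssignment.RoleDefinitionBindings.Add(web.RoleDefinitions["Contribute"]);

    // grant access to the folder only - no Limited Access scopes are created
    folderItem.RoleAssignments.AddToCurrentScopeOnly(roleAssignment);

    // the AD groups already hold "Limited Access" on the parent library and
    // root web, so the user just needs to become a member of them
    // (example group names - use whatever naming convention you adopted)
    AddUserToAdGroup("SP-LimitedAccess-MyLibrary", loginName);
    AddUserToAdGroup("SP-LimitedAccess-RootWeb", loginName);
}

private void AddUserToAdGroup(string groupName, string loginName)
{
    // hypothetical helper - see the Code Project article referenced earlier
    // for a full implementation using System.DirectoryServices
    throw new NotImplementedException();
}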

The following diagram really explains what we have done:

[Diagram: Folders_New]

I have tested this model with over 16,000 unique AD accounts across hundreds of folders in hundreds of document libraries, and I could not see any discernible drop-off in performance (nothing that can’t be explained by simply having a really large number of libraries and folders anyway!), so initial tests show that this is working very well indeed 🙂

What I also ended up doing (to make this slightly more robust) was to build my own application page which users can use to grant permissions through the UI (so we don’t need to write custom code every time a new “Limited Access” scope is needed).

I then wrote an HttpModule to auto-redirect any requests for the out-of-the-box page (_layouts/AclInv.aspx) to the custom page, so that if anyone tried to use the native user interface it would ALWAYS be executing my own custom code (which creates all of the AD groups and “SP Limited Access” scopes programmatically, without the user having to worry about it!)
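I haven’t included the full module here, but a minimal sketch of the idea looks something like the following (the custom page URL is just an example, and the module would need to be registered against the web application in web.config):

// sketch only - redirect requests for the native Grant Permissions page
// (_layouts/AclInv.aspx) to a custom application page
public class AclInvRedirectModule : System.Web.IHttpModule
{
    public void Init(System.Web.HttpApplication application)
    {
        application.BeginRequest += OnBeginRequest;
    }

    private void OnBeginRequest(object sender, System.EventArgs e)
    {
        System.Web.HttpApplication application = (System.Web.HttpApplication)sender;
        string path = application.Request.Url.AbsolutePath;

        // intercept any request for the out-of-the-box page
        if (path.EndsWith("/aclinv.aspx", System.StringComparison.OrdinalIgnoreCase))
        {
            // send the user to the custom page instead (example URL),
            // preserving the original query string
            application.Response.Redirect(
                "/_layouts/MyCompany/CustomGrantPermissions.aspx" +
                application.Request.Url.Query, true);
        }
    }

    public void Dispose()
    {
    }
}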

The great thing about this solution is that it doesn’t matter how many users or groups you are adding to your SharePoint site .. you only ever have 1 Limited Access security scope for each List / Web!

Thanks for sticking with me through these two posts .. if you made it this far then thanks for reading and I would love to hear your comments! 🙂

Scaling to 10,000 unique permissions – Part 1 – The Problem

This post was born out of a client requirement which popped up on my radar. I’m currently working for a leading global Business Intelligence provider in London, and they were looking to implement a particular third party piece of software. This software relies on SharePoint for file storage, and my client wanted to roll this out to their customers “extranet” style, with each customer having uniquely secured content (files and folders).

Now .. first off their customers include in excess of 10,000 different companies (i.e. > 10,000 users) so early warning bells immediately started ringing in terms of scalability.

Secondly, to make this worse, the software required all content to be stored in a single SharePoint site .. so now my early warning system had gone into full meltdown and a state of high alert was reached.
So to boil this down …

  • One SharePoint 2010 site
  • 10,000+ uniquely permissioned objects each with a different user account

A Library with 10,000 uniquely permissioned folders?? Possible? My first instincts said no… so it was time to get my problem solving hat on and do some digging ..

Investigating the Limits of SharePoint 2010

I would like to think that any SharePoint {Consultant | Developer | Architect | <insert profession>} worth their salt would have read the software boundaries and capacity planning guidelines (or at least be aware of them!).. so that was my first pit-stop.

Note – I also stumbled across a great blog post by SharePoint infrastructure veteran Joel Oleson and his Best Practices for Enterprise User Scalability in SharePoint. This goes into detail about the specific size of an ACL (and the reason why this is limited, specifically in Windows) which, although a good read, wasn’t really relevant to my problem.

The Microsoft TechNet article SharePoint Server 2010 capacity management: Software boundaries and limits (https://technet.microsoft.com/en-us/library/cc262787.aspx) is a great resource and contains one absolutely key entry:

Security Scope – 1,000 per list (threshold)
The maximum number of unique security scopes set for a list should not exceed 1,000. 

A scope is the security boundary for a securable object and any of its children that do not have a separate security boundary defined.  

A scope contains an Access Control List (ACL), but unlike NTFS ACLs, a scope can include security principals that are specific to SharePoint Server. The members of an ACL for a scope can include Windows users, user accounts other than Windows users (such as forms-based accounts), Active Directory groups, or SharePoint groups.

So what is a Security Scope then? OK, I admit this does all tend to get a bit bogged down in terminology.
To put it simply … each time you grant access to a new principal (user account or group) then you are creating a new Security Scope.

The other thing to consider is that this is not just limited to lists! Any list that inherits permissions will pick up its permissions from the parent web (site), so you also need to adhere to this limit at the web level too!

This means that you should not have more than 1000 security scopes at EITHER the Site or List level.
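If you want a rough feel for how close an existing site is to these thresholds, a quick (and admittedly slow-running) sketch using the server object model is below; the site URL is just an example:

// sketch only - count uniquely permissioned items per list, and role
// assignments at the web level, to gauge how close you are to the
// 1,000 security scope threshold
using (SPSite site = new SPSite("http://sharepoint/sites/example"))
using (SPWeb web = site.OpenWeb())
{
    foreach (SPList list in web.Lists)
    {
        int uniqueItems = 0;

        // note: this enumerates every item, so don't run it against
        // a huge production list during business hours!
        foreach (SPListItem item in list.Items)
        {
            if (item.HasUniqueRoleAssignments)
            {
                uniqueItems++;
            }
        }

        Console.WriteLine("{0}: {1} uniquely permissioned items", list.Title, uniqueItems);
    }

    // every principal granted access to the web (including "Limited Access")
    // shows up as a role assignment on the web itself
    Console.WriteLine("Role assignments on the web: {0}", web.RoleAssignments.Count);
}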

Ignoring this limit can do real damage to your farm …

There is even a Microsoft Knowledgebase article explaining why; SharePoint performance degradation with a large number of unique security scopes in lists (https://support.microsoft.com/kb/2420771)

This is really explained in far more detail in two excellent blog posts:

The first post describes the problem of trying to create more than 1000 security scopes, and what happens when you do this: https://wbblog.datapolis.com/2011/03/setting-item-permissions-with-workflow.html

The second post is by James Love (a.k.a. @jimmywim) and goes into real “deep dive” detail looking into the root cause of the problem (SQL Server and ACL GUIDs) and how this problem can actually bring down your ENTIRE FARM and not just the list / site you are working on!
https://e-junkie-chronicles.blogspot.com/2011/03/sharepoint-2010-performance-with-item_23.html

A quote from the second post is as follows:

“When you load up a huge list with lots of item level permissions, a single operation gets every single GUID associated with the ACL for that item and passes that back to the data access layer of SharePoint. When the database retrieves the actual list item data, it will pass in all of the ACL Guids back in as one long string, all concatenated together. The query to get the data creates a table variable re-assembles the the item level ACL Guid associated with each item. How the rest of the query deals with this is anyone’s guess at the moment – this table variable might just be passed back to the calling COM object (though I thought they couldn’t be used this way….) for the COM object to then sort out which item should be visible to which “scope” (or ACL).

So, what can we take away form this? Passing 640k of data about the place, for a SQL Query to do some substring math and converting to Guids will soon bring your database server to its knees. This is one request and it takes 2000ms to work. Imagine if you have 5 requests per second or more hitting this list!”

Both are excellent companion pieces to this post and well worth a look for another angle and a bit more detail!

Why does this become my problem?

Now.. looking back at my original problem, some of you may be thinking: OK, no problem; you can just create 20 different lists / libraries.. and have 500 unique permissions in each one?

Well .. so you might think .. and here I introduce the juggernaut that is Limited Access Scopes!

Anyone who has spent any time around SharePoint security will have noticed the odd “Limited Access” permission popping up in their site from time to time. “Limited Access” is automatically allocated to a parent Folder, List or Web whenever a child object has a unique permission allocated to it.

You can easily see these being created if you break permission inheritance on a list and just add a few accounts to that list. The parent Web will now have a “Limited Access” scope created for each user account you have added.

Now, hopefully the bright among you will already have spotted the problem.. it doesn’t matter how many lists or libraries you create.. every single user or group that you add will end up in the parent Web site with “Limited Access” (and in every single parent Web heading upwards).

The following diagram explains why.

You simply cannot get away from this fact. If you are adding 10,000 unique permissions with different user accounts then you will end up with 10,000 security scopes at the root web!

Note – the number of “Limited Access” scopes created is limited to the number of security principals you are adding.

If you are adding from a pool of 50 users then you will only ever be adding a maximum of 50 new “Limited Access” scopes (one for each user account).

For this reason it is a good idea to use Groups when adding permissions as this limits the number of “Limited Access” scopes which are created .. but this won’t solve your problem if you have over 1000 different security principals!

So that was the crux of my problem.. on investigation this looked to be a major problem (and an “impossible fix”), but it would seem not! There IS a workaround (one which I have tested with over 15,000 unique user accounts and which works very, very well indeed)…

The solution, workaround, and code samples are all included in Part 2

I’m speaking at SUGUK London

Well I am very pleased to announce that I will be speaking at the SharePoint User Group UK (London) on Thursday 25th August.

I am basically doing a “practice run” of my “Configuring Kerberos in a SharePoint 2010 Farm” session, which I will be presenting at SharePoint Saturday UK later this year.

This will involve:

  • Configuring Kerberos Live on a SharePoint 2010 farm, taking it from NTLM to Kerberos/Negotiate authentication
  • Configuring SQL and Analysis Services to use Constrained Delegation
  • Configuring SP2010 Excel Services to pass through the authentication credentials using the Claims to Windows Token Service
  • How to prove it is all working using “out of the box” tools
  • A few other resources, caveats and tricks

This is a FREE event at the LBi offices in central London; full details, signup and a map can be found on the SUGUK forum: https://suguk.org/forums/thread/27083.aspx

Should be a great event, hope to see you there and have a SharePint afterwards.

Forays into SharePoint 2010 Performance Testing with Visual Studio 2010

Over the past six months I have increasingly become an evangelist for Performance Testing. It was always an area that I was aware of, but I never really got massively involved in it; recently, though, I’ve seen it become an increasingly important part of my work, especially on larger-scale projects with load-balanced web front ends (for performance, not just redundancy) where you start hitting I/O limits on SQL. I suppose this may have been triggered by the SharePoint Conference 2009, and one of my follow-up blog posts, “Load Testing SharePoint 2010 with Visual Studio Team Test”.

So in this post I firstly wanted to look at why you should do Performance Testing.

It sounds like a bit of a stupid question (with an obvious answer) but it really is surprising how many people don’t do it. How many of you have ever asked the following questions on a project?

“How many users can the production system support?”
“What would be the impact of doubling the number of users?”
“What impact will backups have on performance?”
“How fast will the solution perform during peak hours?”
“What is the most cost-effective way of improving performance?”

All of these are questions that you absolutely HAVE to be able to answer. The client (whether it is your organisation, or another organisation who you are running a project for) deserves to know the answers to these, and without them how can you have any idea whether your solution is going to be fit for purpose?

Sure, you can read up on Estimating Performance and Capacity Planning in SharePoint, but all that gives you is some rough guidelines.. we need to be able to apply some science to the process!

The last question is probably the most compelling. Re-configuring farms and buying new hardware is an expensive process; the consultancy alone can cost thousands of pounds, and you don’t want your client coming back asking why they just spent tens of thousands of pounds on a new state-of-the-art iSCSI SAN array that had zero impact on performance (“hey.. we thought it would help.. but we didn’t really know!”) because the bottleneck was actually the CPU on the Web Front End (WFE).

The story often gets even worse when things do start going wrong. If you have ever been in the unfortunate position where you are troubleshooting a system that is performing badly, these kinds of questions are quite common:

“What is causing the poor performance?”
“How can we fix this?”
“Why did you not notice this during development?”

Again, the last two questions are the killers.. if you don’t do any Performance Testing then you won’t know that you have a problem until it is too late. The earlier you can get some metrics on this, the faster you will be able to react to performance issues (in some cases finding and fixing them before the client even knows about it!)

Equally, without performance testing you won’t know WHY the problems are occurring. And if you don’t know why, then you can’t know the best way to fix them!

So the key messages are this:

  • Early Warning .. catch problems early on and they will be easier to fix. There is no point waiting until users are hitting the system to find out the solution can’t cope with the load!
  • Knowledge … what is causing the problems, and how do you fix them?
  • Confidence … not just that you know what you are doing, but you can prove it. This instils confidence in your sales, confidence in your delivery, and confidence from your clients too!

Performance Testing with Visual Studio 2010
I’ve been using Visual Studio 2010 Ultimate edition. It is the only “2010” product that incorporates Web Performance Tests and Load Tests, the two critical pieces that you will use to test the performance of SharePoint 2010 (or any other web-based system). It also integrates tightly with Team Foundation Server and provides “Lab Management” capability, but that is outside the scope of this blog post.

In order to do comprehensive testing you really need 4 different software packages:

  1. Visual Studio 2010 Ultimate: This is where you create your tests and control the execution of them.
  2. Visual Studio 2010 Test Controller: Part of the Visual Studio Agents 2010 ISO, this allows you to co-ordinate tests executed by several “agents”, as well as collecting all of the test results (and performance counters) and storing them in a database. The license for this is included in Visual Studio 2010 Ultimate.
  3. Visual Studio 2010 Test Agent: Part of the Visual Studio Agents 2010 ISO, this can be installed on machines that will simulate load and execute tests. They are connected to a “Controller” which gives them instructions. The license for this is included in Visual Studio 2010 Ultimate.
  4. Visual Studio 2010 Virtual User Pack: This is a license that allows you to increase the number of virtual “users” you can simulate by 1,000 (for each pack that you purchase). It must be purchased separately (there is no trial version!)

If you need any help installing these and getting them running then there is a great MSDN article which you should read: Installing and Configuring Visual Studio Agents and Test and Build Controllers or the equally awesome article from Visual Studio Magazine: Load Testing with Visual Studio 2010.

So what about actually creating the tests?

Well, the interface is pretty simple. You can create your “Web Performance Tests” using a simple browser recorder (literally a web browser which records all of your actions; you then click “Stop” when you are finished). This works great, but there are a few caveats:

  • You might want to use the “Generate Code” option if you are adding documents or list items. This converts your recorded web test into a code file, allowing you to programmatically change document names or field values.. useful to make sure you are not just overwriting the same document over and over again (see the sketch just after this list)
  • Web Service tests require a bit more “knowledge” of how they work, needing the SOAP envelope (in XML) and the SOAPAction header.
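As an illustration of the first point above, here is a minimal sketch of the sort of thing you end up with after using “Generate Code” and parameterising it (the URL is a placeholder, and the actual upload post body from the recording is omitted):

using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

// sketch only - a coded web performance test that uses a unique document
// name on every iteration, so we aren't just overwriting the same file
public class UploadDocumentWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // generate a unique document name for this iteration and put it in the
        // test context so it can be bound to the upload form post parameters
        string documentName = "LoadTest_" + Guid.NewGuid().ToString("N") + ".docx";
        this.Context["DocumentName"] = documentName;

        // placeholder URL - in a real test this request (and its multipart
        // file post body) would come from the recorded test
        WebTestRequest uploadRequest =
            new WebTestRequest("http://sharepoint/sites/test/_layouts/Upload.aspx");
        uploadRequest.Method = "POST";

        yield return uploadRequest;
    }
}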

It is worth noting that there is an excellent Code Plex project available: “SharePoint Performance Tests“. Although this was written for Visual Studio 2008 (you can convert it to 2010 if you want) it contains a number of configurable tests (via XML) that allow you to dynamically create tests for generic SharePoint platforms .. well worth a look!

You can then very easily create a “Load Test”, which allows you to pick’n’mix tests and define a distribution of which tests you want to run.

My personal favourite is “Tests Per User Per Hour”. For this you would sit down with your client and work out “what would a typical user do in an hour of using the system?”. One such exercise resulted in this kind of activity distribution:

  • Hit the site home page 50 times
  • Execute 10 searches
  • Upload 5 documents
  • Respond to 20 workflow tasks

This kind of valuable information allows you to build your tests and then distribute them using the Load Test. All you do then is plug in how many users you want to simulate and away you go!

Counting the Counters?
All of this so far is great stuff, but without performance counters you really aren’t going to get much mileage out of Visual Studio. You might get the WHAT (i.e. do the tests complete quickly?) but you certainly won’t get the WHY, which is oh-so important (i.e. is it the CPU, RAM or disk?)

For this you need to add Performance Counters… thankfully this is ridiculously simple. You have something called “Counter Sets” which you can configure to collect from the computers that make up your farm.
There are a bunch of pre-defined counter-sets you can choose from:

  • Application
  • ASP.Net (I pick this for my WFE Servers)
  • .Net Application (I pick this for my Application Servers)
  • IIS
  • SQL (I pick this for my SQL Servers)

I won’t go into any more detail than that. A step-by-step walkthrough of the options (including screenshots) can be found at the Load Testing with Visual Studio 2010 article at Visual Studio Magazine.

What about the Results?
Well, there isn’t a really simple answer to this. You really need to have a good understanding of how the different hardware components interact, and what limits you should be looking for.

The big hardware counters (CPU usage, Available Memory) are the obvious ones. Any server which exceeds 80% CPU usage for any sustained period is going to be in trouble and is close to a bottleneck. Equally any server which starts to run out of memory (or more importantly .. slowly loses memory, suggesting a memory leak!) should be identified.

But it’s the deeper, more granular analysis that proves most useful. On a recent client project I was looking at a Proof of Concept environment. We knew that we had a bottleneck in our WFE (CPU was averaging around 90%) and it was extremely workflow heavy, but the page performance was far too bad to put down to just the CPU.

On closer inspection we found a direct correlation between the Page Response Time and the Disk Queue Length in SQL Server:

The top-left corner is the Disk Queue Length on the SQL Server, and the top right is the Page Response Time for the document upload operation (the bottom right is the overall test response time); clearly, the spikes happened at the same time.

This is the true power of using Visual Studio. All of the tests and performance counters are time-stamped, allowing you to drill into any specific instance and see exactly what was happening at that moment in time!

Looking closer at the SQL Disk usage, the Write Time (%) and Read Time (%) show us even more interesting results:

The top of the graph shows the Disk Write Usage (%) and the bottom half shows the Disk Read Usage (%). Clearly, the disk is very busy writing (often sitting at 100%) while it does very little reading. This fits perfectly with our test results, as most of the “read” operations (like viewing the home page, or executing a search) were extremely fast… but most of the “write” operations (like uploading a document) were much slower.

So the WHAT is slow write performance (uploading of documents).
The WHY is now very simple: the disks on the SQL Server need looking at (possibly upgrading to faster disks, or some optimisation in the configuration of the databases).

Conclusion
To be honest I could talk about this subject all day, but hopefully this gives you some indication of just how crucial Performance Testing is .. and how powerful Visual Studio can be as a testing tool.

The ease of creating test scripts, the vast flexibility and power of the enormous range of performance counters available, and the ability to drill into a single second of activity and see (simultaneously) what was going on in all of the other servers.. it’s an awesome combination.

I’ll probably be posting more blog posts on this in the future, but for now good luck, and hope you get as much of a kick out of VS2010 as I have 🙂

How to install SharePoint 2010 on Windows 7

For those of you interested in running the SharePoint 2010 Beta 2 on your home computers, or who have a Windows 7 workstation and want to get SharePoint up and running, there are a few small pitfalls that you need to be aware of!

Note – all of these tips are covered in the great MSDN article which explains these steps in detail.

64 Bit

The really obvious one is that SharePoint 2010 is 64-bit only, so you need to have Windows 7 x64 edition installed!

Pre-requisites

The second kicker is that you will have to install the pre-requisites manually! The auto-installer for Server 2008 simply doesn’t work on Windows 7.

One of the easiest ways to achieve most of these is to install the Visual Studio 2010 Beta 2. This sets up many of the pre-requisites, including .Net Framework 4 and Silverlight 🙂

Some of the other steps (such as making sure you have IIS and SQL installed) you will have to do manually though.

Note – The MSDN article contains detailed information on which pre-requisites you need to install, and how to do it!

Installing SharePoint 2010

Ok .. the first thing you will notice is that the installer won’t actually let you install it. You will get an error message warning you that you actually need to be running Windows Server 2008. (Fear not .. you haven’t got the wrong download!)

There are 2 easy steps to get it all working:

1) Open a command prompt, and run the installer with the /Extract argument. This will extract the files from the installer to a specified directory. For example:

OfficeServer.exe /extract:c:\OfficeServer

2) Once the files have been extracted, go to “..\Files\Setup\config.xml” and open this file for editing (e.g. C:\OfficeServer\Files\Setup\config.xml)

You need to add a line inside the <Configuration> element, just before the closing </Configuration> tag:

<Setting Id="AllowWindowsClientInstall" Value="True" />


Now you can run the Setup.exe from the extracted folder (e.g. C:\OfficeServer\setup.exe), and installation of SharePoint 2010 begins!

cheers!

Martin

IIS7 broke my Content Deployment! (404 – Not Found error)

This is an important one to bear in mind, especially as SharePoint 2010 is likely to have the same limitation (IIS7!)

Content Deployment works by creating exports of the Site Collection data, and splitting it up into CAB files with a default size of 10MB. These are then shipped off to the target Central Admin Application (using a Layouts page called DeploymentUpload.aspx). Once all the CAB files have been received they are imported into the database of the target site collection.

All very simple, but what do you do when your Content Deployment job starts throwing errors like “404 – Not Found”?? (note – Content Deployment errors end up in the Windows Application Logs).

Well, the first place to look would be the IIS logs for your Central Administration application (which for IIS7 are located under C:\inetpub\logs\LogFiles by default). Look for a reference to “DeploymentUpload.aspx” with a 404 error reference, an example of which is below:

2010-02-02 10:16:29 ::1 POST /_admin/Content+Deployment/DeploymentUpload.aspx filename=%22ExportedFiles18.cab%22&remoteJobId=%22b8a556bc-8eef-4f6f-97b6-6bac54ae8d99%22 40197 – ::1 – 404 13 0 78

Now, I have highlighted the error fragment which states that this is a 404.13 error (Content Length Too Large)! The reason for this is that one of the CAB files is too big, and IIS7 has a default file upload limit of around 28MB (30,000,000 bytes)!

Now, the quick-witted among you will remember I said that the CAB files are automatically split up into 10MB chunks.. but if MOSS comes across a single file that is too big, it will simply expand the CAB size until it can fit that file in!! In the case of a project I’m working on this led to a 68MB CAB file!

The only workaround is to configure the IIS7 Virtual Directory for Central Administration to allow file uploads big enough for my CAB file to get through (in my case, I set it to 80,000,000 bytes, approx 75MB).

To do this, open up a command prompt, navigate to C:\Windows\System32\Inetsrv\ and execute the following command:

appcmd set config "SharePoint Central Administration v3" -section:system.webServer/security/requestFiltering -requestLimits.maxAllowedContentLength:80000000

I restarted the Content Deployment job and all was good.. working again 🙂 So something to bear in mind… IIS7 can break your Content Deployment jobs!!

NewSID is dead.. duplicate machine SID no longer a problem?

Well, this one was gob-smacking! A colleague of mine (Tristan Watkins) pointed me at this article from Mark Russinovich, the developer of the “NewSID” tool that so many people use to generate a new machine SID (typically after re-imaging or copying a virtual machine).

Well, it seems we don’t need to bother any more, and never really did, at least in the vast majority of cases!

As Mark puts it: “It’s a little surprising that the SID duplication issue has gone unquestioned for so long, but everyone has assumed that someone else knew exactly why it was a problem. To my chagrin, NewSID has never really done anything useful and there’s no reason to miss it now that it’s retired.”

To read more, go check out Mark’s blog post here: https://blogs.technet.com/markrussinovich/archive/2009/11/03/3291024.aspx

Update – A good follow-up summary has also been written by one of my colleagues, Tristan Watkins.

Load Testing SharePoint 2010 with Visual Studio Team Test

So exactly what do we mean by "load testing" when it comes to SharePoint 2010? There are lots of methods that people tend to point towards, and I’ve heard "hits/visits per day" and "throughput" bandied about, but at the end of the day it comes down to 2 things:

  1. Requests Per Second

The requests per second figure literally means how many requests for information each server is capable of responding to per second. Each page may consist of dozens of artifacts, and for each artifact the browser needs to make a “request”; therefore, the more of these “requests” the server can serve, the better.

  2. Server Response Time

The response time represents any processing on the server side (or TTLB – Time to Last Byte). This doesn’t factor in network latency or bandwidth though!

So the first thing you should think about is: what can influence those metrics? You end up with 5 different elements of your SharePoint 2010 farm:

  • WFE
  • Storage
  • Network
  • App Servers
  • SQL

This, as I’m sure you can imagine, can involve a LOT of testing. Simply testing the WFEs on their own is going to be a struggle for your average developer, and if you don’t have any industry testing experience you are going to have a hard time.. but this is where the new SharePoint 2010 wave continues to make its presence felt…

SharePoint 2010 Load Testing Toolkit

This is a new set of tools being released with the SharePoint 2010 Administration Toolkit and represents the easiest possible way of load testing your SharePoint environment. The main objectives here are to:

  • Standardise load testing and reduce its cost
  • Simulate common SharePoint operations
  • Be used as reference to create other custom tests (for custom code, for example!)

The whole thing relies on your IIS logs. These logs give pointers on where users are going, what kinds of requests they are making (GET / PUT) as well as the types of files they are typically accessing (ASPX / CSS / JS / JPEG / DOCX / etc…)

The Load Testing Toolkit will analyse your IIS logs and automatically generate a set of load tests to match your environment, producing automated scripts that can be run in Visual Studio (either Team System or Team Test Edition).

How hard can it be?

It is really quite simple (well, according to the ridiculously simple explanation at the SharePoint Conference 2009!). You literally point the tool at your IIS logs and it spits out an entire suite of tests for WFE, SQL, storage, etc... including all the metrics you would want (CPU, RAM, network, disk I/O and even SQL, ASP.Net and .Net Framework specific performance counters).

Then you just run it and analyse the results!

Analyse That!

The analysis couldn’t be simpler. With “Requests Per Second” and “Response Times” being two of the metrics generated by the Visual Studio test reports, you really can’t go far wrong.

If you do find a problem, then you can delve into the new SharePoint 2010 "Usage Database" (which now runs on SQL Server) in order to identify exactly what was causing your dip in performance (say when someone deletes a large list?).

Tips and Tricks

There are a few gotchas; one thing to be careful of is “Validation Rules” in Visual Studio. Typically it will be happy with any page that returns a “200” code. This of course includes Error and Access Denied pages (which SharePoint handles itself, returning a perfectly valid page and hence a 200 code!).
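One way around that (a sketch only, with example marker strings that you would adjust to suit your own master pages and language packs) is a custom validation rule that fails any request whose body looks like an error or access denied page:

using Microsoft.VisualStudio.TestTools.WebTesting;

// sketch only - fail a request whose body looks like a SharePoint error or
// access denied page, even though it came back with a 200 status code
public class NotAnErrorPageValidationRule : ValidationRule
{
    public override void Validate(object sender, ValidationEventArgs e)
    {
        string body = e.Response.BodyString ?? string.Empty;

        // example marker strings - adjust these for your own pages
        if (body.Contains("Access Denied") ||
            body.Contains("An unexpected error has occurred"))
        {
            e.IsValid = false;
            e.Message = "Response looks like a SharePoint error or access denied page.";
        }
        else
        {
            e.IsValid = true;
        }
    }
}

You can then attach the rule to your web tests, either in the web test editor or in code via the ValidateResponse event.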

It is also recommended that you let your test “warm up” for around an hour before you start taking the results seriously. This allows all of the operations, timers and back-end mechanics of SharePoint to properly settle down, and means you are getting a realistic view of how the environment will behave once it is bedded into its production role.

Finally, the SharePoint Usage Logging Database is a great location to grab information out of, so why not leverage other great aspects of the Office 2010 family? You could pull the Usage DB information into Excel 2010 (perhaps using PowerPivot?) so that you can spin out charts and pivot tables to easily drill down into your data.

Typically load testing tells you WHEN bottlenecks are occurring, but the Usage Database can tell you WHAT is causing the bottlenecks!
