Thursday 31 December 2015

Cloning Reddit to learn ASP.NET 5 - Part 2

In the previous post we got started in our quest to build a Reddit Clone. We ended up with a broken project so let's try to fix that.

The first thing is to add a UserDetails class and then link it to a RedditUser class, which will also need to be added to the Models folder.

UserDetails.cs
namespace Reddit.Models
{
    public class UserDetails
    {
        public int UserId { get; set; }
        public long LinkKarma { get; set; }
        public long CommentKarma { get; set; }
        public RedditUser User { get; set; }      
    }
}

RedditUser.cs
namespace Reddit.Models
{
    public class RedditUser
    {
        public int RedditUserId { get; set; }
        public string Nick { get; set; }
        public UserDetails UserDetails { get; set; }
    }
}

We will modify the RedditUser class later on to allow users to log in, which is why it's such a small class for now. If this were a real clone then we'd probably want to use Guids for the keys.

At this point we should have a working project again, which can be verified by building and/or debugging it.

We now want to create the database with the RedditUser and UserDetails tables and the right relationship between them, namely a one-to-one relationship.

Firstly, modify the Startup.cs file so that the Configuration property is static, which will allow us to access it without having to instantiate the class again, and add the Entity Framework services.
public static IConfigurationRoot Configuration { get; set; }

public void ConfigureServices(IServiceCollection services)
        {
            // Add framework services.
            services.AddMvc();
            services.AddEntityFramework()
                .AddSqlServer()
                .AddDbContext<RedditContext>();      
        }
Edit the appsettings.json file
 "Data": {
    "RedditContextConnection": "Server=(localdb)\\MSSQLLocalDB;Database=Reddit;Trusted_Connection=true;MultipleActiveResultSets=true"
  }
Finally let's edit the Context class (RedditContext.cs)
using Microsoft.Data.Entity;

namespace Reddit.Models
{
    public class RedditContext : DbContext
    {
        public RedditContext()
        {
            //Database.EnsureCreated();
        }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            var connection = Startup.Configuration["Data:RedditContextConnection"];
            optionsBuilder.UseSqlServer(connection);
            base.OnConfiguring(optionsBuilder);          
        } 

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<RedditUser>()
                .HasKey(x => x.RedditUserId);

            modelBuilder.Entity<UserDetails>()
                .HasKey(x => x.UserId);
            
            modelBuilder.Entity<RedditUser>()
                .HasOne(x => x.UserDetails)
                .WithOne(x => x.User);

            base.OnModelCreating(modelBuilder);
        }
    }
}

The OnConfiguring method sets up the connection string to the database.
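As an aside, the connection could instead be configured when registering the context in ConfigureServices, which avoids the context having to reach back into Startup. This is only a sketch of the alternative, using the same Data:RedditContextConnection key (and it needs a using Microsoft.Data.Entity; in Startup.cs); I'm sticking with OnConfiguring for now.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Alternative sketch: pass the connection string at registration time
    // instead of overriding OnConfiguring in the context.
    services.AddEntityFramework()
        .AddSqlServer()
        .AddDbContext<RedditContext>(options =>
            options.UseSqlServer(Configuration["Data:RedditContextConnection"]));
}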

The OnModelCreating method sets up the relationships between the tables in the database. It's worth mentioning here that I'm using the Fluent API to do this, but it's also possible to do it via data annotations.

I'm torn as to which is the best way of doing this, as both have advantages and disadvantages. I suspect that I will stick with the Fluent API for the time being, even if I probably won't need any of the features that it provides over and above data annotations.
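For comparison, the explicit keys could also be declared with data annotations instead of the Fluent API. This is just a sketch of what UserDetails would look like with a [Key] attribute; the one-to-one relationship itself is, in my view, still clearer in the Fluent API.

using System.ComponentModel.DataAnnotations;

namespace Reddit.Models
{
    public class UserDetails
    {
        // [Key] replaces the modelBuilder.Entity<UserDetails>().HasKey(...) call.
        [Key]
        public int UserId { get; set; }
        public long LinkKarma { get; set; }
        public long CommentKarma { get; set; }
        public RedditUser User { get; set; }
    }
}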

I have explicitly added the primary keys for each table; this is only strictly necessary for UserDetails, as its key doesn't follow the <ClassName>Id convention (I think). Now we can create the database. From src\Reddit run the following command, which will create a migration called Start:
 dnx ef migrations add Start 
Note that a new folder called Migrations will be created and populated with two files: a migration file, normally named DateTime_MigrationName, and a model snapshot.

When I first ran this, it created the database. This was due to the Database.EnsureCreated() call in the constructor, which wasn't actually commented out at that point; once it was commented out, the expected behaviour resumed. We now need to run the following command to apply the changes:
dnx ef database update
Let's add a few more models for SubReddit, Posts and Comments:

using System.Collections.Generic;

namespace Reddit.Models
{
    public class SubReddit
    {
        public int SubRedditId { get; set; }
        public string Name { get; set; }
        public bool IsPublic { get; set; }
        public List<Post> Posts { get; set; }
        public List<RedditUser> Users { get; set; }
        public bool IsBanned { get; set; }

    }
}
using System.Collections.Generic;

namespace Reddit.Models
{
    public class Post 
    {
        public int PostId { get; set; }
        public string Title { get; set; }
        public string Content { get; set; }
        public string Link { get; set; }
        public int UpVotes { get; set; }
        public int DownVotes { get; set; }
        public List<Comment> Comments { get; set; }
        public RedditUser Author { get; set; }
        public int AuthorId { get; set; }
        public SubReddit SubReddit { get; set; }
        public int SubRedditId { get; set; }


    }
}
namespace Reddit.Models
{
    public class Comment
    {
        public int CommentId { get; set; }
        public string Title { get; set; }
        public string Text { get; set; }
        public int UpVotes { get; set; }
        public int DownVotes { get; set; }
        public RedditUser Author { get; set; }
        public int AuthorId { get; set; }
    }
}
I have added foreign keys to Comment and Post to make querying easier. We create another migration called Core and apply it:
dnx ef migrations add Core
dnx ef database update
Oddly, nothing happens. Why?

Wednesday 30 December 2015

Cloning Reddit to learn ASP.NET 5 - Part 1

I've always found it a bit boring to follow a book or video to try to learn a new (or old) technology, so I thought I would try a different approach this time.

I would set myself a project which, while perhaps not that much better than a bicycle store, would at least be more interesting for me.

So I'm doing a clone of Reddit...obviously it won't be a fully functional clone of Reddit but it should have most of the functionality, which will create some interesting challenges.

I don't know how many posts there will be as I am making it up as I go along.

DISCLAIMER

I'm using this as a learning experience, so there is likely a lot that will be wrong, which I will try to amend when I realize that it is wrong so please bear that in mind if you're reading this. It also means that some things might get changed dramatically but I guess that's part of the learning process, right?

Pre-Requisites

  • Visual Studio 2015 (A free edition can be downloaded from this page)
  • ASP.NET 5 RC 1, which can be downloaded from here

We start by creating a new Web Application Project in Visual Studio, which we'll call Reddit:


Ensure that you select Web Application, but rather than using the out-of-the-box authentication we'll use no authentication.



We are going to need a database to store all the data. Controversially, I'm not using a NoSQL DB, although EF7 will support them.

Let's add Entity Framework to the project.

We have to edit the project.json file to do this. The added parts are the three EntityFramework dependencies and the ef command.
"dependencies": {
    "EntityFramework.Core": "7.0.0-rc1-final",
    "EntityFramework.Commands": "7.0.0-rc1-final",
    "EntityFramework.MicrosoftSqlServer": "7.0.0-rc1-final",
    "Microsoft.AspNet.Diagnostics": "1.0.0-rc1-final",
    "Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Mvc.TagHelpers": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final",
    "Microsoft.AspNet.StaticFiles": "1.0.0-rc1-final",
    "Microsoft.AspNet.Tooling.Razor": "1.0.0-rc1-final",
    "Microsoft.Extensions.Configuration.FileProviderExtensions": "1.0.0-rc1-final",
    "Microsoft.Extensions.Configuration.Json": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging.Console": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging.Debug": "1.0.0-rc1-final",
    "Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.0-rc1-final"
  },

  "commands": {
    "web": "Microsoft.AspNet.Server.Kestrel",
    "ef": "EntityFramework.Commands"
  },

This will pull the relevant DLLs into the project. It's worth pointing out that NuGet will let you choose which version to use, so there is no need to memorize the version numbers :)

At this point we can use database migrations, but we don't have any models yet... If you've used EF 6, note that migrations are done differently in EF 7. In essence, the command is (note that it needs to be run from the project folder, src\Reddit):
dnx ef <command> <options>
Before adding any models, we need to be able to access the database.  First, let's add a folder called Models and then a simple Repository Interface that I'll call IRedditRepository, along with an implementation RedditRepository and a RedditContext class, which is the actual DB Context.

RedditContext.cs
using Microsoft.Data.Entity;

namespace Reddit.Models
{
    public class RedditContext : DbContext
    {
        public RedditContext()
        {
            Database.EnsureCreated();
        }
    }
}
IRedditRepository.cs

namespace Reddit.Models
{
    public interface IRedditRepository
    {
        UserDetails GetUserDetails();
    }
}
UserDetails is a data model and will be defined in the next post, so don't worry too much about it yet.

RedditRepository.cs

using Microsoft.Extensions.Logging;
using System;

namespace Reddit.Models
{
    public class RedditRepository : IRedditRepository
    {
        private readonly RedditContext ctx;
        private readonly ILogger<RedditRepository> logger;

        public RedditRepository(RedditContext ctx, ILogger<RedditRepository> logger)
        {
            this.ctx = ctx;
            this.logger = logger;
        }

        public UserDetails GetUserDetails()
        {
            throw new NotImplementedException();
        }      
    }
}
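The repository isn't wired up anywhere yet. When it is, it will need to be registered with the built-in dependency injection container so that it (and its RedditContext and ILogger dependencies) can be injected into controllers. This is a minimal sketch of what that registration might look like in Startup.ConfigureServices, assuming the usual Microsoft.Extensions.DependencyInjection namespace is available there:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Sketch only: one repository instance per HTTP request.
    services.AddScoped<IRedditRepository, RedditRepository>();
}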
Other than having a broken project we've not achieved much, yet :(

Stay tuned for part 2.

Friday 20 November 2015

Instantiating the OrganizationServiceProxy for IFD enabled organizations.

Last week was an interesting week at work, highlights include running an application from my laptop against the live service.

It took me a while to get this working because I was using the wrong endpoint, D'oh, and the wrong type of credentials, double D'oh.

In any case, here it is for posterity:
using System;
using System.Configuration;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;

public class IFDCrmService
    {
        const string ServiceUrl = "https://{0}/XRMServices/2011/Organization.svc";

        public static IOrganizationService GetIFDCrmService()
        {
            ClientCredentials credentials = new ClientCredentials();
            credentials.UserName.UserName = ConfigurationManager.AppSettings["UserName"];
            credentials.UserName.Password = ConfigurationManager.AppSettings["Password"];

            if (ConfigurationManager.AppSettings["FQDN"] == null) { throw new ArgumentNullException("FQDN", "FQDN key missing."); }
            string fqdn = ConfigurationManager.AppSettings["FQDN"]; 
        
            return new OrganizationServiceProxy(new Uri(string.Format(ServiceUrl, fqdn)), null, credentials, null);
        }
    }
The relevant part of the app.config is below:
<appSettings>
    <add key="UserName" value="dev\crmapppool" />
    <add key="Password" value="P@55w0rd1" />    
    <add key="FQDN" value="crmdev.dev.com"/>
 </appSettings> 
It's always a good idea to encrypt the passwords, but I won't discuss that here today.
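A quick way to check that the proxy actually authenticates is to execute a WhoAmI request against it. This is just a smoke-test sketch, assuming the usual SDK assemblies (Microsoft.Xrm.Sdk and Microsoft.Crm.Sdk.Proxy) are referenced:

using System;
using Microsoft.Crm.Sdk.Messages;   // WhoAmIRequest / WhoAmIResponse
using Microsoft.Xrm.Sdk;            // IOrganizationService

class Program
{
    static void Main()
    {
        // If the endpoint and credentials are correct, this returns the calling user's id.
        IOrganizationService service = IFDCrmService.GetIFDCrmService();
        var response = (WhoAmIResponse)service.Execute(new WhoAmIRequest());
        Console.WriteLine("Connected as user: " + response.UserId);
    }
}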

Monday 26 October 2015

Adventures using Availability Groups and RBS with SharePoint 2013

The concept behind remote blob storage (RBS) is pretty simple, see this for instance. I just want to talk about the myriad issues we've had when using RBS with availability groups.

Our database setup uses Availability Groups (AGs), which, and this is controversial, are essentially a cheap cluster. I do get that there are advantages to availability groups, but these seem to be outweighed by the disadvantages. I know this is just my opinion, and yes, I also know nothing about availability groups, HA clusters or anything in general, thank you for pointing that out.

So what are the drawbacks of AGs?

  • Official Support is patchy. e.g. in Dynamics CRM 2013 one is forced to update the database directly.
  • Performance can be an issue as the database is always using at least two boxes.
  • Stuff flat out refuses to work, e.g. RBS Maintainer, various SharePoint database related operations.
To the AG mix we introduced RBS, and this is where things started to go horribly wrong for us.

The first issue we encountered was the inability to delete a content database from SharePoint, which is not a major issue but it's really annoying.

The second issue was that the RBS Maintainer would not work, so the storage requirements would just keep growing. This might not be an issue if you don't plan to archive your documents, but our DB had ~500 GB of docs, about 2/3 of which were old but needed to be kept for contractual reasons.

This effectively put a nail in the coffin of the RBS + AG combo but there is more.

In order to load the ~500 GB of documents, we had a multi-threaded tool that essentially read the documents from the source DB and uploaded them to SharePoint using the SharePoint CSOM.

At this point, it's worth mentioning that our hosting provider does not guarantee any sort of performance level (too long to explain).

A couple of weeks back, with RBS on the database, we did a trial run of the upload and we were hitting very poor rates, ~ 4 GB per hour.

Last week, after RBS had been disabled and the content databases recreated, we tried a second trial run and the speed jumped to ~ 20 GB per hour.

I can't say that our RBS configuration was perfect; I think the threshold was on the low side (128 KB), but even so, the speed increase has been massive.

It actually gets better, because the 4 GB per hour figure was using both servers in the farm, whereas the 20 GB per hour figure was simply using one.

Yes, yes, I know our hosting provider is crap and 128 KB is below the recommended threshold, but a five-fold increase in transfer rates and a lowering of the error rate to almost zero is something that should be considered.

Sunday 4 October 2015

Integrating MS SharePoint with MS Dynamics CRM 2011/2013 - User Permissions

One of the things that seems to come up time and again on any integration of MS Dynamics CRM and SharePoint is the issue of user permissions.

Generally speaking it would be nice to be able to control access to SharePoint based upon the permissions the user has in MS Dynamics CRM. Alas, this is not possible without writing a bit of code. (I've not investigated the server-to-server integration yet as it seems to be available for online Dynamics CRM only.)

The way I have done this is by using an HttpModule, deployed to the MS SharePoint servers, to check whether the user making the request to the MS SharePoint site actually has access to the record in MS Dynamics CRM itself.

In our case this is pretty straightforward as we only store documents for a single entity, but there is nothing in principle to rule out an expansion to multiple entities.

Depending on the usage, caching will need to be a serious consideration, as performance could be impacted, but I have not thought about it too much yet.

The following assumptions have been made about the integration between MS Dynamics CRM and MS SharePoint:
  1. A document library exists and is named with the entity schema.
  2. Each entity record in MS Dynamics CRM has a single folder in MS SharePoint and this folder is named with the GUID of the record. 
  3. Entity Records in MS Dynamics CRM are not shared.

This is the code for the module itself:

using log4net;
using Microsoft.Crm.Sdk;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Xrm.Sdk.Query;
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Net;
using System.Security.Principal;
using System.ServiceModel.Description;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using System.Web;

namespace CRMUserPermissions
{
    public class CRMUserPermissions : IHttpModule
    {

        public static ConcurrentDictionary<string, Guid> userIds = new ConcurrentDictionary<string, Guid>();

        const string GuidPattern = @"(/|%2f)([A-F0-9]{8}(?:-[A-F0-9]{4}){3}-[A-F0-9]{12})";

        const string UserIdQuery = @"<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
  <entity name='systemuser'>   
    <attribute name='systemuserid' />
    <attribute name='domainname' />
    <filter type='and'>
      <condition attribute='domainname' operator='eq' value='{0}' />
    </filter> 
  </entity>
</fetch>";

        public void Dispose()
        {
        }

        public void Init(HttpApplication context)
        {
            context.PostAuthenticateRequest += new EventHandler(context_PostAuthenticateRequest);
        }

        void context_PostAuthenticateRequest(object sender, EventArgs e)
        {
            HttpApplication app = sender as HttpApplication;
            HttpContext context = app.Context;

            if (IsRequestRelevant(context))
            {
                try
                {

                    string user = HttpContext.Current.User.Identity.Name.Split('|').Last();

                    var service = CrmService.GetService();

                    string url = app.Request.Url.ToString();

                    if (!userIds.ContainsKey(user))
                    {
                        string query = string.Format(UserIdQuery, user);
                        var userId = service.RetrieveMultiple(new FetchExpression(query)).Entities.SingleOrDefault();
                        userIds.TryAdd(user, userId.Id);
                    }

                    var record = GetRecordInfo(url);

                    RetrievePrincipalAccessRequest princip = new RetrievePrincipalAccessRequest();
                    princip.Principal = new EntityReference("systemuser", userIds[user]);

                    princip.Target = new EntityReference(record.Item1, record.Item2);

                    var res = (RetrievePrincipalAccessResponse)service.Execute(princip);

                    if (res.AccessRights == AccessRights.None)
                    {
                        app.Response.StatusCode = 403;
                        app.Response.SubStatusCode = 1;
                        app.CompleteRequest();
                    }
   
                }
                catch (Exception)
                {
                    app.Response.StatusCode = 403;
                    app.Response.SubStatusCode = 1;
                    app.CompleteRequest();
                }
            }
        }
    }
}

A few comments are in order since I'm not including all methods.

IsRequestRelevant(context): This method checks that the user is authenticated and that the request is for documents relating to an entity to which we want to control access via this module.
CrmService.GetService(): This method just returns an OrganizationServiceProxy.
GetRecordInfo(url): This method works out the record GUID and what type of entity it is.
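The original GetRecordInfo isn't shown, but given assumptions 1 and 2 above (document library named after the entity schema name, folder named with the record GUID), a rough sketch of it might look like this; treat it as illustrative only, since it ignores URL-encoded paths such as the %2f case covered by GuidPattern:

// Illustrative sketch only - the real implementation isn't included in this post.
// Assumes URLs of the form https://<sp>/sites/<site>/<entity_schema_name>/<record_guid>/<document>
private Tuple<string, Guid> GetRecordInfo(string url)
{
    var segments = new Uri(url).Segments;

    for (int i = 1; i < segments.Length; i++)
    {
        Guid recordId;
        if (Guid.TryParse(segments[i].Trim('/'), out recordId))
        {
            // The segment before the GUID folder is assumed to be the entity schema name.
            string entityName = segments[i - 1].Trim('/');
            return Tuple.Create(entityName, recordId);
        }
    }

    throw new ArgumentException("Could not extract a record id from: " + url);
}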

It would probably be a better idea to get all, or some, users and cache them on receipt of the first query.

Depending on the system's usage profile, different types of caching make more or less sense. For instance, if users tend to access multiple documents within a record in a relatively short time, say 10 minutes, then it makes sense to cache the records and the user's rights to them; but if users only tend to access a single document within a record, this would make less sense. Consideration also needs to be given to the memory pressure that caching will create if not using a separate caching layer, such as Redis.
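To make that concrete, one possible approach is to cache the outcome of the access check per user and record with a 10 minute sliding expiration. This is only a sketch, assuming the module can reference System.Runtime.Caching; it is not part of the code above:

using System;
using System.Runtime.Caching;

// Sketch: cache the outcome of the CRM access check per user/record pair for 10 minutes,
// so repeated document requests within the same record don't hit Dynamics CRM every time.
static class AccessCache
{
    static readonly MemoryCache cache = MemoryCache.Default;

    public static bool? Get(string user, Guid recordId)
    {
        return cache.Get(user + "|" + recordId) as bool?;
    }

    public static void Set(string user, Guid recordId, bool hasAccess)
    {
        cache.Set(user + "|" + recordId,
                  hasAccess,
                  new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(10) });
    }
}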

The simplest way of deploying this HttpModule is by installing the assembly in the GAC and then manually modifying the web.config of the relevant MS SharePoint site by adding the HttpModule to the modules in configuration/system.webServer/modules:

<add name="CRMUserPermissions" type="CRMUserPermissions.CRMUserPermissions, CRMUserPermissions, Version=1.0.0.0, Culture=neutral, PublicKeyToken=87b3480442bff091"></add>
I will show how to do this properly, i.e. by packaging it up in an MS SharePoint solution, in an upcoming post.

Sunday 27 September 2015

IIS App Pool Credentials Exposed

Last week I was looking at changing the periodic restart for an app pool using the appcmd tool, and I found something very interesting: using this tool can reveal the username and password used for the app pool identity.

See below:
PS C:\Windows\system32\inetsrv> whoami
dev\crminstall
PS C:\Windows\system32\inetsrv> .\appcmd list apppool /"apppool.name:CRMApppool" -config
<add name="CRMAppPool" managedRuntimeVersion="v4.0" managedPipelineMode="Classic">
  <processModel identityType="SpecificUser" userName="dev\crmapppool" password="MyPassword" idleTimeout="1.01:00:00" />
  <recycling>
    <periodicRestart time="1.05:00:00">
      <schedule>
      </schedule>
    </periodicRestart>
  </recycling>
  <failure />
  <cpu />
</add>
The user in question was a local administrator (member of the local Administrators group) and the command was run from PowerShell with elevated permissions.

So you might need to be logged in as an administrator, but you should under no circumstances be able to see another user's password. This is a pretty big security hole, IMHO.

I've only tried this on Windows Server 2012 and 2012 R2, but the behaviour seems consistent.

Incidentally, this does not seem to be the first case where credentials are exposed like this, see this post. It's fair to mention that the issue on the link was eventually fixed.

Monday 21 September 2015

ID1073: A CryptographicException occurred when attempting to decrypt the cookie using the ProtectedData API (see inner exception for details).

A few weeks back, the performance testers found this issue when doing some resilience testing:
ID1073: A CryptographicException occurred when attempting to decrypt the cookie using the ProtectedData API (see inner exception for details). If you are using IIS 7.5, this could be due to the loadUserProfile setting on the Application Pool being set to false.
In essence, if the user started their session on server 1 and that server was then stopped, the issue would occur when they connected to server 2. Annoyingly, fixing it would require removing cookies, which our testers are not able to do on their locked-down machines.

The NLB flag was checked for this deployment, but this seemed to make no difference, so we decided to encrypt the cookies using a certificate rather than the machine key.

This is a bit annoying as there are few things to consider:
  1. We're going to have to modify the MS Dynamics CRM web.config, which means that every new patch, update, etc.. might overwrite it.
  2. Following from that we need a deployment script to automate it as much as possible.
  3. We'll need to store a way to identify the certificate to be used for the encryption somewhere easily accessible (we could hard-code it, but then when the certificate expires we'd be in trouble).
I decided to use the registry for 3.

This is the code that we've used to encrypt the cookies with a certificate.

using Microsoft.IdentityModel.Tokens;
using Microsoft.IdentityModel.Web;
using Microsoft.Win32;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;

namespace CRM.RSASessionCookie
{
    /// <summary>
    /// This class encrypts the session security token using the RSA key
    /// of the relying party's service certificate.
    /// </summary>
    public class RsaEncryptedSessionSecurityTokenHandler : SessionSecurityTokenHandler
    {
        static List<CookieTransform> transforms;

        static RsaEncryptedSessionSecurityTokenHandler()
        {            
            string certThumbprint = (string)Registry.GetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM",
                "CookieCertificateThumbprint",
                null);

            if (!string.IsNullOrEmpty(certThumbprint))
            {
                X509Certificate2 serviceCertificate = CertificateUtil.GetCertificate(StoreName.My,
                                                         StoreLocation.LocalMachine, certThumbprint);

                if (serviceCertificate == null)
                {
                    throw new ApplicationException(string.Format("No certificate was found with thumbprint: {0}", certThumbprint));
                }

                transforms = new List<CookieTransform>() 
                         { 
                             new DeflateCookieTransform(), 
                             new RsaEncryptionCookieTransform(serviceCertificate),
                             new RsaSignatureCookieTransform(serviceCertificate),
                         };
            }
            else
            {
                throw new ApplicationException(
                    @"Could not read Registry Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM\CookieCertificateThumbprint. Please ensure that the key exists and that you have permission to read it.");
            }
        }

        public RsaEncryptedSessionSecurityTokenHandler()
            : base(transforms.AsReadOnly())
        {
        }
    }

    /// <summary>
    /// A utility class which helps to retrieve an x509 certificate
    /// </summary>
    public class CertificateUtil
    {
        /// <summary>
        /// Gets an X.509 certificate given the name, store location and the subject distinguished name of the X.509 certificate.
        /// </summary>
        /// <param name="name">Specifies the name of the X.509 certificate to open.</param>
        /// <param name="location">Specifies the location of the X.509 certificate store.</param>
        /// <param name="thumbprint">Subject distinguished name of the certificate to return.</param>
        /// <returns>The specific X.509 certificate.</returns>
        public static X509Certificate2 GetCertificate(StoreName name, StoreLocation location, string thumbprint)
        {
            X509Store store = null;
            X509Certificate2Collection certificates = null;           
            X509Certificate2 result = null;

            try
            {
                store = new X509Store(name, location);
                store.Open(OpenFlags.ReadOnly);
                //
                // Every time we call store.Certificates property, a new collection will be returned.
                //
                certificates = store.Certificates;

                for (int i = 0; i < certificates.Count; i++)
                {
                    X509Certificate2 cert = certificates[i];

                    if (cert.Thumbprint.Equals(thumbprint, StringComparison.InvariantCultureIgnoreCase))
                    {
                        result = new X509Certificate2(cert);
                        break;
                    }
                }                
            }
            catch (Exception ex)
            {
                throw new ApplicationException(string.Format("An issue occurred opening cert store: {0}\\{1}. Exception:{2}.", name, location, ex));
            }
            finally
            {
                if (certificates != null)
                {
                    for (int i = 0; i < certificates.Count; i++)
                    {
                        X509Certificate2 cert = certificates[i];
                        cert.Reset();
                    }
                }

                if (store != null)
                {
                    store.Close();
                }
            }

            return result;
        }
    }
}

Company standards dictate that this assembly should be deployed to the GAC, but it can be deployed to the CRM website's bin folder instead.

This is the PowerShell function in our script that sets the certificate on the registry:

function SetCookieCertificateThumbprint
{
 param ([string]$value)
 $path = "hklm:\Software\microsoft\mscrm"
 $name = "CookieCertificateThumbprint"
 
 if( -not (Test-Path -Path $path -PathType Container) )
 {
  Write-Error ("Cannot find MSCRM Registry Key: " + $path)
 }
 else
 {
  $keys = Get-ItemProperty -Path $path

  if (-not $keys.$name -or $keys.$name -ne $value)
  {
   Set-ItemProperty -path $path -name $name -value $value 
  }
 }
}

I have not automated the rest, which is the really fiddly part, i.e. updating the web.config. Here are the relevant parts though:

Config Sections First:
<configSections>
    <!-- COMMENT:START CRM Titan 28973
   If you add any new section here , please ensure that section name is removed from help/web.config
End COMMENT:END-->
    <section name="crm.authentication" type="Microsoft.Crm.Authentication.AuthenticationSettingsConfigurationSectionHandler, Microsoft.Crm.Authentication, Version=6.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    <section name="microsoft.identityModel" type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </configSections>
The actual token handler:
<microsoft.identityModel>
  <service>   
    <securityTokenHandlers>
      <!-- Remove and replace the default SessionSecurityTokenHandler with your own -->
      <remove type="Microsoft.IdentityModel.Tokens.SessionSecurityTokenHandler, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      <add type="CRM.RSASessionCookie.RsaEncryptedSessionSecurityTokenHandler, CRM.RSASessionCookie, Version=1.0.0.0, Culture=neutral, PublicKeyToken=d10ca3d28ba8fa6e" />
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>

It should, hopefully, be obvious that this will need to be done across all CRM servers that have the Web Server role.

After all this work, we thought we would be OK having our dev and test environments share a single ADFS server, as the cookies would be encrypted with the same certificate, but it turns out that this is not supported by MS Dynamics CRM 2013 :(

Monday 14 September 2015

Disabling Autocomplete for ADFS forms sign in page

We've been asked to disable autocomplete for the sign-in page on our MS Dynamics CRM application. We have a sign-in page because we're using IFD.

This turns out to require an unsupported customization of ADFS, as we're using ADFS 2.1, which really doesn't support any customization at all.

Unsupported here simply means that a patch might overwrite our changes or the page might change completely. No big deal in this case, as it's unlikely that many changes will be rolled out for ADFS 2.1, but it pays to be careful when doing unsupported customizations.

Most of our users use IE 9, which means that autocomplete="off" will work; however, some of our users don't, which means that we need a different solution.

We are modifying the FormsSignIn.aspx page. This page can normally be found in c:\inetpub\wwwroot\ls\, but it really does depend on how ADFS is installed.

I've done this in a rather verbose way, first the JavaScript functions:

function EnablePasswordField(){
    document.getElementById('<%=PasswordTextBox.ClientID%>').readOnly=false;         
    document.getElementById('<%=PasswordTextBox.ClientID%>').select();
}

function DisablePasswordField(){
    document.getElementById('<%=PasswordTextBox.ClientID%>').readOnly=true;     
}

and then the markup:
<asp:TextBox runat="server" ID="PasswordTextBox" TextMode="Password" onfocus="EnablePasswordField()" onblur="DisablePasswordField()" ReadOnly="true" autocomplete="off"></asp:TextBox>

The key here is to make the password textbox readonly and use the JavaScript functions to make the control writable on focus and readonly when it loses focus. This seems to be enough to thwart autocomplete, for now at least.

This is the complete page:

<%@ Page Language="C#" MasterPageFile="~/MasterPages/MasterPage.master" AutoEventWireup="true" ValidateRequest="false"
    CodeFile="FormsSignIn.aspx.cs" Inherits="FormsSignIn" Title="<%$ Resources:CommonResources, FormsSignInPageTitle%>"
    EnableViewState="false" runat="server" %>

<%@ OutputCache Location="None" %>

<asp:Content ID="FormsSignInContent" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
        <script>
        
            function EnablePasswordField(){
               document.getElementById('<%=PasswordTextBox.ClientID%>').readOnly=false;
            }
            
   function DisablePasswordField(){
               document.getElementById('<%=PasswordTextBox.ClientID%>').readOnly=true;     
            }
        </script>
    <div class="GroupXLargeMargin">
        <asp:Label Text="<%$ Resources:CommonResources, FormsSignInHeader%>" runat="server" /></div>
    <table class="UsernamePasswordTable">
        <tr>
            <td>
                <span class="Label">
                    <asp:Label Text="<%$ Resources:CommonResources, UsernameLabel%>" runat="server" /></span>
            </td>
            <td>
                <asp:TextBox runat="server" ID="UsernameTextBox" autocomplete="off"></asp:TextBox>
            </td>
            <td class="TextColorSecondary TextSizeSmall">
                <asp:Label Text="<%$ Resources:CommonResources, UsernameExample%>" runat="server" />
            </td>
        </tr>
        <tr>
            <td>
                <span class="Label">
                    <asp:Label Text="<%$ Resources:CommonResources, PasswordLabel%>" runat="server" /></span>
            </td>
            <td>
                 <asp:TextBox runat="server" ID="PasswordTextBox" TextMode="Password" onfocus="EnablePasswordField()" onblur="DisablePasswordField()" ReadOnly="true" autocomplete="off"></asp:TextBox>
            </td>
            <td>&nbsp;</td>
        </tr>
        <tr>
            <td></td>
            <td colspan="2" class="TextSizeSmall TextColorError">
                <asp:Label ID="ErrorTextLabel" runat="server" Text="" Visible="False"></asp:Label>
            </td>
        </tr>
        <tr>
            <td colspan="2">
                <div class="RightAlign GroupXLargeMargin">
                    <asp:Button ID="SubmitButton" runat="server" Text="<%$ Resources:CommonResources, FormsSignInButtonText%>" OnClick="SubmitButton_Click" CssClass="Resizable" />
                </div>
            </td>
            <td>&nbsp;</td>
        </tr>
    </table>
</asp:Content>

Monday 7 September 2015

ARR 3.0 - Bug?

A few weeks back we found a bug in ARR 3.0, if it's not a bug then it's definitely an odd feature.

We have a couple of ARR servers, and while configuring, troubleshooting, etc., we kept one of the servers off. When we turned it on, it started failing, with the following errors being logged in the event log.

Application Log

Source: Application Error

Event ID: 1000

Faulting application name: w3wp.exe, version: 8.0.9200.16384, time stamp: 0x50108835

Faulting module name: requestRouter.dll, version: 7.1.1952.0, time stamp: 0x5552511b

Exception code: 0xc0000005

Fault offset: 0x000000000000f2dd

Faulting process id: 0x8bc

Faulting application start time: 0x01d0b89d5edc49ba

Faulting application path: c:\windows\system32\inetsrv\w3wp.exe

Faulting module path: C:\Program Files\IIS\Application Request Routing\requestRouter.dll

Report Id: 9caa10cb-2490-11e5-943b-005056010c6a

Faulting package full name:

Faulting package-relative application ID:


System log

Source: WAS

Event ID: 5009

A process serving application pool 'stark.dev.com' terminated unexpectedly. The process id was '52'. The process exit code was '0xff'.

Source: WAS

Event ID: 5011

A process serving application pool 'stark.dev.com' suffered a fatal communication error with the Windows Process Activation Service. The process id was '2792'. The data field contains the error number.

Source: WAS

Event ID: 5002

Application pool 'stark.dev.com' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

This last one was due to rapid-fail protection being enabled on the app pool.

The odd thing is that Server 1 was working fine, but Server 2 wasn't. Odder still, they appeared to be configured in the same way, at least at first.

After a lot of troubleshooting, we found the issue in Server 2, which, surprise, surprise, was not configured the same way as Server 1.

This is the offending rule:



Yes, it's a stupid rule, it clearly should have been Match Any, but then again ARR should not have taken the app pool down.

We talked with Microsoft support who said that they were going to talk to the product team but I've not heard anything, so who knows.

Thursday 3 September 2015

How to disable FIPS using PowerShell

I always forget about this, so I thought I would add myself a reminder.

FIPS can be disabled by editing the registry and restarting the server:
New-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy -Name Enabled -Value 0 -Force; Restart-Computer -Force



Monday 31 August 2015

Using ARR (Reverse Proxy) with Microsoft Dynamics CRM 2015 - IFD

In this post I'll discuss how we have used the Application Request Routing (ARR) module from IIS to expose MS Dynamics CRM to the big bad world.

In the project I'm working on at the moment, we want to allow authorized third parties to access our application, which means that there have been a lot of discussions around the best way of doing this.

The Problem:

We want to allow third parties access to our MS Dynamics CRM 2015 system, but these third parties might be mom-and-pop operations, so federating with them is not an option. In reality, this is a fantasy conjured up by the PM, but we all go along with it, because sometimes it's important to know which battles to fight.

We also don't want to expose our MS Dynamics CRM 2015 servers to the big bad internet.

The Solution:

Use ARR to expose the MS Dynamics CRM 2015 website to the outside world, while keeping the servers happily inside the perimeter.

This is an architecture diagram of what we are trying to achieve.






I'm assuming that you have a working MS Dynamics CRM 2013/2015 installation configured for Internet Facing Deployment and a wildcard certificate for the appropriate domain available.

Install ARR.

Navigate to this page and click on Install this extension, which will use the Web Platform Installer to install ARR; just follow the instructions therein. Alternatively, you can do it manually, which I haven't tried.

Configure ARR

ARR is configured from the IIS Manager console.

So fire it up (Windows key + R -> inetmgr) and let's get started:

We first create a Server Farm for the MS Dynamics CRM Web Servers. If you have web and application CRM servers then you only need to add the web servers. We have full servers.



Ensure you add all servers, not just one, unless you only have one server, like I do in this environment :)


If you are only using ARR to expose MS Dynamics CRM to the outside world, then it's probably ok to click yes here.


 We can now configure our server farm.


We want to make sure that traffic is only directed to servers that are up and running, so I normally set the organization's WSDL page as the health test URL. The server's page (using the FQDN) should also work, and in fact should be used if you have more than one organization.


The other thing that needs to be done is to set load balancing to be based on client affinity; in other words, a client will always hit the same server, unless that server is down. This is an optional step that's only required, as far as I can tell, for reports.


We now need to check that the routing rule is correct for the current setup.



Pretty simple, as we're sending everything that hits this server onward to the MS Dynamics CRM Farm. However, the rule will, by default, forward it to http rather than https, so this needs to be changed.




The final step is to create the proxy websites. I'm going to assume that a wildcard certificate has been installed already for the server. These websites will receive the traffic from the internet and forward it to the MS Dynamics CRM web servers.

When using IFD, a website is needed for each organization and another one for the auth website. You can also add one for the discovery service, but as far as my limited testing goes, it works without it.





At this point ARR is configured, now all that remains is to set up name resolution.

I normally use CNAME aliases for auth, disc and the various organizations to point to the ARR server:

stark.dev.local --> ARR Server (Organization)
auth.dev.local --> ARR Server  (auth)
disc.dev.local --> ARR Server   (discovery)

In my next post I will add ADFS to the reverse proxy; note that it's perfectly acceptable to use ARR for this purpose as well, but it is also possible to use WAP, which provides pre-authentication.

Monday 24 August 2015

MS Dynamics CRM - Blocking Direct Access to SharePoint

In the project I'm working on, we've had a requirement to block direct access to SharePoint; in other words, our users can only access their documents through CRM.

There are various ways in which this can be achieved, and today I will be discussing how to do it by leveraging ARR.

We have a similar architecture to this:



We have a SharePoint site collection and every document is stored in it, so our users would go to https://sp.dev.local/sites/spsite/<crm_entity>/ to access documents for <crm_entity>.

This is actually a somewhat irritating requirement because IE behaves differently from Firefox and Chrome. We're lucky enough not to have to support Opera and Safari as well; nothing wrong with those browsers, but five browsers would drive our testers crazy.

In any case, we've configured two farms, CRM and SHAREPOINT, so we need rules for those.

So for CRM we have:

<rule name="CRM Farm Forwarding Rule" enabled="true" patternSyntax="Wildcard" stopProcessing="false">
    <match url="*" />
    <conditions logicalGrouping="MatchAny" trackAllCaptures="true">
        <add input="{HTTP_HOST}" pattern="*crm.dev.local" />
    </conditions>
    <serverVariables>
    </serverVariables>
    <action type="Rewrite" url="https://CRM/{R:0}" />
</rule>

And for Sharepoint we have:

<rule name="Sharepoint Farm Forwarding Rule" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
    <match url="*" />
    <conditions logicalGrouping="MatchAny" trackAllCaptures="true">
        <add input="{HTTP_HOST}" pattern="*sp.dev.local" />
    </conditions>
    <serverVariables>
    </serverVariables>
    <action type="Rewrite" url="https://SHAREPOINT/{R:0}" />
</rule>

So far so good; this is where things start to get interesting. We want to block direct access to SP for IE browsers, which we can achieve like this:

 
<rule name="IE - Allow Access to SharePoint grid in CRM" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
    <match url="*sites/SPSITE/crmgrid*" />
    <conditions logicalGrouping="MatchAny" trackAllCaptures="false">
    </conditions>
    <serverVariables>
    </serverVariables>
    <action type="Rewrite" url="https://SHAREPOINT/{R:0}" />
</rule>    
and
<rule name="IE - Block Direct Access to SharePoint" stopProcessing="true">
    <match url="sites\/SPSITE" />
    <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTP_REFERER}" pattern="^/?$" negate="true" />
        <add input="{HTTP_USER_AGENT}" pattern="MSIE" />
    </conditions>
    <serverVariables>
    </serverVariables>
    <action type="CustomResponse" statusCode="403" subStatusCode="1" statusReason="IE" statusDescription="Direct Access to SHAREPOINT is not permitted" />
</rule>

The first rule allows traffic if it's trying to access SP through the List component path (crmgrid); this allows the SharePoint CRM List component to work.

The second rule will block access for IE user agents (i.e. containing MSIE) where the referer is not empty. It will stop processing if there is a match.

For some reason, IE blanks the referer when accessing documents in SharePoint from the list component, but crucially fills it in when accessing documents directly from SharePoint.

Firefox and Chrome will have the correct referer, i.e. sp.dev.local/sites/spsite/crmgrid, so there is a single rule:

<rule name="Other Browsers - Block Direct Access to SharePoint" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
    <match url="*sites/SPSITE*" />
    <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTP_REFERER}" pattern="*dev.local/sites/SPSITE/crmgrid*" negate="true" />
        <add input="{HTTP_USER_AGENT}" pattern="*MSIE*" negate="true" />
    </conditions>
    <serverVariables>
    </serverVariables>
    <action type="CustomResponse" statusCode="403" subStatusCode="2" statusReason="Other" statusDescription="Direct Access to SHAREPOINT is not permitted" />
</rule>

We just need to make sure that the user agent is not from IE. The rule will stop processing if there is a match.

Rules.xml can be found below, with the rules in the correct order.

Thus effectively we do the following:

CRM -> Rewrite to  CRM Farm
SP - crm grid?  -> Rewrite to SP Farm
SP - IE and empty Referer -> Rewrite to SP Farm
SP - Other Browser and correct Referer -> Rewrite to SP Farm
SP -> Rewrite to SP Farm

The last rule is only needed if there are other sites in SP that the users might need to access.

<?xml version="1.0" encoding="UTF-8"?>
<appcmd>
    <CONFIG CONFIG.SECTION="system.webServer/rewrite/globalRules" path="MACHINE/WEBROOT/APPHOST" overrideMode="Inherit" locked="false">
        <system.webServer-rewrite-globalRules>
            <rule name="CRM Farm Forwarding Rule" enabled="true" patternSyntax="Wildcard" stopProcessing="false">
                <match url="*" />
                <conditions logicalGrouping="MatchAny" trackAllCaptures="true">
                    <add input="{HTTP_HOST}" pattern="*crm.dev.local" />
                </conditions>
                <serverVariables>
                </serverVariables>
                <action type="Rewrite" url="https://CRM/{R:0}" />
            </rule>
            <rule name="IE - Allow Access to SharePoint grid in CRM" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
                <match url="*sites/SPSITE/crmgrid*" />
                <conditions logicalGrouping="MatchAny" trackAllCaptures="false">
                </conditions>
                <serverVariables>
                </serverVariables>
                <action type="Rewrite" url="https://SHAREPOINT/{R:0}" />
            </rule>
            <rule name="IE - Block Direct Access to SharePoint" stopProcessing="true">
                <match url="sites\/SPSITE" />
                <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{HTTP_REFERER}" pattern="^/?$" negate="true" />
                    <add input="{HTTP_USER_AGENT}" pattern="MSIE" />
                </conditions>
                <serverVariables>
                </serverVariables>
                <action type="CustomResponse" statusCode="403" subStatusCode="1" statusReason="IE" statusDescription="Direct Access to SHAREPOINT is not permitted" />
            </rule>
            <rule name="Other Browsers - Block Direct Access to SharePoint" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
                <match url="*sites/SPSITE*" />
                <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{HTTP_REFERER}" pattern="*dev.local/sites/SPSITE/crmgrid*" negate="true" />
                    <add input="{HTTP_USER_AGENT}" pattern="*MSIE*" negate="true" />
                </conditions>
                <serverVariables>
                </serverVariables>
                <action type="CustomResponse" statusCode="403" subStatusCode="2" statusReason="Other" statusDescription="Direct Access to SHAREPOINT is not permitted" />
            </rule>
            <rule name="Sharepoint Farm Forwarding Rule" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
                <match url="*" />
                <conditions logicalGrouping="MatchAny" trackAllCaptures="true">
                    <add input="{HTTP_HOST}" pattern="*sp.dev.local" />
                </conditions>
                <serverVariables>
                </serverVariables>
                <action type="Rewrite" url="https://SHAREPOINT/{R:0}" />
            </rule>
        </system.webServer-rewrite-globalRules>
    </CONFIG>
</appcmd>

It can be exported with this command:
appcmd.exe list config -section:system.webServer/rewrite/globalRules -xml > rules.xml 
It can then be imported with this command:
appcmd.exe set config /in < rules.xml

Understanding default values in C#

I was refactoring a bit of code last week and I ended up creating an interesting scenario.
In short, I was rewriting a method to see if, by using a completely different approach, I could make the code simpler, easier to maintain and faster, and I ended up with two methods like this (don't even ask what I was thinking):
public class Point
{
    public int X { get; set; }
    public int Y { get; set; }

    public Point(int x, int y)
    {
        this.X = x;
        this.Y = y;
    }

    public void Move(int distance)
    {
        this.X += distance;
        this.Y += distance;
    }

    public bool Move(int distance, bool dummy = true)
    {
        this.X += distance;
        this.Y += distance;
  
        return true;
    }
}
So with this calling code, which polymorphic operation will be selected?
class Program
{
    static void Main(string[] args)
    {
        var p = new Point(1, 2);
        p.Move(1);
        p.Move(1,true);
    }
}

p.Move(1) invokes the void method; p.Move(1, true) invokes the bool method.

The question is how the compiler knows which polymorphic operation to invoke; after all, if the Point class had no void Move method, both statements would invoke the second Move method.

Time to look at the IL output of Main:


.method private hidebysig static 
 void Main (
  string[] args
 ) cil managed 
{
 // Method begins at RVA 0x2100
 // Code size 27 (0x1b)
 .maxstack 3
 .entrypoint
 .locals init (
  [0] class ConsoleApplication6.Point p
 )

 IL_0000: nop
 IL_0001: ldc.i4.1
 IL_0002: ldc.i4.2
 IL_0003: newobj instance void ConsoleApplication6.Point::.ctor(int32, int32)
 IL_0008: stloc.0
 IL_0009: ldloc.0
 IL_000a: ldc.i4.1
 IL_000b: callvirt instance void ConsoleApplication6.Point::Move(int32)
 IL_0010: nop
 IL_0011: ldloc.0
 IL_0012: ldc.i4.1
 IL_0013: ldc.i4.1
 IL_0014: callvirt instance bool ConsoleApplication6.Point::Move(int32, bool)
 IL_0019: pop
 IL_001a: ret
} // end of method Program::Main

This is not very illuminating, but if we comment out the void method on the Point class and get the IL output again:

.method private hidebysig static 
 void Main (
  string[] args
 ) cil managed 
{
 // Method begins at RVA 0x20e0
 // Code size 28 (0x1c)
 .maxstack 3
 .entrypoint
 .locals init (
  [0] class ConsoleApplication6.Point p
 )

 IL_0000: nop
 IL_0001: ldc.i4.1
 IL_0002: ldc.i4.2
 IL_0003: newobj instance void ConsoleApplication6.Point::.ctor(int32, int32)
 IL_0008: stloc.0
 IL_0009: ldloc.0
 IL_000a: ldc.i4.1
 IL_000b: ldc.i4.1
 IL_000c: callvirt instance bool ConsoleApplication6.Point::Move(int32, bool)
 IL_0011: pop
 IL_0012: ldloc.0
 IL_0013: ldc.i4.1
 IL_0014: ldc.i4.1
 IL_0015: callvirt instance bool ConsoleApplication6.Point::Move(int32, bool)
 IL_001a: pop
 IL_001b: ret
} // end of method Program::Main

As expected, both calls now invoke the second method, but why this behaviour?

Well, it turns out that this is part of the spec:

Use of named and optional arguments affects overload resolution in the following ways:
  • A method, indexer, or constructor is a candidate for execution if each of its parameters either is optional or corresponds, by name or by position, to a single argument in the calling statement, and that argument can be converted to the type of the parameter.
  • If more than one candidate is found, overload resolution rules for preferred conversions are applied to the arguments that are explicitly specified. Omitted arguments for optional parameters are ignored.
  • If two candidates are judged to be equally good, preference goes to a candidate that does not have optional parameters for which arguments were omitted in the call. This is a consequence of a general preference in overload resolution for candidates that have fewer parameters.
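To make the third bullet concrete, here is a minimal, self-contained example (mine, not from the spec) showing the same preference at work:

using System;

class OverloadDemo
{
    static void Print(string message)
    {
        Console.WriteLine("no optional parameters: " + message);
    }

    static void Print(string message, bool upper = false)
    {
        Console.WriteLine("optional parameter:     " + (upper ? message.ToUpper() : message));
    }

    static void Main()
    {
        Print("hello");       // both overloads are candidates; the one without optional parameters wins
        Print("hello", true); // only the second overload accepts two arguments
    }
}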

Monday 17 August 2015

The authentication endpoint Kerberos was not found on the configured Secure Token Service! - Redux

So today I had an interesting moment of déjà vu, where we started getting this issue again.

This time, though, the configuration had already been done, so it was a bit mystifying to say the least; to further add to the confusion, it worked on one server but not the other.

After a lot of blood, sweat and tears, the issue was tracked down to the affected server being unable to communicate with the ADFS load balancer. In essence, our network guys had not opened the firewall for both servers, but just one of them.

So, if you get this issue, it might simply be due to a good old fashioned network issue.

Monday 10 August 2015

Removing HTTP Headers for an ARR/MS Dynamics CRM/Sharepoint 2013 system

We had a pen test carried out last week, and one of the outcomes was that we were leaking information through our HTTP headers, which had to be removed.

Our environment consists of a web layer (IIS using ARR) and then an app layer (MS Dynamics CRM and MS SharePoint).

Personally, I think this is a bit of security through obscurity, but needs must, so here we go:

X-AspNet-Version Header:

MS Dynamics CRM 2013

In the web.config at <drive>:\Program Files\Microsoft Dynamics CRM\CRMWeb, add enableVersionHeader="false" to the httpRuntime element; normally you'll end up with something like this:

<httpRuntime executionTimeout="300" maxRequestLength="32768" requestValidationMode="3.0" encoderType="Microsoft.Crm.CrmHttpEncoder, Microsoft.Crm" enableVersionHeader="false"/>

MS SharePoint 2013

In the web.config at <drive>:\inetpub\wwwroot\wss\VirtualDirectories\80\, add enableVersionHeader="false" to the httpRuntime element; normally you'll end up with something like this:

<httpRuntime maxRequestLength="51200" requestValidationMode="2.0" enableVersionHeader="False" />

X-Powered-By Header:

From IIS Manager -> Server -> HTTP Response Headers




Server Header:

The simplest way I found is to use URL Rewrite to blank this header, which works very well for our system as we're using ARR already, so we just need to do this one on the web layer... From IIS Manager -> URL Rewrite -> Add Rule

Select Blank Outbound Rule


Fill in the details as below

Don't forget to click Apply when you've finished.

It's worth pointing out that this will simply blank out the value of the Server header, rather than remove it completely.

If you want to remove it completely you will need to install urlscan.

This approach can be used for all the other headers above I suppose.

X-Powered-By: ARR/2.5 Header

From a PowerShell console with elevated permissions, go to C:\Windows\system32\inetsrv and run this command:

.\appcmd.exe set config -section:webFarms /"[name='serverfarmname'].applicationRequestRouting.protocol.arrResponseHeader:false" /commit:apphost

Saturday 25 July 2015

SPApplicationAuthenticationModule: There is no Authorization header, can't try to perform application authentication.

This week we had an interesting issue with one of our SharePoint servers.

We have a two-server farm; both servers are full servers that had been installed a couple of months ago and, as far as I was aware, both had been tested, so I was a little bit surprised when the farm was tested in anger and we were getting roughly a 20% failure rate in a process that uploads a document to SharePoint.

After a bit of digging we found that it was due to one of the SharePoint servers. 

We could not even log in to any of the sites hosted on the farm if we hit this server; we would simply get a 401 Unauthorized error.

I know we also seem to have a load balancing issue but that's for another day.

Perhaps unsurprisingly, the logs did not show much, so I bumped them up to verbose and here's what we found:

Claims Authentication        SPIisWebServiceAuthorizationManager: Using identity '0#.w|dev\svc-spadm' as the actor identity.
Topology                     WcfReceiveRequest: LocalAddress: 'http://sp02.dev.local:32843/934e0061c6a94255b9ab9e6f2ba45325/SearchService.svc' Channel: 'System.ServiceModel.Channels.ServiceChannel' Action: 'http://tempuri.org/ISearchHealthMonitoringServiceApplication/GetQueryHealthMonitoringSettingsForComponents' MessageId: 'urn:uuid:f00ca305-b1d5-4454-85fa-5f83e7094518'
Monitoring                   Leaving Monitored Scope (ExecuteWcfServerOperation). Execution Time=154.973746663333
Monitoring                   Entering monitored scope (Request (GET:https://<site>)). Parent No
Logging Correlation Data     Name=Request (GET:https://<site>)
Claims Authentication        SPTokenCache.ReadTokenXml: Successfully read token XML ''.
Application Authentication   SPApplicationAuthenticationModule: There is no Authorization header, can't try to perform application authentication.
Authentication Authorization Non-OAuth request. IsAuthenticated=False, UserIdentityName=, ClaimsCount=0
Claims Authentication        Claims Windows Sign-In: Sending 401 for request 'https://<site>' because the user is not authenticated and resource requires authentication.
Monitoring                   Leaving Monitored Scope (Request (GET:https://<site>)). Execution Time=3.75103539695688
Claims Authentication        SPFederationAuthenticationModule.OnEndRequest: User was being redirected to authenticate.
Claims Authentication        Claims Windows Sign-In: Sending 401 for request 'https://<site>' because the user is not authenticated and resource requires authentication.

Clearly, it's not able to authenticate, but why? I thought that the lack of an Authorization header was the clue, but nothing I found on Google helped me. Then I sort of had a flash of inspiration and decided to check whether the site had Windows Authentication enabled.

Bingo!!!!! Windows Authentication was disabled; no wonder nobody could log in :)


After I enabled it and restarted IIS, the second server started working :)
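For what it's worth, if you'd rather check this from a console than click through IIS Manager on each server, something along these lines should do it (a sketch; 'SharePoint - 80' is just an example site name, use whatever your web application's IIS site is called):

Import-Module WebAdministration

# Check whether Windows Authentication is enabled for the site
Get-WebConfigurationProperty -PSPath 'IIS:\' -Location 'SharePoint - 80' -Filter 'system.webServer/security/authentication/windowsAuthentication' -Name 'enabled'

# Enable it if it isn't, then restart IIS
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'SharePoint - 80' -Filter 'system.webServer/security/authentication/windowsAuthentication' -Name 'enabled' -Value $true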

I didn't install SharePoint on these servers and I don't really have that much experience with SharePoint, so I'm not entirely sure who to blame here, our guys or Microsoft. But it seems to me that, since one of the big selling points with Microsoft is integration with AD, it's just a bit daft that Windows Authentication isn't turned on for the SharePoint site by default.

Maybe it does and it's something that we did.

At any rate, hope this helps.

Monday 20 July 2015

Edit C# config file with PowerShell

So today we changed the organization name (too long to explain), and all the external apps needed to have their app.config changed to reflect this. This was across several environments, so I thought it would probably be quicker to write a quick script to do it.

The script works for multiple files because they are in subfolders of a common folder, e.g.

Mycompany --> My App 1
                   --> My App 2
                   --> My App 3

It will of course work from, say, C:\ if you have your apps in different places, but it might take a while to run. It will not work if they are on different drives though.
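For reference, the script expects the setting to live in the standard appSettings section of each config file, something like this (the key and value here are just examples):

<configuration>
  <appSettings>
    <add key="Organization" value="OldOrg" />
  </appSettings>
</configuration>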

Here's the script:

# Updates the value of an appSettings key in every config file under a given path.
param ($path, $keyname, $value)

if (-not($path))
{
    Write-Host "Path is a mandatory parameter"
    break;
}

if (-not($keyname))
{
    Write-Host "KeyName is a mandatory parameter"
    break;
}

if (-not($value))
{
    Write-Host "Value is a mandatory parameter"
    break;
}

# Find all config files (e.g. app.config, web.config) under the given path
$configfiles = ls -Path $path -Recurse -Include *.config

foreach ($config in $configfiles)
{
    # Load the file as XML and find the appSettings entry with the given key
    $doc = (Get-Content $config.FullName) -as [Xml]
    $obj = $doc.configuration.appSettings.add | where {$_.key -eq $keyname}

    # Skip files that don't contain the key
    if ($obj -ne $null)
    {
        $obj.value = $value
        $doc.Save($config.FullName)
    }
}
Assuming the script has been named ChangeConfigFile.ps1, it can be invoked like this:
.\ChangeConfigFile.ps1 -path "c:\program files\Apps\" -keyname "Organization" -value "NewOrg"

Saturday 11 July 2015

Configure MS Dynamics CRM 2011/2013/2015 to use multiple Report Servers (SSRS)

This week I've had quite a bit of fun turning the resilience up to 11 for our production environment of MS Dynamics CRM 2013.

In this post I will discuss how to configure MS Dynamics CRM 2013 to use multiple SSRS servers, thus ensuring that the reporting functionality is as resilient as the rest of the system.

In order to achieve this we need to make changes to AD, the SSRS configuration and finally MS Dynamics CRM. 

It's worth pointing out that this could well be overkill for your system, and to a certain extent it is for ours, but "the whole architecture must be resilient" is the diktat from above, so ...

Pre-Requisites:

  • 1 x Load Balancer (Distributing traffic to SSRS Servers on correct port, normally 80).
  • 1 x VIP.
  • 1 x DNS Record (For VIP above).
  • 2+ x SSRS Servers in a scale out deployment.
  • SSRS configured to use a domain account.
  • Permissions to set SPNs on your domain and to edit at least the SSRS service account.

My Setup:

DNS record is: CRMReports.dev.local
SSRS Service Account:  dev\svc-ssrs

1. Active Directory

The first thing to do is to set up a Service Principal Name (SPN) for the account that's running the SSRS service, which can be done with the following commands:
setspn -S HTTP/<VIP FQDN> <SSRS Service account>
setspn -S HTTP/<VIP Name> <SSRS Service account>
So in my case:
setspn -S HTTP/CRMReports.dev.local dev\svc-ssrs
setspn -S HTTP/CRMReports dev\svc-ssrs
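You can verify that the SPNs have been registered by listing them for the service account:
setspn -L dev\svc-ssrs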
The next thing is to ensure that the account is enabled for delegation, which can be done from the Active Directory Users and Computers console (open the account's properties and go to the Delegation tab).


Note that the delegation tab will only appear after an SPN has been set up for that account.

2. SSRS

The first thing to do on the report server is to enable Kerberos authentication, which can be done by editing the rsreportserver.config file. This file is normally found in this directory: C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\

Look for the Authentication section and enable Kerberos and Negotiate as below (I've commented out NTLM in case I wanted to go back)
<Authentication>
  <AuthenticationTypes>
   <RSWindowsKerberos/>
   <RSWindowsNegotiate/>
   <!--<RSWindowsNTLM/>-->
  </AuthenticationTypes>
  <RSWindowsExtendedProtectionLevel>Off</RSWindowsExtendedProtectionLevel>
  <RSWindowsExtendedProtectionScenario>Proxy</RSWindowsExtendedProtectionScenario>
  <EnableAuthPersistence>true</EnableAuthPersistence>
 </Authentication>
From the Reporting Services Configuration Manager, click on Web Service URL and then click on Advanced.


Click on Add as highlighted.


Enter the host header name, which will be the DNS Record for the VIP, in my case: crmreports.dev.local and click OK to accept.



Repeat this process for the Report Manager URL.

Once done, repeat all the steps in section 2 on the other report servers.

3. MS Dynamics CRM

The process is relatively simple and it involves editing the organization to point to the new report server DNS name, i.e. crmreports.dev.local in my case.

From the Deployment Manager, select organization and Disable your organization(s).


Click Edit Organization.


Set the new report server Url.


Ensure that all checks are OK.


Congratulations, you now have a resilient reporting service for MS Dynamics CRM.

Saturday 30 May 2015

MOOC done right - Embedded Systems - Shape the World

A few years back I signed up to edX and at the time there weren't that many courses available, so after the initial excitement of being able to do MIT courses died down, my account was dormant for quite a while (more on this later), until last year, when I decided to try to learn Python and, rather than using LTCTHW or Code Academy, I thought I would try a Python course instead.

A few weeks after I got an email alerting me to other courses that might be of interest and in this email was one that sounded really interesting:
Embedded Systems - Shape the World - UTAustinX -  UT.6.02x
I signed up immediately.

Although the course can be completed without an embedded system, it is, of course, recommended that one is used. Buying instructions for the recommended kit (TI Tiva LaunchPad and some bits and bobs) are provided, and not just for the USA. I found this a really nice detail, as I imagine that the majority of people taking the course were not in the USA, and it's one of the many examples of the kind of involvement and enthusiasm that the staff exude.

My main problem with MOOCs so far has been a combination of two things. The first is lack of motivation: I find it hard to follow through on a topic that, while interesting, might not be applicable to work, current or future, or even daily life. This is not to say that I only learn stuff that's useful; I don't. My head is filled with objectively useless facts, such as the band gap of GaAs being ~1.42 eV (I did not look it up), or Juan Manuel Fangio having won almost half of all the F1 races he participated in, or one of my favourite words, biblioclasm.

The other reason, and this is probably the main reason, is lack of time. A lot of the courses suggest a 10-15 hour weekly commitment. This might not sound like much, and in fairness it isn't most of the time, but sometimes it is, and this is where the Embedded Systems - Shape the World course gets it completely and absolutely right. The first seven lessons were available when the course started, and most of the rest of the content was made available shortly afterwards, so that effectively two weeks after the course started 85+% of the course materials and labs were available.

This is completely at odds with the way that most courses release their material, which is done on an almost weekly basis, with homework due almost weekly too. I find this terribly disappointing. What is the point of doing a course online when you basically have to do it at a pace set for you? I appreciate that I'm doing it from the comfort of my home, but even so this is very likely a major contributory factor in the really poor completion rates of MOOCs. Although I'm not doing the courses for the grades, it's always motivating to be able to get a good grade, and when a course runs on a very tight schedule a busy week at work or a trip can prevent you from keeping up.

I don't have a CS degree and I've had an interest in low-level programming for a while now, but I've never really taken the time to explore it in any detail as I've always found it pretty daunting. In the course, however, concepts ranging from CPU instructions and registers to interrupts and device drivers are explained in a simple and accessible manner.

In fact, it is explained in such a manner that it's made me lose some of the awe for the guys and gals doing the low-level stuff. I realize that this is silly, as the drivers that are part of the course are extremely simple, but it feels as if a veil has been lifted and, beneath it, the truth has been revealed.

Instructions on how to get started and install the IDE, drivers and Texas Instruments software are provided, and I found them easy enough to follow. Those of you out there without a Windows machine might grumble at the lack of direct support, although there are instructions on how to install it all on virtualization software on a Mac. I guess the authors assume that if you're using Linux you don't need any help getting a hypervisor running :)

All labs have a skeleton project for Keil's µVision IDE and a simulated board, which allows the code to be tested before deploying it to the physical board. I often found that code that worked on the simulated board would fail when deployed to the physical board. This annoyed me a bit at first, but in reality this is no different from the good old:
Well, it works on my machine.
Generally speaking the code was not robust enough and it needed tweaking. I imagine that there are probably more sophisticated simulators available, but the cost is likely to be prohibitive. This is not unlike apps for the myriad Android phones out there.

One thing that was quite surprising at first, although it makes sense since we're so close to the bare metal, is the way everything is referred to by memory address. For instance, if an application needs to use bits 0 and 1 of Port E, this requires knowing the exact address of the register that controls them. Thankfully, these addresses were provided in the skeleton projects, but they can also be looked up in the spec sheets. This is, incidentally, an invaluable skill given the large catalogue of systems and components out there.

This is a very simple program that flashes an LED connected to bit 1 of Port E, based on whether a switch connected to bit 0 of Port E is pressed, and I think it illustrates the point above. Also note how the delay function effectively counts clock cycles.

//Port E data register, accessed directly by its memory address
#define GPIO_PORTE_DATA_R       (*((volatile unsigned long *)0x400243FC))

void Delay1ms(unsigned long msec);

int main(void){
  while(1){
    Delay1ms(100);
    if(GPIO_PORTE_DATA_R & 0x01){
      //Switch (bit 0) is pressed: let's flip it (Bit 1) so the LED flashes
      GPIO_PORTE_DATA_R ^= 0x02;
    }
    else{
      //Switch not pressed: keep the LED on
      //Lusisti satis, edisti satis, atque bibisti, tempus abire tibi est
      GPIO_PORTE_DATA_R |= 0x02;
    }
  }
}

//Crude delay: burn roughly 15933 loop iterations per millisecond
void Delay1ms(unsigned long msec){
  long cyclesPerMs = 15933;
  long counter = cyclesPerMs * msec;
  while (counter > 0){
    counter--;
  }
}

I've removed the initialization routine, which, in essence, gets the physical board running and activates the various ports as needed. I found quite a few of my issues on the physical board were down to issues in the initialization routine, so it's by no means trivial.

The gradual increase in complexity of the physical circuits that needed to be built was very finely tuned. Chapter 8 finally required a circuit to be built, not just using the TI Tiva LaunchPad, and it was really nerve-racking. I don't really know why, as I think it was only £20 worth of kit, but there were diagrams available and enough warnings regarding what to do and, more importantly, what not to do, that I built the circuit without any issues. This, the actual building of the circuits, ended up becoming the most enjoyable activity of the course.

One of the hardest labs was chapter 10, where a Finite State Machine is used to model a simplified traffic light system for an intersection. I actually really enjoyed this lab, even if it took quite a bit of pen and paper to get the design right in the first place. Also, one of my better pictures :). The switches (the yellow parts in the middle of the breadboard) model a sensor that detects cars or pedestrians.

Chapter 10 - Modeled  Intersection's Traffic Light System

Chapter 10 - Real System. 
This is not a great picture, but it shows the TI Tiva LaunchPad interfacing with the Nokia 5110 screen. In Chapter 14, an ADC was used to measure distances. This works by moving the slide potentiometer and measuring the changes in voltage, which can then be converted to a distance after suitable calibration.
Chapter 14 - Measuring Gauge
In the penultimate chapter of the course, all that was learned during the course is put together in a sort of final project: a games console. This involved designing the game and then putting together the circuit to build it on. Although it might sound daunting, as usual there was a lot of help in the form of skeleton code. The hardware part was relatively simple in that it consisted of putting all that had been learned previously to use. An interesting exercise; you can see the results (not mine, I was too late) here.

The last chapter of the course involved the Internet of Things, which, I have to confess, I haven't done yet, as I've procrastinated on getting the WiFi booster pack for the LaunchPad, and this brings me to another issue with most other courses: the graders.

In other courses I've done, the graders became inactive when the course became inactive and, to be fair, it's the same with this course, but there is a massive difference: the graders in this course work by checking a hash (the hash is computed by the grader software that is run on the local machine) and thus it is entirely possible to check that your programs work as intended regardless of the course status. This is a very welcome novelty for me and I don't know why this is not the case for more courses.

I should point out that the last chapter does require access to a server, which to be fair could've been mocked to allow offline access. The server is still up 2+ weeks after the course ended.

This post has gone on for far longer than I originally intended and I haven't even talked about Electronics, which is a pretty important part of the course, but I will stop here.

I would like to end the post by thanking the staff on the course and particularly Dr Valvano and Dr Yerraballi, for making this course very accessible, really enjoyable and tremendously educational.

I really hope that a more advanced course is made available by Dr Valvano and Dr Yerraballi, at some point in the near future.

This post doesn't do the course justice, go on, just go and take it, you will enjoy it.