Monday, 26 October 2015

Adventures using Availability Groups and RBS with SharePoint 2013

The concept behind Remote Blob Storage (RBS) is pretty simple, see this for instance. I just want to talk about the myriad issues we've had when using RBS with Availability Groups.

Our database setup uses Availability Groups, which, and this is controversial, are essentially a cheap cluster. I do get that there are advantages to Availability Groups, but these seem to be outweighed by the disadvantages. I know this is just my opinion, and that I also know nothing about Availability Groups, HA clusters or anything in general, thank you for pointing that out.

So what are the drawbacks of AGs?

  • Official support is patchy, e.g. in Dynamics CRM 2013 one is forced to update the database directly.
  • Performance can be an issue as the database is always using at least two boxes.
  • Stuff flat out refuses to work, e.g. RBS Maintainer, various SharePoint database related operations.

To the AG mix we introduced RBS, and this is where things started to go horribly wrong for us.

The first issue we encountered was the inability to delete a content database from SharePoint, which is not a major issue but it's really annoying.

The second issue was that the RBS Maintainer would not work, so the storage requirements would just keep growing. This might not be an issue if you don't plan to archive your documents, but our DB had ~500 GB of docs, about 2/3 of which were old but, for contractual reasons, needed to be kept.

This effectively put a nail in the coffin of the RBS + AG combo but there is more.

In order to load the ~500 GB of documents, we had a multi-threaded tool that essentially read the documents from the source DB and uploaded them to SharePoint, using the SharePoint CSOM.
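I'm not including the tool here, but the core of the upload loop was little more than this (a hedged sketch using the CSOM; the site URL, library title and file handling are placeholders, and in the real tool this ran across multiple threads):

```csharp
using Microsoft.SharePoint.Client; // SharePoint CSOM client assemblies

// Minimal sketch of the upload step: push one document into a library.
// The site URL and library title are placeholders, not our real values.
static void UploadDocument(string siteUrl, string libraryTitle, string fileName, byte[] content)
{
    using (var ctx = new ClientContext(siteUrl))
    {
        var list = ctx.Web.Lists.GetByTitle(libraryTitle);
        var fileInfo = new FileCreationInformation
        {
            Url = fileName,
            Content = content, // for very large files, ContentStream/SaveBinaryDirect would be needed
            Overwrite = true
        };
        list.RootFolder.Files.Add(fileInfo);
        ctx.ExecuteQuery(); // one round trip per document
    }
}
```

Each document costs at least one round trip, which is partly why the per-blob overhead of RBS hurt so much.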

At this point, it's worth mentioning that our hosting provider does not guarantee any sort of performance level, too long to explain.

A couple of weeks back, with RBS on the database, we did a trial run of the upload and we were hitting very poor rates, ~ 4 GB per hour.

Last week, after RBS had been disabled and the content databases recreated, we tried a second trial run and the speed jumped to ~ 20 GB per hour.

I can't say that our RBS configuration was perfect; I think the threshold was on the low side (128 KB), but even so, the speed increase has been massive.
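For reference, the threshold can be inspected and raised per content database from the SharePoint Management Shell; a sketch (the database name is a placeholder, and as far as I can tell the new threshold only affects newly stored blobs, not existing ones):

```powershell
# Sketch: inspect and raise the RBS minimum blob size for a content database.
# "WSS_Content" is a placeholder database name.
$cdb = Get-SPContentDatabase "WSS_Content"
$rbs = $cdb.RemoteBlobStorageSettings
$rbs.MinimumBlobStorageSize        # current threshold in bytes (128 KB = 131072)
$rbs.MinimumBlobStorageSize = 1MB  # only applies to blobs stored from now on
```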

It actually gets better, because the 4 GB per hour figure was using both servers in the farm, whereas the 20 GB per hour figure was simply using one.

Yes, yes, I know our hosting provider is crap and 128 KB is below the recommendation, but a five-fold increase in transfer rates and a lowering of the error rate to almost zero is something that should be considered.

Sunday, 4 October 2015

Integrating MS SharePoint with MS Dynamics CRM 2011/2013 - User Permissions

One of the things that seems to come up time and again on any integration of MS Dynamics CRM and SharePoint is the issue of user permissions.

Generally speaking, it would be nice to be able to control access to SharePoint based upon the permissions the user has in MS Dynamics CRM. Alas, this is not possible without writing a bit of code. (I've not investigated the Server to Server integration yet, as it seems to be available for online Dynamics CRM only.)

The way I have done this is by using an HttpModule, deployed to the MS SharePoint servers, that checks whether the user making the request to the MS SharePoint site actually has access to the record in MS Dynamics CRM itself.

In our case this is pretty straightforward, as we only store documents for a single entity, but there is nothing in principle to rule out an expansion to multiple entities.

Depending on the usage, caching will need to be a serious consideration, as performance could be impacted, but I have not thought about it too much yet.

The following assumptions have been made about the integration between MS Dynamics CRM and MS SharePoint:
  1. A document library exists and is named with the entity schema.
  2. Each entity record in MS Dynamics CRM has a single folder in MS SharePoint and this folder is named with the GUID of the record. 
  3. Entity Records in MS Dynamics CRM are not shared.

This is the code for the module itself:

using log4net;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Xrm.Sdk.Query;
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Net;
using System.Security.Principal;
using System.ServiceModel.Description;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using System.Web;

namespace CRMUserPermissions
{
    public class CRMUserPermissions : IHttpModule
    {

        public static ConcurrentDictionary<string, Guid> userIds = new ConcurrentDictionary<string, Guid>();

        const string GuidPattern = @"(/|%2f)([A-F0-9]{8}(?:-[A-F0-9]{4}){3}-[A-F0-9]{12})";

        const string UserIdQuery = @"<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
  <entity name='systemuser'>   
    <attribute name='systemuserid' />
    <attribute name='domainname' />
    <filter type='and'>
      <condition attribute='domainname' operator='eq' value='{0}' />
    </filter> 
  </entity>
</fetch>";

        public void Dispose()
        {
        }

        public void Init(HttpApplication context)
        {
            context.PostAuthenticateRequest += new EventHandler(context_PostAuthenticateRequest);
        }

        void context_PostAuthenticateRequest(object sender, EventArgs e)
        {
            HttpApplication app = sender as HttpApplication;
            HttpContext context = app.Context;

            if (IsRequestRelevant(context))
            {
                try
                {

                    string user = HttpContext.Current.User.Identity.Name.Split('|').Last();

                    var service = CrmService.GetService();

                    string url = app.Request.Url.ToString();

                    if (!userIds.ContainsKey(user))
                    {
                        string query = string.Format(UserIdQuery, user);
                        // SingleOrDefault returns null for users unknown to CRM; the resulting
                        // NullReferenceException is caught below and turned into a 403.
                        var userId = service.RetrieveMultiple(new FetchExpression(query)).Entities.SingleOrDefault();
                        userIds.TryAdd(user, userId.Id);
                    }

                    var record = GetRecordInfo(url);

                    RetrievePrincipalAccessRequest princip = new RetrievePrincipalAccessRequest();
                    princip.Principal = new EntityReference("systemuser", userIds[user]);

                    princip.Target = new EntityReference(record.Item1, record.Item2);

                    var res = (RetrievePrincipalAccessResponse)service.Execute(princip);

                    if (res.AccessRights == AccessRights.None)
                    {
                        app.Response.StatusCode = 403;
                        app.Response.SubStatusCode = 1;
                        app.CompleteRequest();
                    }
   
                }
                catch (Exception)
                {
                    app.Response.StatusCode = 403;
                    app.Response.SubStatusCode = 1;
                    app.CompleteRequest();
                }
            }
        }
    }
}

A few comments are in order since I'm not including all methods.

IsRequestRelevant(context): checks that the user is authenticated and that the request is for documents relating to an entity whose access we want to control via this module.
CrmService.GetService(): returns an OrganizationServiceProxy.
GetRecordInfo(url): works out the record GUID and the type of entity it belongs to.
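For what it's worth, given assumptions 1 and 2 above, GetRecordInfo can be little more than a regex match; a sketch (it assumes an un-encoded URL path of the form .../&lt;entityschemaname&gt;/&lt;guid&gt;/..., which is how our libraries and folders are named):

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;

// Sketch of GetRecordInfo: pull the entity schema name and record GUID out of
// the request URL, relying on assumptions 1 and 2 above (library named after
// the entity, folder named after the record GUID).
static Tuple<string, Guid> GetRecordInfo(string url)
{
    // Same pattern as the GuidPattern constant in the module.
    var guidMatch = Regex.Match(url, @"(/|%2f)([A-F0-9]{8}(?:-[A-F0-9]{4}){3}-[A-F0-9]{12})",
        RegexOptions.IgnoreCase);
    if (!guidMatch.Success)
        throw new ArgumentException("No record GUID found in URL: " + url);

    var recordId = new Guid(guidMatch.Groups[2].Value);

    // The path segment before the GUID is the document library,
    // which is named with the entity schema name (assumption 1).
    var segments = url.Substring(0, guidMatch.Index)
                      .Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);
    string entityName = segments.Last().ToLowerInvariant();

    return Tuple.Create(entityName, recordId);
}
```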

It would probably be a better idea to get all, or some, users and cache them on receipt of the first query.

Depending on the system's usage profile different types of caching make more or less sense. For instance, if users tend to access multiple documents within a record in a relatively short time, say 10 minutes, then it makes sense to cache the records and the user's right to them but if users only tend to access a single document within a record, this would make less sense. Consideration needs to be given to memory pressures that caching will create if not using a separate caching layer, such as Redis.
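As a rough sketch of the first scenario, caching the access check itself for a short window, something like MemoryCache would do (the 10-minute window is an assumption, not a measured value; a permission change in CRM would take up to that long to be reflected):

```csharp
using System;
using System.Runtime.Caching;

// Sketch: cache the result of the CRM access check per (user, record) for a
// short window, so repeated document requests within a record don't each cost
// a round trip to CRM.
static class AccessCache
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    public static bool? Get(string user, Guid recordId)
    {
        // Returns null on a cache miss, so the caller knows to ask CRM.
        return Cache.Get(user + "|" + recordId) as bool?;
    }

    public static void Set(string user, Guid recordId, bool hasAccess)
    {
        // Absolute expiry: assumed 10-minute staleness window.
        Cache.Set(user + "|" + recordId, (object)hasAccess,
                  DateTimeOffset.UtcNow.AddMinutes(10));
    }
}
```

MemoryCache lives in-process, so each SharePoint web server would hold its own copy, which is exactly the memory-pressure trade-off mentioned above.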

The simplest way of deploying this httpModule is by installing the assembly in the GAC and then manually modifying the web.config of the relevant MS SharePoint site by adding the httpModule to the modules in configuration/system.webServer/modules:

<add name="CRMUserPermissions" type="CRMUserPermissions.CRMUserPermissions, CRMUserPermissions, Version=1.0.0.0, Culture=neutral, PublicKeyToken=87b3480442bff091"></add>
I will post how to do this properly, i.e. by packaging it up as a MS SharePoint solution, in an upcoming post.

Sunday, 27 September 2015

IIS App Pool Credentials Exposed

Last week I was looking at changing the periodic restart for an app pool using the appcmd tool and I found something very interesting. Using this tool can reveal the username and password used for the app pool.

See below:
PS C:\Windows\system32\inetsrv> whoami
dev\crminstall
PS C:\Windows\system32\inetsrv> .\appcmd list apppool /"apppool.name:CRMApppool" -config
<add name="CRMAppPool" managedRuntimeVersion="v4.0" managedPipelineMode="Classic">
  <processModel identityType="SpecificUser" userName="dev\crmapppool" password="MyPassword" idleTimeout="1.01:00:00" />
  <recycling>
    <periodicRestart time="1.05:00:00">
      <schedule>
      </schedule>
    </periodicRestart>
  </recycling>
  <failure />
  <cpu />
</add>
The user in question was a local administrator (member of the local Administrators group) and the command was run from PowerShell with elevated permissions.

So you might need to be logged in as an administrator but you should under no circumstances be able to see another user's password. This is a pretty big security hole, IMHO.

I've only tried on Windows 2012 and 2012 R2, but the behaviour seems consistent.

Incidentally, this does not seem to be the first case where credentials are exposed like this, see this post. It's fair to mention that the issue in that post was eventually fixed.

Monday, 21 September 2015

ID1073: A CryptographicException occurred when attempting to decrypt the cookie using the ProtectedData API (see inner exception for details).

A few weeks back, the performance testers found this issue when doing some resilience testing:
ID1073: A CryptographicException occurred when attempting to decrypt the cookie using the ProtectedData API (see inner exception for details). If you are using IIS 7.5, this could be due to the loadUserProfile setting on the Application Pool being set to false.
In essence, if a user started their session on server 1 and that server was stopped, the issue would occur when they connected to server 2. Annoyingly, recovering would require removing cookies, which our testers are not able to do on their locked-down machines.

The NLB flag was checked for this deployment, but this seemed to make no difference, so we decided to encrypt the cookies using a certificate rather than the machine key.

This is a bit annoying, as there are a few things to consider:
  1. We're going to have to modify the MS Dynamics CRM web.config, which means that every new patch, update, etc. might overwrite it.
  2. Following from that, we need a deployment script to automate it as much as possible.
  3. We'll need to store a way to identify the certificate to be used for the encryption somewhere easily accessible (we could hard-code it, but then when the certificate expires we'd be in trouble).
I decided to use the registry for 3.

This is the code that we've used to encrypt the cookies with a certificate.

using Microsoft.IdentityModel.Tokens;
using Microsoft.IdentityModel.Web;
using Microsoft.Win32;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;

namespace CRM.RSASessionCookie
{
    /// <summary>
    /// This class encrypts the session security token using the RSA key
    /// of the relying party's service certificate.
    /// </summary>
    public class RsaEncryptedSessionSecurityTokenHandler : SessionSecurityTokenHandler
    {
        static List<CookieTransform> transforms;

        static RsaEncryptedSessionSecurityTokenHandler()
        {            
            string certThumbprint = (string)Registry.GetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM",
                "CookieCertificateThumbprint",
                null);

            if (!string.IsNullOrEmpty(certThumbprint))
            {
                X509Certificate2 serviceCertificate = CertificateUtil.GetCertificate(StoreName.My,
                                                         StoreLocation.LocalMachine, certThumbprint);

                if (serviceCertificate == null)
                {
                    throw new ApplicationException(string.Format("No certificate was found with thumbprint: {0}", certThumbprint));
                }

                transforms = new List<CookieTransform>() 
                         { 
                             new DeflateCookieTransform(), 
                             new RsaEncryptionCookieTransform(serviceCertificate),
                             new RsaSignatureCookieTransform(serviceCertificate),
                         };
            }
            else
            {
                throw new ApplicationException(
                    "Could not read Registry Key: HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\MSCRM\\CookieCertificateThumbprint.\n" +
                    "Please ensure that the key exists and that you have permission to read it.");
            }
        }

        public RsaEncryptedSessionSecurityTokenHandler()
            : base(transforms.AsReadOnly())
        {
        }
    }

    /// <summary>
    /// A utility class which helps to retrieve an x509 certificate
    /// </summary>
    public class CertificateUtil
    {
        /// <summary>
        /// Gets an X.509 certificate given the store name, store location and thumbprint of the certificate.
        /// </summary>
        /// <param name="name">Specifies the name of the X.509 certificate store to open.</param>
        /// <param name="location">Specifies the location of the X.509 certificate store.</param>
        /// <param name="thumbprint">Thumbprint of the certificate to return.</param>
        /// <returns>The specific X.509 certificate.</returns>
        public static X509Certificate2 GetCertificate(StoreName name, StoreLocation location, string thumbprint)
        {
            X509Store store = null;
            X509Certificate2Collection certificates = null;           
            X509Certificate2 result = null;

            try
            {
                store = new X509Store(name, location);
                store.Open(OpenFlags.ReadOnly);
                //
                // Every time we call store.Certificates property, a new collection will be returned.
                //
                certificates = store.Certificates;

                for (int i = 0; i < certificates.Count; i++)
                {
                    X509Certificate2 cert = certificates[i];

                    if (cert.Thumbprint.Equals(thumbprint, StringComparison.InvariantCultureIgnoreCase))
                    {
                        result = new X509Certificate2(cert);
                        break;
                    }
                }                
            }
            catch (Exception ex)
            {
                throw new ApplicationException(string.Format("An issue occurred opening cert store: {0}\\{1}. Exception:{2}.", name, location, ex));
            }
            finally
            {
                if (certificates != null)
                {
                    for (int i = 0; i < certificates.Count; i++)
                    {
                        X509Certificate2 cert = certificates[i];
                        cert.Reset();
                    }
                }

                if (store != null)
                {
                    store.Close();
                }
            }

            return result;
        }
    }
}

Company standards dictate that this class should be deployed to the GAC, but it can be deployed to the CRM website's bin folder instead.
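The GAC installation itself can be scripted without gacutil, which is not normally present on servers, via the System.EnterpriseServices API; a sketch (the DLL path is a placeholder, and this needs an elevated prompt):

```powershell
# Sketch: install the assembly into the GAC without gacutil.
# The DLL path is a placeholder.
Add-Type -AssemblyName System.EnterpriseServices
$publish = New-Object System.EnterpriseServices.Internal.Publish
$publish.GacInstall("C:\Deploy\CRM.RSASessionCookie.dll")
```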

This is the PowerShell function in our script that sets the certificate on the registry:

function SetCookieCertificateThumbprint
{
 param ([string]$value)
 $path = "hklm:\Software\microsoft\mscrm"
 $name = "CookieCertificateThumbprint"
 
 if( -not (Test-Path -Path $path -PathType Container) )
 {
  Write-Error ("Cannot find MSCRM Registry Key: " + $path)
 }
 else
 {
  $keys = Get-ItemProperty -Path $path

  # Only write if the value is missing or different
  if (-not $keys.$name -or $keys.$name -ne $value)
  {
   Set-ItemProperty -path $path -name $name -value $value 
  }
 }
}

I have not automated the rest, which is the really fiddly part, i.e. updating the web.config. Here are the relevant parts though:

Config Sections First:
<configSections>
    <!-- COMMENT:START CRM Titan 28973
   If you add any new section here , please ensure that section name is removed from help/web.config
End COMMENT:END-->
    <section name="crm.authentication" type="Microsoft.Crm.Authentication.AuthenticationSettingsConfigurationSectionHandler, Microsoft.Crm.Authentication, Version=6.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    <section name="microsoft.identityModel" type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </configSections>
The actual token handler:
<microsoft.identityModel>
  <service>   
    <securityTokenHandlers>
      <!-- Remove and replace the default SessionSecurityTokenHandler with your own -->
      <remove type="Microsoft.IdentityModel.Tokens.SessionSecurityTokenHandler, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      <add type="CRM.RSASessionCookie.RsaEncryptedSessionSecurityTokenHandler, CRM.RSASessionCookie, Version=1.0.0.0, Culture=neutral, PublicKeyToken=d10ca3d28ba8fa6e" />
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>

It should, hopefully, be obvious that this will need to be done across all CRM servers that have the Web Server role.

After all this work, we thought we would be ok for our dev and test environments sharing a single ADFS server, as the cookies would be encrypted with the same certificate, but it turns out that this is not supported by MS Dynamics CRM 2013 :(

Monday, 14 September 2015

Disabling Autocomplete for ADFS forms sign in page

We've been asked to disable Autocomplete for the sign in page on our MS Dynamics CRM application. We have a sign-in page because we're using IFD.

This turns out to require an unsupported customization of ADFS, as we're using ADFS 2.1, which really doesn't support any customization at all.

Unsupported here simply means that a patch might overwrite our changes or that the page might change completely. No big deal in this case, as it's unlikely that many changes will be rolled out for ADFS 2.1, but it pays to be careful when doing unsupported customization.

Most of our users use IE 9, which means that autocomplete=off will work; however, some of our users don't, which means that we need a different solution.

We are modifying the FormsSignIn.aspx page. This page can normally be found in c:\inetpub\wwwroot\ls\, but it really does depend on how ADFS is installed.

I've done this in a rather verbose way, first the JavaScript functions:

function EnablePasswordField(){
    document.getElementById('<%=PasswordTextBox.ClientID%>').readOnly=false;         
    document.getElementById('<%=PasswordTextBox.ClientID%>').select();
}

function DisablePasswordField(){
    document.getElementById('<%=PasswordTextBox.ClientID%>').readOnly=true;     
}

and then the markup:
<asp:TextBox runat="server" ID="PasswordTextBox" TextMode="Password" onfocus="EnablePasswordField()" onblur="DisablePasswordField()" ReadOnly="true" autocomplete="off"></asp:TextBox>

The key here is to make the password textbox readonly and use the JavaScript functions to make the control writable on focus and readonly when it loses focus. This seems to be enough to thwart autocomplete, for now at least.

This is the complete page:

<%@ Page Language="C#" MasterPageFile="~/MasterPages/MasterPage.master" AutoEventWireup="true" ValidateRequest="false"
    CodeFile="FormsSignIn.aspx.cs" Inherits="FormsSignIn" Title="<%$ Resources:CommonResources, FormsSignInPageTitle%>"
    EnableViewState="false" runat="server" %>

<%@ OutputCache Location="None" %>

<asp:Content ID="FormsSignInContent" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
        <script>
        
            function EnablePasswordField(){
               document.getElementById('<%=PasswordTextBox.ClientID%>').readOnly=false;
            }
            
   function DisablePasswordField(){
               document.getElementById('<%=PasswordTextBox.ClientID%>').readOnly=true;     
            }
        </script>
    <div class="GroupXLargeMargin">
        <asp:Label Text="<%$ Resources:CommonResources, FormsSignInHeader%>" runat="server" /></div>
    <table class="UsernamePasswordTable">
        <tr>
            <td>
                <span class="Label">
                    <asp:Label Text="<%$ Resources:CommonResources, UsernameLabel%>" runat="server" /></span>
            </td>
            <td>
                <asp:TextBox runat="server" ID="UsernameTextBox" autocomplete="off"></asp:TextBox>
            </td>
            <td class="TextColorSecondary TextSizeSmall">
                <asp:Label Text="<%$ Resources:CommonResources, UsernameExample%>" runat="server" />
            </td>
        </tr>
        <tr>
            <td>
                <span class="Label">
                    <asp:Label Text="<%$ Resources:CommonResources, PasswordLabel%>" runat="server" /></span>
            </td>
            <td>
                 <asp:TextBox runat="server" ID="PasswordTextBox" TextMode="Password" onfocus="EnablePasswordField()" onblur="DisablePasswordField()" ReadOnly="true" autocomplete="off"></asp:TextBox>
            </td>
            <td>&nbsp;</td>
        </tr>
        <tr>
            <td></td>
            <td colspan="2" class="TextSizeSmall TextColorError">
                <asp:Label ID="ErrorTextLabel" runat="server" Text="" Visible="False"></asp:Label>
            </td>
        </tr>
        <tr>
            <td colspan="2">
                <div class="RightAlign GroupXLargeMargin">
                    <asp:Button ID="SubmitButton" runat="server" Text="<%$ Resources:CommonResources, FormsSignInButtonText%>" OnClick="SubmitButton_Click" CssClass="Resizable" />
                </div>
            </td>
            <td>&nbsp;</td>
        </tr>
    </table>
</asp:Content>

Monday, 7 September 2015

ARR 3.0 - Bug?

A few weeks back we found a bug in ARR 3.0, if it's not a bug then it's definitely an odd feature.

We have a couple of ARR servers and, while configuring, troubleshooting, etc., we kept one of the servers off. When we turned it on, it started failing, with the following errors logged in the event log.

Application Log

Source: Application Error

Event ID: 1000

Faulting application name: w3wp.exe, version: 8.0.9200.16384, time stamp: 0x50108835

Faulting module name: requestRouter.dll, version: 7.1.1952.0, time stamp: 0x5552511b

Exception code: 0xc0000005

Fault offset: 0x000000000000f2dd

Faulting process id: 0x8bc

Faulting application start time: 0x01d0b89d5edc49ba

Faulting application path: c:\windows\system32\inetsrv\w3wp.exe

Faulting module path: C:\Program Files\IIS\Application Request Routing\requestRouter.dll

Report Id: 9caa10cb-2490-11e5-943b-005056010c6a

Faulting package full name:

Faulting package-relative application ID:


System log

Source: WAS

Event ID: 5009

A process serving application pool 'stark.dev.com' terminated unexpectedly. The process id was '52'. The process exit code was '0xff'.

Source: WAS

Event ID: 5011

A process serving application pool 'stark.dev.com' suffered a fatal communication error with the Windows Process Activation Service. The process id was '2792'. The data field contains the error number.

Source: WAS

Event ID: 5002

Application pool 'stark.dev.com' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

This last one was due to rapid-fail protection being enabled in the app pool.

The odd thing is that Server 1 was working fine, but Server 2 wasn't. Odder still, they were configured in the same way, or at least it looked that way at first.

After a lot of troubleshooting, we found the issue in Server 2, which, surprise, surprise, was not configured the same way as Server 1.

This is the offending rule:



Yes, it's a stupid rule, it clearly should have been Match Any, but then again ARR should not have taken the app pool down.

We talked with Microsoft support who said that they were going to talk to the product team but I've not heard anything, so who knows.

Thursday, 3 September 2015

How to disable FIPS using PowerShell

I always forget about this, so I thought I would add myself a reminder.

FIPS can be disabled by editing the registry and restarting the server:
Set-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy -Name Enabled -Value 0; Restart-Computer -Force



Monday, 31 August 2015

Using ARR (Reverse Proxy) with Microsoft Dynamics CRM 2015 - IFD

In this post I'll discuss how we have used the Application Request Routing (ARR) module for IIS to expose MS Dynamics CRM to the big bad world.

In the project I'm working on at the moment, we want to allow authorized third parties to access our application, which means that there have been a lot of discussions around the best way of doing this.

The Problem:

We want to allow third parties access to our MS Dynamics CRM 2015 system but these third parties might be mom and pop operations so federating with them is not an option. In reality, this is a fantasy conjured up by the PM, but we all go along with it, because sometimes it's important to know which battles to fight.

We also don't want to expose our MS Dynamics CRM 2015 servers to the big bad internet.

The Solution:

Use ARR to expose the MS Dynamics CRM 2015 website to the outside world, while keeping the servers happily inside the perimeter.

This is an architecture diagram of what we are trying to achieve.






I'm assuming that you have a working MS Dynamics CRM 2013/2015 installation configured for Internet Facing Deployment and a wildcard certificate for the appropriate domain available.

Install ARR.

Navigate to this page and click on install this extension, which will use the Web Platform Installer to install ARR; just follow the instructions therein. Alternatively, you can do it manually, which I haven't tried.

Configure ARR

ARR is configured from the IIS Manager console.

So fire it up (windows key + r -> inetmgr)  and let's get started:

We first create a Server Farm for the MS Dynamics CRM web servers. If you have separate web and application CRM servers, then you only need to add the web servers. We have full servers (both roles).



Ensure you add all servers, not just one, unless, like me in this environment, you only have one server :)


If you are only using ARR to expose MS Dynamics CRM to the outside world, then it's probably ok to click yes here.


 We can now configure our server farm.


We want to make sure that traffic is only directed to servers that are up and running, so I normally set the organization's WSDL page as the health test URL. The server's page (using the FQDN) should also work and, in fact, should be used if you have more than one organization.


The other thing that needs to be done is to set load balancing to be based on client affinity; in other words, a client will always hit the same server unless that server is down. This is an optional step that's only required, as far as I can tell, for reports.
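For reference, both of these settings end up in applicationHost.config under the server farm; roughly like this (a sketch with placeholder server and URL values, and the exact attribute set depends on the ARR version):

```xml
<webFarms>
  <webFarm name="CRM" enabled="true">
    <server address="crmserver1.dev.local" enabled="true" />
    <applicationRequestRouting>
      <!-- Health test URL: the organization's WSDL page (placeholder host name) -->
      <healthCheck url="https://stark.dev.local/XRMServices/2011/Organization.svc?wsdl"
                   interval="00:00:30" />
      <!-- Client affinity: pin a client to the same server via a cookie -->
      <affinity useCookie="true" />
    </applicationRequestRouting>
  </webFarm>
</webFarms>
```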


We now need to check that the routing rule is correct for the current set up.



Pretty simple, as we're sending everything that hits this server onward to the MS Dynamics CRM Farm. However, the rule will, by default, forward it to http rather than https, so this needs to be changed.




The final step is to create the proxy websites. I'm going to assume that a wildcard certificate has been installed already on the server. These websites will receive the traffic from the internet and forward it to the MS Dynamics CRM web servers.

When using IFD, a website is needed for each organization and another one for the auth website. You can also add one for the discovery service, but as far as my limited testing goes, it works without it.





At this point ARR is configured, now all that remains is to set up name resolution.

I normally use CNAME aliases for auth, disc and the various organizations to point to the ARR server:

stark.dev.local --> ARR Server (Organization)
auth.dev.local --> ARR Server  (auth)
disc.dev.local --> ARR Server   (discovery)
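On a Windows DNS server, these can be created with the DnsServer PowerShell module; a sketch (the zone and aliases match the example names above, and "arr.dev.local" is a placeholder for the ARR server's A record):

```powershell
# Sketch: create the CNAME records for the example names above.
# "arr.dev.local" stands in for the ARR server's own A record.
$zone = "dev.local"
foreach ($alias in "stark", "auth", "disc")
{
    Add-DnsServerResourceRecordCName -ZoneName $zone -Name $alias -HostNameAlias "arr.dev.local"
}
```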

In my next post I will add ADFS to the reverse proxy. Note that it's perfectly acceptable to use ARR for this purpose as well, but it is also possible to use WAP, which provides pre-authentication.

Monday, 24 August 2015

MS Dynamics CRM - Blocking Direct Access to SharePoint

In this project I'm working on, we've had a requirement to block direct access to SharePoint; in other words, our users can only access their documents through CRM.

There are various ways in which this can be achieved, and today I will be discussing how to do it by leveraging ARR.

We have a similar architecture to this:



We have a SharePoint site collection and every document is stored in this site collection, so our users would go to https://sp.dev.local/sites/spsite/<crm_entity>/ to access documents for <crm_entity>.

This is actually a somewhat irritating requirement because IE behaves differently than Firefox and Chrome. We're lucky enough not to have to support Opera and Safari as well, nothing wrong with these browsers, but 5 browsers would drive our testers crazy.

In any case, we've configured two farms, CRM and SHAREPOINT, so we need rules for those.

So for CRM we have:

<rule name="CRM Farm Forwarding Rule" enabled="true" patternSyntax="Wildcard" stopProcessing="false">
    <match url="*" />
    <conditions logicalGrouping="MatchAny" trackAllCaptures="true">
        <add input="{HTTP_HOST}" pattern="*crm.dev.local" />
    </conditions>
    <serverVariables>
    </serverVariables>
    <action type="Rewrite" url="https://CRM/{R:0}" />
</rule>

And for Sharepoint we have:

<rule name="Sharepoint Farm Forwarding Rule" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
    <match url="*" />
    <conditions logicalGrouping="MatchAny" trackAllCaptures="true">
        <add input="{HTTP_HOST}" pattern="*sp.dev.local" />
    </conditions>
    <serverVariables>
    </serverVariables>
    <action type="Rewrite" url="https://SHAREPOINT/{R:0}" />
</rule>

So far so good, this is where things start to get interesting. We want to block direct access to SP for IE browsers, which we can achieve like this:

 
<rule name="IE - Allow Access to SharePoint grid in CRM" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
    <match url="*sites/SPSITE/crmgrid*" />
    <conditions logicalGrouping="MatchAny" trackAllCaptures="false">
    </conditions>
    <serverVariables>
    </serverVariables>
    <action type="Rewrite" url="https://SHAREPOINT/{R:0}" />
</rule>    
and
<rule name="IE - Block Direct Access to SharePoint" stopProcessing="true">
    <match url="sites\/SPSITE" />
    <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTP_REFERER}" pattern="^/?$" negate="true" />
        <add input="{HTTP_USER_AGENT}" pattern="MSIE" />
    </conditions>
    <serverVariables>
    </serverVariables>
    <action type="CustomResponse" statusCode="403" subStatusCode="1" statusReason="IE" statusDescription="Direct Access to SHAREPOINT is not permitted" />
</rule>

The first rule allows traffic that is trying to access SP through the List Component path (crmgrid); this allows the SharePoint CRM List Component to work.

The second rule blocks access for IE user agents (i.e. those containing MSIE) where the referer is not empty. It stops processing if there is a match.

For some reason, IE blanks the referer when accessing documents in SharePoint from the List Component, but, crucially, fills it in when accessing documents directly from SharePoint.

Firefox and Chrome send the correct referer, i.e. sp.dev.local/sites/spsite/crmgrid, so a single rule suffices:

<rule name="Other Browsers - Block Direct Access to SharePoint" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
    <match url="*sites/SPSITE*" />
    <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTP_REFERER}" pattern="*dev.local/sites/SPSITE/crmgrid*" negate="true" />
        <add input="{HTTP_USER_AGENT}" pattern="*MSIE*" negate="true" />
    </conditions>
    <serverVariables>
    </serverVariables>
    <action type="CustomResponse" statusCode="403" subStatusCode="2" statusReason="Other" statusDescription="Direct Access to SHAREPOINT is not permitted" />
</rule>

We just need to make sure that the user agent is not IE's. The rule stops processing if there is a match.

Rules.xml can be found below, with the rules in the correct order.

Thus, effectively, we do the following:

CRM -> Rewrite to CRM Farm
SP - crm grid? -> Rewrite to SP Farm
SP - IE with a non-empty Referer -> 403
SP - IE with an empty Referer -> falls through, Rewrite to SP Farm
SP - Other browser without the crmgrid Referer -> 403
SP - Other browser with the crmgrid Referer -> falls through, Rewrite to SP Farm
SP -> Rewrite to SP Farm

The last rule is only needed if there are other sites in SP that the users might need to access.
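To make the combined effect of the five rules easier to reason about, here is a minimal C# sketch of the decision logic they implement. This is an illustration only, not the ARR engine: the Route method, its string results, and the pre-extracted host/path/userAgent/referer parameters are all hypothetical, and the sketch ignores the subtlety that the CRM rule has stopProcessing="false".

```csharp
using System;

// Hypothetical sketch of the decision logic behind the five ARR rules.
// Host, path, user agent and referer are assumed to be already extracted
// from the incoming request.
class RouteSketch
{
    public static string Route(string host, string path, string userAgent, string referer)
    {
        if (host.EndsWith("crm.dev.local"))
            return "rewrite:CRM";                 // CRM Farm Forwarding Rule

        if (path.Contains("sites/SPSITE/crmgrid"))
            return "rewrite:SHAREPOINT";          // List Component traffic is always allowed

        bool isIe = userAgent.Contains("MSIE");
        if (path.Contains("sites/SPSITE"))
        {
            if (isIe && !string.IsNullOrEmpty(referer))
                return "403";                     // IE - Block Direct Access
            if (!isIe && !referer.Contains("dev.local/sites/SPSITE/crmgrid"))
                return "403";                     // Other Browsers - Block Direct Access
        }

        return "rewrite:SHAREPOINT";              // Sharepoint Farm Forwarding Rule
    }

    static void Main()
    {
        // IE coming from the List Component: referer is blank, so it is allowed.
        Console.WriteLine(Route("sp.dev.local", "sites/SPSITE/doc.docx", "MSIE 10.0", ""));
        // Chrome going to SP directly: wrong referer, so it is blocked.
        Console.WriteLine(Route("sp.dev.local", "sites/SPSITE/doc.docx", "Chrome", "https://sp.dev.local/"));
    }
}
```

The two calls in Main mirror the IE List Component case (blank referer, allowed through to the SP farm) and a direct browser visit (wrong referer, 403).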

<?xml version="1.0" encoding="UTF-8"?>
<appcmd>
    <CONFIG CONFIG.SECTION="system.webServer/rewrite/globalRules" path="MACHINE/WEBROOT/APPHOST" overrideMode="Inherit" locked="false">
        <system.webServer-rewrite-globalRules>
            <rule name="CRM Farm Forwarding Rule" enabled="true" patternSyntax="Wildcard" stopProcessing="false">
                <match url="*" />
                <conditions logicalGrouping="MatchAny" trackAllCaptures="true">
                    <add input="{HTTP_HOST}" pattern="*crm.dev.local" />
                </conditions>
                <serverVariables>
                </serverVariables>
                <action type="Rewrite" url="https://CRM/{R:0}" />
            </rule>
            <rule name="IE - Allow Access to SharePoint grid in CRM" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
                <match url="*sites/SPSITE/crmgrid*" />
                <conditions logicalGrouping="MatchAny" trackAllCaptures="false">
                </conditions>
                <serverVariables>
                </serverVariables>
                <action type="Rewrite" url="https://SHAREPOINT/{R:0}" />
            </rule>
            <rule name="IE - Block Direct Access to SharePoint" stopProcessing="true">
                <match url="sites\/SPSITE" />
                <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{HTTP_REFERER}" pattern="^/?$" negate="true" />
                    <add input="{HTTP_USER_AGENT}" pattern="MSIE" />
                </conditions>
                <serverVariables>
                </serverVariables>
                <action type="CustomResponse" statusCode="403" subStatusCode="1" statusReason="IE" statusDescription="Direct Access to SHAREPOINT is not permitted" />
            </rule>
            <rule name="Other Browsers - Block Direct Access to SharePoint" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
                <match url="*sites/SPSITE*" />
                <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{HTTP_REFERER}" pattern="*dev.local/sites/SPSITE/crmgrid*" negate="true" />
                    <add input="{HTTP_USER_AGENT}" pattern="*MSIE*" negate="true" />
                </conditions>
                <serverVariables>
                </serverVariables>
                <action type="CustomResponse" statusCode="403" subStatusCode="2" statusReason="Other" statusDescription="Direct Access to SHAREPOINT is not permitted" />
            </rule>
            <rule name="Sharepoint Farm Forwarding Rule" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
                <match url="*" />
                <conditions logicalGrouping="MatchAny" trackAllCaptures="true">
                    <add input="{HTTP_HOST}" pattern="*sp.dev.local" />
                </conditions>
                <serverVariables>
                </serverVariables>
                <action type="Rewrite" url="https://SHAREPOINT/{R:0}" />
            </rule>
        </system.webServer-rewrite-globalRules>
    </CONFIG>
</appcmd>

The rules can be exported with this command:
appcmd.exe list config -section:system.webServer/rewrite/globalRules -xml > rules.xml 
They can then be imported with this command:
appcmd.exe set config /in < rules.xml

Understanding default values in C#

I was refactoring a bit of code last week and I ended up creating an interesting scenario.
In short, I was re-writing a method to see if, by using a completely different approach, I could make the code simpler, easier to maintain and faster, and I ended up with two methods like this (don't ask what I was thinking):
public class Point
{
    public int X { get; set; }
    public int Y { get; set; }

    public Point(int x, int y)
    {
        this.X = x;
        this.Y = y;
    }

    public void Move(int distance)
    {
        this.X += distance;
        this.Y += distance;
    }

    public bool Move(int distance, bool dummy = true)
    {
        this.X += distance;
        this.Y += distance;
  
        return true;
    }
}
So, with this calling code, which overload will be selected?
class Program
{
    static void Main(string[] args)
    {
        var p = new Point(1, 2);
        p.Move(1);
        p.Move(1,true);
    }
}

p.Move(1) invokes the void method; p.Move(1, true) invokes the bool method.

The question is: how does the compiler know which overload to invoke? After all, if the Point class had no void Move method, both statements would invoke the second Move method.

Time to look at the IL output of Main:


.method private hidebysig static 
 void Main (
  string[] args
 ) cil managed 
{
 // Method begins at RVA 0x2100
 // Code size 27 (0x1b)
 .maxstack 3
 .entrypoint
 .locals init (
  [0] class ConsoleApplication6.Point p
 )

 IL_0000: nop
 IL_0001: ldc.i4.1
 IL_0002: ldc.i4.2
 IL_0003: newobj instance void ConsoleApplication6.Point::.ctor(int32, int32)
 IL_0008: stloc.0
 IL_0009: ldloc.0
 IL_000a: ldc.i4.1
 IL_000b: callvirt instance void ConsoleApplication6.Point::Move(int32)
 IL_0010: nop
 IL_0011: ldloc.0
 IL_0012: ldc.i4.1
 IL_0013: ldc.i4.1
 IL_0014: callvirt instance bool ConsoleApplication6.Point::Move(int32, bool)
 IL_0019: pop
 IL_001a: ret
} // end of method Program::Main

This is not very illuminating on its own, but if we comment out the void method on the Point class and look at the IL output again:

.method private hidebysig static 
 void Main (
  string[] args
 ) cil managed 
{
 // Method begins at RVA 0x20e0
 // Code size 28 (0x1c)
 .maxstack 3
 .entrypoint
 .locals init (
  [0] class ConsoleApplication6.Point p
 )

 IL_0000: nop
 IL_0001: ldc.i4.1
 IL_0002: ldc.i4.2
 IL_0003: newobj instance void ConsoleApplication6.Point::.ctor(int32, int32)
 IL_0008: stloc.0
 IL_0009: ldloc.0
 IL_000a: ldc.i4.1
 IL_000b: ldc.i4.1
 IL_000c: callvirt instance bool ConsoleApplication6.Point::Move(int32, bool)
 IL_0011: pop
 IL_0012: ldloc.0
 IL_0013: ldc.i4.1
 IL_0014: ldc.i4.1
 IL_0015: callvirt instance bool ConsoleApplication6.Point::Move(int32, bool)
 IL_001a: pop
 IL_001b: ret
} // end of method Program::Main

As expected, both calls now invoke the second method. But why this behaviour?

Well, it turns out that this is part of the spec:

Use of named and optional arguments affects overload resolution in the following ways:
  • A method, indexer, or constructor is a candidate for execution if each of its parameters either is optional or corresponds, by name or by position, to a single argument in the calling statement, and that argument can be converted to the type of the parameter.
  • If more than one candidate is found, overload resolution rules for preferred conversions are applied to the arguments that are explicitly specified. Omitted arguments for optional parameters are ignored.
  • If two candidates are judged to be equally good, preference goes to a candidate that does not have optional parameters for which arguments were omitted in the call. This is a consequence of a general preference in overload resolution for candidates that have fewer parameters.
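The third bullet is exactly the case we hit above. Here is a minimal, self-contained sketch of that tie-break, using hypothetical F methods rather than the Point class, so the winner can be observed directly:

```csharp
using System;

// Both overloads are applicable candidates for F(1); the tie-break prefers
// the candidate with no omitted optional parameters, so F(int) wins.
class OverloadDemo
{
    public static string F(int x) => "F(int)";
    public static string F(int x, bool flag = true) => "F(int, bool)";

    static void Main()
    {
        Console.WriteLine(F(1));        // F(int): no optional parameter was omitted
        Console.WriteLine(F(1, false)); // F(int, bool): the only applicable candidate
    }
}
```

This mirrors the Point example: remove F(int) and F(1) would bind to F(int, bool) with the default supplied by the compiler, just as Move(1) did once the void Move was commented out.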