Monday, 30 December 2013

Actions (Custom Messages) in MS Dynamics CRM 2013

Actions are another new feature in MS Dynamics CRM 2013. They are an intriguing new feature, although I must admit I don't see much of a use case for them right now.

In essence, they are a custom message, something like Create, Update, etc., but the actions that take place when the custom message is invoked are defined as a sort of workflow, which allows custom code via custom workflow activities.

It then gets a bit weird in that this can only be invoked via code. A potential workaround is to use a workflow with a custom workflow activity or, presumably, JavaScript hooked up to a button or other event.

I thought I would try to create a sample action. The action will create a task related to a Stamp (a custom entity), which is a mandatory input to the request. There is another, non-mandatory, input variable: if it is set to false, the action will not trigger. Finally, there will be an output variable, which will effectively confirm whether the request has been actioned ...

At any rate, this is how one would create such an action: navigate to Settings -> Processes -> New.

I created this action to be global, but it can also be limited to a single entity.
I added a few arguments, and there are a few things about this that are annoying. The first is that unless the argument's Entity Type is set to Entity or EntityReference, you will not be able to use the argument, at least not from the GUI, which limits a bit what can be done, although I suspect no such limitations apply to code ... The second annoying thing is best explained by the next screenshot:
Essentially, if the field is Boolean, 0 is treated as False and everything else as True. Not sure why it isn't simply a True/False choice.



Make sure that the action is activated before calling it from code.

This is how you would call it:
private static void SupaDupaAction(IOrganizationService service, EntityReference target, Entity stamp, bool carryOut = true)
{
    var request = new OrganizationRequest("new_SupaDupaAction");

    // Input arguments defined on the action
    request["Target"] = target;
    request["RealTarget"] = stamp;
    request["AreYouSure"] = carryOut;

    var result = service.Execute(request);

    // Output arguments come back in the Results collection
    foreach (var item in result.Results)
    {
        Console.WriteLine("Result - Name: {0}. Value: {1}", item.Key, item.Value);
    }
}
Target is an entity reference to the, yep, target entity, i.e. the entity against which the action/custom message is running. Stamp (passed as RealTarget) is the instance of the Stamp entity against which the task will be created.

Note that the RequestName is not all lower case; I'm not sure if this is specific to actions or a more general shift in MS Dynamics CRM 2013.
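As an aside, an individual output argument can be read straight out of the Results collection rather than looping over all of them. The argument name Done below is just a placeholder for whatever output argument the action actually defines:

```csharp
// "Done" is a hypothetical output argument name - substitute whatever
// the action defines as its output.
if (result.Results.Contains("Done"))
{
    var done = (bool)result.Results["Done"];
    Console.WriteLine("Actioned: {0}", done);
}
```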

Frankly, I need to think about what the benefit of this is or can be, because with this noddy example I am seeing none. I guess defining a type of operation can be quite handy, as stuff can happen pre and post operation, e.g. plug-ins can be registered against the custom message, so nothing ground breaking, but perhaps it simplifies things ...

Feel free to enlighten me...

Friday, 27 December 2013

Business Rules in MS Dynamics CRM 2013

I finally gave MS Dynamics CRM 2013 a whirl last night and found an interesting new feature: Business Rules.

I'm not sure what the exact definition is, but the idea is that these rules can be used to modify the form without needing to resort to JavaScript. This is good not only because it means that non-developers can do it, but also because it means no more JavaScript!!!!! Well, maybe.

This is one of a myriad of features in MS Dynamics CRM that are always billed as non-developer features, but in reality it's always the developers that end up working with them. But I digress.

Business rules seem to be entity-wide, which is a bit of a shame, as it would be handy to have business rules that apply to various entities. I'm not entirely sure I can see a reasonable business case, but, generally, the fewer limitations the better.

At any rate, I created a new Entity, added a few fields and edited the form, from where I added a Business rule.

The rule will make the Class field mandatory if the Cost field is greater than £10. For instance, if this were an expense, you could make an explanation/justification field become mandatory when a certain threshold was exceeded.


The conditions, as well as the actions, can be chained, e.g. in the expense example, make the explanation/justification field mandatory only if the amount is above a certain threshold AND it's a certain type of expense.


I'm not 100% sure what the description is supposed to do, as it does not appear to be displayed anywhere; it's probably just another field that will never be filled in.

An immediate downside is that, in this example, you would need another business rule to set the field back to not required if the cost is changed back to £10 or below.

Anyway, here's the result:


Another downside is the lack of OR in the rules. Is it really that complicated for non-technical staff to understand the OR operation? Or is it complex for Microsoft to implement? I don't know, but there is an OR feature in Advanced Find. At any rate, the lack of OR in check conditions was really annoying in Dialogs and Workflows in 2011, and it seems to be still missing in 2013.

It's of interest to note that for numeric fields, simple formulas can be used instead of fixed values. No regex for string fields, though :( Again, not surprising given the audience, but it would've been nice.

Also the comparison can be made against another field in the entity.

The range of actions is interesting:
  • Show Error Message - Yahoo!!! no more javascript validation (in your dreams, sunshine)
  • Set Field Value 
  • Set Business Required
  • Set Visibility
  • Lock or Unlock Field
I guess this is a typical MS Dynamics CRM feature, where it seems like a great idea in principle but in practice it's too limited or cumbersome for all but the most basic of tasks. Designed for power users, used and hated by developers; more of the same.

All in all not a bad feature, I just need to start working on some MS Dynamics CRM 2013 projects now :).

Thursday, 26 December 2013

Add/Remove Programs Product Icon with Wix

This is a very simple change that can help your application look that little bit more professional.

Just add the following to get an Add/Remove Programs product icon with Wix:

<Icon Id="ProductIcon" SourceFile="Icon\myicon.ico"/>    
<Property Id="ARPPRODUCTICON" Value="ProductIcon" />
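For context, this is roughly where the two elements sit in a minimal .wxs file; the product name, version and GUID below are placeholders, not the real values:

```xml
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Product Id="*" Name="MyApp" Version="1.0.0.0" Manufacturer="Me"
           UpgradeCode="PUT-GUID-HERE" Language="1033">
    <Package InstallerVersion="200" Compressed="yes" />

    <!-- The icon shown in Add/Remove Programs -->
    <Icon Id="ProductIcon" SourceFile="Icon\myicon.ico" />
    <Property Id="ARPPRODUCTICON" Value="ProductIcon" />

    <!-- ... directories, components and features go here ... -->
  </Product>
</Wix>
```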

See what I mean:


You need to make sure that myicon.ico is stored in a directory called Icon.

Friday, 20 December 2013

Get user group membership in Sharepoint from Powershell

A few days back I needed to find out whether the System account (SharePoint\System) was a member of a particular group, and it turns out that this cannot be done from the GUI, as the account doesn't show up in the list of members of that group, so ... PowerShell to the rescue.

The following snippet will provide a list of users belonging to the desired group <Group Name>:
$site = Get-SPSite -WebApplication "<url>"
$mainsite = (Get-SPWeb -Site $site)[0]
$group = $mainsite.SiteGroups | ?{$_.Name -match "<Group Name>"}
foreach($user in $group.Users){$user}
As usual, it needs to be run from the SharePoint Management Shell, or the SharePoint PowerShell snap-in needs to be loaded before running the above commands.
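If you are running from a plain PowerShell console, the snap-in can be loaded first like so:

```powershell
# Load the SharePoint snap-in if it's not already present.
if ((Get-PSSnapin -Name Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue) -eq $null)
{
    Add-PSSnapin Microsoft.SharePoint.PowerShell
}
```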

Monday, 25 November 2013

TFS Error - There is not enough space on the disk

I've been working on trying to automate our development workflow, or, in other words, I've been banging my head against the TFS wall trying to set up CI.

I thought I had everything working when I encountered this little beauty for one of the ASP.NET projects:

The build server had over 20 GB of free space, so I was somewhat baffled by this error message, and I did what every self-respecting IT professional does: I started bouncing stuff.

After I bounced the build agent, the build service and the server itself, all to no avail, I hit the interwebs, where I found the solution.

This error has nothing to do with disk space; it's related to temporary files not being deleted in this folder:
C:\Windows\Microsoft.net\Framework64\<version>\Temporary ASP.NET Files
I checked the folder and sure enough there was a subfolder with the name of one of the projects. I deleted this and the builds started to complete again.
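In my case I just deleted the offending subfolder by hand, but something like this would clear the lot; note the version folder (v4.0.30319 here) is an assumption and needs to match your build agent's framework:

```powershell
# Assumes .NET 4.0 - adjust the version folder to match your build agent.
$temp = "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files"
Get-ChildItem $temp | Remove-Item -Recurse -Force
```

Obviously don't run this while a build is in flight.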

Wednesday, 20 November 2013

ADFS - Turning Debug Tracing on

I was struggling last week with some ADFS issues and decided to turn on debug tracing to see if it threw any light on why it wasn't working. It didn't, the issue was somewhere else, but it might be useful for future reference, so here it goes:

1. Start the event viewer:
Start | Run | eventvwr
2. Disable the AD FS 2.0 Tracing/Debug event log.
3. Run the following command with elevated permissions (i.e. Run as Administrator):
wevtutil sl "AD FS 2.0 Tracing/Debug" /l:5
4. Enable the AD FS 2.0 Tracing/Debug event log again.
5. Ensure that Analytic and Debug logs are enabled
  View | Show Analytic and Debug logs
6. Enjoy all the debug output goodness.

Friday, 15 November 2013

Exit codes in msi installers created with Wix

There was (is?) a bug in Wix that prevented the successful creation of an SSL binding in IIS 7.5, so in order to get around this issue, I wrote a custom action to do it.

A failure in this custom action will not stop the installation*, which means that the exit code will be driven by the custom action failure, leading to an exit code of 1603 even though the product installs. Since the failure relates to setting the certificate for a website, it actually has no effect, as the newly installed website picks up the old certificate.

The problem was in our install scripts, where we check the exit code to ascertain whether the installation was successful, which it apparently wasn't, as the exit code was 1603, but since the app was actually there, this led to a lot of confusion and head scratching.
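For reference, this is roughly the kind of check our scripts do; the msi name and log path below are illustrative only, not the real ones:

```powershell
# Illustrative names/paths - substitute your own msi and log file.
$proc = Start-Process msiexec.exe -ArgumentList '/i', 'Portal.msi', '/qn', '/l*v', 'install.log' -Wait -PassThru
if ($proc.ExitCode -ne 0)
{
    Write-Error "Installation failed with exit code $($proc.ExitCode)"
}
```

With the custom action returning failure, $proc.ExitCode comes back as 1603 even though the product is installed, which is exactly the confusion described above.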

Thought I'd share in case somebody does something as stupid as I did.

*I've edited the custom action code so that it does not always return success now. There was a perfectly valid reason for the code to always return a success value and I will talk about it as soon as I find it.

Selected output from the install log file:

CustomAction UpdateBinding returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
UpdateBinding. Return value 3.
INSTALL. Return value 3.

Product:  Portal -- Installation failed.
Windows Installer installed the product. Product Name: Portal. Product Version: 1.3.3.7. Product Language: 1033. Manufacturer: Yo! Ltd. Installation success or error status: 1603.

Wednesday, 30 October 2013

Delete documents from SharePoint document library using PowerShell

Today we had to delete a bunch of documents from SharePoint, so I wrote this script to accomplish the task.

A couple of notes about the script:

  1. By default it will not delete anything, just list the URLs that contain the $document argument.
  2. The $document argument is matched with -like against URLs, so be careful that it doesn't delete more documents than you intend.


param
(
    [string]$url = "https://sp.dev.local",
    [string]$document = "Welcome",
    [bool]$delete = $false
)

$site = Get-SPSite $url

foreach($web in $site.AllWebs) {
    foreach($list in $web.Lists) {
        if($list.BaseType -eq "DocumentLibrary") {
            foreach($item in $list.Items){
                if($item.Url -like "*$document*")
                {
                    if($delete)
                    {
                        Write-Host "Deleting $($item.Url)"
                        $item.File.Delete()
                    }
                    else
                    {
                        Write-Host "$($item.Url)"
                    }
                }
            }
        }
    }
}

 

Friday, 25 October 2013

Issues with TurnOffFetchThrottling in MS Dynamics CRM 2011 - FetchXml Record Limit

I think I found a bug in MS Dynamics CRM 2011 today. We have UR 12 installed on a few of our servers and, for various reasons too long to explain, we had turned off fetch throttling.

This is done by adding a DWORD registry value called TurnOffFetchThrottling under HKEY_LOCAL_MACHINE\Software\Microsoft\MSCRM and setting it to 1.
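For reference, the value can be created from an elevated PowerShell prompt on the CRM server like so:

```powershell
# Creates the TurnOffFetchThrottling DWORD value under the MSCRM key.
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSCRM" `
                 -Name "TurnOffFetchThrottling" -Value 1 -PropertyType DWord
```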

If you do this, then this FetchXml query will not work:
<fetch mapping="logical" count="2147483647" version="1.0"> 
 <entity name="account">
  <attribute name="name" />
 </entity>
</fetch>
It seems that by default MS Dynamics CRM 2011 will add one to the number of results that one is trying to get. So with fetch throttling on, i.e. the default, the above query will be sent to the database like this:
select top 5001 "account0".Name as "name" , "account0".AccountId as "accountid" from AccountBase as "account0" order by "account0".AccountId asc
But if fetch throttling is off and we pass Int32.MaxValue as the number of results required, like in the FetchXml query above, then when Dynamics CRM tries to add one to this value it overflows and the query isn't run.

Admittedly, it is extremely unlikely that this will be a problem; if you have 2147483647 instances of an entity in your database, I rather suspect that you need to do some archiving, but it still looks like a genuine issue.

It's interesting that the count value uses a signed integer rather than an unsigned one, which would allow twice as many records, i.e. 4294967295, or roughly one record for every two human beings on Earth.

Sunday, 20 October 2013

Developers Vs Developers

I have been meaning to write this post for ages as this is something that I have encountered time and time again during my career.

We have an integration layer between our application and our telephony system, written by a third party. In essence, they have a web service that we communicate with, and we expose a web service so that they can communicate with us. So far so simple; this is a staple of enterprise development and, as I said, I have had to deal with situations like this many times, both inside and outside the company. If you've not done it inside your company and think that none of the problems you are having would occur if you didn't have to deal with the idiots at Third Party Ltd., let me tell you that you will just have to deal with the idiots at the Other Department instead.

At any rate, integration was working fine when somebody realized that the spec called for SSL/TLS rather than clear text. In theory this requirement made sense when the various sides of the integration equation were hosted by two different companies, but a change of architecture meant that they no longer were, so using a secure website for an internal web interface that contained little more than phone numbers and UUIDs seemed like overkill. But the spec is the spec, agile be damned.

So both the web services and the client apps on either side of the integration equation were reconfigured to use SSL/TLS, and this is where the problems started. As is normally the case in these situations, we started blaming each other.

Furthermore, a supplier-customer relationship had been allowed to develop. This is a relationship in which the supplier has the technical know-how, or believes he does, and the customer doesn't, or is at least believed not to by the supplier. Needless to say, this wasn't the case, as we shall see, but for various personnel reasons, i.e. developers at our company leaving, this relationship had taken hold, which meant that our evidence carried less weight, as it were, because they were already predisposed to assume they were talking to yet another developer who didn't have a clue about how the integration was supposed to work. This wasn't true, but it was true that they knew their system and its history, while I knew ours but could not comment on historical decisions, as this had landed on my lap without a handover from somebody who left the company suddenly. Not so suddenly, but I digress.

We both went back to basics to prove our respective theses, i.e. that it was the other party's fault. Because history is written by the victors, you know how this will turn out, but I will continue anyway. The first thing we discovered was another disagreement with the spec, regarding authentication, which I remedied after a lot of hair pulling.

After that, I found that our side was working fine. Furthermore, I used Wireshark to prove that nothing was coming through over the wire on port 443, while traffic was going through on port 80 from the PC that hosted the client app, which meant that the failures on their side were not due to our web service, despite the fact that the app was throwing a 404 error and pointed me to this link.

I mentioned the supplier-customer relationship above because it helps to explain why our Wireshark evidence was ignored: they knew what they were doing, we didn't, so anything coming from our side was tainted.

To further compound the confusion, the client app would sometimes work when tested by them, which made us extremely suspicious, and not work for us, which surely made them more suspicious of our general competence. It was working for them, and they were dealing with a bunch of idiots, so we were probably doing something stupid.

At this point they were willing, for the first time, to discuss the code they were using, and we realized that the source of the issue was their code, or, to be fair, its usage of our proxy server when it should not have been used. There were other issues, but they are not important.

So I asked them to add this to the system.net element of the client's app.config:

<defaultProxy  enabled="false"/>
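In context, the relevant part of the client's app.config would look roughly like this:

```xml
<configuration>
  <system.net>
    <defaultProxy enabled="false" />
  </system.net>
</configuration>
```

Disabling the default proxy means the client talks to the endpoint directly instead of being routed through the (inapplicable) proxy server.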

Lo and behold, everything started to work fine.

Not sure what the lesson is here. I guess if there is one, it's that appearances matter, even in a supposedly analytical field like this one.

This link seems tangentially related.

Tuesday, 15 October 2013

Start Windows Services from PowerShell according to StartMode

In a previous post, I described how to stop and start the MS Dynamics CRM services; this post is just a different way of doing it. The reason for looking for a different way is that the Get-Service cmdlet ignores the StartMode, i.e. if a service is disabled, Start-Service will try to start it and fail, so the solution is to use WMI objects:
Get-WmiObject -Class "win32_service" | ?{$_.Name -like "mscrm*" -and $_.StartMode -eq "Auto"} | % {Restart-Service -Name $_.Name}
You might be wondering why this is needed; surely you should just uninstall the disabled service(s), and I would tend to agree, but sometimes it might be necessary to temporarily disable a service, for testing purposes for instance, and by checking the StartMode you will only attempt to start services that can actually be started.

Thursday, 10 October 2013

Using Host headers for SSL/TLS in SSRS 2008 R2

In a project I'm working on at the moment, we are using SSRS over SSL. This had been working fine, but we were using self-signed certificates, so when we changed the certificates we started getting some issues, and by issues, I mean the reports not working properly, as there was a mismatch between the certificate name and the server name. We could not navigate to the Report Server or Report Manager URLs. I did not note down the exact error message, but it was related to a certificate name mismatch causing the connection to be closed.

The logs would show this error:
appdomainmanager!ReportManager_0-3!1a88!10/04/2013-16:22:14:: e ERROR: Remote certificate error RemoteCertificateNameMismatch encountered for url https://ssrsserver.domain.co.uk:501/ReportServer/ReportService2010.asmx.
ui!ReportManager_0-3!1a88!10/04/2013-16:22:14:: e ERROR: System.Threading.ThreadAbortException: Thread was being aborted.
   at System.Threading.Thread.AbortInternal()
   at System.Threading.Thread.Abort(Object stateInfo)
   at System.Web.HttpResponse.End()
   at Microsoft.ReportingServices.UI.ReportingPage.ShowErrorPage(String errMsg)

In order to sort this problem I first configured a host header to match the certificate name and then modified the reportserver configuration file.

1. Run the Reporting Services Configuration Manager.


 2. Select Web Service Url.



 3. Click on Advanced to configure the Web Service Url.


 4. Select http entity and click edit to add the host header.


 5. Repeat steps 3 & 4 for Report Manager.



6. Edit C:\Program Files\Microsoft SQL Server\MSRS10.MSSQLSERVER\Report Services\ReportServer\rsreportserver.config:
Change the urlstring to reflect the new host header e.g.
   <UrlString>http://mydomain.co.uk:80</UrlString>
Make sure that you only change the entries that actually have a domain name and are not just showing a URL registration. In other words, leave these alone:
<UrlString>https://+:443</UrlString>
Having said that, changing these might allow the use of different host headers for the plaintext and secure websites, but I've not tried it.

Saturday, 5 October 2013

Deploy SharePoint Event Receivers from PowerShell

I thought I would share a script that we use for deploying Event Receivers to SharePoint.

param ([string] $url, [string] $featureName,[string] $solution, [bool] $install, [bool] $uninstall)

if ( -not ($url))
{
 Write-Host "Please enter the Site Url"
 exit
}

function SelectOperation
{
 $message = "Select Operation?";
 $InstallMe = new-Object System.Management.Automation.Host.ChoiceDescription "&Install","Install";
 $UninstallMe = new-Object System.Management.Automation.Host.ChoiceDescription "&Uninstall","Uninstall";
 $choices = [System.Management.Automation.Host.ChoiceDescription[]]($InstallMe,$UninstallMe);
 $answer = $host.ui.PromptForChoice($caption,$message,$choices,0)
 return $answer
}

if (-not($install) -and -not($uninstall))
{
 $answer = SelectOperation

 switch ([int]$answer)
 {
   0 {$install=$true}
   1 {$uninstall=$true}
 }

}

if ($install)
{
 Add-SPSolution -LiteralPath $(join-path $(pwd).Path $solution)
 $solutionId = (Get-SPSolution | ? {$_.name -eq  $solution}).Id
 Install-SPSolution -Identity $solutionId -GACDeployment
 Write-Host "Waiting for the SharePoint to finish..."
 Sleep(120)
 $feature = Get-SPFeature | where {$_.DisplayName -eq $featureName}
 Enable-SPFeature -Identity $feature -Url $url
}

if ($uninstall)
{
 $featureId = (Get-SPFeature | where {$_.DisplayName -eq $featureName}).Id
 Disable-SPFeature -Identity $featureId -Url $url -Confirm:$false
 $solutionId = (Get-SPSolution | ? {$_.name -eq  $solution}).Id
 Uninstall-SPSolution -Identity $solutionId
 Write-Host "Waiting for SharePoint to finish..."
 Sleep(120)
 Remove-SPSolution -Identity $solutionId
}
Write-host "All Done."

An example of usage:
.\DeployEV.ps1 -featureName "receiver1" -solution "receiver1.wsp"  -install $true
Note that the script assumes that the wsp file is in the same location as the script.

Monday, 30 September 2013

Create and Delete Website from PowerShell.

I thought I would share the whole solution rather than just the SSL Binding code.

Import-Module WebAdministration

function Add-Site([string]$folder, [string]$sitename, [string]$protocol="http", [int]$port, [int]$sslport, [string] $hostheader, [string]$thumbprint, [string]$appPoolName, [hashtable] $appDetails, [string]$version="v4.0")
{
 
 if ( -not ( Get-Website | ? {$_.Name -eq $sitename}))
 {
  if ($hostheader)
  {
   New-Item iis:\Sites\$sitename -bindings @{protocol="$protocol";bindingInformation="*:$($port):$($hostheader)"} -physicalPath $folder
  }
  else
  {
   New-Item iis:\Sites\$sitename -bindings @{protocol="$protocol";bindingInformation="*:$($port):"} -physicalPath $folder
  }
  
  if (-not($thumbprint) -or -not ($sslport))
  {
   Write-Error "Ensure that a Certificate Thumbprint and SSLport are set for HTTPS Bindings"
   Write-Host "Let's clean up a little bit here..."
   Remove-Site $sitename
   exit
  }
  else
  {
   AddSSLBinding $thumbprint $sitename $sslport $hostheader
  }

  if ($appDetails -and $appPoolName)
  {
   CreateAppPool $appPoolName
   SetAppPoolVersion $appPoolName $version
   foreach ($app in $appDetails.GetEnumerator())
   {
    MakeApplication $sitename $app.Name $app.Value 
    SetAppPool $sitename $app.Name $appPoolName
   }
  }
  else
  {
   Write-Warning "The website $sitename has been created with no applications or applicationPools. Nothing wrong with this, just saying"
  }
 }
}

function Remove-Site([string]$sitename, $appPoolName)
{  
  Get-ChildItem IIS:\SslBindings | ? {$_.Sites -eq $sitename} | %{ Remove-Item iis:\sslbindings\$($_.pschildname) -Force -Recurse}
  Get-ChildItem IIS:\Sites\ | ?{$_.Name -eq $sitename} |% { Remove-Item  IIS:\Sites\$sitename  -Force -Recurse}
  Get-ChildItem IIS:\AppPools\ |? {$_.Name -eq $appPoolName} | %{ Remove-Item IIS:\AppPools\$appPoolName -Force -Recurse }  
}

function AddSSLBinding([string]$thumbprint, [String]$sitename, [int]$port, [String]$hostheader)
{
 
 $cert = Get-ChildItem cert:\LocalMachine\My | ?{$_.Thumbprint -eq $thumbprint}
 
 if( -not($(Get-ChildItem iis:\sslbindings| ? {$_.Port -eq $port})))
 {
  New-Item IIS:\SslBindings\0.0.0.0!$port -Value $cert | out-null
  
  if ($hostheader)
  {
   New-ItemProperty $(join-path iis:\Sites $sitename) -name bindings -value @{protocol="https";bindingInformation="*:$($port):$($hostheader)";certificateStoreName="My";certificateHash=$thumbprint} | out-null
  }
  else
  {
   New-ItemProperty $(join-path iis:\Sites $sitename) -name bindings -value @{protocol="https";bindingInformation="*:$($port):";certificateStoreName="My";certificateHash=$thumbprint} | out-null
  }
 }
 else
 {
  Write-Warning "SSL binding already exists on port $port"
 }
}

function MakeApplication([string]$sitename, [string]$applicationName, [string]$folder)
{
  New-Item "IIS:\Sites\$sitename\$applicationName" -physicalPath $folder -type Application | out-null
}

function CreateAppPool([string]$applicationPoolName)
{
  New-Item IIS:\AppPools\$applicationPoolName | out-null
}

function SetAppPool([string]$sitename, [string]$application, [string]$applicationPool)
{
  Set-ItemProperty IIS:\sites\$sitename\$application -name applicationPool -value $applicationPool | out-null
}

function SetAppPoolVersion([string]$applicationPool, [string]$version)
{
  Set-ItemProperty IIS:\AppPools\$applicationPool managedRuntimeVersion $version | out-null
}

Export-ModuleMember -Function Add-Site,Remove-Site
An example of how to use the above module to add a new site, assuming that it's been saved as Module.psm1. The thumbprint needs to be that of a certificate already installed on your server in the LocalMachine\My store. This will create a website with an HTTP binding on port 80 and an HTTPS binding on port 443 using the certificate passed in (thumbprint). An application and virtual directory called TestService will be created. All that remains is to copy the website files.
Import-Module Module.psm1
$path="F:\TestWebSite"
$testAppdetails=@{"TestService" = "$Path\TestService"}
Add-Site -folder $path -sitename "testsite" -protocol "http" -port 80 -sslport 443 -thumbprint "FE1D6F1A5F217A7724034BA42D8C57BEC36DD168" -appPoolName "testapppool"  -appDetails $testappDetails
and an example of how to remove the same site:
Remove-Site -sitename "testsite" -appPoolName "testapppool"

Sunday, 29 September 2013

On Quality

A few months back I hit upon an idea for a rather laborious scheme that would make me not that much money: Selling hard drive magnets on Ebay.

There were approximately 50 or so old hard drives at work, and when I say old, I do mean old. None of them could accommodate more than 18.6 GB of storage with an Ultra2 Wide SCSI interface.

Nobody wanted to take the time to dispose of them properly, which included the unenviable tasks of filling in all the paperwork and running the disks through the data erasure program. I figured that if the disks were not working, nobody had to worry about this.

I set about taking apart the first disk and I hit my first road block: Torx headed screws. I had never had the need to use a Torx screwdriver so I decided to go out and buy a set of Torx screwdrivers.

I did a bit of research online and I wasn't surprised to find such a large disparity in prices: from as low as a few pounds to around one hundred pounds. I was sure that I didn't need to spend £100 on a set of Torx screwdrivers, but how low should I go?

As is my wont, I procrastinated and resolved to do more research on the topic. However, that weekend I stumbled upon a set of Torx screwdrivers at a discount store for £2.99, so I thought that I might as well buy them there and then.

I was fully aware that at this price the quality of the set would leave a lot to be desired but still I managed to suppress this knowledge long enough for me to dismantle 1.5 hard drives, which is when I hit the quality limit of my £2.99 set of Torx screwdrivers.

As I said above, I was not expecting them to last a lifetime, and they were a spur-of-the-moment, almost impulse, buy, so it wasn't unexpected; however, this left me with a bit of a problem.

I know that if one buys the cheapest, in this day and age with a proliferation of peddlers of cheap junk, one gets poor quality, but is the converse true? In other words, does one get quality by buying expensive stuff? Perhaps more importantly, how much should I have spent on the set given the use I was planning to give it?

It is very easy to determine the quality of items at the bottom of the price scale: they are rubbish, but at least one knows what one is paying for. However, once we leave the safety of the cheaper items, it becomes a lot harder to ascertain how much better something is. Put another way: do you know what you are paying for when you buy an expensive item?

Take the Apple iPod shuffle, which can be obtained for £35 from Amazon. Storage capacity is 2 GB, it has no screen, it's tiny and has some sort of clip mechanism. For a similar price, £36, it is possible to buy a Sansa Clip+ with 8GB of storage, an expansion slot, screen, FM radio and also a clip mechanism. Yes, it's slightly bigger but hardly noticeable and voice commands can be added with RockBox firmware, so are you sacrificing 6 GB of storage for voice command?

The reality is that to a great extent you are paying for the Apple brand, with its design and quasi-religious following, which means that if you don't really care about design and don't think much of Apple as brand then you would be wasting money by going down the iPod shuffle route.

Is there a similar quasi-religious following for, say, Stanley tools? I would rather imagine that this is unlikely to be the case. In fact, from talking to some of my relatives who work or have worked in construction, they seem to buy tools from different brands mostly through experience. In other words, they tend to favour a brand because it has worked for them in the past, and negative experiences have a much more lasting effect than positive ones:
I spent loads of money on a expensive diamond tipped drill bit set from Black & Decker and it was rubbish. Since then I've always gone with Bosch drill bit sets and power tools.
In truth it might have been the other way round, but the point still stands: a negative experience is a lot more likely to be remembered than a positive one, as the positive one, in this case, simply means having a reliable tool every day for a long time.

Whenever I find myself thinking about quality, I always imagine going back to that Arcadia of the consumer in the days before consumerism, whenever they happened to occur. In reality I have to admit that there have always been different quality levels in the products available to the consumer, and while the bewildering price ranges found for most products these days make buying the right item really tricky, and by the right item I mean an item whose quality is commensurate with the price paid, it is simply naive to think that it was ever easier.

It was only easier because there was no choice; in other words, if you were a worker you could only afford the cheapest stuff, and that is what you bought. It's only a modern dilemma that we have this paradox of choice, which makes discerning how much of your money goes on quality and how much goes on paying the brand premium almost impossible.

To a certain extent this is ameliorated by the various product reviews, but product reviews are no panacea, as it is just as likely that the service is being reviewed, which can be helpful but is hardly relevant to the product's quality or lack thereof. Furthermore, a large number of reviews describe personal preference and are normally added very early on, i.e. when no issues have yet been found or the product arrived defective, so they tend to be very Manichean.

There are dedicated people who seem to take reviewing very seriously, a sort of amateur Which? (Consumer Reports if you are in the US), but sadly they are very much the minority, and if you're not contemplating buying something that they have already bought, then you are out of luck.

So what to do?

Wednesday, 25 September 2013

Issues with large solutions in MS Dynamics CRM 2011

We have an MS Dynamics CRM 2011 one-box Test environment, i.e. CRM and SQL Server on the same box, don't ask why, and our main solution is a good 10+ MB zipped up, which means that it sometimes takes a few attempts to get the solution imported.

As of late, my lack of commitment to scientific inquiry and betterment of the world continues to show as I really haven't tested this thoroughly enough, but here it goes anyway.

The problem seemed to be that the W3WP process was using almost all the available memory on the server, which resulted in timeouts when running various SQL queries. At least that's what the trace log said; it's hard to trust a log that seems surprised that there are no errors, but I digress.

The solution was to set upper and lower limits on SQL Server's memory. To be fair, I think the problem was the lower limit, but it makes sense to limit memory usage at the high end as well, lest SQL Server think all the memory is for itself.

EXEC sys.sp_configure N'show advanced options', N'1'  RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'min server memory (MB)', N'512'
GO
EXEC sys.sp_configure N'max server memory (MB)', N'2048'
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'show advanced options', N'0'  RECONFIGURE WITH OVERRIDE
GO
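To confirm that the new limits have been applied, the configured and running values can be read back from sys.configurations with something like:

```sql
-- Check the configured value vs the value currently in use
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN (N'min server memory (MB)', N'max server memory (MB)')
```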
For the record, the server had 4 GB of RAM, which could well be the source of the issue in the first place, i.e. this might not happen on a server with 8 GB of RAM.

We've not had any of these issues on our OAT environment, which features separated CRM and SQL boxes, each with 8 GB of RAM, so hopefully setting limits to the memory used by SQL server was the solution to the problem.

Friday, 20 September 2013

Updates are currently disallowed on GET requests. To allow updates on a GET, set the 'AllowUnsafeUpdates' property on SPWeb.

So today I hit a limit in SharePoint when listing items from a library:
The attempted operation is prohibited because it exceeds the list view threshold enforced by the administrator
The solution is simple, just increase the number of items, so from the Central Administration Site:
  1. Application Management -> Manage Web Application and select your web application
  2. In the Ribbon, click on General Settings drop-down and choose “Resource Throttling”.
  3. In the “List View Threshold”, increase the value
The problem I was having was that when I tried to do this I would get the following error:
Updates are currently disallowed on GET requests.  To allow updates on a GET, set the 'AllowUnsafeUpdates' property on SPWeb.
The solution, from the SharePoint PowerShell console:
$sp = get-spwebapplication https://myapp
$sp.HttpThrottleSettings
$sp.Update()
The problem seems to be related to the web application not having a value for HttpThrottleSettings, which will be set by running the above commands.

Sunday, 15 September 2013

Issues with Word Automation Services in SharePoint 2010

This week we had an issue with Word Automation Services on one of our test servers, where our custom code (really boilerplate code, see below) would fail on the second line:

var context = SPServiceContext.GetContext(SPsite);
var wsaProxy = (WordServiceApplicationProxy)context.GetDefaultProxy(typeof(WordServiceApplicationProxy));

Since the same code was working fine in our development environment, it was clear that it was not the code that was at fault, but our SharePoint configuration.

The issue was that the Word Automation Services Application had not been configured to be added to the default proxy list, see screenshot below, and thus the code was failing to get the proxy.


Note that adding Word Automation Services is done from the Central Administration website:
Central Administration -> Manage Service Applications -> New Word Automation Service

Tuesday, 10 September 2013

Add HTTPS/SSL binding to a website in IIS from PowerShell

Edit:

I've created a PowerShell module to create and remove websites that includes this function, see this post.

Since Wix seems to be frowned upon at work, I have been looking at PowerShell as a replacement to try to automate deployment of builds.

This little script will set the HTTPS binding. The certificate thumbprint is needed but the rest of the parameters are optional, defaulting to the most common option.

param ([String]$thumbprint, [String]$sitename="Default Web Site", [int]$port=443, [String]$hostheader)

if (-not($thumbprint))
{
  Write-Error "Certificate Thumbprint is needed"
  exit
}

Import-Module WebAdministration

If (-not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator"))
{
    Write-Warning "Run this script with elevated permissions"
    exit
}

function AddHTTPSBinding([String]$thumbprint, [String]$sitename, [int]$port, [String]$hostheader)
{
    $cert = Get-ChildItem cert:\LocalMachine\My | ?{$_.Thumbprint -eq $thumbprint}

    if (-not($(gci iis:\sslbindings | ? {$_.Port -eq $port})))
    {
        New-Item IIS:\SslBindings\0.0.0.0!$port -Value $cert | out-null

        if ($hostheader)
        {
            New-ItemProperty $(join-path iis:\Sites $sitename) -name bindings -value @{protocol="https";bindingInformation="*:$($port):$($hostheader)";certificateStoreName="My";certificateHash=$thumbprint}
        }
        else
        {
            New-ItemProperty $(join-path iis:\Sites $sitename) -name bindings -value @{protocol="https";bindingInformation="*:$($port):";certificateStoreName="My";certificateHash=$thumbprint}
        }
    }
    else
    {
        Write-Warning "SSL binding already exists on port $port"
    }
}

AddHTTPSBinding $thumbprint $sitename $port $hostheader

There is a New-WebBinding cmdlet in the WebAdministration module, but I think it needs to be used in conjunction with the Set-WebBinding to set the certificate and certificate store.
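For reference, this is roughly what the New-WebBinding route might look like; a sketch I haven't tested, with the site name and $thumbprint as placeholders. The certificate still has to be attached to the binding as a second step:

```powershell
Import-Module WebAdministration

# Create the HTTPS binding on the site (no certificate attached yet)
New-WebBinding -Name "Default Web Site" -Protocol https -Port 443

# Attach a certificate from the LocalMachine\My store to the new binding
$binding = Get-WebBinding -Name "Default Web Site" -Protocol https
$binding.AddSslCertificate($thumbprint, "My")
```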

Thursday, 5 September 2013

Issues with solutions in MS Dynamics CRM 2011 - Maintaining vs Overwriting Customizations

The ongoing saga of the CRM solutions continues; MS Dynamics CRM 2013 can't come soon enough, and hopefully there will be improvements in this area.

Having undertaken no rigorous testing whatsoever, we have determined that overwriting customizations is slower than maintaining customizations when importing a solution, so we normally try to avoid overwriting customizations. The problem is what CRM considers a customization.

I haven't really done a serious investigation, but a colleague did a semi-serious one, and he found that for workflows the simple action of deactivating and then reactivating one was enough for CRM to think it had been changed, and thus it would not be updated if the solution was imported maintaining customizations.

I guess the modifiedon attribute is changed when a workflow is deactivated and that must explain why CRM thinks it has been changed and thus will not modify it.

Like I said, this is all preliminary, so take it with a pinch of salt. Good thing that MS Dynamics CRM 2011 has just been released; I'm sure this post will save people loads of headaches.

Saturday, 31 August 2013

ADFS issues - ID3242: The security token could not be authenticated or authorized.

What a load of fun I had yesterday with ADFS. For some unknown reason, previously working web services stopped working, and after a lot of pleading we managed to get the logs from the ADFS server, which showed this error:

 ID3242: The security token could not be authenticated or authorized.

After a lot of soul searching and hair pulling, we realized that the issue might be with the encryption certificate as the ADFS server cannot get to the CRL distribution point of the encryption certificate, due to the firewall.

This can be sorted out with these commands:
Add-PSSnapin Microsoft.ADFS.PowerShell (Import-Module ADFS - if using Win2k12 R2)  
Set-ADFSRelyingPartyTrust -TargetName <name> -EncryptionCertificateRevocationCheck None 
Set-ADFSRelyingPartyTrust -TargetName <name> -SigningCertificateRevocationCheck None 
We also set the signing certificate revocation check to none; I think this is not needed, but there seems to be some reluctance to remove it.

Edit:

I write most of my posts well in advance and I'm not 100% sure that this is entirely correct. I'd like to say that I will check to make sure, but it's extremely unlikely.

Edit 2:

In our case it seems that this is indeed the solution, as the ADFS server cannot get to the CRL Distribution Point, which causes issues :), and that is why disabling the revocation checks on the certificates works.

Monday, 26 August 2013

Issues with solutions in MS Dynamics CRM 2011 - Be careful when renaming custom workflow activities

We have a couple of custom workflow activity libraries, one of them a bit like a helper library with about 10 custom activities, and yesterday I decided to screw it all up, I mean, I had a brilliant idea. One of the custom activities was confusingly named, as it had been extended from its original purpose of retrieving an entity reference to a custom entity to actually creating that custom entity if it could not be found. So far so good.

I then thought that a bit of control would be nice, so I added a new argument so that we could choose whether the custom entity would be created if it wasn't found.

I updated the assembly on the server using the plugin registration tool and nothing happened; the new argument would not appear. Cue bouncing of the CRM services: still nothing. IIS went down and up: still nothing. The server did the same and nothing.

So bullet-biting time then: I removed the custom workflow activity from the three dialogs where it was being used, thankfully at the very end of the dialogs, then deleted it using the plugin registration tool, updated the assembly again, bounced the CRM services and IIS, and it started working.

Incidentally, in theory this should not be needed, i.e. a new argument added to a custom workflow activity should just show up in CRM, but perhaps that only holds if it's not in use anywhere; I don't really know. Every time I have made a change like this, i.e. added or removed arguments, I have had to go through this rigmarole, which is really annoying.

I fixed the dialogs back to what they were and thought nothing of it, until late this afternoon, when the import of the solution to the test server failed.

We then tried a new import, this time overwriting all customizations, and still it failed, so it was the same procedure as in the development environment: remove the custom workflow activity from the dialogs, remove it from the server using the plugin registration tool, and then finally we were able to import the solution successfully.

The moral of the story is:

Be Careful When Renaming Custom Workflow Activities

Wednesday, 21 August 2013

Editing Active Directory user accounts from PowerShell

For reasons too long to explain, I had to make changes to a few accounts (5+) today, so rather than do them one by one I thought I would try using PowerShell.
Import-Module ActiveDirectory 
Get-ADUser -Filter 'name -like "*service"' | %{Set-ADUser -PasswordNeverExpires $true -Identity $_.Name}
No prizes for guessing what the change needed was.

Friday, 16 August 2013

Using Selenium with Microsoft Dynamics CRM 2011

Earlier this week I was asked to look at the possibility of using Selenium with Microsoft Dynamics CRM 2011, since it now supports Firefox (nothing like a clued-up test manager). At any rate, I thought I would give it a try.

The problem was that recording tests with the Selenium IDE wasn't working, as I was hitting constant JavaScript errors, so I decided to use the IDE as a sort of guide and then modify the generated code to get it to work.

The test was to generate an entity (change) off another (callback) and then check that, depending on the change type (field pre_type), various workflows and/or plugins would trigger.

Why this needed doing with Selenium is beyond me, but there you go.

I won't go into details about the entities, but suffice to say that callback is the main entity in the system, there is a 1:N relationship between callback and change and changes can be created from the callback form.

So, I installed Selenium, downloaded the IE driver and got coding:

using System;
using System.Text;

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.IE;
using OpenQA.Selenium.Support.UI;
using System.Runtime.InteropServices;
using System.Security.Cryptography;
using System.Threading.Tasks;
using System.Threading;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            IWebDriver driver = new InternetExplorerDriver(@"C:\Users\john\Downloads\selenium-dotnet-2.32.1\IEDriverServer_Win32_2.32.3\");
            driver.Url = "https://devcrm.dev.local//main.aspx?skipNotification=1";
            driver.FindElement(By.CssSelector("#pre_callback > nobr.ms-crm-NavBar-Subarea-Title")).Click();

            driver.SwitchTo().Frame("contentIFrame");

            driver.FindElement(By.Id("crmGrid_findCriteria")).Clear();
            driver.FindElement(By.Id("crmGrid_findCriteria")).SendKeys("*William*");
            driver.FindElement(By.Id("crmGrid_findCriteriaButton")).Click();

            driver.FindElement(By.Id("gridBodyTable_primaryField_{B2C895DC-DEAD-BEEF-9B08-F05056B2009F}_0")).Click();

            WaitForNewWindow(driver, 2);
            driver.SwitchTo().Window(driver.WindowHandles[1]);
            driver.SwitchTo().Frame("contentIFrame");

            driver.FindElement(By.Id("nav_pre_pre_callback_pre_change")).Click();
            driver.SwitchTo().DefaultContent();

            driver.FindElement(By.Id("pre_change|OneToMany|SubGridAssociated|Mscrm.SubGrid.pre_change.AddNewStandard-Large")).Click();

            WaitForNewWindow(driver, 3);

            driver.SwitchTo().Window(driver.WindowHandles[2]);
            driver.SwitchTo().Frame("contentIFrame");
            driver.FindElement(By.Id("DateInput")).SendKeys(DateTime.Now.ToString("dd/MM/yyyy"));
            driver.FindElement(By.Id("pre_changedetails")).SendKeys("Selenium Attack");

            for (int i = 1; i < 6; i++)
            {
                SelectDropDown(driver, "pre_type", i);
                driver.SwitchTo().DefaultContent();
                driver.FindElement(By.Id("pre_change|NoRelationship|Form|Mscrm.Form.pre_change.SaveAndClose-Large")).Click();
            }
            driver.Quit();

        }

        private static void WaitForNewWindow(IWebDriver driver, int windowNumber)
        {
            while (driver.WindowHandles.Count != windowNumber)
            {
                Thread.Sleep(133);
            }
        }

        private static void SelectDropDown(IWebDriver driver, string fieldName, int selection)
        {
            IWebElement sourceWeb = driver.FindElement(By.Id(fieldName));
            SelectElement source = new SelectElement(sourceWeb);
            source.SelectByIndex(selection);
        }
    }
}

There is no reason why this could not be done as a unit test, but I thought it would be easier to distribute to the testers as a console app (it does need a lot of work, I know).

I have to say that I found it extremely flaky, in fact it seemed to need two runs, one to warm up and then it would almost always work.

Since I haven't used Selenium much, I can't say how reliable or otherwise it is in general, but with the IE driver it was not deemed suitable for the testers.

I think it can be used for early-morning checks and things like that, but not for automated testing; all in all it was a big disappointment.

Sunday, 11 August 2013

Sed equivalent in PowerShell

A few days back I was trying to fix some issues with our Visual Studio solution, where the hint paths were all wrong, so I thought I would try to use PowerShell:
ls -recurse -Include *.csproj | % {sp $_ isreadonly $false; (Get-Content $_) -replace "here","there" | Set-Content -Path $_}
A few comments to make:

% is an alias for foreach-object
sp is an alias for Set-ItemProperty
-replace allows using regular expressions

The only downside was that I still had to check the projects back into TFS manually; for some reason the TFS Power Tools would not allow me to check out more than two projects at the same time. On the plus side, this is a good equivalent to sed in PowerShell.

Tuesday, 6 August 2013

PowerShell one-liner to check that a server is listening on a particular port

A lot of times I find myself trying to RDP to a box following a reboot, only to be denied because, although the box is pinging, not all services are up yet. This handy one-liner helps by displaying errors until a connection on the port can be made.
$s = New-Object system.net.sockets.tcpclient; while (-not ($s.Connected)){$s.Connect("10.10.10.125",3389)}
Needless to say, if you change the port you can test any other service.

Thursday, 1 August 2013

Adding comments to fetchxml queries in MS Dynamics CRM 2011

Today I learnt that it is possible to embed comments in fetchxml queries, which when you think about it is obvious, but I didn't know.

I think this can be really useful for optionsets. Mind you, if you are lazy/pressed for time enough that you don't create an enum for your optionsets, then it's unlikely that you are going to bother with comments, but then again you might, as adding a comment is certainly quicker than creating an enum.

Exempli Gratia:

<fetch mapping="logical" count="50" version="1.0">
 <entity name="h2h_claim">
  <filter>
   <!--2 equals customers with a medium risk rating, i.e. at least 1 claim in the last 6 months-->
   <condition attribute="h2h_risk" operator="eq" value="2" />
  </filter>
 </entity>
</fetch>
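By way of comparison, the enum alternative might look like this in C#. Only the value 2 is taken from the query above; the other names and values are made up for illustration:

```csharp
// Hypothetical enum mirroring the h2h_risk optionset,
// so magic numbers like 2 get a name in code.
public enum RiskRating
{
    Low = 1,
    Medium = 2, // at least 1 claim in the last 6 months
    High = 3
}
```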

Saturday, 27 July 2013

Debugging PowerShell scripts

This week I've been working on a couple of PowerShell scripts to automate a few tasks around the deployment of an application, and because I didn't know any better, I returned to the good old bad technique of writing out variable values to see what the problem was. It turns out that it is possible to debug PowerShell scripts without using the ISE, which was crashing for me every time I tried to run it.

Enter the Set-PSBreakpoint cmdlet:


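For instance, breakpoints can be set on a line, a command or a variable (the script path here is made up):

```powershell
# Break at line 10 of the script; a debugger prompt appears when hit
Set-PSBreakpoint -Script C:\scripts\deploy.ps1 -Line 10

# Break whenever the variable $siteName is written to
Set-PSBreakpoint -Script C:\scripts\deploy.ps1 -Variable siteName -Mode Write

# Break whenever the Set-Content cmdlet is called
Set-PSBreakpoint -Command Set-Content
```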
It's worth noting that you can set multiple breakpoints with the same command, and as you can probably imagine they can be set on lines, commands or variables; for more details see this:
Get-help Set-PSBreakpoint -examples
In order to remove all breakpoints, just use:
Get-PSBreakpoint | ForEach-Object {Remove-PSBreakpoint $_}
Or delete a single one
Remove-PSBreakpoint -id 0
Hope this helps; if it doesn't, use PowerShell ISE. Don't waste your time adding write-host or echo statements to output values like I did this week.

Monday, 22 July 2013

FetchXml linked entity limit in MS Dynamics CRM 2011

I discovered this today: it's only possible to have 9 linked entities in a fetchXml query. I suspect that this is due to laziness, as the AliasedValue only seems to go up to 9.

I might be wrong and there could be a genuine reason, but ....
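For context, it is the link-entity element that counts towards the limit, so a query may contain at most nine of these in total; the stock account/contact relationship below is just an illustrative example:

```xml
<fetch mapping="logical" version="1.0">
 <entity name="account">
  <attribute name="name" />
  <!-- each link-entity, nested or not, counts towards the limit of 9 -->
  <link-entity name="contact" from="parentcustomerid" to="accountid" alias="c1">
   <attribute name="fullname" />
  </link-entity>
 </entity>
</fetch>
```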

Wednesday, 17 July 2013

List all checked-out files in TFS


If you, like me, have used various server workspaces to develop on, then this command might come in useful to see which files you have checked out where.

(needs to be run from Visual Studio’s console)
tf status <yoursourceroot> /user:<youruser> /recursive /format:detailed
Run this to get the same output for all users:
tf status <yoursourceroot> /user:* /recursive /format:detailed

Friday, 12 July 2013

Hosting a RESTful JSON WCF Service from a console app or a windows service.

I've been working for a while on an application, too long to explain what it actually does, but the bottom line is that it requires, or at least could benefit from, a RESTful WCF service hosted on both HTTP and HTTPS endpoints.

I toyed with the idea of doing the application in Python as it uses some Python code but I decided to stick with what I knew as I wanted to finish it quickly. At any rate, here is the code:

This is simply shown as an example of how it could be done. If you follow it, your endpoint will be listening on http://<hostname>/store/ and can be invoked by simply navigating to it like this:

http://<hostname>/store?page=url 

In order for the application to listen on HTTPS you will need a valid certificate in your certificate store. The subject name should match the hostname of the machine running this application, and then this should work:

https://<hostname>/store?page=url 

First the interface:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace WcfJsonRestService
{
    [ServiceContract]
    public interface IStore
    {
        [OperationContract]
        bool Store(string item);
    }

}
Then the class implementing the interface:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;
using System.Configuration;

namespace WcfJsonRestService
{
   
    public class Store : IStore
    {
        [WebInvoke(Method = "GET",
                    ResponseFormat = WebMessageFormat.Json,
                    UriTemplate = "store?page={item}")]
        public bool Store(string item)
        {
            //do stuff here

            return true;
        }

    }
}
And finally a Console application that hosts the service.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceProcess;
using System.Text;
using System.Threading.Tasks;

namespace WcfJsonRestService
{
    class Program
    {
        static void Main(string[] args)
        {

            try
            {
                using (ServiceHost host = new ServiceHost(typeof(Store)))
                {

                    AddServiceEndPoint(host, "https://{0}/store", true, "change me");
                    AddServiceEndPoint(host, "http://{0}/store", false);

                    host.Open();

                    Console.WriteLine("Service host running......");
                    Console.WriteLine("Press Any key at any time to exit...");

                    Console.Read();

                    host.Close();
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
                Console.Read();
            }


        }

        private static void AddServiceEndPoint(ServiceHost host, string url, bool useSSLTLS, string certSubjectName="")
        {
            string addressHttp = String.Format(url,
                System.Net.Dns.GetHostEntry("").HostName);


            WebHttpBinding binding;

            if (useSSLTLS)
            {

                binding = new WebHttpBinding(WebHttpSecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;
                binding.HostNameComparisonMode = HostNameComparisonMode.WeakWildcard;
                binding.CrossDomainScriptAccessEnabled = true;
            }
            else
            {
                binding = new WebHttpBinding(WebHttpSecurityMode.None);
                binding.CrossDomainScriptAccessEnabled = true;
            }

            // You must create an array of URI objects to have a base address.
            Uri uri = new Uri(addressHttp);
            Uri[] baseAddresses = new Uri[] { uri };

            WebHttpBehavior behaviour = new WebHttpBehavior();
            // Add an endpoint to the service. Insert the thumbprint of an X.509 
            // certificate found on your computer. 
            host.AddServiceEndpoint(typeof(IStore), binding, uri).EndpointBehaviors.Add(behaviour);

            if (useSSLTLS)
            {
                host.Credentials.ServiceCertificate.SetCertificate(
                    StoreLocation.LocalMachine,
                    StoreName.My,
                    X509FindType.FindBySubjectName,
                    certSubjectName);
            }
        }
    }
}

Alternatively, the WCF service can be hosted by a Windows service. Code behind for windows service here:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Linq;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceProcess;
using System.Text;
using System.Threading.Tasks;
using StoreAndConvert.WCFService;
using System.Security.Cryptography.X509Certificates;
using System.Configuration;


namespace StoreAndConvert.WindowsService
{
    public partial class Store : ServiceBase
    {

        string certSubjectName = string.Empty;

        ServiceHost host;

        public Store()
        {
            InitializeComponent();
        }


        protected override void OnStart(string[] args)
        {
            try
            {
                //Debugger.Launch();
                certSubjectName = ConfigurationManager.AppSettings["CertificateSubjectName"];

                host = new ServiceHost(typeof(StoreUrls));

                AddServiceEndPoint(host, "https://{0}/storeurl", true, certSubjectName);
                AddServiceEndPoint(host, "http://{0}/storeurl", false);

                host.Open();

                Trace.WriteLine("Service host running......");
                Trace.WriteLine("Listening on");

                foreach (ServiceEndpoint sep in host.Description.Endpoints)
                {
                    Trace.WriteLine(string.Format("endpoint: {0} - BindingType: {1}",
                        sep.Address, sep.Binding.Name));
                }
            }
            catch (Exception ex)
            {
                Trace.WriteLine(ex);
            }

        }

        protected override void OnStop()
        {
            try
            {
                if (host != null)
                {
                    host.Close();
                }
            }
            catch (Exception ex)
            {
                Trace.WriteLine(ex);
            }
        }

        private void AddServiceEndPoint(ServiceHost host, string url, bool useSSLTLS, string certSubjectName = "")
        {
            string addressHttp = String.Format(url,
                System.Net.Dns.GetHostEntry("").HostName);

            WebHttpBinding binding;

            if (useSSLTLS)
            {
                binding = new WebHttpBinding(WebHttpSecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;
                binding.HostNameComparisonMode = HostNameComparisonMode.WeakWildcard;
                binding.CrossDomainScriptAccessEnabled = true;
            }
            else
            {
                binding = new WebHttpBinding(WebHttpSecurityMode.None);
                binding.CrossDomainScriptAccessEnabled = true;
            }

            // You must create an array of URI objects to have a base address.
            Uri uri = new Uri(addressHttp);
            Uri[] baseAddresses = new Uri[] { uri };

            WebHttpBehavior behaviour = new WebHttpBehavior();
            // Add an endpoint to the service. Insert the thumbprint of an X.509 
            // certificate found on your computer. 
            host.AddServiceEndpoint(typeof(IStoreUrls), binding, uri).EndpointBehaviors.Add(behaviour);

            if (useSSLTLS)
            {
                host.Credentials.ServiceCertificate.SetCertificate(
                    StoreLocation.LocalMachine,
                    StoreName.My,
                    X509FindType.FindBySubjectName,
                    certSubjectName);
            }
        }
    }
}

Wednesday, 10 July 2013

Run Wix installer using elevated permissions

In my last post I talked about setting a certificate binding for an IIS website from a Wix installer, which required elevated permissions in order for the operation to work.

The solution involved checking that the user was running using elevated permissions, which was simple enough but it turns out there is a far neater solution to achieve this:

 <Package InstallerVersion="200" Compressed="yes" InstallScope="perMachine" InstallPrivileges="elevated" />

Sunday, 7 July 2013

Assign Certificate (Set HTTPS Binding certificate) to IIS website from Wix Installer

I'm working on this project where we have a secure website and I was tasked with creating an installer for it. After quite a few searches and not coming up with any results I went down the Custom Action route.

Not shown here is how to install the website for which we are modifying the binding.

This is very simple; it just uses the IIS server manager to set the certificate binding. Note that since this operation requires elevated permissions, there is a check to ensure that the user is running with them; if this is not the case, the NotElevated custom action will be triggered, an error message displayed and the installation rolled back.

This is the Custom Action code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Deployment.WindowsInstaller;
using Microsoft.Web.Administration;
using System.Security.Cryptography.X509Certificates;
using System.Diagnostics;
using System.Security.Principal;

namespace Installer.CustomActions
{
    public class CustomActions
    {
        const string protocol = "https";
        const string bindingPattern = "*:{0}:";


        [CustomAction]
        public static ActionResult UpdateBinding(Session session)
        {
            ActionResult result = ActionResult.Failure;
            session.Log("Start UpdateBinding.");
            if (CheckRunAsAdministrator())
            {
                bool outcome = UpdateBinding("Portal", protocol, string.Format(bindingPattern, session["SSLPORT"]), session["CERT"], session);
                if (outcome) { result = ActionResult.Success; }
            }
            else
            {
                session.Log("Not running with elevated permissions. STOP");
                session.DoAction("NotElevated");
            }
            session.Log("End UpdateBinding.");
            return result;
        }

        private static bool UpdateBinding(string sitename, string protocol, string port, string certSubject, Session session)
        {
            bool result = false;
            session.Log(string.Format("Binding info (Port) {0}.", port));
            session.Log(string.Format("Certificate Subject {0}.", certSubject));

            using (ServerManager serverManager = new ServerManager())
            {
                Site site = serverManager.Sites.Where(x => x.Name == sitename).SingleOrDefault();

                if (site == null)
                {
                    session.Log(string.Format("Could not find a site named {0}.", sitename));
                    return false;
                }

                X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
                store.Open(OpenFlags.OpenExistingOnly | OpenFlags.ReadWrite);

                var certificate = store.Certificates.OfType<X509Certificate2>().FirstOrDefault(x => x.Subject == certSubject);

                if (certificate != null)
                {
                    session.Log(string.Format("Certificate - Friendly Name: {0}. Thumbprint {1}.", certificate.FriendlyName, certificate.Thumbprint));

                    site.Bindings[0].CertificateHash = certificate.GetCertHash();
                    site.Bindings[0].CertificateStoreName = store.Name;
                    site.Bindings[0].BindingInformation = port;

                    serverManager.CommitChanges();
                    result = true;
                }
                else
                {
                    session.Log(string.Format("Could not find a certificate with Subject Name: {0}.", certSubject));
                }

                store.Close();
            }
            return result;
        }

        /// <summary>
        /// Check that process is being run as an administrator
        /// </summary>
        /// <returns></returns>
        private static bool CheckRunAsAdministrator()
        {
            var identity = WindowsIdentity.GetCurrent();
            var principal = new WindowsPrincipal(identity);
            return principal.IsInRole(WindowsBuiltInRole.Administrator);
        }
    }
}
and here is the Wix markup that uses the above custom action:
<Product ....>
<!--All the rest of the stuff-->

    <Binary Id="CA" SourceFile="$(var.Installer.CustomActions.TargetDir)Installer.CustomActions.CA.dll"/>

    <CustomAction Id="UpdateBinding" BinaryKey="CA" DllEntry="UpdateBinding" Execute="immediate" Return="check" />

    <CustomAction Id="NotElevated" Error="Ensure that the Installer is Run with elevated permissions (i.e. Run as Administrator)" />

    <InstallExecuteSequence>
      <Custom Action="UpdateBinding" After="InstallFinalize">NOT Installed</Custom>
    </InstallExecuteSequence>
</Product>

Tuesday, 2 July 2013

Assert.AreEqual() failing for strings that are equal

Today I almost lost it while doing the simplest of unit tests.

In essence, we had a plugin that would fire on an entity being updated and would set the name of a custom entity to a particular string retrieved from a remote service. The thing was that even though the strings were the same, Assert.AreEqual() was failing.

After many attempts with various StringComparison options and using Trim, in a fit of desperation I created a method to check each character in the actual string against the expected string and, lo and behold, they were different.

The actual string, coming from CRM, used character 160, which is a non-breaking space, while the C# code used character 32, which is a regular space.

The solution was to replace character 160 with character 32; now the unit tests pass.

Code:

const string FirstName="A Random Name";

[TestMethod]
public void CheckRuleName()
{
    Entity entity = Service.Retrieve("H2H_rule", RuleId, new Microsoft.Xrm.Sdk.Query.ColumnSet("H2H_name"));
    entity = UpdateEntity(entity, RuleId);
 
    string result = entity.Attributes["H2H_name"].ToString().Replace((char)160,(char)32);
 
    Assert.AreEqual(FirstName, result);
}
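For reference, the character-by-character check mentioned above can be sketched like this (a minimal standalone version; the method name and console output are my own, not the code from the actual project):

```csharp
using System;

class StringDiff
{
    // Returns the index of the first character at which the two strings
    // differ, or -1 if they are identical. Printing the numeric character
    // codes makes invisible differences (e.g. char 160 vs char 32) obvious.
    public static int FirstDifference(string expected, string actual)
    {
        int length = Math.Min(expected.Length, actual.Length);
        for (int i = 0; i < length; i++)
        {
            if (expected[i] != actual[i])
            {
                Console.WriteLine("Index {0}: expected char {1}, actual char {2}",
                    i, (int)expected[i], (int)actual[i]);
                return i;
            }
        }
        // Same prefix: equal only if the lengths also match.
        return expected.Length == actual.Length ? -1 : length;
    }

    static void Main()
    {
        string expected = "A Random Name";
        string actual = "A\u00A0Random Name"; // non-breaking space, as returned by CRM
        Console.WriteLine(StringDiff.FirstDifference(expected, actual)); // prints 1
    }
}
```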

Thursday, 27 June 2013

Domain Issues with NetworkCredential class in C#

On Friday we had some interesting issues related to the NetworkCredential class.

We were implementing some functionality that was very similar to an already existing piece of functionality; in essence, we were calling a third-party web service that required authentication. In reality it's not third party, as we have the code for it, but it belongs to a different application, so for all intents and purposes we treat it as a third-party service, i.e. a black box.

Since we had some unit tests for this call, we ran through them and found an issue: the call would not authenticate to the web service, so the test would fail.

We checked through the application and the web service was working fine, which should have led us to believe that there was something wrong with the unit tests. Instead, we assumed that the unit tests had been working before and that something environmental was causing the issue.

As it turns out, somebody had changed the unit tests and not bothered to run them. The issue was the following:
var cred=new NetworkCredential(@"dev\testuser", "ReallySecurePass1"); 
instead of:
var cred=new NetworkCredential("testuser", "ReallySecurePass1","dev");
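The two-argument constructor does not split a down-level logon name (domain\user) into its parts; the whole string is simply stored as the user name and the Domain property is left empty, which is why authentication failed in our case. A quick sketch illustrating the difference (credential values are made up):

```csharp
using System;
using System.Net;

class CredentialDemo
{
    static void Main()
    {
        // The whole string becomes the user name; Domain stays empty.
        var wrong = new NetworkCredential(@"dev\testuser", "ReallySecurePass1");
        Console.WriteLine("'{0}' / '{1}'", wrong.UserName, wrong.Domain); // 'dev\testuser' / ''

        // The three-argument overload sets the Domain property explicitly.
        var right = new NetworkCredential("testuser", "ReallySecurePass1", "dev");
        Console.WriteLine("'{0}' / '{1}'", right.UserName, right.Domain); // 'testuser' / 'dev'
    }
}
```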

Saturday, 22 June 2013

News is bad for you – and giving up reading it will make you happier

A few months back I read this article about how news is bad for you, and I thought I would give the advice a try to see what would happen: I would give up the news, sort of. Would it result in any kind of improvement in my day-to-day? Would it make me more .. or less ..?

The first thing to point out is that I haven't completely cut myself out of the news cycle. My radio alarm is still set for 07:00 every morning, with the (electronic) dial set to Radio 4, so I do get some news, though not much, as I'm normally out of the door by the time the sports section starts, so this is normally about 25 minutes or less.

Furthermore, I don't actually go out of my way to avoid the news, but since I don't own a television, at most I'm only going to catch the odd glimpse of the news from a TV left on somewhere or a newspaper lying about. In essence, what I have done is stop browsing news websites, mostly the BBC and various other newspapers here and abroad.

So, have I had an epiphany? Has my (mild) depression lifted? Has my productivity increased 2-fold? 3-fold? 10-fold? Has anything changed at all?

Perhaps, unsurprisingly, there is very little that has changed in my life. I think this is probably down to two main facts:

  1. Rolf Dobelli is mostly right in his assertion that we don't really need the news.
  2. I avoid local news like the plague.
I admit that I have not completely cut myself off from the news, so perhaps we do need it, but he is correct in that there is hardly a news item that has affected my life in any meaningful way, or that I have benefited from knowing about as early as possible, or from knowing about at all.

It is worthwhile mentioning black swan events and opportunity cost here. The former because it would seem that only such events are worth knowing about as soon as possible; the latter because of all the time spent consuming the news in the vain hope of being ready for the black swan event, which might never come. And even if it does come, will the foreknowledge compensate for all the other things that could have been done with that time (money)?

In the aftermath of the Boston bombing, somebody wrote a blog post/article on the best way of getting the most accurate information about the bombing or any such event. Their suggestion was to cut yourself off from the electronic world, go out with your mates to the park or something, and then read all about it in the morning paper.

Mr Dobelli would probably argue that even reading about it in the morning paper would be a waste of time, which is probably true.

I think number 2 is the key to why I feel very little change in my life without a constant stream of news.

There really is no easy way of saying this, but local news is simply evil. It concentrates, overwhelmingly, on crime, and because by definition it is local, we cannot even dismiss the stories as something that would or could not happen here: it has happened. Furthermore, since we are notoriously bad at probability, reading them is very likely to make us anxious even though nothing, or very little, has changed about the probability of being the victim of a crime. Yet reading about it is likely to make us believe that crime is worse than it actually is, even, and I'm going out on a limb here, worse than we believed before we read the story.

The murder rate in the UK is 1.2 per 100,000 inhabitants, or 12 per million. The local news in my area covers approximately 1,000,000 people (some local media cover smaller areas, of course), which means that on average there will be 12 murders a year, or put another way, 1 per month. Not enough to be a constant worry, but enough to be a constant reminder. Never mind the fact that most of the crime is essentially criminals killing each other. Yes, there are cases where there are random acts against innocent people, but just because you can easily recall an example does not mean it's common; in fact, it's quite the opposite. Media coverage tends to be inversely proportional to the frequency of an event. This is one of the reasons why the London bombings of 7 July 2005 got the coverage they got.

The one positive effect that this voluntary withdrawal from news sites has had on my life is related to my somewhat complicated relationship with sports, which I'm not going to go into here. Suffice to say that not knowing any sports news has left me without those little moments of joy when the results went my way, but also without those loooong periods of annoyance, frustration, irritation and helplessness when they didn't (I am of course exaggerating a little here for effect).

If you consume local news, I do recommend that you stop. Everybody else can probably carry on as they were, but know that being au fait with the latest news is unlikely to be of much use unless your job depends on it, in which case what the hell are you doing reading this blog?