Wednesday, 20 November 2013

ADFS - Turning Debug Tracing on

I was struggling with some ADFS issues last week and decided to turn on debug tracing to see whether it threw any light on why things weren't working. It didn't (the issue was somewhere else), but the procedure might be useful for future reference, so here it goes:

1. Start the event viewer:
Start | Run | eventvwr
2. Disable the AD FS 2.0 Tracing/Debug log.
3. Run the following command with elevated permissions (i.e. Run as Administrator):
wevtutil sl "AD FS 2.0 Tracing/Debug" /l:5
4. Re-enable the AD FS 2.0 Tracing/Debug log.
5. Ensure that Analytic and Debug logs are enabled
  View | Show Analytic and Debug logs
6. Enjoy all the debug output goodness.
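For reference, the same sequence can be done from an elevated PowerShell (or cmd) prompt with wevtutil alone. A minimal sketch using the same log name and level as above; the /e switch disables and then re-enables the log, mirroring steps 2 and 4:

wevtutil sl "AD FS 2.0 Tracing/Debug" /e:false
wevtutil sl "AD FS 2.0 Tracing/Debug" /l:5
wevtutil sl "AD FS 2.0 Tracing/Debug" /e:true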

Friday, 15 November 2013

Exit codes in msi installers created with Wix

There was (is?) a bug in Wix that prevented the successful creation of an SSL binding in IIS 7.5, so to get around this issue I wrote a custom action to do it.

A failure in this custom action will not stop the installation*, but it does drive the exit code, so you end up with an exit code of 1603 even though the product installs. Since the failure relates to setting the certificate for a website, it actually has no visible effect: the newly installed website simply picks up the old certificate.

The problem was in our install scripts, where we check the exit code to ascertain whether the installation was successful. According to the exit code (1603) it wasn't, but the app was actually there, which led to a lot of confusion and head scratching.
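For what it's worth, this is roughly what such a check might look like; a minimal PowerShell sketch where the msi and log file names are made up, and where 0 and 3010 (success, and success but reboot required) are the only exit codes treated as success:

# Hypothetical install script fragment: run the msi and fail on anything other than 0 or 3010
$process = Start-Process msiexec.exe -ArgumentList '/i','Portal.msi','/qn','/l*v','install.log' -Wait -PassThru
if (0, 3010 -notcontains $process.ExitCode)
{
    Write-Error "Installation failed with exit code $($process.ExitCode)"
    exit $process.ExitCode
}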

Thought I'd share in case somebody does something as stupid as I did.

*I've edited the custom action code so that it does not always return success now. There was a perfectly valid reason for the code to always return a success value and I will talk about it as soon as I find it.

Selected output from the install log file:

CustomAction UpdateBinding returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
UpdateBinding. Return value 3.
INSTALL. Return value 3.

Product:  Portal -- Installation failed.
Windows Installer installed the product. Product Name: Portal. Product Version: 1.3.3.7. Product Language: 1033. Manufacturer: Yo! Ltd. Installation success or error status: 1603.

Wednesday, 30 October 2013

Delete documents from SharePoint document library using PowerShell

Today we had to delete a bunch of documents from SharePoint, so I wrote this script to accomplish the task.

A couple of notes about the script:

  1. By default it will not delete anything, it just lists the urls that contain the $document argument.
  2. The $document argument is matched with -like against the item urls, so be careful that it doesn't match more documents than you intend to delete.


param
(
    [string]$url = "https://sp.dev.local",
    [string]$document = "Welcome",
    [bool]$delete = $false
)

# Requires the SharePoint snap-in (already loaded if run from the SharePoint Management Shell)
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$site = Get-SPSite $url

foreach($web in $site.AllWebs) {
    foreach($list in $web.Lists) {
        if($list.BaseType -eq "DocumentLibrary") {
            foreach($item in $list.Items) {
                if($item.Url -like "*$document*")
                {
                    if($delete)
                    {
                        Write-Host "Deleting $($item.Url)"
                        $item.File.Delete()
                    }
                    else
                    {
                        Write-Host "$($item.Url)"
                    }
                }
            }
        }
    }
}
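An example of usage, assuming the script has been saved as DeleteDocuments.ps1 (the file name is just for illustration); the first call only lists the matching urls, the second actually deletes them:

.\DeleteDocuments.ps1 -url "https://sp.dev.local" -document "Welcome"
.\DeleteDocuments.ps1 -url "https://sp.dev.local" -document "Welcome" -delete $true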

 

Friday, 25 October 2013

Issues with TurnOffFetchThrottling in MS Dynamics CRM 2011 - FetchXml Record Limit

I think I found a bug in MS Dynamics CRM 2011 today. We have UR 12 installed on a few of our servers and, for various reasons too long to explain, we had turned off fetch throttling.

This is done by adding a registry value called TurnOffFetchThrottling under HKEY_LOCAL_MACHINE\Software\Microsoft\MSCRM and setting it to 1. It needs to be a DWORD.
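If you prefer to do it from PowerShell rather than regedit, something along these lines should work; a minimal sketch that assumes the MSCRM key already exists, which it will on a CRM server:

# Create (or overwrite) the DWORD value that turns off fetch throttling
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSCRM" -Name "TurnOffFetchThrottling" -Value 1 -PropertyType DWord -Force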

If you do this, then this FetchXml query will not work:
<fetch mapping="logical" count="2147483647" version="1.0"> 
 <entity name="account">
  <attribute name="name" />
 </entity>
</fetch>
It seems that by default MS Dynamics CRM 2011 adds one to the number of results requested. So with fetch throttling on, i.e. the default, the above query is passed to the database like this:
select top 5001 "account0".Name as "name" , "account0".AccountId as "accountid" from AccountBase as "account0" order by "account0".AccountId asc
But if fetch throttling is off and we pass Int32.MaxValue as the number of results required, as in the fetchxml query above, then when Dynamics CRM tries to add one to this value it overflows and the query is never run.

Admittedly it is extremely unlikely that this will be a problem; if you have 2147483647 instances of an entity in your database, I rather suspect that you need to do some archiving, but it still looks like a genuine issue.

It's interesting that the count value uses a signed integer rather than an unsigned one, which would give us twice as many records, i.e. 4294967295, or one record for every two human beings on Earth.

Sunday, 20 October 2013

Developers Vs Developers

I have been meaning to write this post for ages as this is something that I have encountered time and time again during my career.

We have an integration layer between our application and our telephony system. A third party wrote this integration layer: in essence, they expose a web service that we call and we expose a web service that they call. So far so simple; this is a staple of enterprise development and, as I said, I have had to deal with situations like this many times, both inside and outside the company. If you've not done it inside your company and think that none of these problems would occur if you didn't have to deal with the idiots at Third Party Ltd., let me tell you that you will just have to deal with the idiots at the Other Department instead.

At any rate, integration was working fine when somebody realized that the spec called for SSL/TLS rather than clear text. In theory this requirement made sense when the various sides of the integration equation were hosted by two different companies, but a change of architecture meant that they no longer were, so using a secure website for an internal web interface that contained little more than phone numbers and UUIDs seemed like overkill. But the spec is the spec, agile be damned.

So both the web services and the client apps on either side of the integration equation were reconfigured to use SSL/TLS, and this is where the problems started. As is normally the case in these situations, we started blaming each other.

Furthermore, a supplier-customer relationship had been allowed to develop. This is a relationship in which the supplier has the technical know-how, or believes he does, and the customer doesn't, or is at least believed not to by the supplier. Needless to say this wasn't the case, as we shall see, but for various personnel reasons, i.e. developers at our company leaving, this relationship had taken hold, which meant that our evidence carried less weight, as it were: they were already predisposed to assuming that they were talking to yet another developer who didn't have a clue about how the integration was supposed to work. That wasn't true, but it was true that they knew their system and its history, while I knew ours but could not comment on historical decisions, as this had landed on my lap without a handover from somebody who left the company suddenly. Not so suddenly, but I digress.

We both went back to basics to prove our respective theses, i.e. that it was the other party's fault. Because history is written by the victors, you know how this will turn out, but I will continue anyway. The first thing we discovered was another disagreement with the spec, regarding authentication, which I remedied after a lot of hair pulling.

After that, I found that our side was working fine. Furthermore, I used Wireshark to prove that nothing was coming through over the wire on port 443, while traffic was going through on port 80 from the PC that hosted the client app, which meant that failures on their side were not due to our web service. This despite the fact that the app was throwing a 404 error and pointed me to this link.

I mentioned the supplier-customer relationship above because it helps to explain why this evidence, our Wireshark evidence, was ignored: they knew what they were doing, we didn't, so anything coming from our side was tainted.

To further compound the confusion, the client app would sometimes work when tested by them, which made us extremely suspicious, and not work for us, which surely made them more suspicious of our general competence. It was working for them and they were dealing with a bunch of idiots, so as far as they were concerned we were probably doing something stupid.

At this point they were willing to discuss the code that they were using for the first time, and we realized that the source of the issue was their code, or, to be fair, their use of our proxy server when it should not have been used. There were other issues but they are not important.

So I asked them to add this to the system.net element of the client's app.config:

<defaultProxy  enabled="false"/>

Lo and behold, everything started to work fine.

Not sure what the lesson is here. I guess if there is one, it's that appearances do matter, even in a supposedly analytical field like this one.

This link seems tangentially related.

Tuesday, 15 October 2013

Start Windows Services from PowerShell according to StartMode.

In a previous post, I described how to stop and start MS Dynamics CRM services; this post is just a different way of doing it. The reason for looking for a different way is that the Get-Service cmdlet ignores the StartMode, i.e. if a service is disabled, Start-Service will still try to start it and fail. The solution is to use WMI objects:
Get-WmiObject -Class "win32_service" | ? {$_.Name -like "mscrm*" -and $_.StartMode -eq "Auto"} | % {Restart-Service -Name $_.Name}
You might be wondering why this is needed; surely you should just uninstall the disabled service(s), and I would tend to agree, but sometimes it might be necessary to temporarily disable a service, for testing purposes for instance, and by checking the StartMode you will only attempt to start services that can actually be started.
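If you only want to start services that are not already running, rather than restart everything, a small variation along these lines should work (same WMI class, just an extra filter on State):

Get-WmiObject -Class "win32_service" | ? {$_.Name -like "mscrm*" -and $_.StartMode -eq "Auto" -and $_.State -ne "Running"} | % {Start-Service -Name $_.Name}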

Thursday, 10 October 2013

Using Host headers for SSL/TLS in SSRS 2008 R2

In a project I'm working on at the moment, we are using SSRS over SSL. This had been working fine with self-signed certificates, but when we changed the certificates we started getting some issues, and by issues I mean the reports not working properly, as there was a mismatch between the certificate name and the server name. We could not navigate to the Report Server or Report Manager URLs; I did not note down the exact error message, but it was related to a certificate name mismatch causing the connection to be closed.

The logs would show this error:
appdomainmanager!ReportManager_0-3!1a88!10/04/2013-16:22:14:: e ERROR: Remote certificate error RemoteCertificateNameMismatch encountered for url https://ssrsserver.domain.co.uk:501/ReportServer/ReportService2010.asmx.
ui!ReportManager_0-3!1a88!10/04/2013-16:22:14:: e ERROR: System.Threading.ThreadAbortException: Thread was being aborted.
   at System.Threading.Thread.AbortInternal()
   at System.Threading.Thread.Abort(Object stateInfo)
   at System.Web.HttpResponse.End()
   at Microsoft.ReportingServices.UI.ReportingPage.ShowErrorPage(String errMsg)

In order to sort this problem out, I first configured a host header to match the certificate name and then modified the report server configuration file.

1. Run the Reporting Services Configuration Manager.


 2. Select Web Service Url.



 3. Click on Advanced to configure the Web Service Url.


4. Select the http entry and click Edit to add the host header.


 5. Repeat steps 3 & 4 for Report Manager.



6. Edit C:\Program Files\Microsoft SQL Server\MSRS10.MSSQLSERVER\Report Services\ReportServer\rsreportserver.config:
Change the UrlString to reflect the new host header, e.g.
   <UrlString>http://mydomain.co.uk:80</UrlString>
Make sure that you only change the entries that actually have a domain name and are not just showing a url registration. In other words, leave these alone:
<UrlString>https://+:443</UrlString>
Having said that, changing these might allow you to use different host headers for the plaintext and secure websites, but I've not tried it.
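After editing rsreportserver.config it's probably worth restarting the Reporting Services service so that it picks up the change. A quick way to do it from PowerShell, assuming a default SSRS 2008 R2 install (the display name wildcard should also catch named instances):

Restart-Service -DisplayName "SQL Server Reporting Services*"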

Saturday, 5 October 2013

Deploy SharePoint Event Receivers from PowerShell

I thought I would share a script that we use for deploying Event Receivers to SharePoint.

param ([string] $url, [string] $featureName,[string] $solution, [bool] $install, [bool] $uninstall)

if ( -not ($url))
{
 Write-Host "Please enter the Site Url"
 exit
}

# Requires the SharePoint snap-in (already loaded if run from the SharePoint Management Shell)
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

function SelectOperation
{
 $caption = "Select Operation";
 $message = "Select Operation?";
 $InstallMe = new-Object System.Management.Automation.Host.ChoiceDescription "&Install","Install";
 $UninstallMe = new-Object System.Management.Automation.Host.ChoiceDescription "&Uninstall","Uninstall";
 $choices = [System.Management.Automation.Host.ChoiceDescription[]]($InstallMe,$UninstallMe);
 $answer = $host.ui.PromptForChoice($caption,$message,$choices,0)
 return $answer
}

if (-not($install) -and -not($uninstall))
{
 $answer = SelectOperation

 switch ([int]$answer)
 {
   0 {$install=$true}
   1 {$uninstall=$true}
 }

}

if ($install)
{
 Add-SPSolution -LiteralPath $(join-path $(pwd).Path $solution)
 $solutionId = (Get-SPSolution | ? {$_.name -eq  $solution}).Id
 Install-SPSolution -Identity $solutionId -GACDeployment
 Write-Host "Waiting for SharePoint to finish..."
 Sleep(120)
 $featureId = (Get-SPFeature | where {$_.DisplayName -eq $featureName}).Id
 Enable-SPFeature -Identity $featureId -Url $url
}

if ($uninstall)
{
 $featureId = (Get-SPFeature | where {$_.DisplayName -eq $featureName}).Id
 Disable-SPFeature -Identity $featureId -Url $url
 $solutionId = (Get-SPSolution | ? {$_.name -eq  $solution}).Id
 Uninstall-SPSolution -Identity $solutionId
 Write-Host "Waiting for SharePoint to finish..."
 Sleep(120)
 Remove-SPSolution -Identity $solutionId
}
Write-host "All Done."

An example of usage:
.\DeployEV.ps1 -url "http://yoursite" -featureName "receiver1" -solution "receiver1.wsp" -install $true
 Note that the script assumes that the wsp file is in the current working directory, and that -url should be the url of the site where the feature will be enabled (http://yoursite above is just a placeholder).
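And the equivalent call to remove the same event receiver (again, the site url is just a placeholder):

.\DeployEV.ps1 -url "http://yoursite" -featureName "receiver1" -solution "receiver1.wsp" -uninstall $true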

Monday, 30 September 2013

Create and Delete Website from PowerShell.

I thought I would share the whole solution rather than just the SSL Binding code.

Import-Module WebAdministration

function Add-Site([string]$folder, [string]$sitename, [string]$protocol="http", [int]$port, [int]$sslport, [string] $hostheader, [string]$thumbprint, [string]$appPoolName, [hashtable] $appDetails, [string]$version="v4.0")
{
 
 if ( -not ( Get-Website | ? {$_.Name -eq $sitename}))
 {
  if ($hostheader)
  {
   New-Item iis:\Sites\$sitename -bindings @{protocol="$protocol";bindingInformation="*:$($port):$($hostheader)"} -physicalPath $folder
  }
  else
  {
   New-Item iis:\Sites\$sitename -bindings @{protocol="$protocol";bindingInformation="*:$($port):"} -physicalPath $folder
  }
  
  if (-not($thumbprint) -or -not ($sslport))
  {
   Write-Error "Ensure that a Certificate Thumbprint and SSLport are set for HTTPS Bindings"
   Write-Host "Let's clean up a little bit here..."
    Remove-Site $sitename
   exit
  }
  else
  {
   AddSSLBinding $thumbprint $sitename $sslport $hostheader
  }

  if ($appDetails -and $appPoolName)
  {
   CreateAppPool $appPoolName
   SetAppPoolVersion $appPoolName $version
   foreach ($app in $appDetails.GetEnumerator())
   {
    MakeApplication $sitename $app.Name $app.Value 
    SetAppPool $sitename $app.Name $appPoolName
   }
  }
  else
  {
   Write-Warning "The website $sitename has been created with no applications or applicationPools. Nothing wrong with this, just saying"
  }
 }
}

function Remove-Site([string]$sitename, $appPoolName)
{  
  Get-ChildItem IIS:\SslBindings | ? {$_.Sites -eq $sitename} | %{ Remove-Item iis:\sslbindings\$($_.pschildname) -Force -Recurse}
  Get-ChildItem IIS:\Sites\ | ?{$_.Name -eq $sitename} |% { Remove-Item  IIS:\Sites\$sitename  -Force -Recurse}
  Get-ChildItem IIS:\AppPools\ |? {$_.Name -eq $appPoolName} | %{ Remove-Item IIS:\AppPools\$appPoolName -Force -Recurse }  
}

function AddSSLBinding([string]$thumbprint, [String]$sitename, [int]$port, [String]$hostheader)
{
 
 $cert = Get-ChildItem cert:\LocalMachine\My | ?{$_.Thumbprint -eq $thumbprint}
 
 if( -not($(Get-ChildItem iis:\sslbindings| ? {$_.Port -eq $port})))
 {
  New-Item IIS:\SslBindings\0.0.0.0!$port -Value $cert | out-null
  
  if ($hostheader)
  {
   New-ItemProperty $(join-path iis:\Sites $sitename) -name bindings -value @{protocol="https";bindingInformation="*:$($port):$($hostheader)";certificateStoreName="My";certificateHash=$thumbprint} | out-null
  }
  else
  {
   New-ItemProperty $(join-path iis:\Sites $sitename) -name bindings -value @{protocol="https";bindingInformation="*:$($port):";certificateStoreName="My";certificateHash=$thumbprint} | out-null
  }
 }
 else
 {
  Write-Warning "SSL binding already exists on port $port"
 }
}

function MakeApplication([string]$sitename, [string]$applicationName, [string]$folder)
{
  New-Item "IIS:\Sites\$sitename\$applicationName" -physicalPath $folder -type Application | out-null
}

function CreateAppPool([string]$applicationPoolName)
{
  New-Item IIS:\AppPools\$applicationPoolName | out-null
}

function SetAppPool([string]$sitename, [string]$application, [string]$applicationPool)
{
  Set-ItemProperty IIS:\sites\$sitename\$application -name applicationPool -value $applicationPool | out-null
}

function SetAppPoolVersion([string]$applicationPool, [string]$version)
{
  Set-ItemProperty IIS:\AppPools\$applicationPool managedRuntimeVersion $version | out-null
}

Export-ModuleMember -Function Add-Site,Remove-Site
An example of how to use the above module to add a new site, assuming that it's been saved as Module.psm1. The thumbprint needs to be that of a certificate already installed on your server in the LocalMachine\My store. This will create a website with an http binding on port 80 and an https binding on port 443 using the certificate passed in (via its thumbprint). An application and virtual directory called TestService will be created. All that remains is to copy the website files.
Import-Module .\Module.psm1
$path = "F:\TestWebSite"
$testAppDetails = @{"TestService" = "$path\TestService"}
Add-Site -folder $path -sitename "testsite" -protocol "http" -port 80 -sslport 443 -thumbprint "FE1D6F1A5F217A7724034BA42D8C57BEC36DD168" -appPoolName "testapppool" -appDetails $testAppDetails
and an example of how to remove the same site:
Remove-Site -sitename "testsite" -appPoolName "testapppool"

Sunday, 29 September 2013

On Quality

A few months back I hit upon an idea for a rather laborious scheme that would make me not that much money: Selling hard drive magnets on Ebay.

There were approximately 50 or so old hard drives at work, and when I say old, I do mean old. None of them could accommodate more than 18.6 GB of storage with an Ultra2 Wide SCSI interface.

Nobody wanted to take the time to dispose of them properly, which included the unenviable tasks of filling in all the paperwork or indeed running the disks through the data erasure program. I figured that if the disks were not working, nobody had to worry about this.

I set about taking apart the first disk and I hit my first road block: Torx headed screws. I had never had the need to use a Torx screwdriver so I decided to go out and buy a set of Torx screwdrivers.

I did a bit of research online and I wasn't surprised to find such a large disparity in prices: from as low as a few pounds to around one hundred pounds. I was sure that I didn't need to spend £100 on a set of Torx screwdrivers, but how low should I go?

As is my wont, I procrastinated and resolved to do more research on the topic. However, that weekend I stumbled upon a set of Torx screwdrivers at a discount store for £2.99, so I thought that I might as well buy them there and then.

I was fully aware that at this price the quality of the set would leave a lot to be desired but still I managed to suppress this knowledge long enough for me to dismantle 1.5 hard drives, which is when I hit the quality limit of my £2.99 set of Torx screwdrivers.

As I said above, I was not expecting them to last a lifetime and they were a spur of the moment, almost an impulse buy, so it wasn't unexpected, however this left me with a bit of a problem.

I know that if one buys the cheapest, in this day and age with its proliferation of peddlers of cheap junk, one gets poor quality, but is the converse true? In other words, does one get quality by buying expensive stuff? Perhaps more importantly, how much should I have spent on the set given the use I was planning to give it?

It is very easy to determine the quality of items at the bottom of the price scale: they are rubbish, but at least one knows what one is paying for. However, once we leave the safety of the cheaper items, it becomes a lot harder to ascertain how much better something is. Put another way: do you know what you are paying for when you buy an expensive item?

Take the Apple iPod shuffle, which can be obtained for £35 from Amazon. Storage capacity is 2 GB, it has no screen, it's tiny and has some sort of clip mechanism. For a similar price, £36, it is possible to buy a Sansa Clip+ with 8GB of storage, an expansion slot, screen, FM radio and also a clip mechanism. Yes, it's slightly bigger but hardly noticeable and voice commands can be added with RockBox firmware, so are you sacrificing 6 GB of storage for voice command?

The reality is that to a great extent you are paying for the Apple brand, with its design and quasi-religious following, which means that if you don't really care about design and don't think much of Apple as brand then you would be wasting money by going down the iPod shuffle route.

Is there a similar quasi-religious following for, say, Stanley tools? I would rather imagine that this is unlikely to be the case. In fact, from talking to some of my relatives who work or have worked in construction, they seem to pick tool brands mostly through experience. In other words, they tend to favour a brand because it has worked for them in the past, and negative experiences have a much more lasting effect than positive ones:
I spent loads of money on an expensive diamond tipped drill bit set from Black & Decker and it was rubbish. Since then I've always gone with Bosch drill bit sets and power tools.
In truth it might have been the other way round; the point still stands though: a negative experience is a lot more likely to be remembered than a positive one, as the positive one, in this case, simply means having a reliable tool every day for a long time.

Whenever I find myself thinking about quality, I always imagine going back to that Arcadia of the consumer in the days prior to consumerism, whenever they happened to occur. In reality I have to admit that there have always been different quality levels in the products available to the consumer, and while the bewildering price ranges that can be found for most products these days make buying the right item really tricky (by the right item I mean an item whose quality is commensurate with the price paid), it is simply naive to think that it was easier back then.

It was only easier because there was no choice; in other words, if you were a worker you could only afford the cheapest stuff, and that is what you bought. It's only a modern dilemma that we have this paradox of choice, which makes discerning how much of your money goes on quality and how much goes on the brand premium almost impossible.

To a certain extent this is ameliorated by the various product reviews, but product reviews are no panacea, as it is just as likely that the service is being reviewed rather than the product, which can be helpful but is hardly relevant to the product's quality or lack thereof. Furthermore, a large number of reviews describe personal preference and are normally added very early on, i.e. when no issues have yet been found or the product arrived defective, so they tend to be very Manichean.

There are dedicated people who seem to take reviewing very seriously, a sort of amateur Which? (Consumer Reports if you are in the US), but sadly they are very much the minority, and if you're not contemplating buying something that they have already bought, then you are out of luck.

So what to do?