Saturday, 30 May 2015

MOOC done right - Embedded Systems - Shape the World

A few years back I signed up to edX, and at the time there weren't that many courses available, so after the initial excitement of being able to do MIT courses died down, my account lay dormant for quite a while (more on this later). Then, last year, I decided to try to learn Python and, rather than using LTCTHW or Codecademy, I thought I would try a Python course instead.

A few weeks later I got an email alerting me to other courses that might be of interest, and in this email was one that sounded really interesting:
Embedded Systems - Shape the World - UTAustinX -  UT.6.02x
I signed up immediately.

Although the course can be completed without an embedded system, it is, of course, recommended that one is used. Buying instructions for the recommended kit (TI Tiva LaunchPad and some bits and bobs) are provided, and not just for the USA. I found this a really nice detail, as I imagine the majority of people taking the course were not in the USA, and it's just one of the many examples of the involvement and enthusiasm that the course staff exude.

My main problem with MOOCs so far has been a combination of two things. The first is lack of motivation: I find it hard to follow through on a topic that, while interesting, might not be applicable to work, current or future, or even daily life. This is not to say that I only learn stuff that's useful; I don't. My head is filled with objectively useless facts, such as the band gap of GaAs being ~1.42 eV (I did not look it up), Juan Manuel Fangio having won almost half of all the F1 races he participated in, or one of my favourite words, biblioclasm.

The other reason, and this is probably the main one, is lack of time. A lot of the courses suggest a 10-15 hour weekly commitment. This might not sound like much, and in fairness it isn't most of the time, but sometimes it is, and this is where the Embedded Systems - Shape the World course gets it completely and absolutely right. The first seven lessons were available when the course started, and most of the rest of the content was made available shortly afterwards, so that effectively two weeks after the course started, 85+% of the course materials and labs were available.

This is completely at odds with the way most courses release their material, which is done on an almost weekly basis, with homework due almost weekly too. I find this terribly disappointing. What is the point of doing a course online when you basically have to do it at a pace set for you? I appreciate that I'm doing it from the comfort of my home, but even so, this is very likely a major contributory factor in the really poor completion rates of MOOCs. Although I'm not doing the courses for the grades, it's always motivating to get a good grade, and when a course runs on a very tight schedule, a busy week at work or a trip can prevent you from keeping up.

I don't have a CS degree and I've had an interest in low-level programming for a while now, but I'd never really taken the time to explore it in any detail as I'd always found it pretty daunting. In this course, however, concepts ranging from CPU instructions and registers to interrupts and device drivers are explained in a simple and accessible manner.

In fact, it is explained in such a manner that it's made me lose some of my awe for the guys and gals doing the low-level stuff. I realize that this is silly, as the drivers that are part of the course are extremely simple, but it feels as if a veil has been lifted and, beneath it, the truth has been revealed.

Instructions on how to get started and install the IDE, drivers and TExaS software are provided, and I found them easy enough to follow. Those of you out there without a Windows machine might grumble at the lack of direct support, but there are instructions on how to install everything in virtualization software on a Mac. I guess the authors assume that if you're using Linux you don't need any help getting a hypervisor running :)

All labs have a skeleton project for Keil's µVision IDE and a simulation board, which allows the code to be tested before deploying it to the physical board. I often found that working code on the simulation board would fail when deployed to the physical board. This annoyed me a bit at first, but in reality it is no different from the good old:
Well, it works on my machine.
Generally speaking, the code was simply not robust enough and needed tweaking. I imagine there are probably more sophisticated simulators available, but the cost is likely to be prohibitive. This is not unlike developing apps for the myriad Android phones out there.

One thing that was quite surprising at first, although it makes sense since we're so close to the bare metal, is the way everything is referred to by memory address. For instance, if an application needs to use bits 0 and 1 of Port E, this requires knowing the exact address of these bits. Thankfully, these were provided in the skeleton projects, but they can also be looked up in the spec sheets. This is, incidentally, an invaluable skill, given the large catalogue of systems and components out there.

This is a very simple program that flashes an LED connected to bit 1 of Port E, depending on whether a switch connected to bit 0 of Port E is pressed, and I think it illustrates the point above. Also note how the delay function effectively counts the number of cycles.

#define GPIO_PORTE_DATA_R       (*((volatile unsigned long *)0x400243FC))

void Delay1ms(unsigned long msec);

int main(void){
    while(1){
        Delay1ms(100);
        if(GPIO_PORTE_DATA_R & 0x01){
            // Switch pressed: flip the LED (bit 1)
            GPIO_PORTE_DATA_R ^= 0x02;
        }
        else{
            // Lusisti satis, edisti satis, atque bibisti, tempus abire tibi est
            GPIO_PORTE_DATA_R |= 0x02;
        }
    }
}

// Busy-wait delay: counts down roughly the number of cycles per millisecond
void Delay1ms(unsigned long msec){
    long cyclesPerMs = 15933;
    long counter = cyclesPerMs * msec;
    while (counter > 0){
        counter--;
    }
}

I've removed the initialization routine, which, in essence, gets the physical board running and activates the various ports as needed. I found that quite a few of my issues on the physical board were down to problems in the initialization routine, so it's by no means trivial.

The gradual increase in complexity of the physical circuits that needed to be built was very finely tuned. Chapter 8 finally required a circuit to be built, not just using the TI Tiva LaunchPad, and it was really nerve-racking. I don't really know why, as I think it was only £20 worth of kit, but there were diagrams available, and enough warnings regarding what to do and, more importantly, what not to do, that I built the circuit without any issues. The actual building of the circuits ended up becoming the most enjoyable activity of the course.

One of the hardest labs was chapter 10, where a Finite State Machine is used to model a simplified traffic light system for an intersection. I actually really enjoyed this lab, even if it took quite a bit of pen and paper to get the design right in the first place. It also yielded one of my better pictures :). The switches (yellow parts in the middle of the breadboard) model a sensor that detects cars or pedestrians.

Chapter 10 - Modeled  Intersection's Traffic Light System

Chapter 10 - Real System. 
This is not a great picture, but it shows the TI Tiva LaunchPad interfacing with the Nokia 5110 screen. In Chapter 14, an ADC was used to measure distances. This works by moving the slide potentiometer and measuring the changes in voltage, which can then be converted to a distance after suitable calibration.
Chapter 14 - Measuring Gauge
In the penultimate chapter of the course, everything learned during the course is put together in a sort of final project: a games console. This involved designing the game and then putting together the circuit to build it. Although it might sound daunting, as usual there was a lot of help in the form of skeleton code. The hardware part was relatively simple, in that it consisted of putting all that had been learned previously to use. An interesting exercise; you can see the results (not mine, I was too late) here.

The last chapter of the course involved the Internet of Things, which, I have to confess, I haven't done yet, as I've procrastinated on getting the WiFi booster pack for the LaunchPad, and this brings me to another issue with most other courses: the graders.

In other courses I've done, the graders became inactive when the course did and, to be fair, it's the same with this course, but there is a massive difference: the graders in this course work by checking a hash, which is computed by grader software run on the local machine. It is thus entirely possible to check that your programs work as intended regardless of the course status. This was a very welcome novelty for me, and I don't know why it isn't the case for more courses.

I should point out that the last chapter does require access to a server, which, to be fair, could've been mocked to allow offline access. The server is still up 2+ weeks after the course ended.

This post has gone on for far longer than I originally intended and I haven't even talked about electronics, which is a pretty important part of the course, but I will stop here.

I would like to end the post by thanking the staff on the course and particularly Dr Valvano and Dr Yerraballi, for making this course very accessible, really enjoyable and tremendously educational.

I really hope that a more advanced course is made available by Dr Valvano and Dr Yerraballi, at some point in the near future.

This post doesn't do the course justice, go on, just go and take it, you will enjoy it.

Tuesday, 26 May 2015

Create Relying Party Trust for Microsoft Dynamics CRM from Powershell

I've configured claims-based authentication and IFD for MS Dynamics CRM more times than I care to remember, and every time I've done it manually, on the basis that it just doesn't take that long. This is true, but it's also very tedious, so I spent some time creating a script to create the Relying Party Trust needed for MS Dynamics CRM claims-based authentication and IFD to work. Obligatory XKCD.

I've only tried this script with ADFS 3.0 and MS Dynamics CRM 2015, but it should work for MS Dynamics CRM 2013 as well.

It's also possible to pass a file with the claims, using the IssuanceTransformRulesFile and IssuanceAuthorizationRulesFile flags of the Add-AdfsRelyingPartyTrust command instead.

The script should be run after MS Dynamics CRM has been configured for Claims based authentication from the ADFS server.

The script can be also used to create the Relying Party trust for an Internet Facing Deployment and again it needs to be run after IFD has been configured in MS Dynamics CRM.

param ([string]$Name, [string]$Identifier)

if (-not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator"))
{
    Write-Warning "You do not have Administrator rights to run this script!`nPlease re-run this script as an Administrator!"
    break
}

if (-not($Name))
{
    Write-Host "Name is a mandatory parameter. This should be the name of the Relying Party Trust"
    break
}

if (-not($Identifier))
{
    Write-Host "Identifier is a mandatory parameter. This will normally be of the form: https://<fqdn crm>/"
    break
}

$Identifier = $Identifier.Trim("/")

#These are the Transform Rules needed for CRM to work.
$transformRules='@RuleTemplate = "PassThroughClaims"
@RuleName = "Pass through UPN"
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]
 => issue(claim = c);

@RuleTemplate = "PassThroughClaims"
@RuleName = "Pass through primary SID"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid"]
 => issue(claim = c);

@RuleTemplate = "MapClaims"
@RuleName = "Transform Windows Account name to Name"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType);'

#A single Authorization Rule, i.e. let everybody thru. Could tie down further if needed.
$authRules='@RuleTemplate = "AllowAllAuthzRule"
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/permit",
Value = "true");'

#Copied and pasted this from a CRM 2011/ADFS 2.1 RPT
$imperRules ='c:[Type =="http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid", Issuer =~"^(AD AUTHORITY|SELF AUTHORITY|LOCAL AUTHORITY)$" ] => issue(store="_ProxyCredentialStore",types=("http://schemas.microsoft.com/authorization/claims/permit"),query="isProxySid({0})", param=c.Value );c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",Issuer =~ "^(AD AUTHORITY|SELF AUTHORITY|LOCAL AUTHORITY)$" ] => issue(store="_ProxyCredentialStore",types=("http://schemas.microsoft.com/authorization/claims/permit"),query="isProxySid({0})", param=c.Value );c:[Type =="http://schemas.microsoft.com/ws/2008/06/identity/claims/proxytrustid", Issuer=~ "^SELF AUTHORITY$" ] => issue(store="_ProxyCredentialStore",types=("http://schemas.microsoft.com/authorization/claims/permit"),query="isProxyTrustProvisioned({0})", param=c.Value );'

Add-AdfsRelyingPartyTrust -Name $Name -Identifier $Identifier -IssuanceTransformRules $transformRules -IssuanceAuthorizationRules $authRules -ImpersonationAuthorizationRules $imperRules

Set-AdfsRelyingPartyTrust -TargetName $Name -MetadataUrl $($Identifier + "/FederationMetadata/2007-06/FederationMetadata.xml") -MonitoringEnabled $true -AutoUpdateEnabled $true

Update-ADFSRelyingPartyTrust -TargetName $Name

This is what I ran to create the relying party trust for Claims based authentication:

 .\CRMRPT.ps1 -Name "crm2015 - CBA" -Identifier "https://crm2015.dev.local/" 

and this to create the relying party trust for IFD:

 .\CRMRPT.ps1 -Name "crm2015 - IFD" -Identifier "https://auth.dev.local/"

Sunday, 17 May 2015

Cashless society N=1

A couple of weeks ago I read an article about a new law in Denmark that will effectively mean cash is no longer legal tender. In other words, businesses will not be obliged to accept cash as payment.

To me this seems fairly sensible. I really hate paying with cash; in fact, I occasionally find myself struggling to remember the PIN of my debit card because I use it so rarely. So I thought I would try to analyze my cash use. The simplest way of doing this is by looking at the cash withdrawals from my bank account.

I decided to have a look at what data I could get from my bank's online site and was a little disappointed to find that they only provide the last 12 months. I could request older data, but it would be printed, so I decided to stick to 12 months.

This is the raw data.

As I suspected, there seems to be a decrease in the frequency of cash withdrawals, but the linear fit is pretty poor due to the various outliers.

I did a little bit of thinking and realized that the March and September outliers were due to leaving dos for people at work, and the June one was due to buying some stuff for my girlfriend at a vintage fair.

I decided to plot the data again but this time without the outliers.

The trend line, again just a linear fit, shows a much better fit than for the raw data, as is to be expected.

I can easily see this trend holding, i.e. I will continue to visit the ATM less often, due to the increasing acceptance of contactless payments and the contactless limit being raised to £30 in September.

Friday, 15 May 2015

I Don’t Always Test My Code. But When I Do I Do It In Production

I honestly never thought I would need to post this:

Unable to Navigate to external domain (auth endpoint) in MS Dynamics CRM 2013/2015 IFD

The standard practice for my company when deploying web servers is to use a host header, which probably made sense at some point, but it certainly made for an interesting Friday.

I'm tired, so I will just present the facts: if you are configuring IFD and can't get to the external domain endpoint, the problem might be that you have a host header on your https binding.

The external domain endpoint is normally: https://auth.adomain.com/FederationMetadata/2007-06/FederationMetadata.xml and is the last step on the configure IFD wizard.


Ensure that IIS is configured without a host header for the https binding:


Friday, 8 May 2015

ExpectedException Failures in MSTest?

This morning I got an automated email alerting me of a failed build.

Excerpt from Jenkins

Results               Top Level Tests
-------               ---------------
Failed                SecurePasswordTest.EncryptDecryptFailure
Failed                SecurePasswordTest.EncryptDecryptFailureSecureString
Passed               SecurePasswordTest.EncryptDecryptSuccess
Passed               SecurePasswordTest.EncryptDecryptSuccessSecureString
2/4 test(s) Passed, 2 Failed

This is one of the failing tests:

[TestMethod]
[ExpectedException(typeof(System.Security.Cryptography.CryptographicException))]
public void EncryptDecryptFailureSecureString()
{
    var password = DateTime.Now.Ticks.ToString();
    var encryptedPasswordRandom = Convert.ToBase64String(Encoding.ASCII.GetBytes(password));
    var encrypted = SecurePassword.Encrypt(SecurePassword.ToSecureString(password));
    var decrypted = SecurePassword.ToInsecureString(SecurePassword.DecryptString(encryptedPasswordRandom));
}
I ran the tests from my machine and it worked fine. I tried from the server and it worked ....

At this point I remembered that a second build server had been brought online, as some test runs were taking 2-3 hours. In any case, I checked the other server and could reproduce the issue; in other words, the test failed there. But why?

Build Server #2 did not have Visual Studio 2013 Shell installed, after I installed it, it started working correctly.

It is rather strange that this happened in the first place, and I'm not sure whether it is actually related to the ExpectedException attribute, the type of exception, or something else entirely.

At any rate, hopefully it helps somebody out some day.

Saturday, 2 May 2015

Squid, Azure and beating youtube's regional filter

Every so often I try to watch videos on YouTube that are not available for my region/country (UK). Normally I can easily find an alternative that is available in the UK, but last week I thought, why not just use a proxy?

I have an MSDN subscription, which among other things, gives me £100 of Azure credit a month, so I thought I'd use some of the credit for this.

I provisioned an A0 OpenLogic 7.0 box and got started. If you're wondering, the cost for a Linux A0 instance is £0.011/hr, ~£8 a month, and bandwidth is £0.08 per GB (the first 5 GB are free).
  1. Install Squid:

     sudo yum -y install squid

  2. Add the allowed IP addresses to the Squid configuration:

     sudo vi /etc/squid/squid.conf

    # Example rule allowing access from your local networks.
    # Adapt to list your (internal) IP networks from where browsing
    # should be allowed
    acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
    acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
    acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
    acl localnet src fc00::/7       # RFC 4193 local private network range
    acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

    acl work src  111.111.111.0/24
    acl home src  111.111.111.111

    # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

    http_access allow work
    http_access allow home

  3. Enable IP forwarding by editing the sysctl.conf file:

     sudo vi /etc/sysctl.conf

     Add the following line to the file: net.ipv4.ip_forward = 1, then reload it:

     sudo sysctl -p /etc/sysctl.conf

  4. Enable and start Squid:

     sudo systemctl enable squid
     sudo systemctl start squid


Now we need to open the firewall in Azure for the box (Squid listens on port 3128 by default) and configure the browser to use this proxy. Bye, bye, dreaded: The uploader has not made this video available in your country.
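Once the Azure endpoint for port 3128 is open, pointing a machine at the proxy is just a matter of the usual environment variables (or the browser's proxy settings). The hostname below is a placeholder for whatever DNS name Azure gave the VM:

```shell
# Placeholder hostname -- substitute your VM's DNS name or public IP.
PROXY_HOST="myproxy.cloudapp.net"
PROXY_PORT=3128

export http_proxy="http://${PROXY_HOST}:${PROXY_PORT}"
export https_proxy="http://${PROXY_HOST}:${PROXY_PORT}"
echo "$http_proxy"

# Quick smoke test once the endpoint is open (commented out here):
# curl -I -x "$http_proxy" https://www.youtube.com/
```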

It's worth pointing out that this doesn't work for all websites; see my post about setting up a VPN server on AWS if you want a full VPN server instead. Azure doesn't support GRE (protocol 47), so it's SSL VPNs only on Azure.

Tuesday, 31 March 2015

Integrated Windows Authentication Log Out

It turns out that it's sort of possible to log out from Integrated Windows Authentication:

document.execCommand("ClearAuthenticationCache")

A few problems though:
  1. It only works with IE
  2. It will log you out of all websites in IE
The latter tends to annoy users no end, though I don't really know why.
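Since the call only exists in IE, it's worth guarding it so other browsers don't throw. A small sketch; the `doc` parameter is just there so the behaviour can be exercised outside a browser, and would normally be `document`:

```javascript
// Hedged sketch: only IE implements the "ClearAuthenticationCache" command.
function clearWindowsAuthCache(doc) {
    if (doc && typeof doc.execCommand === "function") {
        // execCommand returns true if the command was recognised and run.
        return doc.execCommand("ClearAuthenticationCache");
    }
    return false; // unsupported browser (anything that isn't IE)
}
```

In a page you would simply call `clearWindowsAuthCache(document)` from the log-out handler.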

Saturday, 28 February 2015

Ordered Parallel Processing in C#

In one of the projects I've been working on, we need to process a bunch of files containing invoice data. Processing these can be time consuming, as the files can be quite large, and although the use this data is put to suggests that it could be done overnight, the business has insisted on processing the files during the online day, at 17:00.

The problem is that the files tend to contain an invoice's journey through its various states, and for audit purposes we need to process them all.

So, for instance, if the first record in a file is an on-hold invoice, we want to process this, but we also want to process the same invoice showing as paid further down the file. We can't just process the paid event. Furthermore, we also want the invoice record to end up with a status of paid, which is fairly reasonable.

The problem is that if we process the invoices in parallel, we have no guarantee that they will be processed in the right order, so a paid invoice record might end up with a state of issued, which is not great. So we initially went for the quick and easy solution and processed the files serially.

I gave the matter a little bit more thought and came up with this:

private void UpdateInvoices(IEnumerable<IInvoice> invoices)
{
    // Group the invoices by status and process the groups in
    // lifecycle order (this assumes status values sort correctly).
    var groupedInvoices = invoices.GroupBy(x => x.Status)
        .OrderBy(x => x.Key);

    foreach (var invoiceGroup in groupedInvoices)
    {
        Parallel.ForEach(invoiceGroup, invoice =>
        {
            UpdateInvoice(invoice);
        });
    }
}
What we do is group all the invoices by status and order the groups by status. We then process all of the invoices within a status group in parallel, so that all invoices with status issued get processed first and those with status paid last, with the other statuses processed in between.

It is, of course, possible to have a separate parallel foreach loop for each status, but I feel that this solution is more elegant and easier to maintain.

PLinq does have an AsOrdered method, but the UpdateInvoice method doesn't return anything; if it fails to update the database, it simply logs the failure and it's for the server boys and girls to worry about.

Furthermore, AsOrdered simply doesn't work quite as I expected it to.

The code from this sample has been modified to better simulate what we're trying to achieve:

var source = Enumerable.Range(9, 30);

var parallelQuery = source.AsParallel().AsOrdered()
    .Where(x => x % 3 == 0)
    .Select(x => { System.Diagnostics.Debug.WriteLine("{0} ", x); return x; });

// Use foreach to preserve order at execution time. 
foreach (var v in parallelQuery)
{
    System.Diagnostics.Debug.WriteLine("Project");
    break;
}

// Some operators expect an ordered source sequence. 
var lowValues = parallelQuery.Take(10);

int counter = 0;
foreach (var v in lowValues)
{
    System.Diagnostics.Debug.WriteLine("{0}-{1}", counter, v);
    counter++;
}
The call to Debug.WriteLine plays the same role as UpdateInvoice in the code above, in the sense that both are void methods called for their side effects.

This is what the above prints:
9 15 18 12 30 33 36 21 24 27
Project
9 15 18 12 30 21 36 27 24 33 
0-9 1-12 2-15 3-18 4-21 5-24 6-27 7-30 8-33 9-36 


As you can see, the end result is ordered, but the getting there isn't, and the getting there is what we're interested in, which is why we could not use PLinq.


Saturday, 14 February 2015

Gas and Electricity Consumption in a 1920s mid-terrace house in the North of England.

Last week I was going through some old pen drives to see if there was actually anything worth keeping and I found a lot of old energy consumption measurements I took back at our old house, so I thought I would share them here.

The house was a small mid-terrace house with central heating and a gas cooker, built after the First World War. I started taking the measurements after I decided that leaving my gaming PC on 24/7 wasn't a great idea; I should've taken a few measurements with it on, but there you go. We only heated the house to a relatively low temperature, i.e. ~18 °C.

Unfortunately, I don't have measurements of outside temperature so I cannot correlate energy use to outside temperature, but the data was gathered to try to get a better understanding of how much gas and electricity we were using at the time.

Without further ado here are the charts:

It's hard to see electricity consumption in the above chart, so here it is:

Estimated costs are below. I will not rant about the rather ludicrous way gas and electricity are priced in this country.



Electricity on its own again: