Sunday, November 10, 2013

log4javascript and ASP.NET Web Api

log4javascript is a nice logging framework for JavaScript. With it you can log to the browser console (if supported by the browser), but also to a separate window and even to the server via AJAX calls. For the latter, you also need something on the server which can handle the AJAX requests. Here I wanted to use ASP.NET Web Api. Since I didn’t find any documentation on this specific topic, I want to share my experiences here.

In general, the whole thing is quite easy. On the client side you have to define the AjaxAppender:

var ajaxAppender = new log4javascript.AjaxAppender(serverUrl);
ajaxAppender.setLayout(new log4javascript.JsonLayout());
ajaxAppender.addHeader("Content-Type", "application/json; charset=utf-8");
I thought that with Web Api, JSON would be the most natural data format. The tricky line is the last one. Without it, the Content-Type header has the value application/x-www-form-urlencoded. This causes Web Api to use the JQueryMvcFormUrlEncodedFormatter. Unfortunately, this formatter cannot handle the JSON formatted data.
After specifying the correct content type, Web Api uses the JsonMediaTypeFormatter, and everything is fine.

On the server side, I first had to define the structure of the log data:

public struct LogEntry
{
  public string Logger;
  public long Timestamp;
  public string Level;
  public string Url;
  public string Message;
}
Since log4javascript can send more than one log entry in one AJAX call, my logging method gets an array of LogEntry instances. Additionally, I needed to convert the timestamp value, since log4javascript sends it as milliseconds since 01-Jan-1970:
public void Write(LogEntry[] data)
{
  if (data != null)
  {
    foreach (LogEntry entry in data)
    {
      DateTime timestampUtc = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddMilliseconds(entry.Timestamp);
      DateTime timestampLocal = timestampUtc.ToLocalTime();
    }
  }
}
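For completeness, here is a sketch of how such a method could be hosted in a Web Api controller. The class name and route are my assumptions, not taken from the original project:

```csharp
// Hypothetical controller; Write is the logging method shown above.
public class LogController : ApiController
{
    // POST api/log - log4javascript posts an array of entries as JSON
    public void Post(LogEntry[] data)
    {
        Write(data);
    }
}
```

The serverUrl passed to the AjaxAppender on the client side then has to point to this route.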
That’s it!

Sunday, September 22, 2013

Registration-Free COM with ActiveX Controls

In my current project, I have, among other things, a form with an ActiveX control on it. Additionally, I am using registration-free COM, meaning that the COM information is stored in a manifest file instead of the registry.

Everything worked fine, until I tried to create the form a second time. Even this worked without problems, but not in the Visual Studio debugger. There I got this strange exception:

System.NotSupportedException: Unable to get the window handle for the 'xxx' control. Windowless ActiveX controls are not supported.
at System.Windows.Forms.AxHost.EnsureWindowPresent()
at System.Windows.Forms.AxHost.InPlaceActivate()
at System.Windows.Forms.AxHost.TransitionUpTo(Int32 state)
at System.Windows.Forms.AxHost.CreateHandle()
at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
at System.Windows.Forms.AxHost.EndInit()

After hours of thinking, debugging, code stripping and so on (to be honest, mainly by a colleague of mine), we found the solution: the manifest we used was not complete. The manifest had been created using mt.exe with the type library of the control. It looked like

<file name="..." hashalg="SHA1">
  <comClass clsid="..." tlbid="..." description="..." />
  <typelib tlbid="..." version="..." helpdir="" />
</file>

When we compared this with the entries in the registry, we saw many more things in the registry. The assembly manifest documentation also mentions more attributes. Therefore we tried to add as many attributes to the manifest as possible (even if we did not understand every bit completely). And voilà, the error was gone!

In total, we added 4 attributes (in our case):

<file name="..." hashalg="SHA1">
  <comClass clsid="..." tlbid="..." description="..." threadingModel="..." progid="..." miscStatus="..." />
  <typelib tlbid="..." version="..." helpdir="" flags="..." />
</file>

I hope this helps if you have a similar issue.

Wednesday, September 18, 2013

AppDomains and user.config

A while ago, I had a problem with an application using different AppDomains. In one AppDomain I wrote some settings to a user.config file. And then, in another AppDomain, I got the following exception:
System.Configuration.ConfigurationErrorsException: Configuration system failed to initialize
---> System.Configuration.ConfigurationErrorsException: Unrecognized configuration section userSettings. (C:\Users\uuuuuuuu\AppData\Local\cccccc\aaaaaaaaaaaaaa_Url_4v0elz3yo0gytsdhg5vusobffefqs0so\\user.config line 3)

This was very strange, since in this AppDomain I didn’t even use any user.config. After some hours of investigation, I found the problem. The user.config file is written to

<Profile Directory>\<Company Name>\<App Domain>_<Evidence Type>_<Evidence Hash>\<Version>\user.config
  • <Profile Directory>: %APPDATA% or %LOCALAPPDATA%
  • <Company Name>: value of AssemblyCompanyAttribute, trimmed to 25 characters, invalid characters replaced by '_'
  • <App Domain>: friendly name of the current AppDomain, trimmed to 25 characters, invalid characters replaced by '_'
  • <Evidence Type> and <Evidence Hash>: some magic from AppDomain’s evidence
  • <Version>: value of AssemblyVersionAttribute
In my case, all these properties had the same value, so the whole thing got mixed up:
  • company name: OK, this is the same on purpose
  • version: I generated the version from the build number for all assemblies, but this is not so uncommon either
  • evidence: I created the new AppDomains with
    AppDomain.CreateDomain(appDomainName, null, appDomainSetup);
    The second parameter is the evidence for the new AppDomain. If it is null, the evidence from the current AppDomain is taken. Also understandable.
  • app domain: this was the sticking point. I used the full name of the main assembly as the AppDomain’s name. Since I used a pattern like Company.Application.Subsystem..., this name was simply too long. The first 25 characters were all the same...
So the solution was quite easy: I just had to change the AppDomain’s name. But the trail to the solution took its time. Maybe this post can accelerate your search a little bit.
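A minimal sketch of the fix; the friendly names are made up, the point is only that they differ within the first 25 characters:

```csharp
// Distinct, short friendly names instead of the long assembly full name,
// so the trimmed <App Domain> part of the user.config path differs.
AppDomainSetup setup = new AppDomainSetup();
AppDomain first = AppDomain.CreateDomain("SubsystemA", null, setup);
AppDomain second = AppDomain.CreateDomain("SubsystemB", null, setup);
```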

BTW: Finally I checked the source code of System.Configuration.ClientConfigPaths.cs. This helped me a lot to understand the problem.

Sunday, August 11, 2013

Problems with WSDL of WCF web services behind load balancer

If you have a WCF web service, you can get its WSDL by appending ?wsdl to the URL.
Typically, the generated WSDL is not complete. The types are loaded separately from the server:
<xsd:import schemaLocation="http://server/web/Service.svc?xsd=xsd0" />
For the type import, the name of the current machine is used. Normally this isn't a problem. But if you use a load balancer, you end up with requests like:

<xsd:import schemaLocation="http://node1/web/Service.svc?xsd=xsd0" />
This will not work when node1 is not accessible directly.
Fortunately, you can force WCF to use the load balancer's address in the WSDL as well. You only have to add one line to the serviceBehavior in the Web.config:
<behavior name="MyBehavior">
  <useRequestHeadersForMetadataAddress />
</behavior>

.net programs on 32 and/or 64 bit machines

Generally, a .net program can run on a 32 bit machine as well as on a 64 bit machine. But sometimes it is necessary to run the program on a 64 bit machine in 32 bit mode, the so-called WoW64.
WoW64 stands for "Windows 32-bit on Windows 64-bit", and it contains all the 32-bit binary files required for compatibility, which run on top of the 64 bit Windows. So, yeah, it looks like a double copy of everything in System32 (which, despite the directory name, actually contains 64-bit binaries).
You will need WoW64, for example, if you want to call 32 bit ActiveX components. Visual Studio provides the so-called platform target for this purpose:
  • x86
    32 bit application, runs either on Win32 or on Win64 in WoW64
  • x64
    64 bit application, runs only on Win64 (not in WoW64)
  • Any CPU
    runs on Win32 as 32 bit application and on Win64 as 64 bit application
This info is stored in the PE header. At application startup, Windows checks the setting and starts the application in the appropriate mode (or not at all). If you want to check later for which platform an application was built, you can use the corflags tool in the Visual Studio Command Prompt:
> corflags MyApp.exe
Microsoft (R) .NET Framework CorFlags Conversion Tool.  Version  4.0.30319.1
Copyright (c) Microsoft Corporation.  All rights reserved.

Version   : v4.0.30319
CLR Header: 2.5
PE        : PE32
CorFlags  : 11
ILONLY    : 1
32BIT     : 1
Signed    : 1

The interesting parts are PE and 32BIT. The values are a little bit strange and hard to remember:

Platform target    PE       32BIT
x86                PE32     1
x64                PE32+    0
Any CPU            PE32     0
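At runtime, a program can also check for itself in which mode it actually runs. Since .NET 4.0 there are two handy Environment properties for this:

```csharp
Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
// 4 in a 32 bit process (e.g. an x86 build in WoW64), 8 in a 64 bit process
Console.WriteLine("Pointer size:   " + IntPtr.Size);
```

An x86 build on 64 bit Windows reports a 64 bit OS but a 32 bit process: it runs in WoW64.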


Saturday, August 10, 2013

Problem with asynchronous HttpClient methods

Recently, I wrote a client application which should send some log messages to a server. Since it was only for statistics, the log didn’t have the highest reliability requirements. Additionally, it shouldn’t block my application. Therefore I decided to send the messages asynchronously and, in case of an error, only to write something to the local log file. I came up with:

static void LogMessage(string message)
{
  Uri baseAddress = new Uri("http://localhost/");
  string requestUri = "uri";

  using (HttpClient client = new HttpClient { BaseAddress = baseAddress })
  {
    client.PostAsJsonAsync(requestUri, message).ContinueWith(task =>
    {
      if (task.IsFaulted)
      {
        Console.WriteLine("Failed: " + task.Exception);
      }
      else if (task.IsCanceled)
      {
        Console.WriteLine("Canceled");
      }
      else
      {
        HttpResponseMessage response = task.Result;
        if (!response.IsSuccessStatusCode)
        {
          Console.WriteLine("Failed with status " + response.StatusCode);
        }
      }
    });
  }
}

This code should send the message asynchronously (PostAsJsonAsync). And afterwards it should check whether the sending was successful or not (ContinueWith). My expectation was to see an error, since nothing is listening on the specified address. But I didn’t see anything. Moreover, it didn’t even send any requests. Even for my requirements, this was not enough.

After some research, I added trace switches to my config file:

    <add name="System.Net" value="Verbose"/>
    <add name="System.Net.Http" value="Verbose"/>
    <add name="System.Net.HttpListener" value="Verbose"/>
    <add name="System.Net.Sockets" value="Verbose"/>
    <add name="System.Net.Cache" value="Verbose"/>

With it, one of the last lines of my debug output was

System.Net Error: 0 : [0864] Exception in HttpWebRequest#54246671:: - The request was aborted: The request was canceled..

This led me to the culprit: the HttpClient was disposed too early. At the end of the using block the client is disposed. But at this time, the message has not been sent yet. This happens because I do something asynchronous inside the using block without waiting for its completion.

The solution to this problem is to reverse the nesting of using and asynchronous execution: if I enter the using block inside the asynchronous task, everything is fine:

static void LogMessage(string message)
{
  Uri baseAddress = new Uri("http://localhost/");
  string requestUri = "uri";

  new TaskFactory().StartNew(() =>
  {
    using (HttpClient client = new HttpClient { BaseAddress = baseAddress })
    {
      Console.WriteLine("Sending message");
      HttpResponseMessage response = client.PostAsJsonAsync(requestUri, message).Result;

      Console.WriteLine("Evaluating response");
      if (!response.IsSuccessStatusCode)
      {
        Console.WriteLine("Failed with status " + response.StatusCode);
      }
    }
  });
}

Here I start a new task which contains the whole using block, including the Dispose. The task is finished only after the Dispose.

This took me to the next stage: what about the async / await pattern from .net 4.5? The implementation is quite similar to the one above; the main difference is that the method now has to return a Task.

static async Task LogMessage(string message)
{
  Uri baseAddress = new Uri("http://localhost/");
  string requestUri = "uri";

  using (HttpClient client = new HttpClient { BaseAddress = baseAddress })
  {
    Console.WriteLine("Sending message");
    HttpResponseMessage response = await client.PostAsJsonAsync(requestUri, message);

    Console.WriteLine("Evaluating response");
    if (!response.IsSuccessStatusCode)
    {
      Console.WriteLine("Failed with status " + response.StatusCode);
    }
  }
}

This implementation also moves the response handling to another thread. But it does so only if really necessary, and as late as possible.

A little more sophisticated logging shows some details (the 2nd column is the thread number). The implementation with an explicit Task calls LogMessageWithTask on the main thread 9. Afterwards it continues immediately on the same thread. Approx. 20 ms later, the new thread 12 starts with the HTTP handling:

21:28:42.524    9       CallMethod      Calling LogMessageWithTask
21:28:42.526    9       CallMethod      Continuing after LogMessageWithTask
21:28:42.545    12      LogMessageWithTask      Sending message
21:28:45.682    12      LogMessageWithTask      Evaluating response
21:28:45.682    12      LogMessageWithTask      Failed with status NotFound

With async / await it is a little bit different: the request is also sent on the main thread. Only the response handling is done later on a second thread:

21:28:51.638    9       CallMethod      Calling LogMessageAsyncAwait
21:28:51.666    9       LogMessageAsyncAwait    Sending message
21:28:51.701    9       CallMethod      Continuing after LogMessageAsyncAwait
21:28:52.144    16      LogMessageAsyncAwait    Evaluating response
21:28:52.144    16      LogMessageAsyncAwait    Failed with status NotFound

However, the code is more concise. And the request is sent without the delay for creating the new Task.

You can find the source code at GitHub:

Thursday, June 20, 2013

AJAX calls to different servers (CORS)

When developing modern HTML applications, you sometimes need to send an AJAX request to another server. Unfortunately, this could be a security issue. Therefore all browsers implement the same-origin policy, which prevents such calls.
But what if you really need it? The rescue is Cross-origin resource sharing (CORS). The principle is quite easy: the browser sends an additional HTTP header (Origin) with the AJAX request.
The server can analyze this header. If it decides to fulfill the request, it adds another header (Access-Control-Allow-Origin) to the response.
If the server decides to trust all clients, it can also return
Access-Control-Allow-Origin: *
But if the origin of your page does not match the value the server returns, you cannot read the response.
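A typical exchange looks like this (the host name is made up):

```
Request:   Origin: http://www.client.example
Response:  Access-Control-Allow-Origin: http://www.client.example
```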

Browser support

All major browsers support CORS with XMLHttpRequest, all except IE8 and IE9. These IE versions use a slightly different approach: when you want CORS, you have to use another object, XDomainRequest. Fortunately, at least its interface is similar to XMLHttpRequest.


jQuery always uses XMLHttpRequest for AJAX calls. The request to use XDomainRequest when needed was rejected. Instead, it is recommended to use a jQuery plugin. For this purpose I found 2 implementations:
Both worked for me, but with jQuery.iecors I had problems receiving errors. At least in my tests the error function was not called. Therefore I finally decided to use jQuery.XDomainRequest.

CORS and ASP.NET Web Api

To support CORS with ASP.NET Web Api, you need to add the Access-Control-Allow-Origin header to the response. One way is to implement a custom message handler for this purpose. Carlos Figueira blogged a good description of how to do this.
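The idea can be sketched as a DelegatingHandler that echoes the Origin header back. This is only a simplified sketch of the technique; a real handler, like the one Carlos Figueira describes, also handles preflight OPTIONS requests and checks the origin against a whitelist:

```csharp
public class CorsHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);

        // Echo the Origin header back as Access-Control-Allow-Origin.
        IEnumerable<string> origins;
        if (request.Headers.TryGetValues("Origin", out origins))
        {
            response.Headers.Add("Access-Control-Allow-Origin", origins.First());
        }

        return response;
    }
}
```

The handler is registered in the HttpConfiguration's MessageHandlers collection.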

Tuesday, June 18, 2013

OWIN with static files, exception handling and logging

As I wrote already in Self-host ASP.NET Web API and SignalR together, the OWIN configuration is done in Startup.Configuration():
public void Configuration(IAppBuilder app)
{
  // Configure WebApi
  var config = new HttpConfiguration();
  config.Routes.MapHttpRoute("API Default", "api/{controller}/{id}", new { id = RouteParameter.Optional });
  app.UseWebApi(config);

  // Configure SignalR
  app.MapHubs();
}
It looks as if some features get activated with app.UseXXX() (or app.MapHubs(), respectively). But this is not completely true. When we look into the implementation of these "feature activating methods", they finally call
public IAppBuilder Use(object middleware, params object[] args)
{
  this._middleware.Add(AppBuilder.ToMiddlewareFactory(middleware, args));
  return (IAppBuilder) this;
}
This method adds the feature to a list. During request processing, every feature in this list is checked whether it can handle the request. Therefore the order of the "feature activating methods" can be important. Not in the example above, though, since WebApi and SignalR do not compete for the same requests.


The package Microsoft.Owin.Diagnostics contains a useful feature for catching and displaying exceptions. For sure exceptions shouldn't happen, but when they do, it would be nice to know about them. But as always, you should consider using it only during development.

To switch it on, simply add the following line at the beginning of Startup.Configuration():
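If I remember the package's extension method correctly (please verify against your package version), the line is:

```csharp
app.UseErrorPage();
```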

Another feature in Microsoft.Owin.Diagnostics is the welcome page. This displays the message Welcome to Katana to the client. You should place it at the very end of Startup.Configuration(). Otherwise your other features would never be reached.
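The corresponding extension method should be (again, verify the name against your package version):

```csharp
app.UseWelcomePage();
```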

But anyway, this feature is only useful in the hello-world stage. Later I would prefer to get an HTTP 404 instead of this message.


Like most of the other OWIN packages, Microsoft.Owin.StaticFiles is also in prerelease status. But this package is special: you cannot even find it on NuGet. To install it, you need to enter the following command in the Package Manager Console:
Install-Package Microsoft.Owin.StaticFiles -Version 0.20-alpha-20220-88 -Pre
But probably the package is hidden because it doesn't really work. It has problems when you request more than one file in parallel. The solution is to use one of the nightly Katana builds (e.g. 0.24.0-pre-20624-416). Probably this is even more alpha than the hidden version, but it works better. Obviously there was some improvement between version 0.20 and 0.24.
You can get the nightly builds from a separate feed:
After adding the package, just add one line to Startup.Configuration():
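With the package versions of that time, the line looks something like this (the method name is my assumption from the StaticFiles package):

```csharp
app.UseStaticFiles("StaticFiles");
```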
The parameter specifies the directory in which the static files are searched. Since I named it StaticFiles, you have to add a folder with this name to your project. And for every file you add to this folder, you have to set in its properties that it is copied to the output directory:

When you start the project, you can fire up the browser and enter http://localhost:8080/test.htm (without specifying the StaticFiles folder), and you simply get the page back.

Logging OWIN requests

Sometimes it would be interesting to see the incoming requests in a trace. This can be achieved with a custom feature. The constructor is quite simple. It just stores the reference to the next feature:
private readonly Func<IDictionary<string, object>, Task> _next;

public Logger(Func<IDictionary<string, object>, Task> next)
{
  if (next == null)
    throw new ArgumentNullException("next");

  _next = next;
}
The implementation isn't really complicated either:
public Task Invoke(IDictionary<string, object> environment)
{
  string method = GetValueFromEnvironment(environment, OwinConstants.RequestMethod);
  string path = GetValueFromEnvironment(environment, OwinConstants.RequestPath);

  Console.WriteLine("Entry\t{0}\t{1}", method, path);

  Stopwatch stopWatch = Stopwatch.StartNew();
  return _next(environment).ContinueWith(t =>
  {
    Console.WriteLine("Exit\t{0}\t{1}\t{2}\t{3}\t{4}", method, path, stopWatch.ElapsedMilliseconds,
      GetValueFromEnvironment(environment, OwinConstants.ResponseStatusCode),
      GetValueFromEnvironment(environment, OwinConstants.ResponseReasonPhrase));
    return t;
  });
}
First, it prints some data of the current request. The more interesting part is that it then calls the succeeding features (return _next(environment)). And when the succeeding features have been evaluated, it finally (ContinueWith) prints some response data. I added method and path here as well, since otherwise it would be difficult to find the corresponding entries. Maybe it would be even better to use some kind of unique id for this purpose. But in my projects, method and path are enough.
GetValueFromEnvironment is only a little helper, since in some cases the environment dictionary does not contain all values:
private static string GetValueFromEnvironment(IDictionary<string, object> environment, string key)
{
  object value;
  environment.TryGetValue(key, out value);
  return Convert.ToString(value, CultureInfo.InvariantCulture);
}
Since the Logger traces the beginning and the end of the processing, it should be activated right at the beginning of Startup.Configuration():
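The registration could look like this, using the type-based Use overload (the exact overloads depend on your Owin package version):

```csharp
app.Use(typeof(Logger));
```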

Logging the Request Body

With POST requests, it can be very handy to also log the request body. For this, you only have to extend the Invoke method a little bit:
string requestBody;
Stream stream = (Stream)environment[OwinConstants.RequestBody];
using (StreamReader sr = new StreamReader(stream))
{
  requestBody = sr.ReadToEnd();
}
Console.WriteLine(requestBody);
environment[OwinConstants.RequestBody] = new MemoryStream(Encoding.UTF8.GetBytes(requestBody));
The access to the request body is provided by a Stream. The only caveat with this stream is that it is not seekable. That means you can read it only once. And maybe the simple logging will not be enough for your requirements; sometimes you will also want to process the body afterwards...
Fortunately, this is no big issue: just replace the old stream with a new MemoryStream. For sure, this is not a good idea with big request bodies; in such a case you will need a more sophisticated solution. But normally, it should be good enough. Moreover, you can enable it only during development. In production you can disable it by configuration, for example.

Logging SignalR

With the Logger from above, you get a nice log of the various SignalR requests (when SignalR uses long polling instead of WebSockets):
10:51:18,181     11     Entry     GET     /signalr/negotiate
10:51:18,263     8      Exit      GET     /signalr/negotiate     82              
10:51:18,271     11     Entry     GET     /signalr/ping
10:51:18,275     8      Exit      GET     /signalr/ping          4               
10:51:18,540     11     Entry     GET     /signalr/connect
10:53:08,806     14     Exit      GET     /signalr/connect       110260          
10:53:08,826     11     Entry     GET     /signalr/poll
10:54:58,960     15     Exit      GET     /signalr/poll          110128          
10:54:58,969     11     Entry     GET     /signalr/poll
10:56:49,136     12     Exit      GET     /signalr/poll          110161          
The Logger used here also prints the timestamp and the thread id (in the first 2 columns). It is easy to see how SignalR waits 110 seconds for an answer from the server (a notification). When it doesn't get one, it simply sends the next request.
You can also see that all requests (at least in this example) are handled by the same thread (11), while the responses are created by other threads (8, 14, 15 and 12).


It is quite easy to add additional features to an OWIN host. And it is also not too hard to implement your own features. Drawbacks of the whole stack are (hopefully only for the moment):
  • the beta status of some packages
  • the lack of documentation
But for a real developer, the best documentation is the code, anyway.

You can find the source code at GitHub:

Wednesday, May 29, 2013

Garbage Collector's sabotage of my message pump

I spent my last days on a boring problem. I implemented a Windows service listening to WM_DEVICECHANGE messages (which are sent, for example, when you plug in a new USB device). For this I implemented a NativeWindow whose only purpose was to provide the window procedure for some custom message processing:

internal sealed class DeviceChangeHandler : NativeWindow
{
  private const int WM_DEVICECHANGE = 0x219;
  private const int DBT_DEVNODES_CHANGED = 0x7;

  public DeviceChangeHandler()
  {
    CreateHandle(new CreateParams());
  }

  protected override void WndProc(ref Message msg)
  {
    if (msg.Msg == WM_DEVICECHANGE && 
        msg.WParam.ToInt32() == DBT_DEVNODES_CHANGED)
    {
      // device change detected
    }

    base.WndProc(ref msg);
  }
}

In my service itself, I started a new Thread with the following ThreadStart:

private void RunDeviceChangeHandler()
{
  // create window
  DeviceChangeHandler deviceChangeHandler = new DeviceChangeHandler();

  // run message pump
  Application.Run();
}

When I debugged the code, it always worked. My release build also worked, but unfortunately not always. Sometimes I got the messages, sometimes not. Testing was tedious, since Windows takes some time before sending the message, especially after disconnecting the device.

Finally I remembered a tool I had used 10 years ago: Spy++, which is still part of Visual Studio. I saw that when I didn't get messages, there was also no window. That explained why I didn't get the messages. But the question now was: why was there no window?

Next, I defined a caption for my window (via the CreateParams). And I implemented a loop which calls FindWindow with that caption every 5 seconds. Now I saw that FindWindow was able to find my window, but only for some iterations; sooner or later it returned only 0.

By chance, I added tracing to the finalizer of my NativeWindow implementation. Now I saw that the finalizer was called at some point, and afterwards FindWindow returned only 0. My first idea was that there was an exception in Application.Run(). But that would have been too easy.

The problem is an optimization done by the just-in-time compiler. Since the variable deviceChangeHandler is no longer referenced after Application.Run(), it is a candidate for garbage collection. I could verify this with a call to GC.Collect(): afterwards the window was always gone.

The solution is to make the variable ineligible for garbage collection. For this purpose, the method GC.KeepAlive() exists.
With my modified ThreadStart, everything worked as expected:

private void RunDeviceChangeHandler()
{
  // create window
  DeviceChangeHandler deviceChangeHandler = new DeviceChangeHandler();

  // run message pump
  Application.Run();

  // make deviceChangeHandler ineligible for garbage collection
  GC.KeepAlive(deviceChangeHandler);
}
As with most complex problems, the solution is only one line. The art is to insert it at the right place.

Wednesday, May 15, 2013

Pitfall with ASP.NET Web Api, OWIN and FxCop

As I wrote in Self-host ASP.NET Web API and SignalR together, you can easily combine ASP.NET Web Api and OWIN. My Startup class contained at least the Configuration method:
public void Configuration(IAppBuilder app)
{
  HttpConfiguration config = new HttpConfiguration();
  config.Routes.MapHttpRoute("API Default", "api/{controller}/{id}", new { id = RouteParameter.Optional });
}
With this, everything worked fine.

(Unfortunately) I am a well-behaved man. Therefore I ran the code analysis (aka FxCop). It returned the warning:
CA2000: Microsoft.Reliability: In method 'Startup.Configuration(IAppBuilder)', call System.IDisposable.Dispose on object 'config' before all references to it are out of scope.
Understandable to me. And since Microsoft suggests fixing it, I changed my method to:
public void Configuration(IAppBuilder app)
{
  using (HttpConfiguration config = new HttpConfiguration())
  {
    config.Routes.MapHttpRoute("API Default", "api/{controller}/{id}", new { id = RouteParameter.Optional });
  }
}
Sadly, I didn’t test it immediately. Instead I fixed a lot of other FxCop warnings and made other improvements to my code. Finally I forgot about this particular fix.

When I finally put it all together, I only got HTTP 500 Internal Server Error, without any further information. In such a case a lot of people suggest debugging the server, but no exception was thrown there either. It simply didn’t work. I slimmed down my code until I removed the using statement. And then it worked again!

So it’s better to keep the original code and to add only a SuppressMessage attribute:
[SuppressMessage("Microsoft.Reliability", "CA2000:Dispose objects before losing scope",
                 Justification = "HttpConfiguration must not be disposed, otherwise web api will not work")]
public void Configuration(IAppBuilder app)
{
  HttpConfiguration config = new HttpConfiguration();
  config.Routes.MapHttpRoute("API Default", "api/{controller}/{id}", new { id = RouteParameter.Optional });
}

Monday, April 29, 2013

Claims-based authentication in a web application using ACS

I like the idea of using existing user accounts for authentication instead of managing users by myself.
Otherwise I have to store user names and passwords in my database, I have to provide a user interface to create, change or delete them, and, most importantly, I have to ensure that everything is secure. Nothing I really want to bother with when I am developing some cool application.
The user doesn't like to create a new account for every web site she visits either. This is quite annoying, especially as she should use a different password for each site, which raises the need to store all those passwords somewhere.
To cut a long story short: why not use existing accounts like Google, Windows Live ID, Facebook, ...?

Integrating ACS

A good starting point is How to: Create My First Claims-Aware ASP.NET Application Using ACS. Unfortunately, this article has become a little bit outdated in the meantime (it was written in April 2011). Mainly the toolset (Visual Studio 2012, Identity and Access Tool) has changed.

Web Application

To demonstrate the principle, start with an ASP.NET Empty Web Application. Add a Web Form with some text in the body like
Hello <%= User.Identity.Name %>
When you start the debugger, you should see a web page with the text Hello, but without a user name, since no authentication is done yet.

Configure ACS

Before we can add authentication, we have to do some configuration. Just follow the first steps from the document mentioned above (How to: Create My First Claims-Aware ASP.NET Application Using ACS):
  • Step 1 - Create an Access Control Namespace
  • Step 2 – Launch the ACS Management Portal
  • Step 3 – Add Identity Providers

Identity and Access Tool

With Visual Studio 2012, the ACS integration is no longer done with an STS Reference. Instead, the Identity and Access Tool has to be used. You can download it from the Visual Studio Gallery. After installing it, you can start it from the project’s context menu in the Solution Explorer.
Since we want to connect to ACS, select the third option, Use the Windows Azure Access Control Service:
Now we have to configure the providers. Just click on the Configure link in the middle:
The ACS namespace is easy to know: it’s just the name you chose. The management key is not so easy to find:
  • Open Windows Azure Portal
  • Manage your ACS namespace
  • Select Management service
  • Click ManagementClient
  • Click Symmetric Key
Here you can copy the key:
The next step is to select the providers you want to use in your application, and to specify the realm and return url of your application. By default they are initialized with the url used for debugging in Visual Studio.
After clicking OK, you get a lot of stuff generated in your Web.config. Additionally, you can find an additional Relying Party Application in the Azure Portal (originally, this was step 4 from How to: Create My First Claims-Aware ASP.NET Application Using ACS).
When you now start the debugger, the first page lets the user select which identity provider she wants to use. After the login, the start page of your application should be displayed, showing the user name.


When you check the logon process in Fiddler, you will see the following requests:
  • http://localhost:58235/
    The URL of the application itself; it redirects to
  • the ACS page where the identity provider is selected; selecting one forwards to
  • the login page of the identity provider itself, here at Google; after confirmation it redirects back to
  • ACS, which finally redirects back to the application itself:
  • http://localhost:58235/


Since .NET 4.5 every Principal is based on a ClaimsPrincipal. That means every attribute of the user is a claim. It’s easy to query or display them:
System.Security.Claims.ClaimsPrincipal cp = (System.Security.Claims.ClaimsPrincipal)User;
foreach (var claim in cp.Claims)
{
  // e.g. display type and value of each claim
  Debug.WriteLine(claim.Type + " = " + claim.Value);
}
With a Google account, for example, you have at least four claims:

Specific claims

You can rely only on the existence of the nameidentifier and identityprovider claims. All other claims are optional, including such convenient things as the name or the email address. For example, Windows Live ID does not provide this information due to security restrictions.
This is especially annoying, since the name claim will be mapped to User.Identity.Name. When the claim is missing, the Name property is null, and you may get runtime errors because of that. Therefore it can be a good idea to provide a default value for this claim.
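Until such a default rule is in place, defensive code avoids surprises; a tiny sketch (my own, not from the original post):

```csharp
// User.Identity.Name is null when the identity provider sent no name claim,
// so guard against it before using the value
string displayName = User.Identity.Name ?? "unknown user";
```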
You can do this in the Windows Azure Portal in the section Rule groups. Select the rule group of your application; there you should see the Passthrough rules, which forward the claims from the identity provider to your application. Here you can add your own rules, e.g.:
  • Identity Provider: Windows Live ID
  • Input claim type: Any
  • Input claim value: Any
  • Output claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name
  • Output claim value: ???
  • Description: Default name for Windows Live ID
With this rule in place, every user authenticated with Windows Live ID has the name ???.


Also roles are now claims. That means you can also define a rule to apply a role:
  • Identity Provider: Windows Live ID
  • Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
  • Input claim value: (the name identifier of the user)
  • Output claim type: http://schemas.microsoft.com/ws/2008/06/identity/claims/role
  • Output claim value: admin
  • Description: Admin role for xxx
Now you can check for the existence of the role (or use one of the 1,000,000 other possibilities).
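A minimal sketch of such a check, assuming the admin rule above is active (User is the IPrincipal of the current page or controller):

```csharp
// classic IPrincipal check - role claims are honored automatically
if (User.IsInRole("admin"))
{
    // show admin-only content
}

// or query the claim directly
var cp = (System.Security.Claims.ClaimsPrincipal)User;
bool isAdmin = cp.HasClaim(System.Security.Claims.ClaimTypes.Role, "admin");
```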


Adding roles or names via rules is not very practical when you have more than 2 or 3 users. Therefore it is better to implement a ClaimsAuthenticationManager to extend the claims processing pipeline. There you can modify the claims as you want:
public class MyClaimsAuthenticationManager : ClaimsAuthenticationManager
{
  public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
  {
    if (incomingPrincipal != null && incomingPrincipal.Identity.IsAuthenticated)
    {
      ClaimsIdentity claimsIdentity = (ClaimsIdentity)incomingPrincipal.Identity;
      string identityProvider = claimsIdentity.Claims
          .Where(c => c.Type == "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider")
          .Select(c => c.Value)
          .FirstOrDefault();
      string nameIdentifier = claimsIdentity.Claims
          .Where(c => c.Type == ClaimTypes.NameIdentifier)
          .Select(c => c.Value)
          .FirstOrDefault();

      if (identityProvider == "uri:WindowsLiveID" && nameIdentifier == "FVUzvNwYGuC5cG4VYdWArf81SRj0QISjQpUIhaHonNE=")
      {
        claimsIdentity.AddClaim(new Claim(ClaimTypes.Name, "Markus Wagner"));
        claimsIdentity.AddClaim(new Claim(ClaimTypes.Role, "admin"));
      }
    }
    return incomingPrincipal;
  }
}
Finally you have to configure your application to use the new ClaimsAuthenticationManager. This can be done in the Web.config, inside the system.identityModel section:
<system.identityModel>
  <identityConfiguration>
    <claimsAuthenticationManager type="AcsAuthentication.MyClaimsAuthenticationManager, AcsAuthentication" />
  </identityConfiguration>
</system.identityModel>
Of course, in a real-world application you will not hard-code the claims here. Instead you will take them from a configuration file or a database. But the principle stays the same.


For authentication, ACS is already a good alternative, especially since you do not have to manage passwords yourself.
With authorization it gets more complicated: the basic building blocks exist, but it isn't really comfortable yet.

Sunday, April 28, 2013

Writing your own Glimpse Plugin

As I wrote in my last post, Glimpse is a great tool to diagnose your web application. And there are already a lot of plugins available. But maybe you need some additional info for which no plugin is available so far. In such a case it is quite easy to write your own plugin.
I will show it with a plugin for Log4Net. Before you ask: yes, there is already a Glimpse.Log4Net plugin. Unfortunately this package is outdated; it was written for Glimpse 0.86, and with 1.x it doesn't work anymore.
UPDATE 15-Jun-2013: In the meantime the Glimpse.Log4Net plugin has been updated to support Glimpse 1.x. Nevertheless the how-to below should still be interesting, since there is not a lot of documentation out there.

Log4Net and Trace

It is not absolutely necessary to use a special Log4Net plugin. Instead it is possible to redirect the Log4Net output to the tracing infrastructure, and for this Glimpse already provides a tab.
All you need is a new appender:
<appender name="TraceAppender" type="log4net.Appender.TraceAppender">
  <category value="%level" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%logger - %message" />
  </layout>
</appender>
After connecting it to a logger, you will see all relevant log lines in the trace tab. With the specification of the category tag, the trace output will have the same level as the original Log4Net log line. Therefore some of the outputs are prefixed with the corresponding icons:
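Connecting the appender to a logger is plain log4net configuration, e.g. for the root logger:

```xml
<root>
  <level value="DEBUG" />
  <appender-ref ref="TraceAppender" />
</root>
```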

Log4Net Plugin

The implementation of the plugin is straightforward: first we need a Log4Net appender which collects the messages to display in Glimpse, and we need an implementation for the tab in the Glimpse window. The rest is some infrastructure around it.
So let's have a look at the details.


The Log4NetMessage class is a simple container for the data transferred from the Log4Net appender to Glimpse. The only prerequisite is that it has to implement the interface Glimpse.Core.Message.IMessage:

public class Log4NetMessage : IMessage
{
  public Log4NetMessage()
  {
    Id = Guid.NewGuid();
  }

  public Guid Id { get; private set; }
  public TimeSpan FromFirst { get; set; }
  public TimeSpan FromLast { get; set; }
  public string ThreadName { get; set; }
  public string Level { get; set; }
  public string LoggerName { get; set; }
  public string Message { get; set; }
}


Next we need some additional infrastructure. The appender sends the message to the tab via a Glimpse.Core.Extensibility.IMessageBroker. This must be set from outside.
Therefore I added an implementation of Glimpse.Core.Extensibility.IInspector, whose Setup method will be called at startup. It takes the needed properties from the Glimpse context and stores them in static properties of my appender:
public class Log4NetInspector : IInspector
{
  public void Setup(IInspectorContext context)
  {
    GlimpseAppender.Initialize(context.MessageBroker, context.TimerStrategy);
  }
}


Now it's time for the real stuff, first the appender. The implementation is quite easy; a bigger part of the code is about the calculation of the elapsed time since the last call (this part could also be skipped).
public class GlimpseAppender : AppenderSkeleton
{
  private static IMessageBroker _messageBroker;
  private static Func<IExecutionTimer> _timerStrategy;

  private static Stopwatch fromLastWatch;

  public static void Initialize(IMessageBroker messageBroker, Func<IExecutionTimer> timerStrategy)
  {
    _messageBroker = messageBroker;
    _timerStrategy = timerStrategy;
  }

  protected override void Append(LoggingEvent loggingEvent)
  {
    if (_timerStrategy != null && _messageBroker != null)
    {
      IExecutionTimer timer = _timerStrategy();
      if (timer != null)
      {
        _messageBroker.Publish(new Log4NetMessage
          {
            ThreadName = loggingEvent.ThreadName,
            Level = loggingEvent.Level.DisplayName,
            LoggerName = loggingEvent.LoggerName,
            Message = loggingEvent.RenderedMessage,
            FromFirst = timer.Point().Offset,
            FromLast = CalculateFromLast(timer)
          });
      }
    }
  }

  private static TimeSpan CalculateFromLast(IExecutionTimer timer)
  {
    if (fromLastWatch == null)
    {
      fromLastWatch = Stopwatch.StartNew();
      return TimeSpan.FromMilliseconds(0);
    }

    // Timer started before this request, reset it
    if (DateTime.Now - fromLastWatch.Elapsed < timer.RequestStart)
    {
      fromLastWatch = Stopwatch.StartNew();
      return TimeSpan.FromMilliseconds(0);
    }

    var result = fromLastWatch.Elapsed;
    fromLastWatch = Stopwatch.StartNew();
    return result;
  }
}


Now we are ready for the tab's implementation. It inherits from Glimpse.Core.Extensibility.TabBase, and implements the interfaces Glimpse.Core.Extensibility.ITabSetup, Glimpse.Core.Extensibility.IKey and Glimpse.Core.Extensibility.ITabLayout. As you see, the biggest part is the definition of the layout, how the data will be displayed on the tab.
public class Log4NetTab : TabBase, ITabSetup, IKey, ITabLayout
{
  private static readonly object layout = TabLayout.Create()
    .Row(r =>
    {
      r.Cell(4).WidthInPercent(15).Suffix(" ms").AlignRight().Prefix("T+ ").Class("mono");
      r.Cell(5).WidthInPercent(15).Suffix(" ms").AlignRight().Class("mono");
    })
    .Build();

  public override object GetData(ITabContext context)
  {
    return context.GetMessages<Log4NetMessage>().ToList();
  }

  public override string Name
  {
    get { return "Log4Net"; }
  }

  public void Setup(ITabSetupContext context)
  {
    // make the published Log4NetMessages available to GetData
    context.PersistMessages<Log4NetMessage>();
  }

  public string Key
  {
    get { return "glimpse_log4net"; }
  }

  public object GetLayout()
  {
    return layout;
  }
}


The final part is a converter, which knows how to display a Log4NetMessage in the layout of Log4NetTab:
public class Log4NetMessagesConverter : SerializationConverter<IEnumerable<Log4NetMessage>>
{
  public override object Convert(IEnumerable<Log4NetMessage> obj)
  {
    var root = new TabSection("Level", "ThreadName", "LoggerName", "Message", "From Request Start", "From Last");
    foreach (var item in obj)
    {
      // The row creation was lost in the original post; this is a sketch
      // using Glimpse's fluent Assist API: one row per message, styled by level
      root.AddRow()
          .Column(item.Level)
          .Column(item.ThreadName)
          .Column(item.LoggerName)
          .Column(item.Message)
          .Column(item.FromFirst.TotalMilliseconds)
          .Column(item.FromLast.TotalMilliseconds)
          .Style(GetStyle(item.Level));
    }
    return root.Build();
  }

  private static string GetStyle(string levelDisplayName)
  {
    switch (levelDisplayName)
    {
      case "EMERGENCY":
      case "FATAL":
      case "ALERT":
        return FormattingKeywords.Fail;

      case "CRITICAL":
      case "SEVERE":
      case "ERROR":
        return FormattingKeywords.Error;

      case "WARN":
        return FormattingKeywords.Warn;

      case "NOTICE":
      case "INFO":
        return FormattingKeywords.Info;

      case "DEBUG":
      case "FINE":
      case "TRACE":
      case "FINER":
      case "VERBOSE":
      case "FINEST":
      case "ALL":
        return FormattingKeywords.Quiet;

      default:
        return FormattingKeywords.Quiet;
    }
  }
}


Finally you have to add the GlimpseAppender to your Log4Net configuration. When you now start your application, you should see a new Log4Net tab in the Glimpse window. Its content is similar to the Trace tab above, but the data is better structured.
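As a sketch, assuming the appender lives in a project and assembly called GlimpseLog4Net (the names are mine), the log4net configuration could look like this:

```xml
<appender name="GlimpseAppender" type="GlimpseLog4Net.GlimpseAppender, GlimpseLog4Net" />
<root>
  <appender-ref ref="GlimpseAppender" />
</root>
```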
NOTE: Be sure to really start the application from scratch, e.g. stop an already running IIS Express first. Otherwise the Glimpse setup may not work as expected.

You can find the source code at GitHub:

Saturday, April 27, 2013

Web diagnostics with Glimpse

Glimpse is a great tool to get more info about what is going on on your web server. And it is easy to use: for my demo, I created a new ASP.NET MVC 4 Web Application (it works also with other MVC versions, and also with ASP.NET Web Forms) with the Internet Application template (again, it works also with the other templates). Then I added the NuGet package Glimpse.Mvc4. Finally I pressed F5, the application started and - nothing changed! No hint of Glimpse.
The reason is that Glimpse has to be enabled first. For this I opened the page Glimpse.axd in the root of my web application:
I clicked on Turn Glimpse On, went back to my application's home page, refreshed it, and now the Glimpse icon appeared in the bottom right corner. Clicking on this icon, I got the Glimpse window, displaying several tabs with info used to produce the current web page:

NOTE: Glimpse stores the info whether it is enabled or not in a cookie called glimpsePolicy. That means when you access the page again later, Glimpse will already be enabled (or not).

When you look at the Glimpse tabs, some like Request, Server or Trace appear familiar from ASP.NET Tracing. But there are also some other tabs. Just play around with them a little bit.

Entity Framework Plugin

As useful as Glimpse is so far - that is not all. You can easily add additional tabs to Glimpse. A complete list of all available packages can be found on the Glimpse Extensions page.
The Internet Application template I used above contains some database access in combination with the user stuff (Log in, Register). Therefore I added now the Entity Framework Plugin.
I just had to install the NuGet package Glimpse.EF5. After compiling, I refreshed the home page and found a new SQL tab in the Glimpse window. However, the tab was disabled, since no database access was done for the home page.
Then I logged in, but again the SQL tab was disabled. This happened because several requests were executed for the login, and the last request didn't access the database. For problems like this, Glimpse provides the History tab:
Here I selected the correct request, clicked Inspect, and finally the SQL tab was enabled:
Great, isn't it? A lot of useful info, and no effort to get them.

Sunday, March 31, 2013

Self-host ASP.NET Web API and SignalR together

A few days ago, I wanted to build a Windows service providing some services via ASP.NET Web API. This was done easily: I just used the HttpSelfHostServer, and everything was fine.

But then I also wanted to inform the clients about some internal changes in my program in an asynchronous way. For this, ASP.NET SignalR seemed the perfect solution. But unfortunately, SignalR expects an OWIN host, not the HttpSelfHostServer.

Then I spent some time googling, and finally I found a solution. I had to add the following NuGet packages to my project:


All packages are in pre-release state (version 0.21.0-pre at the moment), but they already work. To fire up the OWIN host, I needed only the following few lines of code:

const string serverUrl = "http://localhost:8080";
using (WebApplication.Start<Startup>(serverUrl))
{
  // keep the host alive until the user stops it
  Console.ReadLine();
}

The configuration itself is done in the Startup class:

public class Startup
{
  public void Configuration(IAppBuilder app)
  {
    // Configure WebApi
    var config = new HttpConfiguration();
    config.Routes.MapHttpRoute(
      "API Default", "api/{controller}/{id}", new { id = RouteParameter.Optional });
    app.UseWebApi(config);

    // Configure SignalR (SignalR 1.x Owin extension)
    app.MapHubs();
  }
}
Quite easy if you know it...
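For completeness, pushing the internal changes to connected clients then only needs a hub. A minimal sketch for SignalR 1.x; the hub and method names (ChangesHub, notifyChange) are my own, not from the original post:

```csharp
using Microsoft.AspNet.SignalR;

// Clients connect to this (empty) hub to receive notifications.
public class ChangesHub : Hub
{
}

public static class ChangeNotifier
{
    // Call this from anywhere in the Windows service when something changes.
    public static void Broadcast(string change)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<ChangesHub>();
        context.Clients.All.notifyChange(change);
    }
}
```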