Blog Posts

Use an XBox Controller to control your Angular2 App

TLDR;

  1. For repeated multiple choice data entry, you might want to consider alternative forms of input, such as a gamepad.
  2. Gamepad support is provided by XInput on Windows.
  3. SendInput() allows sending keyboard events to foreground applications
  4. Combine both in a simple application
  5. Add Hotkey support to your Angular2 app
  6. Now you are able to control your app with an XBox Controller

Who would want that?

A customer had an interesting idea - he asked if there were alternative user input devices available that could be used for data entry in an Angular2 Intranet Application that I was working on.

His reasoning was that the WebApp was used for data entry - and lots of it! After login, the application basically consists of a main loop that presents its user with visual and textual information and asks the user to make a decision based on it. As the possible input is limited and speed of data entry is paramount, hotkey support was introduced. I've written about my take on hotkey support here. But additionally, the customer was worried about the amount of stress that an employee could suffer from repeated usage.

The customer asked for ergonomic input devices that could be used in addition to a keyboard. The idea was to allow users to switch freely between different devices if they felt uncomfortable using one.

Why the XBox Controller?

Gamepads are ergonomic input devices in widespread use that are also within budget. They are basically made for varied methods of input and allow extended usage with minimal strain.

Additionally, I knew that the WebApp was used as an intranet application and that the customer was using Windows machines, which have native support for XBox Controllers and on which additional software could be installed.

Support for gamepads and joysticks on Windows is provided via XInput or the now deprecated DirectInput API - both of them are a part of DirectX.

The XBox Controller works with XInput and DirectInput, and most third-party gamepads provide a hardware switch for changing between XInput and DirectInput. This comes in handy if your game or app supports only one of the two APIs.

Getting access to the XBox Controller

So the first obstacle was getting access to the XBox Controller, as you cannot access DirectX (and specifically XInput) directly from an Angular application. For this, a native Windows application was necessary.

I opted for a C# Wpf Application and chose SharpDX (http://sharpdx.org), a thin wrapper around the C++ DirectX API.

Polling the Controller

XInput comes with a poll-based API, which is perfect for its natural audience: games. They usually poll the state of the controllers inside their main loop and do immediate-mode rendering - it all fits together.

Example of polling the Keystroke:

var controller = new Controller(UserIndex.One);
Keystroke keystroke;
var result = controller.GetKeystroke(DeviceQueryType.Gamepad, out keystroke);
if (result.Success)
    ...

or the more concise:

var controller = new Controller(UserIndex.One);
if (controller.GetKeystroke(DeviceQueryType.Gamepad, out Keystroke keystroke).Success)
    ...

From Polling to Pushing

In this case, however, I am creating a normal Windows desktop application using retained-mode drawing, and I would prefer a push-based interface for the gamepad events. That's why I've chosen to wrap the poll-based API with push-based events inside my application.

For exposing the push events to the rest of the application, you can use C# events or plain callbacks using delegates, but I've chosen Reactive Extensions, as I greatly prefer the interface.

So my interface looks like this:

public interface IXInputService
{
    bool IsListening { get; set; }
    IObservable<Keystroke> Keystrokes { get; }
    IObservable<ControllerConnected> Connected { get; }
}

For implementing the continuous polling there are also a couple of options, for example a timer, a delayed task, a thread pool thread or the Application Idle event.

I've chosen to spin up a dedicated thread, as the polling is both long running and the timing between each poll request is fixed. A minimal sketch of the loop is shown below.
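The sketch assumes the service holds a private Subject<Keystroke> (from Reactive Extensions) that backs the Keystrokes observable of the interface above:

// requires: using System.Threading; using System.Reactive.Subjects; using SharpDX.XInput;
private readonly Subject<Keystroke> keystrokes = new Subject<Keystroke>();
private Thread pollThread;

public void Start()
{
    // a background thread, so it won't keep the process alive on shutdown
    this.pollThread = new Thread(this.PollLoop) { IsBackground = true };
    this.pollThread.Start();
}

private void PollLoop()
{
    var controller = new Controller(UserIndex.One);
    while (true)
    {
        Keystroke keystroke;
        // only poll while listening; push each polled keystroke to the Rx subscribers
        if (this.IsListening
            && controller.IsConnected
            && controller.GetKeystroke(DeviceQueryType.Gamepad, out keystroke).Success)
        {
            this.keystrokes.OnNext(keystroke);
        }
        Thread.Sleep(10); // the fixed delay between polls
    }
}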

To test it out, I created a small Wpf prototype app (XInput2Key) that reads out the state of the gamepad's buttons and shows whether they are currently pressed.

Screenshot of XInput2Key

Interfacing with Angular2

Now I was able to read the gamepad input, and the only thing that remained was interfacing with the Angular2 application. As I mentioned previously, the application already had hotkey support, so the obvious choice was to just send keyboard input to the Angular application.

This has the additional advantage that the applications are very loosely coupled and you can use one without the other. The XInput application can thus work with any other application as well, as long as the target app has some kind of hotkey support.

As I had already taken a dependency on the host operating system being Windows (for the DirectX support), I took a quick look at the native Windows API and chose the SendInput() function. This function allows sending keystrokes and mouse events to the foreground application, which in my case would be a browser with the loaded Angular2 application.

To access the native C WinApi method, I needed [DllImport] declarations for SendInput(), which I luckily found on PInvoke.net.
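For reference, a trimmed-down version of those declarations, plus a small helper that presses and releases a single key, might look like this (a sketch based on the PInvoke.net declarations; error handling omitted):

using System;
using System.Runtime.InteropServices;

static class KeyboardEmulator
{
    const uint INPUT_KEYBOARD = 1;
    const uint KEYEVENTF_KEYUP = 0x0002;

    [StructLayout(LayoutKind.Sequential)]
    struct KEYBDINPUT { public ushort wVk, wScan; public uint dwFlags, time; public IntPtr dwExtraInfo; }

    [StructLayout(LayoutKind.Sequential)]
    struct MOUSEINPUT { public int dx, dy; public uint mouseData, dwFlags, time; public IntPtr dwExtraInfo; }

    // the union contains MOUSEINPUT as well, so the struct has the full size that SendInput() expects
    [StructLayout(LayoutKind.Explicit)]
    struct InputUnion
    {
        [FieldOffset(0)] public KEYBDINPUT ki;
        [FieldOffset(0)] public MOUSEINPUT mi;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct INPUT { public uint type; public InputUnion U; }

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    // sends a key-down followed by a key-up for the given virtual-key code
    public static void SendKey(ushort vk)
    {
        var inputs = new[]
        {
            new INPUT { type = INPUT_KEYBOARD, U = new InputUnion { ki = new KEYBDINPUT { wVk = vk } } },
            new INPUT { type = INPUT_KEYBOARD, U = new InputUnion { ki = new KEYBDINPUT { wVk = vk, dwFlags = KEYEVENTF_KEYUP } } }
        };
        SendInput((uint)inputs.Length, inputs, Marshal.SizeOf(typeof(INPUT)));
    }
}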

Thanks to both SharpDX and PInvoke.Net I was quickly able to throw together a prototype application that:

  1. reads in a config file that maps gamepad buttons -> keyboard keys (see the sketch after this list)
  2. listens for gamepad button presses using XInput
  3. maps them to keystrokes
  4. sends the keystrokes to the foreground application using SendInput()
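To give an idea of steps 1 and 3, the parsed mapping could end up in a simple dictionary like this (a hypothetical example - the key names and codes depend on your config format; GamepadKeyCode is SharpDX's enum found on Keystroke.VirtualKey, and the values are Win32 virtual-key codes):

// requires: using System.Collections.Generic; using SharpDX.XInput;
var buttonToKey = new Dictionary<GamepadKeyCode, ushort>
{
    { GamepadKeyCode.A,         0x0D }, // gamepad 'A' -> VK_RETURN
    { GamepadKeyCode.DPadLeft,  0x25 }, // d-pad left  -> VK_LEFT
    { GamepadKeyCode.DPadRight, 0x27 }  // d-pad right -> VK_RIGHT
};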

Sample App

I've pushed the resulting app to github (https://github.com/8/XInput2Key) in case you want to try it yourself.

If you check the Checkbox 'Is Emulating Keys', then the mapped keyboard input is sent to the foreground window. If you open up an instance of your favourite text editor, you can try it out.

Screenshot of XInput2Key

The only caveat is that, due to security constraints of SendInput(), it cannot send input to applications running with administrator rights unless the sending app is itself started with administrator rights.

Setting up Angular2 to deal with Hotkeys

For hotkey support in angular I am basically using the javascript library mousetrap with an angular2 wrapper. If you're interested in how I've done that, you can check out my blog post about using Hotkeys in Angular2.


Hotkeys in Angular2

Background

For the app that I was building for a customer, I needed hotkey support for Angular2. For a plain old javascript web app, I had used the excellent javascript library mousetrap (https://craig.is/killing/mice) to great success, and I wanted to use it in my angular2 app as well.

Mousetrap for angular => angular2-hotkeys

As it turns out, somebody already created a nice angular2 wrapper for mousetrap called angular2-hotkeys (https://github.com/brtnshrdr/angular2-hotkeys) that wraps mousetrap and allows you to import a HotkeysService and register keys with it.

To install it, simply follow the instructions in the README.

Now a component can just request the HotkeysService in its constructor and register a hotkey for itself by invoking the HotkeysService.add() method.

Additionally, the component should also remove the hotkey once it gets destroyed. To do this, we store the returned value of the HotkeysService.add() method and supply it as an argument to the HotkeysService.remove() method when the component is destroyed.

In Angular, this can be done by implementing OnDestroy and its ngOnDestroy method. When the component gets destroyed, angular invokes the method and the previously registered hotkey is removed.

A complete example could look like this:

import { Component, OnDestroy } from '@angular/core';
import { HotkeysService, Hotkey } from 'angular2-hotkeys';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnDestroy {
  title = 'app works!';
  hotkeyCtrlLeft: Hotkey | Hotkey[];
  hotkeyCtrlRight: Hotkey | Hotkey[];

  constructor(private hotkeysService: HotkeysService) {
    this.hotkeyCtrlLeft = hotkeysService.add(new Hotkey('ctrl+left', this.ctrlLeftPressed));
    this.hotkeyCtrlRight = hotkeysService.add(new Hotkey('ctrl+right', this.ctrlRightPressed));
  }

  ctrlLeftPressed = (event: KeyboardEvent, combo: string): boolean => {
    this.title = 'ctrl+left pressed';
    return true;
  }

  ctrlRightPressed = (event: KeyboardEvent, combo: string): boolean => {
    this.title = 'ctrl+right pressed';
    return true;
  }

  ngOnDestroy() {
    this.hotkeysService.remove(this.hotkeyCtrlLeft);
    this.hotkeysService.remove(this.hotkeyCtrlRight);
  }
}

Beyond "Hello World!"

Now this works fine for a simple app, but there are a couple of problems:

  • If one component registers a hotkey and a second component registers the same hotkey, the previous subscription is overridden.
  • Additionally, the subscription / unsubscription logic leaks into each and every component that wants to register a hotkey.
  • The hotkey events are not exposed as the flexible Observable<T> we have come to expect in angular.
  • Keys are hardcoded inside each component and therefore difficult to change.

Wrapping Hotkeys in a CommandService

To solve these problems, I've introduced a CommandService. It's basically an EventAggregator that, upon initialization, reads in a config.json specifying which keys should be mapped to which commands. It exposes an Observable and registers all the hotkeys specified in the config.json.

Every time one of those keys is pressed, the corresponding commands are triggered. Instead of importing the HotkeysService itself, all components import the CommandService and subscribe to its observable. If the user presses a registered hotkey, a Command is triggered and each component checks whether it is interested in the command and, if so, takes action.

Besides allowing easy updating of the hotkeys by editing the config.json, this moves the hotkey registration code to one place, which makes switching the hotkeys library a breeze (in case that should ever be necessary). This approach also captures the essence of what the hotkeys are doing - they are issuing a command to components. It also allows reusing the CommandService to explicitly raise those commands from other components.

An implementation of the CommandService looks like this:

CommandService.ts

import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import { HotkeysService, Hotkey } from 'angular2-hotkeys';
import { Subject } from 'rxjs/Subject';
import { Observable } from 'rxjs/Observable';
// toPromise() is a patch-style operator in rxjs 5 and must be imported explicitly
import 'rxjs/add/operator/toPromise';

class HotkeyConfig {
  [key: string]: string[];
}

class ConfigModel {
  hotkeys: HotkeyConfig;
}

export class Command {
  name: string;
  combo: string;
  ev: KeyboardEvent;
}

@Injectable()
export class CommandService {

  private subject: Subject<Command>;
  commands: Observable<Command>;

  constructor(private hotkeysService: HotkeysService,
              private http: Http) {
    this.subject = new Subject<Command>();
    this.commands = this.subject.asObservable();
    this.http.get('assets/config.json').toPromise()
      .then(r => r.json() as ConfigModel)
      .then(c => {
        for (const key in c.hotkeys) {
          const commands = c.hotkeys[key];
          hotkeysService.add(new Hotkey(key, (ev, combo) => this.hotkey(ev, combo, commands)));
        }
      });
  }

  hotkey(ev: KeyboardEvent, combo: string, commands: string[]): boolean {
    commands.forEach(c => {
      const command = {
        name: c,
        ev: ev,
        combo: combo
      } as Command;
      this.subject.next(command);
    });
    return true;
  }
}

A config.json example:

{
  "hotkeys": {
    "left": [ "MainComponent.MoveLeft" ],
    "right": [ "MainComponent.MoveRight" ],
    "ctrl+left": [ "AppComponent.Back", "MainComponent.MoveLeft" ],
    "ctrl+right": [ "AppComponent.Forward", "MainComponent.MoveRight" ]
  }
}

A consuming component would look like this:

MainComponent.ts

import { Component, OnDestroy } from '@angular/core';
import { Command, CommandService } from './command.service';
import { Subscription } from 'rxjs/Subscription';

@Component({
  moduleId: module.id,
  selector: 'main',
  templateUrl: 'main.component.html'
})
export class MainComponent implements OnDestroy {
  command: string = 'None';
  subscription: Subscription;
  constructor(private commandService: CommandService) {
    this.subscription = commandService.commands.subscribe(c => this.handleCommand(c));
  }
  handleCommand(command: Command) {
    switch (command.name) {
      case 'MainComponent.MoveLeft': this.command = 'left!'; break;
      case 'MainComponent.MoveRight': this.command = 'right!'; break;
    }
  }
  ngOnDestroy() {
    this.subscription.unsubscribe();
  }
}

Sample App

I've pushed a simple sample app that uses the CommandService to github (https://github.com/8/hotkey-sample).


Using Win10 Built-in OCR

TLDR;

To get OCR in a C# Console, Wpf or WinForms app:

  1. run on a modern Windows version (e.g. Win10)
  2. add the nuget package UwpDesktop
  3. add the following code:
var engine = OcrEngine.TryCreateFromLanguage(new Windows.Globalization.Language("en-US"));
string filePath = TestData.GetFilePath("testimage.png");
var file = await Windows.Storage.StorageFile.GetFileFromPathAsync(filePath);
var stream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read);
var decoder = await Windows.Graphics.Imaging.BitmapDecoder.CreateAsync(stream);
var softwareBitmap = await decoder.GetSoftwareBitmapAsync();
var ocrResult = await engine.RecognizeAsync(softwareBitmap);
Console.WriteLine(ocrResult.Text);

OCR Troubles

When UWP (= Universal Windows Platform) apps were introduced, I was interested in what new APIs came with them. Soon the OcrEngine (https://docs.microsoft.com/en-us/uwp/api/windows.media.ocr.ocrengine) piqued my interest, because it promised a simple and quick way to retrieve text from images.

A simple OcrEngine was something I had been looking for, as the alternatives are big and cumbersome to use (I am looking at you, Tesseract), discontinued (MODI, which was included with Office), cloud-based and/or expensive.

Back then, the problem was that you needed to create a UWP application to access the UWP APIs, but at the same time a UWP application was completely sandboxed! You couldn't even use any cross-process communication (with the exception of using the cloud and a very basic file-based approach).

That meant I couldn't use the OcrEngine in a Windows service or web service, or even from the commandline!

So with that being the case, I put together a quick solution using Tesseract, but I never got around to tuning it and it never performed well.

UwpDesktop

Time went by and then the great Lucian Wischik (https://blogs.msdn.microsoft.com/lucian) published the library uwp-desktop (https://github.com/ljw1004/uwp-desktop) as a nuget package called UwpDesktop.

This package made UWP APIs available to Applications based on the normal .NET Framework. When I read the announcement, I was instantly reminded of my previous failure to make use of the OcrEngine and finally today I took it out for a spin and it worked great!

Example Code

The following code reads in the supplied file and prints out the detected text:

// create an OCR engine for English
var engine = OcrEngine.TryCreateFromLanguage(new Windows.Globalization.Language("en-US"));

// open the image file and decode it into a SoftwareBitmap
string filePath = TestData.GetFilePath("testimage.png");
var file = await Windows.Storage.StorageFile.GetFileFromPathAsync(filePath);
var stream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read);
var decoder = await Windows.Graphics.Imaging.BitmapDecoder.CreateAsync(stream);
var softwareBitmap = await decoder.GetSoftwareBitmapAsync();

// run the recognition and print the detected text
var ocrResult = await engine.RecognizeAsync(softwareBitmap);
Console.WriteLine(ocrResult.Text);

Example Application

I've put together a very simple example app and pushed it to github (https://github.com/8/ConsoleUwpOcr) that makes use of the OcrEngine.

Example Output:

ocr.exe ..\..\..\ConsoleUwpOcr.Test\TestData\testimage.png
Welcome to Thunderbird Donate to Thunderbird Thunderbird IS the leading open source, cross- platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. If you like Thunderbird, please consider a donation! By donating, you Will help us to continue delivering an ad-free top-notch email client. Make a donation » Other ways to contribute to Thunderbird Now IS a great time for you to get involved: writing code, testing, support, localization and more. Join a global community! Share your skills and Pick up a few new ones along the way. Volunteer as much as you like. Or as little. It's totally up to you. Learn more » Why we need donations You might already know that Thunderbird improvements are no longer paid for by Mozilla. Fortunately there IS an active community keeping it running and developing it further. But to survive long term, the project needs funding. Thunderbird IS currently transitioning to an independent organization. Being independent, we can shape our own fate, but there IS significant infrastructure that must be majntajned to deliver the application to our tens of millions of users. For Thunderbird to survive and continue to evolve, we need your support and ask for your donation today. All the money donated Will go directly to funding Thunderbird development and infrastructure.


Fixing ORA-06502 in C#

A bug appears

A few days ago a friend asked me to help him figure out a bug that was reported. Thanks to the error report he was already able to trace the bug to a specific code fragment, but he was wondering why it failed - the code looked perfectly fine.

The cause

The code was interfacing with an Oracle database and was calling a stored procedure. As a stored procedure can't return a value by itself, the usual way to retrieve data is to use an output parameter. The C# code sets the parameter up by declaring its type, size and direction, and the stored procedure is then able to access and update it. After control returns to the caller, you can access the parameter and use its filled value.

The error message was: "ORA-06502: PL/SQL: numeric or value error: character string buffer too small".

In this case the calling code was:

cmd.Parameters.Add("param1", OracleDbType.Varchar2, 512, ParameterDirection.Output);

When we took a look at the signature of the offending method, we quickly spotted the bug:

public OracleParameter Add(string name, OracleDbType dbType, object val, ParameterDirection dir)

The problem was with the third parameter - the caller thought he was initializing the size of the output parameter, but instead he was supplying the initial value - which isn't used for an output parameter anyway.

Fixing the code was easy: set the size of the parameter explicitly. We could have called it a day, but I've seen a bug just like this before, so I wondered how this mistake could have happened and took a second look.
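The fix, as a sketch (assuming the ODP.NET OracleParameter API): add the parameter first, then set size and direction explicitly, which leaves no room for overload ambiguity.

// add the parameter without a size or value...
var param = cmd.Parameters.Add("param1", OracleDbType.Varchar2);
// ...then state the intent explicitly
param.Size = 512;                            // the buffer size for the output value
param.Direction = ParameterDirection.Output; // filled in by the stored procedure

cmd.ExecuteNonQuery();

// after the call returns, the output value is available
string value = param.Value.ToString();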

The root cause

Digging deeper, I found the following overload:

public OracleParameter Add(string name, OracleDbType dbType, int size)

And then it dawned on me! Can you see what happened?

When the caller of the method started typing, he saw the overload where the third parameter is the size of type int, and the correct overload was chosen.

It looked something like this:

cmd.Parameters.Add("param1", OracleDbType.Varchar2, 512

But he didn't stop there - he continued on, because he wanted to supply the ParameterDirection as well, and the moment he did so, the other overload was chosen, where the third parameter is the value and not the size!

The caller didn't notice, as an int converts nicely to an object and the signatures match up.

Bad API design

The culprit in this scenario was bad design on Oracle's behalf: they added overloads that change the semantics of a parameter at a certain position.

Had the types of the parameters differed sufficiently, it would only have been a nuisance for the developer, as the compiler would have caught the error. But to make matters worse, the types used were implicitly convertible from one to the other, and therefore the compiler was of no help.

General guidelines for Member Overloading are documented nicely on MSDN, and while you may freely ignore those design principles in your own applications (even if they make sense there too), in a professional, public-facing API you really should follow them - your customers will thank you for it.

References

Member Overloading on msdn (https://msdn.microsoft.com/en-us/library/ms229029(v=vs.110).aspx)

Binding your View to your ViewModel in Wpf

Overview

When you are using Mvvm you need a way to bind your view to the ViewModel.

While this is always done by binding the DataContext Property of a View to an instance of the specific ViewModel class, there are generally two different scenarios:

  1. The ViewModel can be retrieved using the current DataContext.
  2. The ViewModel needs to be retrieved from a global source or created on demand.

While the first scenario is straightforward, the second is a little more tricky, and in this article I'll show a simple pattern that you can use in your applications to simplify the binding process.

Different Scenarios

If the ViewModel can be retrieved using the current DataContext of a View, then I'll call this scenario "Hierarchical ViewModels" and if this is not the case, then I'll call them "Independent ViewModels".

Hierarchical ViewModels

Background

In the first scenario, you are already within a view that is bound to a ViewModel, and you want to bind a child view to a ViewModel that is different from the one in the current DataContext, but one that it holds a reference to.

Often the parent is created by a DependencyInjection container that supplies the child ViewModel to the parent inside its constructor on instantiation.

The parent then exposes the child ViewModel via a public property.

In the parent view, you can just create a binding that sets the DataContext of the child to this property.

Example

In the following section, I've included an example for a hierarchical setup.

Consider this example consisting of two Views:

  • MasterView
  • DetailView

that are bound to these two ViewModels:

  • MasterViewModel
  • DetailViewModel

MasterViewModel.cs

public class MasterViewModel
{
  public DetailViewModel Detail { get; set; }
  ...
}

DetailViewModel.cs

public class DetailViewModel
{
  ...
}

Then the following code can be used to bind the DetailView inside of the MasterView to the DetailViewModel contained in the MasterViewModel's Detail property:

MasterView.xaml
<v:DetailView DataContext="{Binding Detail}" />

Note: As the Source of the Binding defaults to the current DataContext, the Source property of the Binding does not need to be set explicitly and the Binding only needs to contain the path to the ViewModel.

Independent ViewModels

Background

Then there are Independent ViewModels, which don't know of each other. This is the case for all base ViewModels, which are the "entry points" for all hierarchical setups.

Some examples:

  • The first ViewModel that is bound to a view, e.g. MainViewModel, when there is no existing DataContext
  • Independent ViewModels for cross-cutting concerns like
    • navigation, e.g. the Menu
    • information, e.g. the Statusbar and Notifications

In those cases, the current DataContext of the View does not contain a property that we can use, so we need to access some kind of central Locator or Factory, which is probably backed by an IoC container that knows how to retrieve or create the requested ViewModel.

Example

Now, setting up independent ViewModels, or the first ViewModel when no DataContext is yet available, requires a little more work.

I've come up with this simple pattern to make the ViewModels available for binding inside each view:

  1. Create a Locator that exposes the ViewModels that your views require via properties.

  2. Add an instance of the Locator as a static resource to your Application. This should be done on Application Startup, for example by:
  • Creating an instance declaratively in your app.xaml file
  • Subscribing to your application's Startup event and setting it using the Resources property.

  3. Bind the View to the ViewModel by using the Locator as a Source using the StaticResource Binding.

Locator

As the first step, we need to create a Locator that exposes the ViewModels that will be requested from within a view.

The Locator exposes the ViewModels as properties, which makes binding against them easy using the normal binding syntax.

You can take the manual approach, where you implement each new ViewModel as a new property of the Locator yourself, or you can use a dynamic approach, where the ViewModels are looked up by your dependency injection container.

An example of the manual approach would look something like this:

Locator.cs (manual)
public class Locator
{
    public MainViewModel MainViewModel { get { return new MainViewModel(); } }
}

As the manual approach gets tedious really fast, I've opted for the dynamic approach.

My implementation is based on DynamicObject, which allows me to forward the property access to a dependency resolver for fulfillment.

Locator.cs (dynamic)
/// <summary>Locator that forwards property access to the Dependency Resolver</summary>
public class Locator : DynamicObject
{
  /// <summary>Gets the resolver that is used to map a property access to an instance</summary>
  public Func<string, object> Resolver { get; private set; }

  public Locator(Func<string, object> resolver)
  {
    this.Resolver = resolver;
  }

  public override bool TryGetMember(GetMemberBinder binder, out object result)
  {
    bool successful;

    string property = binder.Name;

    var resolver = this.Resolver;

    if (resolver != null)
    {
      try
      {
        result = resolver(property);
        successful = true;
      }
      catch { result = null; successful = false; }
    }
    else
    {
      result = null;
      successful = false;
    }

    return successful;
  }
}

The Locator is supplied with a Func in its constructor that resolves the request for the ViewModel based on the requested property name.

For example, let's say you are using Autofac as a dependency resolver and you've configured Autofac to resolve your ViewModels by looking them up from your ViewModel namespace using a simple convention like this:

ContainerBuilder builder = new Autofac.ContainerBuilder();
builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
  .InNamespace("WpfMvvmExample.ViewModel");
var container = builder.Build();

Now you can create an instance of the Locator like this:
var locator = new Locator(property => container.Resolve(Type.GetType($"WpfMvvmExample.ViewModel.{property}")));

Note: As the binding path used inside a view is just a string - evaluated at runtime and not strongly typed - using a dynamic object fits nicely, as we don't lose any type information.

Add the initialized Locator

Now it's time to add the Locator someplace where your views can access it. The Application_Startup method inside the App class is a good place for that.

App.xaml.cs
ContainerBuilder builder = new Autofac.ContainerBuilder();
builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
  .InNamespace("WpfMvvmExample.ViewModel")
  .SingleInstance();
        
var container = builder.Build();

this.Resources["Locator"] = new Locator(property => container.Resolve(Type.GetType($"WpfMvvmExample.ViewModel.{property}")));

Bind the View to the ViewModel

Finally, we can use the Locator inside a view to bind to a ViewModel. We reference the Locator as the Binding's Source property and point the Path to the required ViewModel type.

MainWindow.xaml
<v:MainView DataContext="{Binding MainViewModel, Source={StaticResource Locator}}" />

Example Code on github

I've uploaded a small example application to github that contains the Locator and the setup, in case it is useful for anybody else.

If you have any comments please feel free to drop me a line in the comments below, thanks!

Take care,
-Martin


SkiaSharp with Wpf Example

Background

After SkiaSharp was announced by Miguel de Icaza on his blog, I downloaded the nuget package, took it for a spin and used it for some image manipulation.

While the sample code got me started, it was written for System.Drawing/GDI+, and when I later wanted to use SkiaSharp in a Wpf app, I didn't find any sample code for that. So I wrote some code and this blog post, in case someone else might find it useful.

Drawing a Bitmap in Wpf

ImageSource and WriteableBitmap

Basically, when you're using Wpf you most often want an ImageSource, for example to display it within an Image control. When creating an ImageSource yourself, the WriteableBitmap comes in handy. It is not only a subclass of ImageSource, it's also double buffered, which allows a smooth update process.

Sourcecode

I've written the following code to do that:

public WriteableBitmap CreateImage(int width, int height)
{
  return new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgra32, BitmapPalettes.Halftone256Transparent);
}
public void UpdateImage(WriteableBitmap writeableBitmap)
{
  int width  = writeableBitmap.PixelWidth,  // use the pixel dimensions here:
      height = writeableBitmap.PixelHeight; // Width/Height are DIPs, not pixels
  writeableBitmap.Lock();
  using (var surface = SKSurface.Create(
    width: width,
    height: height,
    colorType: SKColorType.Bgra_8888,
    alphaType: SKAlphaType.Premul,
    pixels: writeableBitmap.BackBuffer,
    rowBytes: width * 4))
  {
    SKCanvas canvas = surface.Canvas;
    canvas.Clear(new SKColor(130, 130, 130));
    canvas.DrawText("SkiaSharp on Wpf!", 50, 200, new SKPaint() { Color = new SKColor(0, 0, 0), TextSize = 100 });
  }
  writeableBitmap.AddDirtyRect(new Int32Rect(0, 0, width, height));
  writeableBitmap.Unlock();
}

Basically, what we want to do is:

  • Create a WriteableBitmap of the appropriate size
  • Update the WriteableBitmap with Skia
    1. Lock the Backing Buffer
    2. Use Skia with the matching pixelformat to draw into the backing buffer
    3. Mark the Bitmap as dirty
    4. Unlock the Bitmaps Backing Buffer again

Don't forget to mark the updated region of the bitmap as dirty, else nothing is going to happen!

Example Wpf App

Now that I was able to render a Wpf image with Skia, and since the WriteableBitmap class supports double buffering, I wanted to create a quick app that updates the image once per frame.

For that, I've subscribed to the CompositionTarget.Rendering event and updated the render method to draw the number of elapsed frames. You can see the output in the screenshot below.
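The wiring looks roughly like this (a sketch - it assumes the CreateImage()/UpdateImage() methods from above, with UpdateImage() extended to draw the current frame count, and an Image control named 'image' in the window):

int frameCount;

private void MainWindow_Loaded(object sender, RoutedEventArgs e)
{
    var writeableBitmap = this.CreateImage(640, 480);
    this.image.Source = writeableBitmap;

    // raised by Wpf once per frame, just before the composition pass
    CompositionTarget.Rendering += (s, args) =>
    {
        this.frameCount++;
        this.UpdateImage(writeableBitmap); // re-draws via Skia, e.g. printing frameCount
    };
}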

Screenshot

Screenshot of SkiaSharp Wpf Example Application

Sourcecode on Github

If you're interested in the example app, I've uploaded the source of the SkiaSharp Wpf Example Application to github at https://github.com/8/SkiaSharp-Wpf-Example.

If you find any of that useful or I am missing something, please feel free to drop me a comment below, thanks!

Take care,
Martin


Creating Custom Knockout Bindings

Background

I've been using and enjoying knockout.js for some time now.

It's a great library that allows you to use MVVM in web applications and keeps you from writing spaghetti code to manipulate the DOM, without requiring a switch to a monolithic framework and the associated downsides like lock-in and too many abstractions over plain html.

Using knockoutjs, you are still free to use DOM manipulation yourself if and when you need it. The great thing is, it's also easily extendable.

Extending Knockout

Why is being extendable a big plus and why would you want to extend knockout? Is something essential missing from knockout?

Nope, I don't think so.

Instead of growing into a monolithic framework, it just solves a particular problem, namely factoring out the UI glue code into reusable bindings. It comes with almost all the bindings you could think of by default, but it doesn't try to be everything for everyone - and that's where custom bindings come in.

Using custom binding handlers, it offers you the chance to stick to DRY and to use declarations instead of repeating javascript snippets over and over again.

That often comes in handy when you need to reuse some javascript code in multiple places and the code is tied to an element defined in html.

In the next few paragraphs I'll show some small, exemplary binding handlers that have proven useful to me - nothing fancy.

Example BindingHandlers

I've been using some small knockout bindings that use jquery's fadeIn() / fadeOut() and slideDown() / slideUp() methods to achieve simple animations on an element.

FadeVisible

The binding is defined in the following few lines:

ko.bindingHandlers.fadeVisible = {
  init: function (element, valueAccessor) {
    var value = valueAccessor();
    $(element).toggle(ko.unwrap(value));
  },
  update: function (element, valueAccessor) {
    var value = valueAccessor();
    ko.unwrap(value) ? $(element).fadeIn() : $(element).fadeOut();
  }
};

SlideDownVisible

The definition for the slideDown binding looks almost identical:

ko.bindingHandlers.slideDownVisible = {
  init: function (element, valueAccessor) {
    var value = valueAccessor();
    $(element).toggle(ko.unwrap(value));
  },
  update: function (element, valueAccessor) {
    var value = valueAccessor();
    ko.unwrap(value) ? $(element).slideDown() : $(element).slideUp();
  }
};

In turn, they are both very similar to the example binding in knockout's custom-binding documentation, which also shows a binding that uses slideDown() and slideUp().

Usage

As for usage, you'd replace the default 'visible' binding with 'fadeVisible' or 'slideDownVisible' respectively.

<div data-bind="fadeVisible: isVisible">
...

Nuget package

I've used the slideDownVisible binding in a couple of projects already and finally got sick of copy/pasting it, so I've packaged the bindings as nuget packages named 'knockout-fadeVisible' and 'knockout-slideDownVisible' and uploaded them to nuget.org, so that I can add them faster the next time I need them. The (very short) source is on github as well.

Bootstrap Modal

Another example of transforming javascript glue code into a declarative knockout binding would be the following modalVisible binding:

ko.bindingHandlers.modalVisible = {
  init: function (element, valueAccessor) {
    var value = valueAccessor();
    /* init the modal; pass 'show' explicitly, as calling $(element).modal()
       without arguments would show the modal right away */
    $(element).modal({ show: ko.unwrap(value) });
    /* subscribe to the 'hidden' event and update the observable, if the modal gets hidden */
    $(element).on('hidden.bs.modal', function (e) {
      if (ko.isObservable(value)) { value(false); }
    });
  },
  update: function (element, valueAccessor) {
    var value = valueAccessor();
    ko.unwrap(value) ? $(element).modal('show') : $(element).modal('hide');
  }
}

It wraps the bootstrap javascript glue code in a tidy, nice-to-use knockout binding:

<div class="modal fade" data-bind="modalVisible: isVisible"...

This takes care of initializing the modal and allows controlling its visibility using an observable. It handles hiding and showing of the modal and therefore removes the need to manipulate the DOM from my ViewModel's javascript code.

Conclusion

Knockoutjs is a nice and flexible library that is not only easy to get started with, but also easy to extend.

Creating custom binding handlers may save you from writing repetitive and error-prone code and allows you to put view-specific code declaratively right on the target html element, which makes reasoning about your view easier.

Decoupling jQuery and other DOM manipulation code from your normal code also makes that code simpler to test.

Take care,
Martin


Debugging JavaScript in Visual Studio

TL;DR

  1. Start chrome in remote debug mode: chrome.exe --remote-debugging-port=9222
  2. Attach Visual Studio: "Debug" -> "Attach to Process..." -> select the chrome instance
  3. Done.

Justifying a use case

So you are still reading? Fine, then I can do some rambling. I was developing a JavaScript WebApp with some complicated client code - it's built like a game loop, using requestAnimationFrame and canvas to render multiple videos onscreen and play synced audio. Like most software it worked, but sometimes it would glitch, and I was trying to figure out what caused it.

Now what I wanted was to debug the code, preferably from the comfort of my IDE, which happens to be Visual Studio, but while Visual Studio supports debugging JavaScript via Internet Explorer out of the box, it does not support any other browser.

More often than not that's not a big problem: you just fire up IE, wonder why you never changed the startup page to something reasonable, and use it just once for debugging.

But not in this case, as I was making use of AudioContext and other shiny new WebApi stuff that is available in Chrome and Firefox already, but - you guessed it - not in IE.

You could of course do what everyone else would do and use the built-in chrome developer tools, which are great imho, but that would incur the cost of mental task switches for using a different IDE that does not share the same syntax highlighting, hotkeys and general workflow you have come to be so productive with. So for the sake of this article I count switching to different tooling as giving up.

Wondering if it's possible...?

So I started wondering, if the big V is able to debug javascript running in chrome.

The first hint that the consensus is "it won't work": out of the box, selecting chrome as your browser in Visual Studio and starting a debug session does not work, while it does for IE.

Quick Robin, to the googlemobile!
Almost every IT-SuperHero

But the googlemobile failed hard this time: the top search results talked about a Native Client and C++ code, a thread from last year said that it's not possible, and Visual Studio's integrated extension search turned up nothing.

But on the other hand, I had already tried Visual Studio's Node.js Tools, and I remember vividly being amazed that debugging just worked. Okay - so because Node.js and chrome both use V8 as their JavaScript engine, Visual Studio must already be able to debug it.

Fiddlin' around

So I ignored that Visual Studio does not start debugging if you are using chrome and simply tried to attach it to chrome using DEBUG -> Attach to Process... - and while that did not work, I noticed something interesting:

In the "Code Type" selection I found a listing for Webkit!

Now I knew that Visual Studio could do it and even expects me to use Debug and Attach, so it's probably chrome that doesn't cooperate - which makes sense as a sane default.

Solution

So when I returned to google again, I knew what to look for, and a search for chrome remote debugging brought me to this page, where the missing part of my answer was waiting:

  1. Start chrome in remote debug mode: chrome.exe --remote-debugging-port=9222
  2. Attach Visual Studio: "Debug" -> "Attach to Process..." -> select the chrome instance
  3. Done.


Exporting a lot of files at once from M-Files

Background

I've written about how to do a mass file import into M-Files here and here before, but recently I was contacted by a client who had quite the opposite problem - he wanted to export a lot of files out of M-Files.

Getting Info

After a quick skype call to get to know the client and the details of the project, the following facts were available:

  • The data resides in a single M-Files Vault
  • It's backed by a whopping 230+ GB SQL Server Database
  • Other ways to export - direct access to the SQL Server and manual exporting - had failed
  • An export of all document files (.docx, .pdf, ...) is needed
  • All properties of the files should be exported to a CSV file
  • Time was of the essence (when is it not?)

What didn't work

It seems that the client tried to export the data directly from the SQL Server, but I heard that this approach failed as they couldn't make out what goes where. From a software engineering perspective, this is fair, as the data storage is an implementation detail that can be changed anytime (for example by using another database backend).

Next they tried to export the files manually. That's not only slow, but also an awfully error-prone process, and when you're interested in the metadata as well, you really shouldn't go down this route even with a few documents - and in this case we had a few hundred thousand.

So what's the alternative?

You probably know what the right approach is: a small custom application that uses the M-Files API to access all documents programmatically.

Solution

Armed with this knowledge we can formulate the characteristics of the solution:

  • Create the files
  • Create metadata
  • Reliable
  • Fast
  • Inexpensive

Considering that the tool needed to be done quickly and keep the development cost low, I decided on writing a commandline application. Another reason was that it did not need to look fancy and would be operated by skilled IT personnel who preferred a simple commandline interface anyway. I would have been happy to create a Wpf application like Chronographer, but that would have been overkill.

Exporting the files

As I had prior experience with the M-Files API, it didn't take me long to get the file export running. In the screenshot below you can see the results when run against the Sample Vault.

Exporting the Data to CSV

A little more interesting, but still straightforward, was the csv export, as you need to know all properties up front to create the csv header and put each value in the right column. To do this, I enumerate all classes in the Vault, collect their properties and write them to the header row (a sketch follows the column list below).

A feature of M-Files is that a document can contain 0 or more files, which meant that a found document could result in zero exported files (if it didn't hold any) or in more than one file, in which case the csv export also needed to repeat the properties accordingly.

I settled on creating 4 fixed csv columns followed by the properties of all classes. The 4 columns are:

  1. FilePath
    Allows mapping to the exported files
  2. FileId
    The id of the document in the M-Files Vault
  3. SourceFileId
    The id of the binary file document (.docx, .pdf, ...)
  4. ClassName
    The name of the class that the exported file belongs to
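The header-building logic then boils down to something like this (a library-agnostic sketch - 'classes' stands in for the class descriptors collected from the vault, each carrying the names of its properties):

// requires: using System.Linq;
var fixedColumns = new[] { "FilePath", "FileId", "SourceFileId", "ClassName" };

// collect the union of the property names over all classes,
// so every property gets a fixed column in the csv file
var propertyColumns = classes
    .SelectMany(c => c.PropertyNames)
    .Distinct()
    .ToList();

string headerRow = string.Join(";", fixedColumns.Concat(propertyColumns));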

Below you'll find a screenshot of the result when run against the Sample Vault.

Enumerating the files

An interesting problem was enumerating all the files to export. I settled on creating a search for the documents that skips deleted objects. As the maximum number of search results is capped at 100,000 items, you are not able to fetch all documents with a single search. I solved that by adding an additional search condition: searching for ids within a specified segment, where segment 0, for example, means items 0-9999 and segment 1 returns items 10000-19999. By repeatedly searching for files in this way and incrementing the segment, I was able to traverse the whole vault.
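The traversal loop itself looks like this (a sketch; SearchSegment() and MaxObjectId() are hypothetical helpers that wrap the M-Files search with the segment condition described above and the lookup of the vault's highest object id):

// requires: using System.Collections.Generic; using MFilesAPI;
const int segmentSize = 10000;

// walks the vault segment by segment and supports resuming at a given segment
IEnumerable<ObjectVersion> TraverseVault(int startSegment)
{
    for (int segment = startSegment; ; segment++)
    {
        // segment n covers the object ids [n * segmentSize, (n + 1) * segmentSize - 1]
        var results = SearchSegment(segment, segmentSize); // hypothetical: returns the ObjectVersions of one segment

        // id ranges can contain gaps, so an empty segment alone is not enough to stop
        if (results.Count == 0 && segment * segmentSize > MaxObjectId())
            break;

        foreach (var objectVersion in results)
            yield return objectVersion;
    }
}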

Complications

As always, nothing works perfectly on the first try, and while the CSV export completed successfully, after we had been exporting the files for a few hours M-Files threw an exception with the error message "An SQL update statement yielded a probable lock conflict. Lock request time out period exceeded.", which seemed like an M-Files glitch to me.

Anyway, as we needed to try again but were loath to start the export over from the beginning, I added another commandline argument that allowed us to specify a starting segment, so that we could continue our export where it left off.

The final commandline interface looked like this:

Additional parameters like the Vault Name and the Credentials are stored in a config file in the same folder and read by the application at startup.

On the second run we didn't encounter any errors; we had exported over 200K files and produced a 120MB CSV file. In the end, we had a nice, repeatable process that saved the client a lot of time, money and headaches.


About M-Files Databases

Background

M-Files always stores its data in a SQL database. As of this writing, two database vendors are supported:

  • Firebird (default)

    Firebird is an open source SQL server; you can find out more about it at http://firebirdsql.org. As it is free, it's the default option when you install M-Files.

  • MS-SQL Server

    Microsoft's Sql Server is the second option. Because it's rather expensive, it's not the default option and the M-Files customer has to buy and install it themselves.

Database Engines in the wild

Both Firebird and MS-Sql should be able to handle M-Files Vaults of considerable size, but in practice Firebird gets used more often in smaller businesses and MS-Sql Server gets used more often in bigger firms.

While the price tag certainly does matter, more often than not it depends on whether there is a preexisting investment in MS-SQL Server. If the company already has an MS-SQL Server on premise, it's a no-brainer to tack on an additional database. And even if they don't have a DBA with experience administrating the server, a lot of companies that run on the Microsoft stack already depend on MS-SQL Server for other reasons - maybe they already have a CMS or website that requires it.

MS Sql Server Express

Although a free version of Microsoft's Sql Server called "Sql Server Express" is available for download (it even comes with Reporting Services and an ok GUI for administration if you pick the SQL Server Express with Advanced Services), it's often a poor choice for M-Files because of the 10 GB limit on database size.

Don't get me wrong - 10 GB is not a small amount of data, but remember that M-Files does not only store your customers and invoices as numbers and strings, it also stores all binary files in the database - that means all word documents, powerpoints and pdfs with their high-resolution cat pictures. If you throw in revisions of the same file (a single 5 MB presentation with 20 revisions already occupies 100 MB) and multiply that by a couple of users, you get to a lot of data very quickly.

If you think that that's not an issue in your case, you should be able to use the express version as mentioned in this thread on the M-Files Forum.

Backing up and Restoring an M-Files Vault

Backing up and Restoring an M-Files Vault depends on your backend Database.

If you're using the default firebird sql server, then the backup is done using the M-Files Admin tool.

But if you are using the Microsoft Sql Server as the backend, then you'll need a tool like Sql Server Management Studio to create a backup of your vault and restore it again, which is rather simple for anyone who has used the tool before.

A problem when restoring an MS-SQL based Vault

What prompted this quick writeup is that yesterday a client had a problem restoring an M-Files Vault on the SQL Server.

Restoring the database using the Management Tools worked fine, but the target system was running a newer version of M-Files, which tried to upgrade the vault and failed with the following error message:

Upgrading the document vault 'Vaultname' failed.
ALTER ASSEMBLY for assembly 'MFMSSQLCLRObjs' failed because assembly 'MFMSSQLCLRObjs' is not authorized for PERMISSION_SET = UNSAFE. The assembly is authorized when either of the following is true: the database owner (DBO) has UNSAFE ASSEMBLY permission and the database has the TRUSTWORTHY database property on; or the assembly is signed with a certificate or an assymmetric key that has a corresponding login with UNSAFE ASSEMBLY permission. (ERROR: 10327, SQLSTATE: 42000)

After checking that the dbo had the correct permissions, I found that the problem was that the restored database did not have the TRUSTWORTHY property set.

I fixed that by executing the following command in the Sql Management Studio, as explained in this Microsoft article:

ALTER DATABASE Vaultname SET TRUSTWORTHY ON;

After that I was able to attach the document vault without problems.

