Blog Posts

SkiaSharp with Wpf Example

Background

After SkiaSharp was announced by Miguel de Icaza on his blog, I downloaded the nuget package, took it for a spin and used it for some image manipulation.

While the sample code got me started, it was written for System.Drawing/GDI+ and when I later wanted to use it in a Wpf app, I didn't find any sample code for that. So I wrote some code and this blog post, in case someone else might find that useful.

Drawing a Bitmap in Wpf

ImageSource and WriteableBitmap

Basically, when you're using Wpf you most often want to use an ImageSource, for example to display it within an Image control. When creating an ImageSource yourself, the WriteableBitmap comes in handy. It is not only a subclass of ImageSource, it's also double buffered, which allows a smooth update process.

Sourcecode

I've written the following code to do that:

public WriteableBitmap CreateImage(int width, int height)
{
  return new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgra32, BitmapPalettes.Halftone256Transparent);
}
public void UpdateImage(WriteableBitmap writeableBitmap)
{
  int width  = writeableBitmap.PixelWidth,  // pixel dimensions, not the
      height = writeableBitmap.PixelHeight; // DIP-based Width/Height
  writeableBitmap.Lock();
  using (var surface = SKSurface.Create(
    width: width,
    height: height,
    colorType: SKColorType.Bgra_8888,
    alphaType: SKAlphaType.Premul,
    pixels: writeableBitmap.BackBuffer,
    rowBytes: width * 4))
  {
    SKCanvas canvas = surface.Canvas;
    canvas.Clear(new SKColor(130, 130, 130));
    canvas.DrawText("SkiaSharp on Wpf!", 50, 200, new SKPaint() { Color = new SKColor(0, 0, 0), TextSize = 100 });
  }
  writeableBitmap.AddDirtyRect(new Int32Rect(0, 0, width, height));
  writeableBitmap.Unlock();
}

Basically, what we want to do is:

  • Create a WriteableBitmap of the appropriate size
  • Update the WriteableBitmap with Skia
    1. Lock the Backing Buffer
    2. Use Skia with the matching pixelformat to draw into the backing buffer
    3. Mark the Bitmap as dirty
    4. Unlock the Bitmap's Backing Buffer again

Don't forget to mark the updated region of the bitmap as dirty, or else nothing is going to happen!

Example Wpf App

Now that I was able to render a Wpf image with Skia and the WriteableBitmap class supports double buffering, I wanted to create a quick app that updates the Image once per frame.

For that, I've subscribed to the CompositionTarget.Rendering event and updated the render method to draw the number of elapsed frames. You can see the output on the screenshot below:

Screenshot

Screenshot of SkiaSharp Wpf Example Application
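In case it helps, here is a minimal sketch of how such a render loop could be wired up (the window, field and control names are my own and not taken from the example app):

```csharp
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

// Sketch of a per-frame update loop, assuming the CreateImage/UpdateImage
// methods shown earlier live in the same window class. 'SkiaImage' stands
// for an Image control defined in the window's XAML.
public partial class MainWindow : Window
{
    private readonly WriteableBitmap _bitmap;

    public MainWindow()
    {
        InitializeComponent();
        _bitmap = CreateImage(640, 480);
        SkiaImage.Source = _bitmap;

        // CompositionTarget.Rendering fires once per rendered frame
        CompositionTarget.Rendering += (s, e) => UpdateImage(_bitmap);
    }
}
```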

Sourcecode on Github

If you're interested in the example app, I've uploaded the source of the SkiaSharp Wpf Example Application to github at https://github.com/8/SkiaSharp-Wpf-Example

If you find any of that useful or I am missing something, please feel free to drop me a comment below, thanks!

Take care,
Martin

Creating Custom Knockout Bindings

Background

I've been using and enjoying knockout.js for some time now.

It's a great library that lets you use MVVM in web applications and keeps you from writing spaghetti code to manipulate the DOM, without requiring a switch to a monolithic framework and the associated downsides like lock-in and too many abstractions on top of plain html.

Using knockoutjs, you are still free to use DOM manipulation yourself if and when you need it. The great thing is, it's also easily extendable.

Extending Knockout

Why is being extendable a big plus and why would you want to extend knockout? Is something essential missing from knockout?

Nope, I don't think so.

Instead of growing into a monolithic framework, it just solves a particular problem, namely factoring out the UI glue code into reusable bindings. It comes with almost all bindings you could think of by default, but it doesn't try to be everything for everyone - and that's where custom bindings come in.

Using custom binding handlers, it offers you the chance to stick to DRY and to use declarations instead of repeating javascript snippets over and over again.

That often comes in handy, when you need to reuse some javascript code in multiple places and the code is tied to an element defined in html.

In the next few paragraphs, I am showing some small, exemplary binding handlers that have proven useful to me, nothing fancy.

Example BindingHandlers

I've been using some small knockout bindings that use jquery's fadeIn() / fadeOut() and slideDown() / slideUp() methods to achieve simple animations on an element.

FadeVisible

The binding is defined in the following few lines:

ko.bindingHandlers.fadeVisible = {
  init: function (element, valueAccessor) {
    var value = valueAccessor();
    $(element).toggle(ko.unwrap(value));
  },
  update: function (element, valueAccessor) {
    var value = valueAccessor();
    ko.unwrap(value) ? $(element).fadeIn() : $(element).fadeOut();
  }
};

SlideDownVisible

The definition for the slideDown binding looks almost identical:

ko.bindingHandlers.slideDownVisible = {
  init: function (element, valueAccessor) {
    var value = valueAccessor();
    $(element).toggle(ko.unwrap(value));
  },
  update: function (element, valueAccessor) {
    var value = valueAccessor();
    ko.unwrap(value) ? $(element).slideDown() : $(element).slideUp();
  }
};

In turn, both are very similar to the example binding in knockout's custom-binding documentation, which also provides a binding that uses slideDown() and slideUp().

Usage

As for usage, you'd replace the default 'visible' binding with 'fadeVisible' or 'slideDownVisible' respectively.

<div data-bind="fadeVisible: isVisible">
...

Nuget package

I've used the slideDownVisible binding in a couple of projects already and finally got sick of copy/pasting it, so I've packaged the bindings as nuget packages named 'knockout-fadeVisible' and 'knockout-slideDownVisible' and uploaded them to nuget.org, so that I can add them faster the next time I need them. The (very short) source is on github as well.

Bootstrap Modal

Another example of transforming javascript glue code to a declarative knockout binding would be the following modalVisible binding:

ko.bindingHandlers.modalVisible = {
  init: function (element, valueAccessor) {
    var value = valueAccessor();
    /* init the modal */
    $(element).modal();
    /* subscribe to the 'hidden' event and update the observable, if the modal gets hidden */
    $(element).on('hidden.bs.modal', function (e) {
      if (ko.isObservable(value)) { value(false); }
    });
  },
  update: function (element, valueAccessor) {
    var value = valueAccessor();
    ko.unwrap(value) ? $(element).modal('show') : $(element).modal('hide');
  }
};

It wraps the bootstrap javascript code in a tidy, nice to use knockout binding:

<div class="modal fade" data-bind="modalVisible: isVisible"...

This takes care of initializing the modal and allows controlling its visibility using an observable. It handles hiding and showing of the modal and therefore removes the need to manipulate the DOM from my ViewModel's javascript code.

Conclusion

Knockoutjs is a nice and flexible library that is not only easy to get started with, but also easy to extend.

Creating custom binding handlers may save you from writing repetitive and error-prone code and allows you to stick view-specific code declaratively right onto the target html element, which makes reasoning about your view easier.

Decoupling jQuery and other DOM manipulation code from your normal code also makes that code simpler to test.

Take care,
Martin

Debugging JavaScript in Visual Studio

TL;DR

  1. Start chrome in remote debug mode: chrome.exe --remote-debugging-port=9222
  2. Attach Visual Studio: "Debug" -> "Attach to Process..." -> select the chrome instance
  3. Done.

Justifying a use case

So you are still reading? Fine, then I can do some rambling. I was developing a JavaScript WebApp with some complicated client code - it's built like a game loop, using requestAnimationFrame and canvas to render multiple videos onscreen and to play synced audio. Like most software it worked, but sometimes it would glitch, and I was trying to figure out what caused it.

Now what I wanted was to debug the code, preferably from the comfort of my IDE, which happens to be Visual Studio. But while Visual Studio supports debugging JavaScript via Internet Explorer out of the box, it does not support any other browser.

More often than not, that's not a big problem, you just fire up IE, wonder why you never changed the startup page to something reasonable and use it just once for debugging.

But not in this case, as I was making use of AudioContext and other shiny new WebApi stuff that is available in Chrome and Firefox already but - you guessed it - not in IE.

You could of course do what everyone else would do and use the built-in chrome developer tools, which are great imho, but that would incur the cost of mental task switches for using a different IDE that does not share the same syntax highlighting, hotkeys and general workflow you have come to be so productive with. So for the sake of this article I count switching to different tooling as giving up.

Wondering if it's possible...?

So I started wondering, if the big V is able to debug javascript running in chrome.

The first hint that the consensus is "it won't work" was that, out of the box, selecting chrome as your browser in visual studio and starting your debug session does not work, while it does for IE.

Quick Robin, to the googlemobile!
– Almost every IT-SuperHero

But the googlemobile failed hard this time: the top search results talked about a Native Client and C++ code, a thread from last year said that it's not possible, and Visual Studio's integrated Extension search turned up nothing.

But on the other hand, I had already tried Visual Studio's Node.js Tools and I remember vividly being amazed that debugging just worked. Okay, so because Node.js and chrome both use V8 as their JavaScript engine, Visual Studio must already be able to debug it.

Fiddlin' around

So I ignored the fact that Visual Studio does not start debugging if you are using chrome and simply tried to attach it to chrome using DEBUG -> Attach to Process... While that did not work, I noticed something interesting:

In the "Code Type" selection I found a listing for Webkit!

Now I knew that Visual Studio could do it and even expected me to use Debug and Attach, so it was probably chrome that didn't cooperate, which makes sense as a sane default.

Solution

So when I returned to google I knew what to look for, and a search for chrome remote debugging brought me to this page, where the missing part of my answer was waiting:

  1. Start chrome in remote debug mode: chrome.exe --remote-debugging-port=9222
  2. Attach Visual Studio: "Debug" -> "Attach to Process..." -> select the chrome instance
  3. Done.

Exporting a lot of files at once from M-Files

Background

I've written about how to do a mass file import into M-Files here and here before, but recently I was contacted by a client who had quite the opposite problem - he wanted to export a lot of files out of M-Files.

Getting Info

After a quick skype call to get to know the client and the details of the project, the following facts were available:

  • The data resides in a single M-Files Vault
  • It's backed by a whopping 230+ GB SQL Server Database
  • Other ways to export, direct access to the SQL Server and manual exporting had failed
  • An export of all document files (.docx, .pdf, ...) is needed
  • All properties of the files should be exported to a CSV file
  • Time was of the essence (when is it not?)

What didn't work

It seems that the client tried to export the data directly from the SQL Server, but I heard that this approach failed as they couldn't make out what goes where. From a software engineering perspective, this is fair, as the data storage is an implementation detail that can be changed anytime (for example by using another database backend).

Next they tried to export the files manually. That's not only slow, but also an awfully error-prone process, and if you're interested in the metadata as well, you really shouldn't go down this route even with a few documents - and in this case we had a few hundred thousand.

So what's the alternative?

You probably know what the right approach is: a small custom application that uses the M-Files API to access all documents programmatically.

Solution

Armed with this knowledge we can formulate the characteristics of the solution:

  • Create the files
  • Create metadata
  • Reliable
  • Fast
  • Inexpensive

Considering that the tool needed to be done quickly and development costs had to be kept low, I decided on writing a Commandline Application. Another reason was that it did not need to look fancy and would be operated by skilled IT personnel who preferred a simple commandline interface anyway. I would have been happy to create a Wpf Application like Chronographer, but that would have been overkill.

Exporting the files

As I had prior experience with the M-Files API, it didn't take me long to get the file export running. In the screenshot below you see the results of a run against the Sample Vault.

Exporting the Data to CSV

A little more interesting, but still straightforward, was the csv export, as you need to know all properties to create the csv header and put each value in the right column. To do this, I enumerate all classes in the Vault, collect their properties and write them into the header row.

A feature of M-Files is that a document can contain zero or more files, which meant that a found document could result in no exported file at all (if it didn't hold any) or in several files, in which case the csv export needed to repeat the properties accordingly.

I settled on creating 4 fixed csv columns followed by the properties of all classes. The 4 columns are:

  1. FilePath
    Allows mapping to the exported files
  2. FileId
    The id of the document in the M-Files Vault
  3. SourceFileId
    The id of the binary file document (.docx, .pdf, ...)
  4. ClassName
    The name of the class that the exported file belongs to
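As an illustration only (the class and property names below are made up; the real ones come from enumerating the vault's classes), the header construction can be sketched like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Four fixed columns, followed by the union of the properties of all
// classes found in the vault. The classes/properties here are invented
// purely to show the layout.
string[] fixedColumns = { "FilePath", "FileId", "SourceFileId", "ClassName" };

var classProperties = new Dictionary<string, string[]>
{
    ["Invoice"]  = new[] { "Customer", "Amount" },
    ["Contract"] = new[] { "Customer", "ValidUntil" },
};

// Distinct property names, sorted to get a stable column order.
var propertyColumns = classProperties.Values
    .SelectMany(p => p)
    .Distinct()
    .OrderBy(p => p)
    .ToArray();

var header = string.Join(";", fixedColumns.Concat(propertyColumns));
Console.WriteLine(header);
// FilePath;FileId;SourceFileId;ClassName;Amount;Customer;ValidUntil
```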

Below you'll find a screenshot of the result when run against the Sample Vault.

Enumerating the files

An interesting problem was enumerating all the files to export. I settled on creating a search for the documents that skips deleted objects. As the maximum number of search results is capped at 100,000 items, you are not able to fetch all documents with a single search. I solved that by adding an additional search condition, namely searching for ids within a specified segment, where segment 0 means items 0-9999 and segment 1 returns items 10000-19999. By repeatedly searching for files in this way and incrementing the segment, I was able to traverse the whole vault.
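The segment arithmetic can be sketched as follows; the searchSegment delegate stands in for the actual M-Files search call, which is not shown here:

```csharp
using System;
using System.Collections.Generic;

// Each segment covers 10,000 ids: segment 0 => ids 0-9999,
// segment 1 => ids 10000-19999, and so on.
const int SegmentSize = 10000;

(int First, int Last) IdRange(int segment) =>
    (segment * SegmentSize, segment * SegmentSize + SegmentSize - 1);

// Traverse the vault segment by segment, collecting all found ids;
// searchSegment is a hypothetical stand-in for the capped M-Files search.
List<int> ExportAll(Func<int, List<int>> searchSegment, int maxSegment)
{
    var all = new List<int>();
    for (int segment = 0; segment <= maxSegment; segment++)
        all.AddRange(searchSegment(segment));
    return all;
}

// Example with a fake vault of four document ids spread over three segments:
var fakeVault = new List<int> { 5, 9999, 10000, 20001 };
var found = ExportAll(
    s => fakeVault.FindAll(id => IdRange(s).First <= id && id <= IdRange(s).Last),
    maxSegment: 2);
Console.WriteLine(found.Count); // 4
```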

Complications

As always, nothing works perfectly on the first try: while the CSV Export completed successfully, after a few hours of exporting files M-Files threw an Exception with the error message "An SQL update statement yielded a probable lock conflict. Lock request time out period exceeded.", which seemed like an M-Files glitch to me.

Anyway, as we needed to try again and were loath to start the export from the beginning, I added another commandline argument that allows specifying a starting segment, so that we could continue the export where it left off.

The final commandline interface looked like this:

Additional parameters like the Vault Name and the Credentials are stored in a config file in the same folder and read by the application at startup.

On the second run we didn't encounter any errors; we exported over 200K files and produced a nice 120MB CSV file. In the end, we had a repeatable process that saved the client a lot of time, money and headaches.

About M-Files Databases

Background

M-Files always stores its data in a SQL database. As of this writing, two database vendors are supported:

  • Firebird (default)

    Firebird is an open source SQL server; you can find out more about it at http://firebirdsql.org. As it is free, it's the default option when you install M-Files.

  • MS-SQL Server

    Microsoft's Sql Server is the second option. Because it's rather expensive, it's not the default option and the M-Files customer has to buy and install it themselves.

Database Engines in the wild

Both Firebird and MS-Sql should be able to handle M-Files Vaults of considerable size, but in practice Firebird gets used more often in smaller businesses and MS-Sql Server gets used more often in bigger firms.

While the price tag certainly does matter, more often than not it depends on whether there is a preexisting investment in MS-Sql Server. If the company already has an MS-SQL Server on premise, it's a no brainer to tack on an additional database. Even if they don't have a DBA with experience administrating the server, a lot of companies that run on the Microsoft Stack already depend on MS-Sql Server for different reasons - maybe they have a CMS or website that depends on it.

MS Sql Server Express

Although a free version of Microsoft's Sql Server called "Sql Server Express" is available for download (it even comes with Reporting Services and an ok GUI for administration if you pick the SQL Server Express with Advanced Services), it's often a poor choice for M-Files because of the 10 GB limit on database size.

Don't get me wrong - 10 GB is not a small amount of data, but in this case remember that M-Files does not only store your customers and invoices as numbers and strings, but it also stores all binary files in the database as well - that means all word documents, powerpoints and pdfs with their high resolution cat pictures. If you throw in revisions of the same file and multiply that by a couple of users you get to a lot of data very quickly.

If you think that that's not an issue in your case, you should be able to use the express version as mentioned in this thread on the M-Files Forum.

Backing up and Restoring a M-Files Vault

Backing up and Restoring an M-Files Vault depends on your backend Database.

If you're using the default firebird sql server, then the backup is done using the M-Files Admin tool.

But if you are using the Microsoft Sql Server as the backend, then you'll need a tool like the Sql Server Management Studio to create a backup of your vault and restore it again, which is rather simple to do for anyone who has used the tool before.

A problem when restoring a MS-SQL based Vault

What prompted this quick writeup is that yesterday a client had a problem restoring an M-Files Vault on the SQL Server.

Restoring the Database using the Management Tools worked fine, but the target system was running a newer version of M-Files, and the attempted upgrade failed with the following error message:

Upgrading the document vault 'Vaultname' failed.
ALTER ASSEMBLY for assembly 'MFMSSQLCLRObjs' failed because assembly 'MFMSSQLCLRObjs' is not authorized for PERMISSION_SET = UNSAFE. The assembly is authorized when either of the following is true: the database owner (DBO) has UNSAFE ASSEMBLY permission and the database has the TRUSTWORTHY database property on; or the assembly is signed with a certificate or an asymmetric key that has a corresponding login with UNSAFE ASSEMBLY permission. (ERROR: 10327, SQLSTATE: 42000)

After checking that the dbo had the correct permissions, I found that the problem was that the restored database did not have the TRUSTWORTHY property set.

I fixed that by executing the following command in the Sql Management Studio, as explained in this Microsoft Article:

ALTER DATABASE Vaultname SET TRUSTWORTHY ON;

After that I was able to attach the document vault without problems.

TL;DR

If razor intellisense stops working suddenly, try deleting your C:\Users\username\AppData\Local\Microsoft\VisualStudio\14.0\ComponentModelCache folder.

The Problem

When I was opening Visual Studio the other day, I was greeted with the following error message:

Closing and reopening Visual Studio didn't fix the error - it occurred every time I opened a razor view (.cshtml) page. While Visual Studio itself did not crash, highlighting and intellisense were broken inside the view. E.g. razor comments

@* this is a comment! *@
were not highlighted correctly (just as in this post) and my favourite spellchecker (intellisense) had stopped working as well.

Looking at the error

I did as I was told and took a look at the ActivityLog the error message suggested, opening C:\Users\username\AppData\Roaming\Microsoft\VisualStudio\14.0\ActivityLog.xml, which in turn revealed the following stack trace:

System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentException: Item has already been added. Key in dictionary: 'RazorSupportedRuntimeVersion' Key being added: 'RazorSupportedRuntimeVersion'
   at System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add)
   at System.Collections.Hashtable.Add(Object key, Object value)
   at System.Collections.Specialized.HybridDictionary.Add(Object key, Object value)
   at Microsoft.VisualStudio.Utilities.PropertyCollection.AddProperty(Object key, Object property)
   at Microsoft.VisualStudio.Html.Package.Razor.RazorVersionDetector.Microsoft.Html.Editor.ContainedLanguage.Razor.Def.IRazorVersionDetector.GetVersion(ITextBuffer textBuffer)
   at Microsoft.Html.Editor.ContainedLanguage.Razor.RazorUtility.TryGetRazorVersion(ITextBuffer textBuffer, Version& razorVersion)
   at Microsoft.Html.Editor.ContainedLanguage.Razor.RazorErrorTagger..ctor(ITextBuffer textBuffer)
   --- End of inner exception stack trace ---
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes, StackCrawlMark& stackMark)
   at System.Activator.CreateInstance(Type type, BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)
   at System.Activator.CreateInstance(Type type, Object[] args)
   at Microsoft.Html.Editor.ContainedLanguage.Common.ContainedCodeErrorTaggerProvider`1.CreateTagger[T](ITextBuffer textBuffer)
   at Microsoft.VisualStudio.Text.Tagging.Implementation.TagAggregator`1.GatherTaggers(ITextBuffer textBuffer)

Looking at the stacktrace confirmed my suspicion that, on a big-picture level, something razor-specific was messed up (I am looking at you, 'RazorSupportedRuntimeVersion'). It also told me that the exception was thrown because a duplicate hashtable entry was being inserted.

Google to the rescue

The best part was of course, that an error message gave me something to google for and so I did and finally came to a Visual Studio Feedback Item with the same error message.

The issue is of course marked as closed, but that's neither here nor there.

A first workaround appears

Under workarounds I found that resetting your user data seems to fix the issue. To do that, you execute devenv /resetuserdata, and sure enough it worked! Intellisense and highlighting started working again in the razor view!

But the workaround also has some drawbacks: it, well, resets your user data (who would have thought?!), which means it removes all your extensions as well.

Well, whatever. But then it happened to me again a few days later. :|

Improving the workaround

Okay, so I thought I could probably do better and not reset everything, but only the part that causes the problem. So I took another look at the StackTrace, and judging from it, RazorVersionDetector.GetVersion() seemed like the culprit. According to the StackTrace, RazorVersionDetector sits in Microsoft.VisualStudio.Html.Package.Razor, and luckily I found a dll with a matching name under: C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\Microsoft\Web Tools\Editors\Microsoft.VisualStudio.Html.Package.dll.

So I fired up trusty ILSpy and looked at its interpretation of the C# code of the RazorVersionDetector. There I noticed that its private methods were called GetCachedVersion() and SetCachedVersion(), which explained why it worked at first but broke later on, and why resetting the user data helped: it cleared the cache.

Armed with that knowledge, I looked for the cache folder and luckily I found it: C:\Users\username\AppData\Local\Microsoft\VisualStudio\14.0\ComponentModelCache

Removing or renaming this folder forces Visual Studio to recreate it, Razor intellisense starts working again, and you get to keep your extensions and settings.

Choosing between different solutions (WebApp, MobileApp, DesktopApp, Console or Service)

Abstract

In this article I'll encourage you to keep an open mind about your options when building a software solution.

Motivation

Why write or read about how to choose between different solutions? Shouldn't it be obvious? I think it's well worth our time to dwell on that, especially considering two reasons:

  1. Choosing the wrong solution is one of the most expensive mistakes to make. E.g. adding an unplanned feature to an application may seem expensive, but it pales in comparison with the realization that your mobile app should have been a website or the other way round.
  2. Often a decision about a platform or solution is not made deliberately, because we are simply not aware of all our options. E.g. consider if Kahneman's WYSIATI (What You See Is All There Is) theory comes into play here or the "If all you have is a hammer, everything looks like a nail" quote rings true.

Why don't we evaluate our options more carefully?

There are a couple of obstacles that keep us from taking a closer look.

  • Missing Expertise - It often requires in-depth knowledge and experience of different technologies, which is hard to come by. E.g. if you're only skilled in developing Web Pages for customers, you won't necessarily see that a Windows Service or a commandline tool on a Parallela Board fits the requirements better.
  • Emotional Bias - We're often biased towards some technology not based upon its merits and flaws for our specific use case, but rather on our feelings and emotions from a past project. Not only was it a different project and the technology should be reevaluated in the context of the new one, but our emotions were also influenced by other factors like a too-short deadline, mismanagement, unrealistic expectations or difficulties with the involved parties.
  • Lots of Options - Evaluating a lot of options is difficult and straining; consider for example the book The Paradox of Choice or the TED Talk given by Barry Schwartz.

Removing the Obstacles

But how can you remove those obstacles?

Missing Expertise

Missing Expertise can be battled in the following ways:

  1. Learn about it - The most obvious, although hardest, solution is to learn about the qualities of certain technologies. This does not mean that one needs to become an expert in every new technology that pops up, but knowing whether it's a good match for a certain use case is important.
  2. Get Help - If you're a consultant or developer and the technology that is most appropriate or required for the task does not match your expertise, you should acknowledge the fact and inform your client or boss. If you have a colleague or acquaintance who specializes in that technology or field, you should refer to them.

Emotional Bias

  1. Stressfree Environment - A good way to combat emotional bias is to get to know a technology in a stressfree environment. This may mean that you take a look at it in your spare time, without the clock ticking.
  2. Tiny steps - Start step by step and don't try to get everything done at once. Sometimes you'll feel tempted to attempt something big right away that you are able to do in another language or with another framework that you've already mastered - but don't give in. Small incremental successes keep your motivation going for a longer time.
  3. Ask a friend - If you really dislike a certain technology, you should talk with a friend or someone you respect who is fond of it. As he or she is your friend, you are unlikely to completely dismiss them, and you'll get a glimpse of why someone would want to use it and what advantages it has.

Lots of Options

And finally, to get a quick overview I've compiled the following list that can be used to get the juices flowing on what parts could make up the ideal solution. My hope is that by spelling them out, they'll be on the radar the next time when a problem needs solving. Many solutions do not fall strictly within one category and more often than not consist of more than one part.

Disclaimer

This list is neither exhaustive nor can it be, it's just a reminder of what's out there and should be seen as an inspiration to get you to explore your options.

  • Web Solutions
    • Static Web Pages (consider online/offline help, manuals)
    • Content Management Systems (CMS)
      • Wordpress (php)
      • Umbraco (asp.net)
    • Mobile first
    • Server Side Frameworks
      • Asp.Net
        • NancyFx
        • Asp.Net MVC
        • Asp.Net WebApi
      • Ruby
        • Ruby on Rails
        • Sinatra
      • Php
        • CakePHP
        • Symfony
      • Node.js
    • Forums
      • phpBB (php)
      • MVCForum (asp.net mvc)
    • SPA
    • Web API
    • Front End Frameworks
      • Bootstrap
      • jqueryUI
      • Skeleton
    • javascript frameworks
      • angular
      • aurelia
      • knockout
      • react
    • visualization
      • d3
      • Raphaël
      • processing
    • CSS Languages
      • Less
      • Sass
  • Mobile Applications
    • Platform specific
      • Android
      • iOS
      • Windows Phone
      • Blackberry
    • Cross platform
      • Apache Cordova
      • Xamarin
      • RemObjects
      • Unity
  • Desktop Applications
    • Wpf (windows only)
    • WinForms (cross platform using mono)
    • GTK and its wrappers like GTK#
    • DirectX and its wrappers like SlimDX or SharpDX
    • OpenGL and its wrappers like OpenTK
    • Unity
  • Console Applications
  • Windows Services
    • topshelf
  • Linux Daemons
  • Datastorage
    • FileSystem
    • SQL Databases
      • MS-Sql
      • MySql
      • Postgresql
      • Oracle
      • SQLite
    • No SQL Databases
      • RethinkDB
      • MongoDB
      • LMDB
  • WebServers
    • Apache
    • nginx
  • Media Processing
    • imagemagick for images
    • sox for audios
    • ffmpeg for videos
  • Virtualization
    • Virtual Box
    • Hyper-V
    • VMware
    • Xen
  • Operating Systems
    • Linux
    • Windows
    • BSD
  • Hardware platforms
    • Raspberry Pi
    • Beagle Bone
    • Parallela
  • Cloud platforms
    • Amazon
    • Azure
    • Digital Ocean
    • Linode
  • Payment Providers
    • stripe
    • braintree
    • paypal

As always feel free to leave a comment, especially if you want to remind me of an important option that I completely missed when I typed up the list, as I most certainly did.

"Check for Updates" for an Application

Background

In this article I explore a simple mechanism to check if an update is available by querying a server. My use case was that I've been writing a C# Application and I wanted to include a quick "Check for Updates" functionality that would inform the user if an update is available.

Additionally, I wanted a simple upgrade process where I can upload a new setup file and it's picked up by the webserver automatically. The requirements were quickly formulated:

Requirements

  • The client application should be able to detect if a new version is available
  • Publishing a new update should be painless

Formulating a plan

  1. The Client App reads its version number
  2. Client App sends the version number to the Webserver
  3. Webserver compares the version number with the latest available version and returns the result
  4. Client displays the result and asks the user to upgrade

Step 0 - Versioning the Application

As it turns out, there is a Step 0 associated with this process and that is versioning the application in the first place.

The simplest way to version your app is probably using the System.Reflection.AssemblyVersionAttribute as used in the auto-generated AssemblyInfo.cs file in your application.

I personally prefer to let the build process increment the build and revision parts of the version, so I've changed the use of the AssemblyVersion attribute in AssemblyInfo.cs to:

[assembly: AssemblyVersion("0.7.*")]
That way, even if I forget to manually increment the version number, the versions differ between each build which adds an additional safety net when diagnosing problems.

On top of that I removed the [assembly: AssemblyFileVersion("1.0.0.0")] declaration, as I prefer to keep them in sync.

After rebuilding the application I checked the generated version by right-clicking the file and selecting "Properties" -> "Details", as you can see in the screenshot.

Step 1 - Reading the Application Version

Reading the version of the Application is a one-liner:

var version = Assembly.GetExecutingAssembly().GetName().Version;

Step 2 - Sending the Version to the Webserver

Sending the version to the webserver and retrieving the result is done using a simple WebRequest:

  var baseUrl = "http://yourdomain.com/yourapp";
  var url = string.Format("{0}/isupdateavailable?v={1}", baseUrl, version);
  var request = (HttpWebRequest)WebRequest.Create(url);
  if (((HttpWebResponse)request.GetResponse()).StatusCode == HttpStatusCode.OK)
  {
    /* update is available... */
  }
The above code snippet assumes that you have a webserver that returns OK when an update is available. We'll see about that in the next step. In a production-ready app you might want to use GetResponseAsync and return a Task.
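For completeness, here is a sketch of what such an async variant might look like (a minimal example, not production code; the endpoint URL is the same placeholder as in the snippet above):

```csharp
// Sketch of an async update check; the URL is a placeholder.
public async Task<bool> IsUpdateAvailableAsync(Version version)
{
  var url = string.Format("http://yourdomain.com/yourapp/isupdateavailable?v={0}", version);
  var request = (HttpWebRequest)WebRequest.Create(url);
  try
  {
    using (var response = (HttpWebResponse)await request.GetResponseAsync())
    {
      return response.StatusCode == HttpStatusCode.OK;
    }
  }
  catch (WebException)
  {
    // a NotFound response ("no update") surfaces as a WebException,
    // as does any network error - treat both as "no update available"
    return false;
  }
}
```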

Step 3 - The Webserver compares versions

The webserver retrieves the version from the parameter, creates a Version object and compares the Version object with that of the current file.

In my case, I chose to encode the version in the setup file name like so: 'setup_0.8.5742.26637.exe' and read it back in using a regular expression. My reasons for not using the same process as on the client were the following:

  • You can upload multiple setup files, which is handy if you want to be able to keep older versions
  • As the generated setup file is native code (I am using NSIS to create the installer), not a managed assembly, reading the product version breaks down on a Linux server (which I happen to use). The reason is that the product version is encoded in the PE header of the file, and while mono allows reading the version from a .NET assembly, this does not work with a native file as it would on Windows.

So the code to retrieve the current version looks something like this:


private static Regex SetupFileRegex = new Regex("Setup_(?<Version>.*?)\\.exe");
private Version GetVersionFromFileName(string fileName)
{
  var m = SetupFileRegex.Match(fileName);
  return m.Success ? new Version(m.Groups["Version"].Value) : null;
}
private Version GetLatestVersion()
{
  string[] files = Directory.GetFiles(GetSetupFolder(), "Setup_*.exe");
  return files.Select(f => new {
    File = f,
    Version = GetVersionFromFileName(Path.GetFileName(f))
  })
  .Where(f => f.Version != null)
  .OrderByDescending(f => f.Version)
  .First()
  .Version;
}

Returning the correct status code then becomes easy: simply compare the versions. For example:

  var clientVersion = new Version(this.Request.Query["v"]);
  var latestVersion = GetLatestVersion();
  return clientVersion < latestVersion ? HttpStatusCode.OK : HttpStatusCode.NotFound;
In a solid web framework, returning the status code should be as easy as that. I personally use NancyFx for all server-side coding and warmly recommend it; the "Super-Duper-Happy-Path" (SDHP) it advertises is real, and it's really a pleasure to work with.

Step 4 - Display "Update available"

In my use case, I chose to simply display an "update available" button in my application that takes the user to the download page, which I find is the most flexible solution, as I can keep the updating process itself, as well as the change and release notes, separate from the application.

Take care,
Martin


AvalonDock 2.0 and MVVM

Background

When I was writing an application, I came to the conclusion that a flexible GUI Layout that enables the user to rearrange the windows to fit their needs would be the best option. I wanted an interface that is as flexible as Visual Studio itself when it comes to window positioning.

Docking Libraries

Back in the day, I had already used a commercial solution, but that was years ago and I didn't have a license for a current version. That's why I took a look at everyone's favourite site, and so came to find AvalonDock.

AvalonDock

AvalonDock is a Wpf Docking Library that provides your windows app with docking windows just like Visual Studio. The library has a lot going for it: it's available as a nuget package, totally free, and it comes with MVVM support (which I deeply care about), or at least the codeplex page claims it does.

Getting AvalonDock

There are two options to get AvalonDock: either grab it from the codeplex site under downloads or get it via nuget. While I prefer nuget, this time you need to take care to select the correct package, as there is a pay-only version available as well as an older, out-of-date one. The right package is simply named 'AvalonDock'; take a look at the screenshot below:

While the library does what I want, I found its documentation quite lacking - the tutorials, except for one, are hopelessly outdated - they refer to version 1.3, while the current version (as of this writing) is 2.0. As almost none of the classes, properties or methods are the same, they are more misleading than helpful.

Considering that the last comment that asks for an update of the documentation was written in 2012 and has gone unanswered, I wouldn't hold my breath for an update anytime soon.

So heading back to our favorite site, a search reveals promising questions but lacking answers: one consisting of a 'let me google that for you' link, followed by a link to an article about version 1.3; another with answers that link to the same outdated tutorial; and one that links to the codeplex documentation page, which got us hunting for information in the first place - all apparently posted by someone point-hungry who skipped reading the whole question.

Finding an example app

After some looking around I found an example app back on the CodePlex site - it's not under downloads, where you'll only find the library and some themes, but under source code - and you can get it by clicking the Download button.

Now we're off to a good start - we can see some of the properties in use, even in a somewhat MVVMish use.

Putting it to use

After studying the MVVM example, I knew how to bind the DocumentManager against a Document Collection, and wanted to use it in a way that allows me to just update the properties of our viewmodels and have the view reflect those changes and vice-versa. That would come in handy for opening and closing windows, updating the title and so on.

First of all, I wanted to use the IsChecked property of the menu item to open and close the DockWindow and so on, as shown in the screenshot below.

AvalonDock using Mvvm Bindings to close and open windows

The following xaml code snippet shows how I wired up the View to ViewModel binding with a style:


<Window ...
  xmlns:dock="http://schemas.xceed.com/wpf/xaml/avalondock"
  xmlns:dockctrl="clr-namespace:Xceed.Wpf.AvalonDock.Controls;assembly=Xceed.Wpf.AvalonDock"
  >
  ...
  <dock:DockingManager Grid.Row="1"
                        DataContext="{Binding DockManagerViewModel}"
                        DocumentsSource="{Binding Documents}" >

    <dock:DockingManager.LayoutItemContainerStyle>
      <!-- you can add additional bindings from the layoutitem to the DockWindowViewModel -->
      <Style TargetType="{x:Type dockctrl:LayoutItem}">
        <Setter Property="Title" Value="{Binding Model.Title}" />
        <Setter Property="CloseCommand" Value="{Binding Model.CloseCommand}" />
        <Setter Property="CanClose" Value="{Binding Model.CanClose}" />
      </Style>
    </dock:DockingManager.LayoutItemContainerStyle>

  </dock:DockingManager>

</Window>
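For reference, the ViewModel side that the style above binds against could look roughly like this (a minimal sketch; DockManagerViewModel and DockWindowViewModel are my own classes, not AvalonDock types):

```csharp
// Minimal sketch of the ViewModels the XAML above binds against.
public class DockWindowViewModel : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;

  string _title;
  public string Title
  {
    get { return _title; }
    set { _title = value; OnPropertyChanged("Title"); }
  }

  public bool CanClose { get; set; }
  public ICommand CloseCommand { get; set; }

  void OnPropertyChanged(string name)
  {
    var handler = PropertyChanged;
    if (handler != null) handler(this, new PropertyChangedEventArgs(name));
  }
}

public class DockManagerViewModel
{
  // DocumentsSource="{Binding Documents}" binds here; an ObservableCollection
  // lets AvalonDock pick up added and removed documents automatically
  public ObservableCollection<DockWindowViewModel> Documents { get; private set; }

  public DockManagerViewModel()
  {
    Documents = new ObservableCollection<DockWindowViewModel>();
  }
}
```

Adding a DockWindowViewModel to Documents opens a new document window, and removing it closes the window again, which is exactly the "just update the viewmodels" behaviour described above.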

As figuring out how to use the MVVM support was harder than expected, and in case anyone else needs to do the same, I've uploaded a complete example app to github that should get you started.

Take care,
Martin


Deleting thousands of files from M-Files

Summary

In this article I discuss mass file deletion from M-Files and why it is sometimes useful, and conclude with a C# implementation that can delete files based on their object type or class.

Use Cases

When you are inserting a lot of files, you probably need to delete a lot of files as well. In my case, as I've been writing the importer, I am using unit tests to check whether insertion works, and when it does, I end up with a lot of new files in my M-Files vault. So to make the insertion process repeatable, I needed a way to reset the vault back to its former state.

If you were working with a database, you would usually wrap the insertion into a transaction that gets rolled back automatically when you're finished with the test, but sadly M-Files lacks support for this concept.

One option is to restore a backup of the vault that was taken before the insertion process. Another is to delete the inserted files. Both have different trade-offs. In this installment I am going to visit deleting objects from M-Files.

Another use case comes up if you are creating a new vault by copying all data, but you want to keep only a subset of it and get rid of the rest.

Or you've just done a mass import of files and noticed that you've missed specifying a property or some other error and the quickest way to solve the problem is to repeat the insertion process, but first you need to get rid of the files you just imported.

Deletion is not what you think

Usually deletion means removal of the files, but M-Files treats deletion differently: it just marks the files in question as deleted. They are still held in the M-Files vault, can still be retrieved, and therefore still consume resources like memory and disk space.

Instead of deleting files, the complete removal of files is achieved by destroying files.

Destroying Objects

Destroying objects in M-Files is independent of deletion - you do not need to delete an object before you destroy it; you can just go ahead and destroy it and be done with it.

Steps involved in destroying an object

  1. Login to the vault
  2. Retrieve the ObjID of the object you want to destroy
  3. Call Vault.ObjectOperations.DestroyObject() and supply the ObjID

Login to the vault

The login sequence is straightforward:

  1. Create an MFilesServerApplication instance
  2. Call its Connect() method and supply your credentials
  3. Iterate through its vaults and log in to your target vault by calling its LogIn() method.
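In code, that sequence might look roughly like the following sketch (server name, credentials and vault name are placeholders, and the exact Connect() parameters vary between M-Files API versions, so treat the names below as assumptions):

```csharp
// Sketch of the login sequence; the Connect() overload may differ
// between M-Files API versions - check against the API reference.
var server = new MFilesServerApplication();
server.Connect(MFAuthType.MFAuthTypeSpecificMFilesUser,
               "user", "password", NetworkAddress: "yourserver");

Vault vault = null;
foreach (VaultOnServer vaultOnServer in server.GetVaults())
{
  if (vaultOnServer.Name == "Your Vault")
  {
    vault = vaultOnServer.LogIn();
    break;
  }
}
```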

Retrieve the ObjID

There are plenty of options to retrieve the ObjID of the object or objects you want to destroy.

  • Maybe you already know the Id of the object. That would be the case if you inserted the object before and stored the Id of the newly created object somewhere.
  • You may want to execute a view and delete the objects it returns.
  • Or you may want to search for objects based on some common criteria, like their object type or class.

Searching for Objects

Searching for Objects in M-Files can be achieved by using the SearchCondition class. The following examples demonstrate the use of the SearchCondition class to look for objects that correspond to a specific object type or class.

Search by ObjectType

The following code snippet creates a SearchCondition instance that looks for all objects that are of the specified objectType.

  /* find all files with the specified object type */
  var searchCondition = new SearchCondition();
  searchCondition.ConditionType = MFConditionType.MFConditionTypeEqual;
  searchCondition.Expression.DataStatusValueType = MFStatusType.MFStatusTypeObjectTypeID;
  searchCondition.TypedValue.SetValue(MFDataType.MFDatatypeLookup, objectTypeId);
Search by class

In the following code snippet a SearchCondition instance is created that allows searching for objects that belong to the specified class.

  /* find all files with the specified class */
  var searchCondition = new SearchCondition();
  searchCondition.ConditionType = MFConditionType.MFConditionTypeEqual;
  searchCondition.Expression.DataPropertyValuePropertyDef = (int)MFBuiltInPropertyDef.MFBuiltInPropertyDefClass;
  searchCondition.TypedValue.SetValue(MFDataType.MFDatatypeLookup, classId);
Getting results

After creating a SearchCondition that looks for the objects, we need to execute the search to retrieve the results.

This is done by calling Vault.ObjectSearchOperations.SearchForObjectsByConditionsEx() and supplying the SearchConditions.

The SearchForObjectsByConditionsEx() method returns an instance of the ObjectSearchResults class.

Enumerating the ObjectSearchResults, we find ObjectVersion instances; each one's ObjVer.ObjID property can be used as a parameter to the aforementioned DestroyObject() method.
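Putting search and destroy together might look like the following sketch (the flag values and the DestroyObject() parameters are my reading of the API and should be checked against the documentation):

```csharp
// Sketch: execute the search and destroy every object it returns.
// The parameter values (flags, 0 for "unlimited", -1 for "all
// versions") are assumptions based on my reading of the API docs.
var conditions = new SearchConditions();
conditions.Add(-1, searchCondition); // -1 appends to the COM collection

var results = vault.ObjectSearchOperations.SearchForObjectsByConditionsEx(
  conditions,
  MFSearchFlags.MFSearchFlagNone,
  false,  // SortResults - not needed, we destroy everything anyway
  0,      // MaxResultCount, assumed to mean "unlimited"
  0);     // SearchTimeoutInSeconds, assumed to mean "no timeout"

foreach (ObjectVersion objectVersion in results)
{
  // destroy all versions of the object
  vault.ObjectOperations.DestroyObject(objectVersion.ObjVer.ObjID, true, -1);
}
```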

Hint

One advantage of calling the Ex version is that you can specify additional search parameters, like MaxResultCount and SearchTimeoutInSeconds, which is pretty important if you are dealing with a lot of files. Otherwise you might end up with an incomplete query and overlook files beyond the maximum result count.

Putting it all together

I've put together a small C# Console Application that implements the whole process and uploaded it to github.

Screenshots

Screenshot of MFilesDeleter listing objects by class name
Screenshot of MFilesDeleter destroying objects by class name


If you have any questions or comments, feel free to leave a comment below or e-Mail me and thank you for reading!

Take care,
Martin
