Creating your own file format to import .FBX, .OBJ and .X in your Windows 8 modern UI game (or 3D engine)

There are a lot of different file formats when it comes to 3D objects. One of the most used is FBX from Autodesk. This file format can be exported by all major DCC tools, but it can be complex for a game or 3D engine developer to parse such a file.

I would like to propose here a solution that allows you to easily import files offline. The idea is to simulate an MSBuild execution in order to reuse the importation process of the XNA content pipeline.

Indeed, XNA is able to load file formats such as .X, .OBJ and .FBX. So with the following code, you will be able to parse 3D files and generate a complete in-memory object model based on the content of the files.

Why do I need offline file parsing?

It is a great idea to parse your assets offline because you can then create your own file format and load it efficiently at runtime.

You no longer need to ship all the different parsers in your game engine, and in some cases you can optimize your data offline using complex and costly algorithms.
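To give you an idea of what this buys you at runtime, here is a minimal sketch (the file layout and the CustomMeshFile class are hypothetical, not the actual Babylon format) showing that loading a pre-baked binary file boils down to reading back, in order, exactly what you wrote offline:

using System.IO;

public class CustomMeshFile
{
    public int Version { get; private set; }
    public float[] Positions { get; private set; }
    public ushort[] Indices { get; private set; }

    public static CustomMeshFile Load(string path)
    {
        using (var reader = new BinaryReader(File.OpenRead(path)))
        {
            var result = new CustomMeshFile();

            // Read back the data in the same order it was written by the offline importer.
            result.Version = reader.ReadInt32();

            int verticesCount = reader.ReadInt32();
            result.Positions = new float[verticesCount * 3];
            for (int i = 0; i < result.Positions.Length; i++)
                result.Positions[i] = reader.ReadSingle();

            int indicesCount = reader.ReadInt32();
            result.Indices = new ushort[indicesCount];
            for (int i = 0; i < indicesCount; i++)
                result.Indices[i] = reader.ReadUInt16();

            return result;
        }
    }
}

No importer, no parsing: just a sequential read into the structures your engine consumes directly.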

Using MSBuild alongside XNA

The main trick here is to use the power of MSBuild with the XNA Framework.

To do so you just have to use the following code (extracted from Babylon, the 3D engine I wrote for WorldMonger):

public void GenerateBabylonFile(string file, string outputFile, bool skinned)
{
    using (FileStream fileStream = new FileStream(outputFile, FileMode.Create, FileAccess.Write))
    {
        using (BinaryWriter writer = new BinaryWriter(fileStream))
        {
            writer.Write(Version); // Version: the file format version number defined by the importer


            var services = new BabylonImport.Importers.FBX.ServiceContainer();

            // Create a graphics device
            var form = new Form();
            services.AddService<IGraphicsDeviceService>(GraphicsDeviceService.AddRef(form.Handle, 1, 1));

            var contentBuilder = new ContentBuilder();
            var contentManager = new ContentManager(services, contentBuilder.OutputDirectory);

            // Tell the ContentBuilder what to build.
            contentBuilder.Clear();
            contentBuilder.Add(file, "Model", null, skinned ? "SkinnedModelProcessor" : "ModelProcessor");

            // Build this new model data.
            string buildError = contentBuilder.Build();

            if (string.IsNullOrEmpty(buildError))
            {
                var model = contentManager.Load<Model>("Model");
                ParseModel(model, writer);
            }
            else
            {
                throw new Exception(buildError);
            }
        }
    }
}

Please note the usage of the skinned boolean: it allows me to use either the standard XNA ModelProcessor, which does not take skinned meshes into account, or my own SkinnedModelProcessor to add support for skinned models (I will not detail these files here; you can have a look at the complete solution if you want more information).

To use this code, you will need the ServiceContainer class (a simple implementation of the IServiceProvider interface):

using System;
using System.Collections.Generic;

namespace BabylonImport.Importers.FBX
{
    public class ServiceContainer : IServiceProvider
    {
        Dictionary<Type, object> services = new Dictionary<Type, object>();

        /// <summary>
        /// Adds a new service to the collection.
        /// </summary>
        public void AddService<T>(T service)
        {
            services.Add(typeof(T), service);
        }

        /// <summary>
        /// Looks up the specified service.
        /// </summary>
        public object GetService(Type serviceType)
        {
            object service;

            services.TryGetValue(serviceType, out service);

            return service;
        }
    }
}

You will also need the GraphicsDeviceService class, which contains all the required resources to create an XNA GraphicsDevice:

using System;
using System.Threading;
using Microsoft.Xna.Framework.Graphics;

#pragma warning disable 67

namespace BabylonImport.Importers.FBX
{
    class GraphicsDeviceService : IGraphicsDeviceService
    {
        static GraphicsDeviceService singletonInstance;
        static int referenceCount;

        GraphicsDeviceService(IntPtr windowHandle, int width, int height)
        {
            parameters = new PresentationParameters();

            parameters.BackBufferWidth = Math.Max(width, 1);
            parameters.BackBufferHeight = Math.Max(height, 1);
            parameters.BackBufferFormat = SurfaceFormat.Color;
            parameters.DepthStencilFormat = DepthFormat.Depth24;
            parameters.DeviceWindowHandle = windowHandle;
            parameters.PresentationInterval = PresentInterval.Immediate;
            parameters.IsFullScreen = false;

            graphicsDevice = new GraphicsDevice(GraphicsAdapter.DefaultAdapter,
                                                GraphicsProfile.Reach,
                                                parameters);
        }

        public static GraphicsDeviceService AddRef(IntPtr windowHandle, int width, int height)
        {
            if (Interlocked.Increment(ref referenceCount) == 1)
            {
                singletonInstance = new GraphicsDeviceService(windowHandle, width, height);
            }

            return singletonInstance;
        }

        public void Release(bool disposing)
        {
            if (Interlocked.Decrement(ref referenceCount) == 0)
            {
                if (disposing)
                {
                    if (DeviceDisposing != null)
                        DeviceDisposing(this, EventArgs.Empty);

                    graphicsDevice.Dispose();
                }

                graphicsDevice = null;
            }
        }

        public void ResetDevice(int width, int height)
        {
            if (DeviceResetting != null)
                DeviceResetting(this, EventArgs.Empty);

            parameters.BackBufferWidth = Math.Max(parameters.BackBufferWidth, width);
            parameters.BackBufferHeight = Math.Max(parameters.BackBufferHeight, height);

            graphicsDevice.Reset(parameters);

            if (DeviceReset != null)
                DeviceReset(this, EventArgs.Empty);
        }


        public GraphicsDevice GraphicsDevice
        {
            get { return graphicsDevice; }
        }

        GraphicsDevice graphicsDevice;

        PresentationParameters parameters;

        public event EventHandler<EventArgs> DeviceCreated;
        public event EventHandler<EventArgs> DeviceDisposing;
        public event EventHandler<EventArgs> DeviceReset;
        public event EventHandler<EventArgs> DeviceResetting;
    }
}

Finally you will need the ContentBuilder class to handle the MSBuild process (a big thanks to Shawn Hargreaves for this one!):

using System;
using System.IO;
using System.Diagnostics;
using System.Collections.Generic;
using Microsoft.Build.Construction;
using Microsoft.Build.Evaluation;
using Microsoft.Build.Execution;
using Microsoft.Build.Framework;
using System.Windows.Forms;

namespace BabylonImport.Importers.FBX
{
    class ContentBuilder : IDisposable
    {
        const string xnaVersion = ", Version=4.0.0.0, PublicKeyToken=842cf8be1de50553";

        static readonly string[] pipelineAssemblies =
        {
            "Microsoft.Xna.Framework.Content.Pipeline.FBXImporter" + xnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.XImporter" + xnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.TextureImporter" + xnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.EffectImporter" + xnaVersion,
            "SkinnedModelPipeline"
        };

        Project buildProject;
        ProjectRootElement projectRootElement;
        BuildParameters buildParameters;
        readonly List<ProjectItem> projectItems = new List<ProjectItem>();
        ErrorLogger errorLogger;

        string buildDirectory;
        string processDirectory;
        string baseDirectory;

        static int directorySalt;

        public string OutputDirectory
        {
            get { return Path.Combine(buildDirectory, "bin"); }
        }

        public ContentBuilder()
        {
            CreateTempDirectory();
            CreateBuildProject();
        }

        public void Dispose()
        {
            DeleteTempDirectory();
        }

        void CreateBuildProject()
        {
            string projectPath = Path.Combine(buildDirectory, "content.contentproj");
            string outputPath = Path.Combine(buildDirectory, "bin");

            // Create the build project.
            projectRootElement = ProjectRootElement.Create(projectPath);

            // Include the standard targets file that defines how to build XNA Framework content.
            projectRootElement.AddImport(Application.StartupPath +
                "\\XNA\\Microsoft.Xna.GameStudio.ContentPipeline.targets");

            buildProject = new Project(projectRootElement);

            buildProject.SetProperty("XnaPlatform", "Windows");
            buildProject.SetProperty("XnaProfile", "Reach");
            buildProject.SetProperty("XnaFrameworkVersion", "v4.0");
            buildProject.SetProperty("Configuration", "Release");
            buildProject.SetProperty("OutputPath", outputPath);
            buildProject.SetProperty("ContentRootDirectory", ".");
            buildProject.SetProperty("ReferencePath", Application.StartupPath);

            // Register any custom importers or processors.
            foreach (string pipelineAssembly in pipelineAssemblies)
            {
                buildProject.AddItem("Reference", pipelineAssembly);
            }

            // Hook up our custom error logger (ErrorLogger, a small ILogger implementation not shown here).
            errorLogger = new ErrorLogger();

            buildParameters = new BuildParameters(ProjectCollection.GlobalProjectCollection)
            {
                Loggers = new ILogger[] { errorLogger }
            };
        }

        public void Add(string filename, string name, string importer, string processor)
        {
            ProjectItem item = buildProject.AddItem("Compile", filename)[0];

            item.SetMetadataValue("Link", Path.GetFileName(filename));
            item.SetMetadataValue("Name", name);

            if (!string.IsNullOrEmpty(importer))
                item.SetMetadataValue("Importer", importer);

            if (!string.IsNullOrEmpty(processor))
                item.SetMetadataValue("Processor", processor);

            projectItems.Add(item);
        }

        public void Clear()
        {
            buildProject.RemoveItems(projectItems);
            projectItems.Clear();
        }

        public string Build()
        {
            // Clear any previous errors.
            errorLogger.Errors.Clear();

            // Create and submit a new asynchronous build request.
            BuildManager.DefaultBuildManager.BeginBuild(buildParameters);

            var request = new BuildRequestData(buildProject.CreateProjectInstance(), new string[0]);
            BuildSubmission submission = BuildManager.DefaultBuildManager.PendBuildRequest(request);

            submission.ExecuteAsync(null, null);

            // Wait for the build to finish.
            submission.WaitHandle.WaitOne();

            BuildManager.DefaultBuildManager.EndBuild();

            // If the build failed, return an error string.
            if (submission.BuildResult.OverallResult == BuildResultCode.Failure)
            {
                return string.Join("\n", errorLogger.Errors.ToArray());
            }

            return null;
        }

        void CreateTempDirectory()
        {
            baseDirectory = Path.Combine(Path.GetTempPath(), GetType().FullName);

            int processId = Process.GetCurrentProcess().Id;
            processDirectory = Path.Combine(baseDirectory, processId.ToString());

            directorySalt++;
            buildDirectory = Path.Combine(processDirectory, directorySalt.ToString());

            Directory.CreateDirectory(buildDirectory);

            PurgeStaleTempDirectories();
        }

        void DeleteTempDirectory()
        {
            Directory.Delete(buildDirectory, true);

            if (Directory.GetDirectories(processDirectory).Length == 0)
            {
                Directory.Delete(processDirectory);

                if (Directory.GetDirectories(baseDirectory).Length == 0)
                {
                    Directory.Delete(baseDirectory);
                }
            }
        }

        void PurgeStaleTempDirectories()
        {
            // Check all subdirectories of our base location.
            foreach (string directory in Directory.GetDirectories(baseDirectory))
            {
                // The subdirectory name is the ID of the process which created it.
                int processId;

                if (int.TryParse(Path.GetFileName(directory), out processId))
                {
                    try
                    {
                        // Is the creator process still running?
                        Process.GetProcessById(processId);
                    }
                    catch (ArgumentException)
                    {
                        // If the process is gone, we can delete its temp directory.
                        Directory.Delete(directory, true);
                    }
                }
            }
        }
    }
}

Please note that the XNA assemblies used by the MSBuild process are located in the application folder:

The MSBuild targets for XNA are located in this folder:

Application.StartupPath + "\\XNA\\Microsoft.Xna.GameStudio.ContentPipeline.targets"
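If you want to fail fast with a clear message when these files are missing, a small sanity check before launching the build can help (a simple sketch, assuming the deployment layout described above):

using System.IO;
using System.Windows.Forms;

static class PipelineChecks
{
    public static void EnsureXnaTargetsDeployed()
    {
        // The targets file must sit in an XNA subfolder next to the importer executable.
        string targetsPath = Path.Combine(Application.StartupPath,
            @"XNA\Microsoft.Xna.GameStudio.ContentPipeline.targets");

        if (!File.Exists(targetsPath))
        {
            throw new FileNotFoundException(
                "XNA content pipeline targets file not found. Copy it next to the importer.",
                targetsPath);
        }
    }
}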

Parsing object models

Once all the build process is setup, you just have to browse the objects generated by XNA through the XNA content pipeline:

var model = contentManager.Load<Model>("Model");
ParseModel(model, writer);

The ParseModel method creates the final file according to your needs:

void ParseModel(Model model, BinaryWriter writer)
{
    var effects = model.Meshes.SelectMany(m => m.Effects).ToList();
    var meshes = model.Meshes.ToList();
    var total = effects.Count + meshes.Count;
    var progress = 0;
    SkinningData skinningData = null;
    if (model.Tag != null)
    {
        skinningData = model.Tag as SkinningData;
        total += skinningData.BindPose.Count;
    }

    if (skinningData != null)
    {
        // Bones
        for (int boneIndex = 0; boneIndex < skinningData.BindPose.Count; boneIndex++)
        {
            ParseBone(boneIndex, skinningData, writer);
            if (OnImportProgressChanged != null)
                OnImportProgressChanged(((progress++) * 100) / total);
        }

        // Animations
        foreach (var clipKey in skinningData.AnimationClips.Keys)
        {
            ParseAnimationClip(clipKey, skinningData.AnimationClips[clipKey], writer);
        }
    }

    foreach (Effect effect in effects)
    {
        ParseEffect(effect, writer);
        if (OnImportProgressChanged != null)
            OnImportProgressChanged(((progress++) * 100) / total);
    }

    foreach (var mesh in meshes)
    {
        ParseMesh(mesh, writer);
        if (OnImportProgressChanged != null)
            OnImportProgressChanged(((progress++) * 100) / total);
    }
}
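For reference, the OnImportProgressChanged callback used above is simply a progress notification raised by the importer. Its exact declaration is not shown in this post; a minimal sketch (BabylonImporter is a hypothetical name for the importer class) could look like this:

using System;

public partial class BabylonImporter
{
    // Hypothetical sketch: a callback receiving the import progress (0 to 100).
    public event Action<int> OnImportProgressChanged;
}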

In my case, I go through all meshes and effects and I write to the final output file what I need for my game:

void ParseMesh(ModelMesh modelMesh, BinaryWriter writer)
{
    var proxyID = ProxyMesh.Dump(modelMesh.Name, writer);
    int indexName = 0;

    foreach (var part in modelMesh.MeshParts)
    {
        var material = exportedMaterials.First(m => m.Name == part.Effect.GetHashCode().ToString());

        var indices = new ushort[part.PrimitiveCount * 3];
        part.IndexBuffer.GetData(part.StartIndex * 2, indices, 0, indices.Length);

        for (int index = 0; index < indices.Length; index += 3)
        {
            var temp = indices[index];
            indices[index] = indices[index + 2];
            indices[index + 2] = temp;
        }

        if (part.VertexBuffer.VertexDeclaration.VertexStride > PositionNormalTextured.Stride)
        {
            var mesh = new Mesh<PositionNormalTexturedWeights>(material);
            var vertices = new PositionNormalTexturedWeights[part.NumVertices];
            part.VertexBuffer.GetData(part.VertexOffset * part.VertexBuffer.VertexDeclaration.VertexStride,
                vertices, 0, vertices.Length, part.VertexBuffer.VertexDeclaration.VertexStride);

            mesh.AddPart(indexName.ToString(), vertices.ToList(), indices.Select(i => (int)i).ToList());
            mesh.Dump(writer, proxyID);
        }
        else
        {
            var mesh = new Mesh<PositionNormalTextured>(material);
            var vertices = new PositionNormalTextured[part.NumVertices];
            part.VertexBuffer.GetData(part.VertexOffset * PositionNormalTextured.Stride,
                vertices, 0, vertices.Length, PositionNormalTextured.Stride);

            mesh.AddPart(indexName.ToString(), vertices.ToList(), indices.Select(i => (int)i).ToList());
            mesh.Dump(writer, proxyID);
        }

        indexName++;
    }
}

Using strictly the same code, you can import .OBJ or even .X files!
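For instance, a typical batch conversion could look like the following sketch (BabylonImporter is a hypothetical name for the class exposing GenerateBabylonFile; when no importer is specified, the content pipeline picks one based on the file extension):

class Program
{
    static void Main()
    {
        var importer = new BabylonImporter();

        // The exact same call works whatever the source format is.
        importer.GenerateBabylonFile(@"Models\hero.fbx", @"Output\hero.babylon", true);
        importer.GenerateBabylonFile(@"Models\rock.obj", @"Output\rock.babylon", false);
        importer.GenerateBabylonFile(@"Models\terrain.x", @"Output\terrain.babylon", false);
    }
}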

Feel free to use it for your own game:

https://www.catuhe.com/msdn/babylonimport.zip

Why does WACK fail on my C++/WinRT component?

If you are developing a WinRT component using C++ (good idea!), you could run into some issues with the Windows App Certification Kit (WACK) validation process:

The key point is that all calls to vccorlib110.dll (The C++ runtime used by Visual Studio 2012) are marked as not supported:


"API xxxxx in vccorlib110.dll is not supported for this application type."

Actually, if you want to reference native libraries which are not system libraries, you must include them inside your package.

To do so, just add a reference to the Microsoft Visual C++ Runtime Package (alongside your own WinRT component):

Now the runtime is included in your app package, so the WACK validation will complete successfully.

How to develop a game for Windows 8 modern UI

updated on 10/12/2012

The Windows Store is an exceptional opportunity for your games to reach an unmatchable market size.

So the question is: which technologies are available to develop a game?

Developing from scratch

Using DirectX

Windows 8 Modern UI allows developers to create games using DirectX 11 with C++:

https://msdn.microsoft.com/en-us/library/windows/apps/ee663274.aspx

Microsoft DirectX graphics provides a set of APIs that you can use to create games and other high-performance multimedia apps. DirectX graphics includes support for high-performance 2-D and 3-D graphics.

If you are a .NET developer you can use an excellent wrapper called SharpDX:

https://www.sharpdx.org/

Using a wrapper consumes a bit of the raw power of your computer but it can be a good tradeoff if you do not want to learn C++ (for instance my game, WorldMonger, is developed using SharpDX. By the way, I will publish soon the Babylon Engine I wrote for WorldMonger).

Using HTML5

Another way to develop a game for Windows 8 is to use HTML5. The beauty of the thing is that you can just use standard HTML5 code and integrate it in your Windows 8 project!

You have tons of articles on the web explaining how to use the canvas to create accelerated 2D games. Some of them:

  1. Everything you need to know to build HTML5 games with canvas and SVG: https://blogs.msdn.com/b/davrous/archive/2012/07/27/everything-you-need-to-know-to-build-html5-games-with-canvas-amp-svg.aspx
  2. Modernizing your HTML5 canvas games (part 1): https://blogs.msdn.com/b/davrous/archive/2012/04/06/modernizing-your-html5-canvas-games-with-offline-apis-file-apis-css3-amp-hardware-scaling.aspx
  3. Modernizing your HTML5 canvas games (part 2): https://blogs.msdn.com/b/davrous/archive/2012/04/17/modernizing-your-html5-canvas-games-part-2-offline-api-drag-n-drop-amp-file-api.aspx
  4. Unleash the power of HTML5 canvas for gaming: https://blogs.msdn.com/b/eternalcoding/archive/2012/03/22/unleash-the-power-of-html-5-canvas-for-gaming-part-1.aspx
  5. Writing a small game using HTML5 and JavaScript: https://blogs.msdn.com/b/eternalcoding/archive/2011/09/06/write-a-small-game-using-html5-and-javascript-brikbrok.aspx

Using tools

You can also use third-party tools to help you create your games. I have tried to gather some of them here, and I will possibly update this post to add new ones when they become available.

Unity 3D 4

Unity 3D is one of the biggest middleware for developing 2D and 3D cross platform games. The next version (4.0) will add support for Windows 8. You can already pre-order it right there: https://unity3d.com/#unity4beta

Unity 3D is an integrated environment where coding skills are not required (but you can of course add your own scripts).

My colleague Michel Rousseau published an article about it:

https://blogs.msdn.com/b/designmichel/archive/2012/09/24/the-3d-and-windows-8-unity-3d-4.aspx

EaselJS

EaselJS provides straight forward solutions for working with rich graphics and interactivity with HTML5 canvas. It provides an API that is familiar to Flash developers, but embraces JavaScript sensibilities. It consists of a full, hierarchical display list, a core interaction model, and helper classes to make working with Canvas much easier: https://www.createjs.com/#!/EaselJS.

For instance, the Atari Arcade experience was developed using EaselJS: https://www.atari.com/arcade.

Furthermore, EaselJS comes with other really useful frameworks:

  1. TweenJS: A simple tweening library for use in JavaScript. It was developed to integrate well with the EaselJS library, but is not dependent on or specific to it. It supports tweening of both numeric object properties & CSS style properties. The API is simple but very powerful, making it easy to create complex tweens by chaining commands.
  2. SoundJS: Works to abstract away the problems and makes adding sound to your games or rich experiences much easier. You can query for capabilities, then specify and prioritize what APIs, plugins, and features are leveraged for specific devices or browsers.

MonoGame

MonoGame is an Open Source implementation of the Microsoft XNA 4 Framework which you can use to port an existing XNA game to Windows 8:

https://monogame.codeplex.com/

DirectX ToolKit

DirectX Tool Kit (aka DirectXTK) is a collection of helper classes for writing Direct3D 11 code for Windows Store apps, Windows 8 Win32 desktop, and Windows 7 ‘classic’ applications in C++.

It features:

  • SpriteBatch: simple & efficient 2D sprite rendering
  • SpriteFont: bitmap based text rendering
  • Effects: a set of built-in shaders for common rendering tasks
  • GeometricPrimitive: draws basic shapes such as cubes and spheres
  • CommonStates: factory providing commonly used D3D state objects
  • VertexTypes: structures for commonly used vertex data formats
  • DDSTextureLoader: light-weight DDS file texture loader
  • WICTextureLoader: WIC-based image file texture loader
  • ScreenGrab: light-weight screen shot saver

You can find it on Codeplex:

https://directxtk.codeplex.com/

GameMaker Studio

GameMaker Studio is a framework I discovered recently. It allows you to create casual and social games for different operating systems, including Windows 8 Modern UI:

https://www.yoyogames.com/gamemaker/studio/multiformat/windows8

So where should I go?

This matrix is not the absolute truth but a guide to help you make a decision:

Some examples?

To end this article, I would like to share with you some examples of games ported to Windows 8, in order to show the porting path each one took.

Cut The Rope

Cut The Rope was ported from iOS to HTML5 / canvas, and it is a real success story:
https://www.cuttherope.ie/dev/

Jazz

Jazz (from Bulkypix & Eggball games) was ported (really quickly) from C++/DirectX on the desktop to C++/DirectX for Windows 8 Modern UI.

Pirates Love Daisies

Pirates Love Daisies was almost directly ported from HTML5 / canvas with EaselJS to HTML5 / canvas for Windows 8 (with EaselJS obviously).

WorldMonger is on the Windows 8 Store!!!

I’m thrilled to announce that WorldMonger, the game I developed with some friends of mine, is available on the Windows 8 Store:

https://apps.microsoft.com/webpdp/app/worldmonger/4a3fa8c4-5086-4b91-b63b-a878da33a28d

WorldMonger is a god game where you will learn how to create a stable ecosystem. You will use huge powers to shape the world and control the DNA of grass, rabbits and foxes in order to establish a peaceful place.

You can challenge your friends and become the best god ever!!

Feel free to also discover the integrated Island Factory to create your own island!

Credits for WorldMonger:

  • Artificial Intelligence: Eric Mittelette
  • Web Site and Back end: Pierre Lagarde
  • Graphics, UI and 3D models: Michel Rousseau
  • Game and Level Design: Ludovic Wagner
  • Music: David Rousset
  • Game Logic and 3D Engine: David Catuhe

 

Enjoy!!!

Discover how the Kinect Toolbox was created

 

I’m pleased to announce the final availability of my book about Kinect for Windows:

The table of contents is the following:

PART I KINECT AT A GLANCE

CHAPTER 1 A bit of background

CHAPTER 2 Who’s there?

PART II INTEGRATE KINECT IN YOUR APPLICATION

CHAPTER 3 Displaying Kinect data

CHAPTER 4 Recording and playing a Kinect session

PART III POSTURES AND GESTURES

CHAPTER 5 Capturing the context

CHAPTER 6 Algorithmic gestures and postures

CHAPTER 7 Templated gestures and postures

CHAPTER 8 Using gestures and postures in an application

PART IV CREATING A USER INTERFACE FOR KINECT

CHAPTER 9 You are the mouse!

CHAPTER 10 Controls for Kinect

CHAPTER 11 Creating augmented reality with Kinect

So if you want to discover how to use Kinect for Windows or how the Kinect Toolbox was built, feel free to grab your copy.

 

The book version:
https://www.amazon.com/Programming-Kinect-Windows-Software-Development/dp/0735666814/ref=tmm_pap_title_0?ie=UTF8&qid=1347907979&sr=8-2

The Kindle version:
https://www.amazon.com/Programming-KinectTM-Windows%C2%AE-Development-ebook/dp/B009AITHPC/ref=tmm_kin_title_0?ie=UTF8&qid=1347907979&sr=8-2

Using Web Workers to improve performance of image manipulation

Today I would like to talk about picture manipulation. Not the Direct2D way I used in my previous article but the pure JavaScript way.

 

The test case

The test application is simple. On the left is the picture to manipulate, and on the right the updated result (a sepia tone effect is applied):

The page itself is simple and is described as follows:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>PictureWorker</title>

    <link href="default.css" rel="stylesheet" />
</head>
<body id="root">
    <div id="sourceDiv">
        <img id="source" src="mop.jpg" />
    </div>
    <div id="targetDiv">
        <canvas id="target"></canvas>
    </div>
    <div id="log"></div>
</body>
</html>

 

The overall process to apply a sepia tone effect requires you to compute a new value for every pixel of the picture:

finalRed = (red * 0.393) + (green * 0.769) + (blue * 0.189);

finalGreen = (red * 0.349) + (green * 0.686) + (blue * 0.168);

finalBlue = (red * 0.272) + (green * 0.534) + (blue * 0.131);

To make it more realistic, I added a bit of randomness to the formula, so the final JavaScript code to apply to every pixel is:

function noise() {
    return Math.random() * 0.5 + 0.5;
};

function colorDistance(scale, dest, src) {
    return (scale * dest + (1 - scale) * src);
};

var processSepia = function (pixel) {
    var r = pixel.r, g = pixel.g, b = pixel.b;

    pixel.r = colorDistance(noise(), (r * 0.393) + (g * 0.769) + (b * 0.189), r);
    pixel.g = colorDistance(noise(), (r * 0.349) + (g * 0.686) + (b * 0.168), g);
    pixel.b = colorDistance(noise(), (r * 0.272) + (g * 0.534) + (b * 0.131), b);
};

Brute force

Obviously, the very first solution consists of using a brute-force approach: a function that applies the previous code to every pixel.

To get access to the pixels, you can use the canvas context with the following code:

var source = document.getElementById("source");

source.onload = function () {
    var canvas = document.getElementById("target");
    canvas.width = source.clientWidth;
    canvas.height = source.clientHeight;

    var tempContext = canvas.getContext("2d");
    tempContext.drawImage(source, 0, 0, canvas.width, canvas.height);

    var canvasData = tempContext.getImageData(0, 0, canvas.width, canvas.height);
    var binaryData = canvasData.data;
};

The binaryData object is an array containing the RGBA values of every pixel, and it can be used to quickly read or write data directly to the canvas.

So with this in mind, we can apply the whole effect with the following code:

    var source = document.getElementById("source");

    source.onload = function () {
        var start = new Date();

        var canvas = document.getElementById("target");
        canvas.width = source.clientWidth;
        canvas.height = source.clientHeight;

        if (!canvas.getContext) {
            log.innerText = "Canvas not supported. Please install a HTML5 compatible browser.";
            return;
        }

        var tempContext = canvas.getContext("2d");
        var len = canvas.width * canvas.height * 4;

        tempContext.drawImage(source, 0, 0, canvas.width, canvas.height);

        var canvasData = tempContext.getImageData(0, 0, canvas.width, canvas.height);
        var binaryData = canvasData.data;
        processSepia(binaryData, len);

        tempContext.putImageData(canvasData, 0, 0);
        var diff = new Date() - start;
        log.innerText = "Process done in " + diff + " ms (no web workers)";

     }

The processSepia function is just a variation of the previous one:

var processSepia = function (binaryData, l) {
    for (var i = 0; i < l; i += 4) {
        var r = binaryData[i];
        var g = binaryData[i + 1];
        var b = binaryData[i + 2];

        binaryData[i] = colorDistance(noise(), (r * 0.393) + (g * 0.769) + (b * 0.189), r);
        binaryData[i + 1] = colorDistance(noise(), (r * 0.349) + (g * 0.686) + (b * 0.168), g);
        binaryData[i + 2] = colorDistance(noise(), (r * 0.272) + (g * 0.534) + (b * 0.131), b);
    }
};

With this solution, on my Intel Extreme processor (12 cores), the main process takes 150ms and obviously only uses one processor:

 

Adding web workers

The best thing you can do when dealing with this kind of SIMD (single instruction, multiple data) workload is to use a parallelization approach, especially when you want to target low-end hardware (such as phones) with limited resources.

With JavaScript, to enjoy the power of parallelization, you have to use the Web Workers (my friend David Rousset wrote an excellent paper on this subject: https://blogs.msdn.com/b/davrous/archive/2011/07/15/introduction-to-the-html5-web-workers-the-javascript-multithreading-approach.aspx).

Picture processing is a really good candidate for parallelization because (in the case of the sepia tone) every pixel is processed independently, so the following approach is possible:

To do so, first of all you have to create a tools.js file to be used as a reference by other scripts:

function noise() {
    return Math.random() * 0.5 + 0.5;
};

function colorDistance(scale, dest, src) {
    return (scale * dest + (1 - scale) * src);
};

var processSepia = function (binaryData, l) {
    for (var i = 0; i < l; i += 4) {
        var r = binaryData[i];
        var g = binaryData[i + 1];
        var b = binaryData[i + 2];

        binaryData[i] = colorDistance(noise(), (r * 0.393) + (g * 0.769) + (b * 0.189), r);
        binaryData[i + 1] = colorDistance(noise(), (r * 0.349) + (g * 0.686) + (b * 0.168), g);
        binaryData[i + 2] = colorDistance(noise(), (r * 0.272) + (g * 0.534) + (b * 0.131), b);
    }
};

The processSepia function will be applied to every chunk of the picture by a dedicated worker. The code of each worker is included in a pictureprocessor.js file:

importScripts("tools.js");

self.onmessage = function (e) {
    var canvasData = e.data.data;
    var binaryData = canvasData.data;

    var l = e.data.length;
    var index = e.data.index;

    processSepia(binaryData, l);

    self.postMessage({ result: canvasData, index: index });
};

The main point here is that the canvas data (actually a part of it, according to the current block to process) is cloned by JavaScript and passed to the worker. The worker is not working on the initial source but on a copy of it (using the structured clone algorithm). The copy itself is really quick and limited to a specific part of the picture.

The main client page (default.js) has to create 4 workers and give each of them the right part of the picture. Then every worker will call back a function in the main thread using the messaging API (postMessage / onmessage) to give back its result:

var source = document.getElementById("source");

source.onload = function () {
    var start = new Date();

    var canvas = document.getElementById("target");
    canvas.width = source.clientWidth;
    canvas.height = source.clientHeight;

    // Testing canvas support
    if (!canvas.getContext) {
        log.innerText = "Canvas not supported. Please install a HTML5 compatible browser.";
        return;
    }

    var tempContext = canvas.getContext("2d");
    var len = canvas.width * canvas.height * 4;

    // Drawing the source image into the target canvas
    tempContext.drawImage(source, 0, 0, canvas.width, canvas.height);

    // If workers are not supported
    if (!window.Worker) {
        // Getting all the canvas data
        var canvasData = tempContext.getImageData(0, 0, canvas.width, canvas.height);
        var binaryData = canvasData.data;

        // Processing all the pixel with the main thread
        processSepia(binaryData, len);

        // Copying back canvas data to canvas
        tempContext.putImageData(canvasData, 0, 0);

        var diff = new Date() - start;
        log.innerText = "Process done in " + diff + " ms (no web workers)";

        return;
    }

    // Let's say we want to use 4 workers
    var workersCount = 4;
    var finished = 0;
    var segmentLength = len / workersCount; // This is the length of the array sent to each worker
    var blockSize = canvas.height / workersCount; // Height of the picture chunk for every worker

    // Function called when a job is finished
    var onWorkEnded = function (e) {
        // Data is retrieved using a memory clone operation
        var canvasData = e.data.result; 
        var index = e.data.index;

        // Copying back canvas data to canvas
        tempContext.putImageData(canvasData, 0, blockSize * index);

        finished++;

        if (finished == workersCount) {
            var diff = new Date() - start;
            log.innerText = "Process done in " + diff + " ms";
        }
    };

    // Launching every worker
    for (var index = 0; index < workersCount; index++) {
        var worker = new Worker("pictureProcessor.js");
        worker.onmessage = onWorkEnded;

        // Getting the picture
        var canvasData = tempContext.getImageData(0, blockSize * index, canvas.width, blockSize);

        // Sending canvas data to the worker using a copy memory operation
        worker.postMessage({ data: canvasData, index: index, length: segmentLength });
    }
};

source.src = "mop.jpg";
 

Using this technique, the complete process lasts only 80ms (from 150ms) on my computer and obviously uses 4 processors:

On my low-end hardware (based on a dual-core system), the process falls to 500ms (from 900ms).

The final code is available here: https://www.catuhe.com/msdn/pictureworkers.zip

And the live version is right there: https://www.catuhe.com/msdn/workers/default.html

(For comparison, the no web workers version: https://www.catuhe.com/msdn/workers/defaultnoworker.html)

An important point to note is that on recent computers the difference can be small, or even in favor of the code without workers. The overhead of the memory copy must be balanced by enough computation in the workers, and the sepia tone effect may not be costly enough in some cases.

However, web workers will really be useful on low-end hardware.

Porting to Windows 8

Finally, I could not resist the pleasure of porting my JavaScript code to create a Windows 8 application. It took me about 10 minutes to create a blank JavaScript project and copy/paste the JavaScript code inside (feel the power of native JavaScript code for Windows 8!).

So feel free to grab the Windows 8 app code here: https://www.catuhe.com/msdn/Win8PictureWorkers.zip

Tips and tricks for C# and JavaScript Windows 8 developers: Using notifications for non-modal messages

When you want to display a message to inform your user, it is not always a good idea to use a modal MessageDialog, which can be really annoying for the user.

A better approach can be to use a toast notification in order to display a non-intrusive message to the user:

To do so, here is the C# code:

public static void ShowNotification(string title, string message)
{
    const ToastTemplateType template = Windows.UI.Notifications.ToastTemplateType.ToastText02;
    var toastXml = Windows.UI.Notifications.ToastNotificationManager.GetTemplateContent(template);

    var toastTextElements = toastXml.GetElementsByTagName("text");
    toastTextElements[0].AppendChild(toastXml.CreateTextNode(title));
    toastTextElements[1].AppendChild(toastXml.CreateTextNode(message));

    var toast = new Windows.UI.Notifications.ToastNotification(toastXml);

    var toastNotifier = Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier();
    toastNotifier.Show(toast);
}

And the JavaScript version:

var showNotification = function (title, message) {
    var notifications = Windows.UI.Notifications;

    var template = notifications.ToastTemplateType.toastText02;
    var toastXml = notifications.ToastNotificationManager.getTemplateContent(template);

    var toastTextElements = toastXml.getElementsByTagName("text");
    toastTextElements[0].appendChild(toastXml.createTextNode(title));
    toastTextElements[1].appendChild(toastXml.createTextNode(message));

    var toast = new notifications.ToastNotification(toastXml);

    var toastNotifier = notifications.ToastNotificationManager.createToastNotifier();
    toastNotifier.show(toast);
};

And obviously, do not forget to enable the "Toast capable" option in the Package.appxmanifest file.
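If you want to be defensive, you can also check at runtime whether toasts can actually be displayed before showing one. A small sketch, reusing the ShowNotification helper defined above:

public static void ShowNotificationIfEnabled(string title, string message)
{
    var toastNotifier = Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier();

    if (toastNotifier.Setting == Windows.UI.Notifications.NotificationSetting.Enabled)
    {
        ShowNotification(title, message);
    }
    // Otherwise toasts are disabled (by the manifest, the user settings or a policy):
    // fall back to another, non-toast UI if the message is important.
}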

Creating a WinRT component using C++/CX Part 2: Adding a custom Direct2D effect to DeForm

The previous article introduced the DeForm library: a WinRT component that uses Direct2D to apply filters to a picture:

https://blogs.msdn.com/b/eternalcoding/archive/2012/08/13/creating-a-winrt-component-using-c-cx-deform-a-direct2d-effect-toolkit.aspx

The complete component can be found there:

https://deform.codeplex.com/

This article will show you how to use the Direct2D effect pipeline to create a custom Direct2D effect. This effect will try to achieve some kind of Polaroïd look by applying several filters in a row:

  • Black&White
  • Sepia tone
  • Saturation
  • Brightness

To do so, you have to create a COM component for Direct2D (handling reference counting and interface querying):

class PolaroidEffect: public IUnknown
{
public:
    IFACEMETHODIMP_(ULONG) AddRef();
    IFACEMETHODIMP_(ULONG) Release();
    IFACEMETHODIMP QueryInterface(_In_ REFIID riid, _Outptr_ void** ppOutput);

private:
    LONG m_refCount;
};

This class implements the IUnknown interface (which is the minimum you can do to be a COM component).

The associated code is obvious:

IFACEMETHODIMP_(ULONG) PolaroidEffect::AddRef() 
{ 
    m_refCount++; 
    return m_refCount; 
} 

IFACEMETHODIMP_(ULONG) PolaroidEffect::Release() 
{ 
    m_refCount--; 

    if (m_refCount == 0) 
    { 
        delete this; 
        return 0; 
    } 
    else 
    { 
        return m_refCount; 
    } 
} 

IFACEMETHODIMP PolaroidEffect::QueryInterface(_In_ REFIID riid, _Outptr_ void** ppOutput) 
{ 
    *ppOutput = nullptr; 
    HRESULT hr = S_OK; 

    if (riid == __uuidof(IUnknown)) 
    { 
        *ppOutput = reinterpret_cast<IUnknown*>(this); 
    }     
    else 
    { 
        hr = E_NOINTERFACE; 
    } 

    if (*ppOutput != nullptr) 
    { 
        AddRef(); 
    } 

    return hr; 
}

Then you have to implement the ID2D1EffectImpl interface, which is the root interface of every Direct2D effect:

class PolaroidEffect: public ID2D1EffectImpl
{
public:
    IFACEMETHODIMP Initialize(
        _In_ ID2D1EffectContext* pContextInternal,
        _In_ ID2D1TransformGraph* pTransformGraph
        );

    IFACEMETHODIMP PrepareForRender(D2D1_CHANGE_TYPE changeType);
    IFACEMETHODIMP SetGraph(_In_ ID2D1TransformGraph* pGraph);

    static HRESULT Register(_In_ ID2D1Factory1* pFactory);
    static HRESULT __stdcall CreateEffect(_Outptr_ IUnknown** ppEffectImpl);

    IFACEMETHODIMP_(ULONG) AddRef();
    IFACEMETHODIMP_(ULONG) Release();
    IFACEMETHODIMP QueryInterface(_In_ REFIID riid, _Outptr_ void** ppOutput);

    // Properties
    float GetForce() const;
    HRESULT SetForce(float force);

private:
    PolaroidEffect();

    LONG m_refCount; 
    float m_force;

    ComPtr<ID2D1Effect> m_pColorMatrixBWEffect;
    ComPtr<ID2D1Effect> m_pColorMatrixSepiaEffect;
    ComPtr<ID2D1Effect> m_pBrightnessEffect;
    ComPtr<ID2D1Effect> m_pHueEffect;
    ComPtr<ID2D1Effect> m_pSaturationEffect;
    ComPtr<ID2D1TransformNode> m_pSaturationTransform;
    ComPtr<ID2D1TransformNode> m_pHueTransform;
    ComPtr<ID2D1TransformNode> m_pBrightnessTransform;
    ComPtr<ID2D1TransformNode> m_pColorMatrixBWTransform;
    ComPtr<ID2D1TransformNode> m_pColorMatrixSepiaTransform;
};

The PolaroidEffect itself has a Force property you can get/set with the GetForce()/SetForce() methods.

The ID2D1EffectImpl interface adds the following methods:

  • Initialize: This method is used to create the internal Direct2D objects
  • PrepareForRender: This method is called just before rendering if the effect has been previously initialized but not yet drawn, if a property has changed, or if something in the context has changed (for instance the DPI)
  • SetGraph: This method is intended for composite effects which have a variable number of inputs. We will get back to it in a future article

In our case, PrepareForRender and SetGraph are really simple:

IFACEMETHODIMP PolaroidEffect::PrepareForRender(D2D1_CHANGE_TYPE changeType)
{
    return S_OK;
}

IFACEMETHODIMP PolaroidEffect::SetGraph(_In_ ID2D1TransformGraph* pGraph) 
{ 
    return E_NOTIMPL; 
} 

All the work is done inside the Initialize method:

IFACEMETHODIMP PolaroidEffect::Initialize(
    _In_ ID2D1EffectContext* pEffectContext, 
    _In_ ID2D1TransformGraph* pTransformGraph
    )
{   
    // Effects

    // Create the b&w effect.
    HRESULT hr = pEffectContext->CreateEffect(CLSID_D2D1ColorMatrix, &m_pColorMatrixBWEffect);

    if (SUCCEEDED(hr)) 
    {
        D2D1_MATRIX_5X4_F matrix = D2D1::Matrix5x4F(
            0.2125f, 0.2125f, 0.2125f, 0.0f,
            0.7154f, 0.7154f, 0.7154f, 0.0f,
            0.0721f, 0.0721f, 0.0721f, 0.0f,
            0.0f, 0.0f, 0.0f, 1.0f,
            0.0f, 0.0f, 0.0f, 0.0f);
        m_pColorMatrixBWEffect->SetValue(D2D1_COLORMATRIX_PROP_COLOR_MATRIX, matrix);
    }

    // Create the sepia
    hr = pEffectContext->CreateEffect(CLSID_D2D1ColorMatrix, &m_pColorMatrixSepiaEffect);

    if (SUCCEEDED(hr)) 
    {
        D2D1_MATRIX_5X4_F matrix = D2D1::Matrix5x4F(
            0.90f, 0.0f, 0.0f, 0.0f,
            0.0f, 0.70f, 0.0f, 0.0f,
            0.0f, 0.0f, 0.30f, 0.0f,
            0.0f, 0.0f, 0.0f, 1.0f,
            0.0f, 0.0f, 0.0f, 0.0f);
        m_pColorMatrixSepiaEffect->SetValue(D2D1_COLORMATRIX_PROP_COLOR_MATRIX, matrix);
    }

    // Create the saturation effect.
    if (SUCCEEDED(hr)) 
    {
        hr = pEffectContext->CreateEffect(CLSID_D2D1Saturation, &m_pSaturationEffect);
    }

    if (SUCCEEDED(hr)) 
    {
        hr = m_pSaturationEffect->SetValue(D2D1_SATURATION_PROP_SATURATION, m_force);
    }

    // Create the brightness effect.
    if (SUCCEEDED(hr)) 
    {
        hr = pEffectContext->CreateEffect(CLSID_D2D1Brightness, &m_pBrightnessEffect);
    }

    if (SUCCEEDED(hr)) 
    {
        hr = m_pBrightnessEffect->SetValue(D2D1_BRIGHTNESS_PROP_WHITE_POINT, D2D1::Vector2F(1.0f, 1.0f));
    }

    if (SUCCEEDED(hr)) 
    {
        hr = m_pBrightnessEffect->SetValue(D2D1_BRIGHTNESS_PROP_BLACK_POINT, D2D1::Vector2F(0.0f, 0.15f));
    }

    // Transforms

    // Create the saturation transform from the saturation effect.
    if (SUCCEEDED(hr))
    {
        hr = pEffectContext->CreateTransformNodeFromEffect(m_pSaturationEffect.Get(),
            &m_pSaturationTransform);
    }

    // Create the brightness transform from the brightness effect.
    if (SUCCEEDED(hr))
    {
        hr = pEffectContext->CreateTransformNodeFromEffect(m_pBrightnessEffect.Get(),
            &m_pBrightnessTransform);
    }

    // Create the black and white transform from the b&w effect.
    if (SUCCEEDED(hr))
    {
        hr = pEffectContext->CreateTransformNodeFromEffect(m_pColorMatrixBWEffect.Get(),
            &m_pColorMatrixBWTransform);
    }

    // Create the sepia transform from the sepia effect.
    if (SUCCEEDED(hr))
    {
        hr = pEffectContext->CreateTransformNodeFromEffect(m_pColorMatrixSepiaEffect.Get(),
            &m_pColorMatrixSepiaTransform);
    }

    // Register transforms with the effect graph.
    if (SUCCEEDED(hr))
    {
        hr = pTransformGraph->AddNode(m_pSaturationTransform.Get());
    }

    if (SUCCEEDED(hr))
    {
        hr = pTransformGraph->AddNode(m_pBrightnessTransform.Get());
    }

    if (SUCCEEDED(hr))
    {
        hr = pTransformGraph->AddNode(m_pColorMatrixBWTransform.Get());
    }

    if (SUCCEEDED(hr))
    {
        hr = pTransformGraph->AddNode(m_pColorMatrixSepiaTransform.Get());
    }

    // Connect the custom effect's input to the black and white transform's input.
    if (SUCCEEDED(hr))
    {
        hr = pTransformGraph->ConnectToEffectInput(
            0,                               // Input index of the effect.
            m_pColorMatrixBWTransform.Get(), // The receiving transform.
            0                                // Input index of the receiving transform.
            );
    }

    // Connect nodes
    if (SUCCEEDED(hr))
    {
        hr = pTransformGraph->ConnectNode(
            m_pColorMatrixBWTransform.Get(),    // 'From' node.
            m_pColorMatrixSepiaTransform.Get(), // 'To' node.
            0                                   // Input index of the 'to' node.
            );
    }

    if (SUCCEEDED(hr))
    {
        hr = pTransformGraph->ConnectNode(
            m_pColorMatrixSepiaTransform.Get(), // 'From' node.
            m_pBrightnessTransform.Get(),       // 'To' node.
            0                                   // Input index of the 'to' node.
            );
    }

    if (SUCCEEDED(hr))
    {
        hr = pTransformGraph->ConnectNode(
            m_pBrightnessTransform.Get(),       // 'From' node.
            m_pSaturationTransform.Get(),       // 'To' node.
            0                                   // Input index of the 'to' node.
            );
    }

    // Connect the transform's output to the custom effect's output.
    if (SUCCEEDED(hr))
    {
        hr = pTransformGraph->SetOutputNode(m_pSaturationTransform.Get());
    }

    return hr;
}

The code creates every filter, gets the transform interface of each filter and connects the transforms like this:

  • Input to black and white transform
  • Black and white to sepia
  • Sepia to brightness
  • Brightness to saturation
  • Saturation to output

Simple, isn’t it?

Then you have to create a method to register your effect with the factory:

DEFINE_GUID(CLSID_Polaroid, 0x59820389, 0xbbd5, 0x40e8, 0x9a, 0xf2, 0x30, 0x9b, 0xb2, 0x94, 0xb3, 0x95);

HRESULT PolaroidEffect::Register(_In_ ID2D1Factory1* pFactory)
{
    PCWSTR propertyXml =
        XML(
        <?xml version='1.0'?>
        <Effect>
        <!-- System Properties -->
        <Property name='DisplayName' type='string' value='Polaroïd Effect'/>
        <Property name='Author' type='string' value='David Catuhe'/>
        <Property name='Category' type='string' value='Bitmap Effect'/>
        <Property name='Description' type='string' value='Apply a Polaroïd like effect.'/>
        <Inputs>
        <Input name='Source'/>
        </Inputs>
        <!-- Effect-specific Properties -->
        <Property name='Force' type='float' value='0'>
        <Property name='DisplayName' type='string' value='Force value'/>
        <Property name='Default' type='float' value='1.0'/>
        </Property>
        </Effect>
        );

    D2D1_PROPERTY_BINDING bindings[] =
    {
        D2D1_VALUE_TYPE_BINDING(L"Force",  &SetForce,  &GetForce),
    };

    // Register the effect using the data defined above.
    return pFactory->RegisterEffectFromString(
        CLSID_Polaroid,
        propertyXml,
        bindings,
        ARRAYSIZE(bindings),
        CreateEffect
        );
}

HRESULT __stdcall PolaroidEffect::CreateEffect(_Outptr_ IUnknown** ppEffectImpl)
{
    *ppEffectImpl = static_cast<ID2D1EffectImpl*>(new PolaroidEffect());

    if (*ppEffectImpl == nullptr)
    {
        return E_OUTOFMEMORY;
    }

    return S_OK;
}

The goal of the Register method is to create an XML description string. This XML describes the effect and each parameter handled by the effect.

In our case, we can expose the Force parameter through the GetForce()/SetForce() methods:

float PolaroidEffect::GetForce() const
{
    return m_force;
}

HRESULT PolaroidEffect::SetForce(float force)
{
    m_force = force;
    return m_pSaturationEffect->SetValue(D2D1_SATURATION_PROP_SATURATION, m_force);
}

Please note the use of CLSID_Polaroid in order to uniquely identify the effect when you call CreateEffect:

// Create the effect
ComPtr<ID2D1Effect> d2dEffect;
Tools::Check(
    context->CreateEffect(CLSID_Polaroid, &d2dEffect)
    );

So finally to allow DeForm to instantiate this new effect, you just have to call the following code:

// Register the custom Polaroïd effect.
Tools::Check(
    PolaroidEffect::Register(m_d2dFactory.Get())
    );

The next article will be about creating custom effects using HLSL shader code!

Tips and tricks for the Windows 8 app developer with HTML5/Javascript: Handling the reset (clear) of a textbox

Windows 8 introduced a really nice feature for textboxes: the clear button.

By clicking on this button, you can simply empty the textbox.

But the question is: how can I handle the associated event?

In fact, as usual, it is pretty simple (when you have the solution!). You just have to handle the oninput event.

This event is raised every time an input is produced for the textbox. For instance, this occurs when you enter some text, or you cut, delete (using the cross!), or paste content.

 

To handle the event, just add the following code:

document.getElementById("textFilter").oninput = function () {
};

How to cook a Windows 8 application with HTML5/Javascript/CSS3: RTM version

With the availability of Windows 8 RTM, you can now download the RTM version of UrzaGatherer:

https://www.catuhe.com/msdn/urza/urzagatherer-rtm.zip

 

 

If you want to learn more about how you can migrate a Release Preview (RP) app, you can go there:

https://www.microsoft.com/en-us/download/details.aspx?id=30706

The complete series can be found here: