Adding support for Canvas in ChakraBridge

Last week, I announced the beginning of ChakraBridge:

One of the problems I mentioned was that the JavaScript framework you want to use has to be independent of the DOM.

But today, thanks to the great contribution of Koen Zwikstra (Developer of XAML Spy), I’m thrilled to announce that you can also use JavaScript frameworks that use the HTML5 canvas!

You can now find a demo of paper.js running inside a Win2D canvas:

Paper.js is an open source vector graphics scripting framework that runs on top of the HTML5 Canvas. It offers a clean Scene Graph / Document Object Model and a lot of powerful functionality to create and work with vector graphics and bezier curves, all neatly wrapped up in a well designed, consistent and clean programming interface.

How does it work?

Basically, ChakraBridge provides an API surface identical to the HTML5 canvas and routes every call to a Win2D canvas. To achieve this, we projected new elements into the JavaScript space:


For instance, the interface implemented by our canvas implementation is the following:

public interface ICanvasRenderingContext2D
{
    IHTMLCanvasElement canvas { get; }
    string fillStyle { get; set; }
    float lineWidth { get; set; }
    string strokeStyle { get; set; }
    void beginPath();
    void bezierCurveTo(float cp1x, float cp1y, float cp2x, float cp2y, float x, float y);
    void clearRect(float x, float y, float width, float height);
    void closePath();
    void fill();
    void fillRect(float x, float y, float width, float height);
    IImageData getImageData(float sx, float sy, float sw, float sh);
    void lineTo(float x, float y);
    void moveTo(float x, float y);
    void restore();
    void save();
    void stroke();
    void transform(float a, float b, float c, float d, float e, float f);
}

The idea is then to map each of these methods to the equivalent Win2D feature, as in the fillRect function:

public void fillRect(float x, float y, float width, float height)
{
    this.window.Session.FillRectangle(x, y, width, height, this.state.Fill);
}

this.window.Session is a Win2D CanvasDrawingSession, so we simply use its FillRectangle method.
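
To make that routing idea concrete, here is a small illustrative JavaScript sketch, not actual ChakraBridge code: the session object and its method names are invented for the example. It shows a context that exposes canvas-style calls while forwarding each one to a host "session":

```javascript
// Illustrative sketch: a fake 2D context that records which host
// "session" call each canvas API call would translate to.
function createBridgedContext(session) {
    return {
        fillStyle: "#000000",
        fillRect: function (x, y, width, height) {
            // In ChakraBridge the equivalent call reaches Win2D's FillRectangle
            session.fillRectangle(x, y, width, height, this.fillStyle);
        },
        clearRect: function (x, y, width, height) {
            session.clear(x, y, width, height);
        }
    };
}

// A recording session stands in for Win2D here.
var calls = [];
var ctx = createBridgedContext({
    fillRectangle: (...args) => calls.push(["fillRectangle", ...args]),
    clear: (...args) => calls.push(["clear", ...args])
});

ctx.fillStyle = "#ff0000";
ctx.fillRect(0, 0, 100, 50);
```

Any framework that only talks to the canvas API (like paper.js) never notices the swap: it sees the familiar surface, while the host decides where the drawing actually lands.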

To display the result, you just have to add a Win2D CanvasControl to your page:

<canvas:CanvasControl x:Name="canvasCtrl" Draw="canvasCtrl_Draw" Width="500" Height="500"/>

Simple, isn’t it?

Managing render frequency

From the JavaScript point of view, the rendering is done by a single function:

function drawScene() {
    // ...paper.js rendering code...
}

From the C# side, the idea is simply to call this function every 1/60th of a second:

this.timer = new DispatcherTimer {
    Interval = TimeSpan.FromMilliseconds(1000d / 60)
};
this.timer.Tick += (o, e) => {
    // invalidate the Win2D canvas so it gets redrawn
};

The Win2D canvas is then invalidated and redrawn using this code:

private void canvasCtrl_Draw(CanvasControl sender, CanvasDrawEventArgs args)
{
    var target = (CanvasRenderTarget)...;
}


Going further

Paper.js is now fully usable from your C# project! And that’s just the beginning. We are looking for more contributors to add more features to ChakraBridge! Feel free to join us at

Using JavaScript frameworks from your C#/UWP application

JavaScript has, without doubt, the most vibrant ecosystem out there. There are gazillions of new frameworks released every month.

As a C# developer—even with a great, active C# community—you may sometimes find yourself a little bit jealous.

What if we could bring the JavaScript language and ecosystem also into the C# world? What if a C# developer could use JavaScript inside C#?

Fret not! I’m thrilled to announce a new WinRT project I’ve created, ChakraBridge, which will allow you to get invited to the party like any web developer.

Indeed, thanks to Chakra (the JavaScript engine used by Microsoft Edge), it is now possible to host one of the fastest JavaScript engines (and also the one with the broadest ECMAScript 6 support) inside any Universal Windows Platform application. ChakraBridge embeds the Chakra engine in a WinRT component and provides all the high-level tools required to use it seamlessly in a C#/UWP application.

People developing an HTML/JS/CSS-based UWP app (a WWA, or hosted app in the old world) don’t need to host Chakra separately, as it is already part of the sandbox.

How to use it?

This is pretty simple: just head to and clone the project to your hard drive.

Now you have two options: you can either add the ChakraBridge project (which is a WinRT library) to your solution, or you can reference **ChakraBridge.winmd** from the /dist folder.

Initializing Chakra

Once referenced, you can call these lines of code to get Chakra ready to use:

host = new ChakraHost();

The variable named host is your JavaScript context.

You may also want to be able to trace the messages sent to the JavaScript console. To do so, please add this code:

Console.OnLog += Console_OnLog;

Once connected, this event handler will be called every time the JavaScript code executes “console.log()”.
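
Under the hood, this kind of bridging boils down to replacing console.log with a function that forwards each message to the host. A minimal JavaScript sketch of the idea (hostLogHandler here is a hypothetical stand-in for the projected native callback, not the real ChakraBridge internals):

```javascript
// Sketch: route console.log messages to a host-provided callback.
var received = [];

// In ChakraBridge the host projects a native function into the
// context; a plain JavaScript function stands in for it here.
function hostLogHandler(message) {
    received.push(message);
}

// Shadow console with an object that forwards to the host.
var console = {
    log: function (...args) {
        hostLogHandler(args.join(" "));
    }
};

console.log("Chakra", "is", "ready");
```

On the C# side, each forwarded message is what surfaces through the Console.OnLog event.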

Which JavaScript framework can I use?

Before deciding what you can do, you have to understand that Chakra is a JavaScript engine, which means you can execute JavaScript code in your app but there is nothing related to HTML or CSS.

You can then pick any framework not related to HTML (DOM operations) or CSS. Here are some examples (but there are MANY MANY more):

Once you’ve picked the framework that you want to use, you have to inject it into your Chakra context. In my case, I wanted to use CDC (CloudDataConnector) because I needed a way to seamlessly connect to various cloud data providers (Amazon, Azure, CouchDB, etc.). You can either download the .js files and embed them in your project or download them every time you launch your application:

await ReadAndExecute("cdc.js");
await ReadAndExecute("azuremobileservices.js");
await ReadAndExecute("cdc-azuremobileservices.js");
await ReadAndExecute("sample.js");

You can replace ReadAndExecute with DownloadAndExecute if you prefer referencing live .js files.

Now your JavaScript context has compiled and executed the referenced files.

Please note that “sample.js” is a custom JavaScript file which contains the client code for my application:

var CDCAzureMobileService = new CloudDataConnector.AzureDataService();
var CDCService = new CloudDataConnector.DataService(new CloudDataConnector.OfflineService(), new CloudDataConnector.ConnectivityService());
CDCAzureMobileService.addSource('', 'xxxxxxx', ['people']);

var dataContext = {};

var onUpdateDataContext = function (data) {
    if (data && data.length) {
        syncPeople(data);
    }
};

var syncPeople = function (data) {
    sendToHost(JSON.stringify(data), "People[]");
};

CDCService.connect(function (results) {
    if (results === false) {
        console.log("CDCService must first be successfully initialized");
    } else {
        console.log("CDCService is good to go!");
    }
}, dataContext, onUpdateDataContext, 3);

Nothing fancy here: I’m just using CDC to connect to an Azure mobile service in order to get a list of people.

Getting data back from JavaScript world

Next, I’ll get my data back from the JavaScript context. As you may have seen in the “sample.js” file, when the data context is updated, I’m calling a global function called sendToHost. This function is provided by ChakraBridge to allow you to communicate with the C# host.

To get it working, you have to define what types can be sent from JavaScript:


So now, when sendToHost is called from the JavaScript context, a specific event will be raised on the C# side:

CommunicationManager.OnObjectReceived = (data) =>
{
    var peopleList = (People[])data;
    peopleCollection = new ObservableCollection<People>(peopleList);

    peopleCollection.CollectionChanged += PeopleCollection_CollectionChanged;

    GridView.ItemsSource = peopleCollection;
    WaitGrid.Visibility = Visibility.Collapsed;
};

Obviously, you are responsible for the mapping between your JavaScript object and your C# type (same property names).
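
As a sketch of that convention, assuming a hypothetical C# People type exposing Id and Name properties, the JavaScript object serialized through sendToHost must use exactly those property names:

```javascript
// The JSON sent through sendToHost must use the same property
// names as the (hypothetical) C# People type it maps to.
var person = { Id: 1, Name: "Ada" };
var payload = JSON.stringify([person]);

// What the C# side would deserialize into a People[] array:
var roundTripped = JSON.parse(payload);
```

If a property name differs (say, "name" instead of "Name"), the corresponding C# property simply stays at its default value after deserialization.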

Calling JavaScript functions

On the other hand you may want to call specific functions in your JavaScript context from your C# code. Think, for instance, about committing a transaction or adding a new object.

So first let’s create a function for a specific task in our “sample.js” file:

commitFunction = function () {
    CDCService.commit(function () {
        console.log('Commit successful');
    }, function (e) {
        console.log('Error during commit');
    });
};

To call this function from C#, you can use this code:

host.CallFunction("commitFunction");


If your function accepts parameters, you can pass them as well:

host.CallFunction("deleteFunction", people.Id);

The current version of ChakraBridge accepts int, double, bool and string types.
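
On the JavaScript side, the target function simply declares matching parameters. A sketch (deleteFunction and its body are hypothetical, invented to mirror the C# call above):

```javascript
// Sketch: a function meant to be invoked from C# through
// host.CallFunction("deleteFunction", people.Id).
var lastDeletedId = null;

var deleteFunction = function (id) {
    // An int passed from C# arrives as a JavaScript number.
    lastDeletedId = id;
    return true;
};

// Simulating what the host-side CallFunction would trigger:
deleteFunction(42);
```

Strings, bools and doubles cross the bridge the same way, arriving as their natural JavaScript counterparts.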

Debugging in the JavaScript context

Thanks to Visual Studio, it is still possible to debug your JavaScript code even if you are now in a C# application. To do so, you first have to enable script debugging in the project properties:

Then, you can set a breakpoint in your JavaScript code.

But there is a trick to know: you cannot set this breakpoint in the files in your project, as they are only there as sources. You have to reach the executed code through the Script Documents node of the Solution Explorer while running in debug mode:

How does it work?


Let’s now discuss how things work under the hood.

Basically, Chakra is based on a Win32 library located at “C:\Windows\System32\Chakra.dll” on every Windows 10 desktop device.

So the idea here is to provide an internal C# class that exposes all the entry points of the DLL through DllImport attributes:

internal static class Native
{
    [DllImport("Chakra.dll")]
    internal static extern JavaScriptErrorCode JsCreateRuntime(JavaScriptRuntimeAttributes attributes,
        JavaScriptThreadServiceCallback threadService, out JavaScriptRuntime runtime);

    [DllImport("Chakra.dll")]
    internal static extern JavaScriptErrorCode JsCollectGarbage(JavaScriptRuntime handle);

    [DllImport("Chakra.dll")]
    internal static extern JavaScriptErrorCode JsDisposeRuntime(JavaScriptRuntime handle);
}

The list of available functions is pretty long. ChakraBridge is here to encapsulate these functions and provide a higher level abstraction.

Another option to consider here: you can also use Rob Paveza’s great wrapper, jsrt-winrt. It is higher-level than the raw Chakra engine and avoids the need for P/Invoke.

Providing missing pieces

One important point to understand is that Chakra only provides the JavaScript engine. But you, as the host, have to provide tools used alongside JavaScript. These tools are usually provided by browsers (think about C# without .NET).

For instance, the XMLHttpRequest object and the setTimeout function are not part of the JavaScript language. They are tools used BY the JavaScript language in the context of your browser.
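
To see why the host must supply these tools, here is a toy JavaScript sketch of a setTimeout-like facility built on a host-pumped task queue. It is purely illustrative: mySetTimeout and pumpQueue are invented names, and a real host would use actual timers instead of a manual pump.

```javascript
// Toy sketch: the engine alone has no timers; the host supplies
// them. A manual task queue plays the role of the host loop here.
var taskQueue = [];

function mySetTimeout(callback, after) {
    taskQueue.push({ callback: callback, due: after });
}

// The host pumps the queue, firing callbacks in due order.
function pumpQueue() {
    taskQueue.sort(function (a, b) { return a.due - b.due; });
    while (taskQueue.length > 0) {
        taskQueue.shift().callback();
    }
}

var order = [];
mySetTimeout(function () { order.push("second"); }, 20);
mySetTimeout(function () { order.push("first"); }, 10);
pumpQueue();
```

Nothing in the language itself schedules those callbacks; without a host-provided loop, mySetTimeout would just accumulate entries forever.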

To allow you to use JavaScript frameworks, ChakraBridge provides some of these tools.

This is an ongoing process, and more tools will be added to ChakraBridge in the future, by me or by the community.

Let’s now have a look at the implementation of XmlHttpRequest:

using System;
using System.Collections.Generic;
using System.Net.Http;

namespace ChakraBridge
{
    public delegate void XHREventHandler();

    public sealed class XMLHttpRequest
    {
        readonly Dictionary<string, string> headers = new Dictionary<string, string>();
        Uri uri;
        string httpMethod;
        private int _readyState;

        public int readyState
        {
            get { return _readyState; }
            private set
            {
                _readyState = value;
                onreadystatechange?.Invoke();
            }
        }

        public string response => responseText;

        public string responseText { get; private set; }

        public string responseType { get; private set; }

        public bool withCredentials { get; set; }

        public XHREventHandler onreadystatechange { get; set; }

        public void setRequestHeader(string key, string value)
        {
            headers[key] = value;
        }

        public string getResponseHeader(string key)
        {
            if (headers.ContainsKey(key))
            {
                return headers[key];
            }

            return null;
        }

        public void open(string method, string url)
        {
            httpMethod = method;
            uri = new Uri(url);

            readyState = 1;
        }

        public void send(string data)
        {
            SendAsync(data);
        }

        async void SendAsync(string data)
        {
            using (var httpClient = new HttpClient())
            {
                foreach (var header in headers)
                {
                    // Content-* headers cannot be set on DefaultRequestHeaders
                    if (header.Key.StartsWith("Content"))
                        continue;

                    httpClient.DefaultRequestHeaders.Add(header.Key, header.Value);
                }

                readyState = 2;

                HttpResponseMessage responseMessage = null;

                switch (httpMethod)
                {
                    case "DELETE":
                        responseMessage = await httpClient.DeleteAsync(uri);
                        break;
                    case "PATCH":
                    case "POST":
                        responseMessage = await httpClient.PostAsync(uri, new StringContent(data));
                        break;
                    case "GET":
                        responseMessage = await httpClient.GetAsync(uri);
                        break;
                }

                if (responseMessage != null)
                {
                    using (responseMessage)
                    using (var content = responseMessage.Content)
                    {
                        responseType = "text";
                        responseText = await content.ReadAsStringAsync();
                        readyState = 4;
                    }
                }
            }
        }
    }
}

As you can see, the XMLHttpRequest class internally uses an HttpClient to mimic the XMLHttpRequest object that you can find in a browser or in Node.js.

This class is then projected (literally) to the JavaScript context:


Actually, the entire namespace is projected, as there is no way to project only a single class. A small piece of JavaScript is then executed to move the XMLHttpRequest object to the global object:

RunScript("XMLHttpRequest = ChakraBridge.XMLHttpRequest;");
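
Conceptually, the aliasing performed by that script is nothing more than copying a namespaced constructor onto the global object, which can be sketched in plain JavaScript:

```javascript
// Sketch: a projected namespace object whose member is promoted
// to the global object, mirroring what the RunScript call does.
var ChakraBridge = {
    XMLHttpRequest: function () {
        this.readyState = 0;
    }
};

// Equivalent of RunScript("XMLHttpRequest = ChakraBridge.XMLHttpRequest;")
globalThis.XMLHttpRequest = ChakraBridge.XMLHttpRequest;

// Frameworks can now instantiate it by its familiar global name.
var xhr = new XMLHttpRequest();
```

That one-line alias is what lets frameworks written against the browser's global XMLHttpRequest work unchanged inside the Chakra context.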

Handling garbage collection

One of the pitfalls you may face if you decide to extend ChakraBridge is garbage collection. Indeed, the JavaScript garbage collector has no idea what is happening outside of its own context.

So, for instance, let’s see how the setTimeout function is implemented:

internal static class SetTimeout
{
    public static JavaScriptValue SetTimeoutJavaScriptNativeFunction(JavaScriptValue callee, bool isConstructCall,
        [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 3)] JavaScriptValue[] arguments,
        ushort argumentCount, IntPtr callbackData)
    {
        // setTimeout signature is (callback, after)
        JavaScriptValue callbackValue = arguments[1];
        JavaScriptValue afterValue = arguments[2].ConvertToNumber();
        var after = Math.Max(afterValue.ToDouble(), 1);

        uint refCount;
        Native.JsAddRef(callbackValue, out refCount);
        Native.JsAddRef(callee, out refCount);

        ExecuteAsync((int)after, callbackValue, callee);

        return JavaScriptValue.True;
    }

    static async void ExecuteAsync(int delay, JavaScriptValue callbackValue, JavaScriptValue callee)
    {
        await Task.Delay(delay);
        callbackValue.CallFunction(callee);

        uint refCount;
        Native.JsRelease(callbackValue, out refCount);
        Native.JsRelease(callee, out refCount);
    }
}

SetTimeoutJavaScriptNativeFunction is the method that will be projected into the JavaScript context. Note that every parameter arrives as a JavaScriptValue and is then converted to the expected type. For the callback function (callbackValue), we have to tell the JavaScript garbage collector that we hold a reference, so it cannot free this value even if nothing references it inside the JavaScript context:

Native.JsAddRef(callbackValue, out refCount);

The reference has to be released once the callback is called:

Native.JsRelease(callbackValue, out refCount);

On the other hand, the C# garbage collector has no idea what is happening inside the Chakra black box. So you have to keep references to the objects or functions that you project into the JavaScript context. In the specific case of the setTimeout implementation, you first have to create a static field that points to your C# method, just to keep a reference on it.

Why not use a Webview?

This is a valid question that you may ask. Using only Chakra provides some great advantages:

  • Memory footprint: no need to embed HTML and CSS engines, as we already have XAML.
  • Performance: we can directly control the JavaScript context and, for instance, call JavaScript functions without going through a complex process as with the webview.
  • Simplicity: the webview needs to navigate to a page to execute JavaScript; there is no straightforward way to just execute JavaScript code.
  • Control: by providing our own tools (like XHR or setTimeout), we get fine-grained control over what JavaScript can do.

Going further

Thanks to the Chakra engine, this is the beginning of a great collaboration between C#, XAML and JavaScript. Depending on the community response, I plan to add more features to the ChakraBridge project to handle more JavaScript frameworks (for instance, it would be great to add support for canvas drawing in order to use all the awesome charting frameworks available for JavaScript).

If you are interested in reading more about Chakra itself you can go to the official Chakra samples repository:

You may also find these links interesting:

JavaScript goes to Asynchronous city

JavaScript has come a long way since its early versions, and thanks to all the efforts of TC39 (the organization in charge of standardizing JavaScript, or ECMAScript to be exact), we now have a modern language that is used widely.

One area of ECMAScript that has received vast improvements is asynchronous code. You can learn more about asynchronous programming here if you’re a new developer. Fortunately, we’ve included these changes in Windows 10’s new Edge browser. Check out the change log below:

Among all these new features, let’s focus specifically on ES2016 Async Functions, available behind the Experimental JavaScript Features flag, and take a journey through the updates to see how ECMAScript can improve your current workflow.

First stop: ECMAScript 5 – Callbacks city

ECMAScript 5 (and previous versions as well) is all about callbacks. To picture this better, let’s take a simple example that you certainly use more than once a day: executing an XHR request.

var displayDiv = document.getElementById("displayDiv");

// Part 1 - Defining what we want to do with the result
var processJSON = function (json) {
    var result = JSON.parse(json);

    result.collection.forEach(function(card) {
        var div = document.createElement("div");
        div.innerHTML = + " cost is " + card.price;

        displayDiv.appendChild(div);
    });
};

// Part 2 - Providing a function to display errors
var displayError = function(error) {
    displayDiv.innerHTML = error;
};

// Part 3 - Creating and setting up the XHR object
var xhr = new XMLHttpRequest();'GET', "cards.json");

// Part 4 - Defining callbacks that the XHR object will call for us
xhr.onload = function(){
    if (xhr.status === 200) {
        processJSON(xhr.response);
    } else {
        displayError("Unable to load RSS");
    }
};

xhr.onerror = function() {
    displayError("Unable to load RSS");
};

// Part 5 - Starting the process
xhr.send();
Established JavaScript developers will note how familiar this looks, since XHR callbacks are used all the time! It’s simple and fairly straightforward: the developer creates an XHR request and then provides the callbacks for the specified XHR object.

In contrast, callback complexity comes from the execution order, which is not linear due to the inner nature of asynchronous code:

“Callback hell” can get even worse when you use another asynchronous call inside your own callback.
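
A contrived sketch of that nesting problem (load is an invented stand-in for any callback-based API, and it succeeds synchronously here just to keep the example short):

```javascript
// Sketch: each dependent request adds one level of nesting.
function load(resource, onSuccess, onError) {
    // Stand-in for an XHR call; always succeeds immediately here.
    onSuccess(resource + "-data");
}

var steps = [];
load("user", function (user) {
    steps.push(user);
    load("orders", function (orders) {
        steps.push(orders);
        load("invoice", function (invoice) {
            steps.push(invoice);
            // ...and so on, drifting ever further to the right
        }, function (e) { steps.push("error"); });
    }, function (e) { steps.push("error"); });
}, function (e) { steps.push("error"); });
```

Three dependent calls already produce three nesting levels and three separate error handlers; promises flatten exactly this shape.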

Second stop: ECMAScript 6 – Promises city

ECMAScript 6 is gaining momentum, and Edge has leading support, with 88% coverage so far.

Among a lot of great improvements, ECMAScript 6 standardizes the usage of promises (formerly known as futures).

According to MDN, a promise is an object which is used for deferred and asynchronous computations. A promise represents an operation that hasn’t completed yet, but is expected in the future. Promises are a way of organizing asynchronous operations in such a way that they appear synchronous. Exactly what we need for our XHR example.

Promises have been around for a while, but the good news is that you don’t need a library anymore: they are now provided by the browser.

Let’s update our example a bit to support promises and see how it could improve the readability and maintainability of our code:

var displayDiv = document.getElementById("displayDiv");

// Part 1 - Create a function that returns a promise
function getJsonAsync(url) {
    // Promises require two functions: one for success, one for failure
    return new Promise(function (resolve, reject) {
        var xhr = new XMLHttpRequest();'GET', url);

        xhr.onload = () => {
            if (xhr.status === 200) {
                // We can resolve the promise
                resolve(xhr.response);
            } else {
                // It's a failure, so let's reject the promise
                reject("Unable to load RSS");
            }
        };

        xhr.onerror = () => {
            // It's a failure, so let's reject the promise
            reject("Unable to load RSS");
        };

        xhr.send();
    });
}

// Part 2 - The function returns a promise
// so we can chain with a .then and a .catch
getJsonAsync("cards.json").then(json => {
    var result = JSON.parse(json);

    result.collection.forEach(card => {
        var div = document.createElement("div");
        div.innerHTML = `${} cost is ${card.price}`;

        displayDiv.appendChild(div);
    });
}).catch(error => {
    displayDiv.innerHTML = error;
});

You may have noticed a lot of improvements here. Let’s have a closer look.

Creating the promise

In order to “promisify” (sorry but I’m French so I’m allowed to invent new words) the old XHR object, you need to create a Promise object:

Using the promise

Once created, the promise can be used to chain asynchronous calls in a more elegant way:

So now we have (from the user standpoint):

  • Get the promise (1)
  • Chain with the success code (2 and 3)
  • Chain with the error code (4) like in a try/catch block

What’s interesting is that chained promises are easily composed using .then().then(), and so on.
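
For instance, here is a minimal chain where each .then() receives the previous handler's return value (addTax and format are invented helpers for the example):

```javascript
// Each .then() receives the previous handler's return value,
// so a chain of transformations reads top to bottom.
const addTax = price => price * 1.2;
const format = price => "$" + price.toFixed(2);

function pricePipeline(price) {
    return Promise.resolve(price)
        .then(addTax)
        .then(format);
}

pricePipeline(10).then(label => console.log(label)); // logs "$12.00"
```

Because every .then() returns a new promise, a single .catch() at the end of the chain handles a failure from any step.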

Side note: since JavaScript is a modern language, you may notice that I’ve also used ECMAScript 6 syntactic sugar like template strings or arrow functions.

Terminus: ECMAScript 7 – Asynchronous city

Finally, we’ve reached our destination! We are almost in the future, but thanks to Edge’s rapid development cycle, the team was able to introduce a bit of ECMAScript 7, with async functions, in the latest build!

Async functions are syntactic sugar that improves the language-level model for writing asynchronous code.

Async functions are built on top of ECMAScript 6 features like generators. Indeed, generators can be used jointly with promises to produce the same results, but with much more user code.

We do not need to change the function that generates the promise, as async functions work directly with promises.

We only need to change the calling function:

// Let's create an async anonymous function
(async function() {
    try {
        // Just have to await the promise!
        var json = await getJsonAsync("cards.json");
        var result = JSON.parse(json);

        result.collection.forEach(card => {
            var div = document.createElement("div");
            div.innerHTML = `${} cost is ${card.price}`;

            displayDiv.appendChild(div);
        });
    } catch (e) {
        displayDiv.innerHTML = e;
    }
})();
This is where the magic happens. This code looks like regular synchronous code, with a perfectly linear execution path:

Quite impressive, right?

And the good news is that you can even use async functions with arrow functions or class methods.
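
A short sketch of both forms (getValueAsync and Loader are invented for the example):

```javascript
// An async arrow function...
const getValueAsync = async () => {
    return await Promise.resolve(21);
};

// ...and an async class method.
class Loader {
    async loadDouble() {
        const value = await getValueAsync();
        return value * 2;
    }
}

new Loader().loadDouble().then(result => console.log(result)); // logs 42
```

Both forms return a promise automatically, so they compose with .then() chains and with other await expressions alike.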


Going further

If you want more detail on how we implemented it in Chakra, please check the official post on Edge blog:

You can also track the progress of various browsers’ implementations of ECMAScript 6 and 7 using Kangax’s website: Feel free to check our JavaScript roadmap as well!

Please, do not hesitate to give us your feedback and support your favorite features by using the vote button:

Thanks for reading and we’re eager to hear your feedback and ideas!

David Catuhe

Principal Program Manager


[UWP] Take control of your title bar

With Windows 10, we (as developers) have the opportunity to create both desktop and mobile apps using the Universal Windows Platform.

Feel free to reach me on Twitter to discuss about this article: @deltakosh

With this in mind, I created UrzaGatherer 3.0 that you can grab freely on the store.

Today I would like to zoom in on a great feature offered by UWP: control over the title bar.

Let’s have a look at a more specific part of the previous screenshot:

As you can see, I was able to add a search box INSIDE the title bar. The interesting point here is that I actually integrated my search box into the “official” Windows shell title bar. Previously, we were forced to completely replace the title bar and provide a “clone” to achieve the same goal (when it was even possible, depending on the technology used).

But enough talking; let’s see how it works.

XAML part

The XAML side of the house is pretty simple:

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="*" />
    </Grid.RowDefinitions>

    <Grid Background="White" Grid.Row="0" x:Name="TitleBar">
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto"/>
            <ColumnDefinition Width="*"/>
            <ColumnDefinition Width="Auto"/>
            <ColumnDefinition Width="Auto"/>
        </Grid.ColumnDefinitions>

        <Grid Background="{StaticResource Accent}" x:Name="BackButtonGrid">
            <Button x:Name="BackButton" Style="{StaticResource IconButtonStyle}" />
        </Grid>
        <Grid Grid.Column="1" x:Name="MainTitleBar" Background="Transparent">
            <TextBlock Text="UrzaGatherer" VerticalAlignment="Center"
                       FontSize="12" FontFamily="Segoe UI" FontWeight="Normal" Margin="10,0"/>
        </Grid>
        <TextBox Grid.Column="2" x:Name="SearchBox" x:Uid="SearchBox"/>
        <Grid Grid.Column="3" x:Name="RightMask"/>
    </Grid>

    <!-- page content goes in row 1 -->
</Grid>

So we have a TitleBar container which contains:

  • My custom back button (I wanted to replace the original one because I was not able to control its background color, which is forced to the Windows accent color, and I wanted to use my lovely purple)
  • The MainTitleBar part, which will be used by Windows to let the user grab and move your application
  • My search box
  • A RightMask control, which will be covered by Windows’ own controls (Minimize, Maximize and Close)


Please note that MainTitleBar cannot contain interactive items, as all its inputs will be redirected to Windows.

The C# part

From the C# point of view, we have to call a couple of simple APIs:

CoreApplicationViewTitleBar coreTitleBar = CoreApplication.GetCurrentView().TitleBar;
coreTitleBar.ExtendViewIntoTitleBar = true;

TitleBar.Height = coreTitleBar.Height;
Window.Current.SetTitleBar(MainTitleBar);

The idea is to get the current title bar and then ask to extend your view into the title bar.

The Height is set from the current title bar’s height.

Window.Current.SetTitleBar is used to identify which control will handle user inputs (grab and move).

Being a great Windows citizen

As the new title bar owner, you also have some little responsibilities.

First of all, you have to keep your house clean, which means you have to clearly indicate to your user whether your window is the main window. To do so, I suggest mimicking what Windows does, with just this small piece of code:

Window.Current.Activated += Current_Activated;

private void Current_Activated(object sender, WindowActivatedEventArgs e)
{
    if (e.WindowActivationState != CoreWindowActivationState.Deactivated)
    {
        BackButtonGrid.Visibility = Visibility.Visible;
        MainTitleBar.Opacity = 1;
        SearchBox.Opacity = 1;
    }
    else
    {
        BackButtonGrid.Visibility = Visibility.Collapsed;
        MainTitleBar.Opacity = 0.5;
        SearchBox.Opacity = 0.5;
    }
}

So when my window is the main window, I have this rendering:

And when it is not the main window, we switch to this one:

You also have to react to Continuum properly, which means you are supposed to hide your title bar when the user switches to Tablet mode. But no worries, this task is easy as well:

coreTitleBar.IsVisibleChanged += CoreTitleBar_IsVisibleChanged;

void CoreTitleBar_IsVisibleChanged(CoreApplicationViewTitleBar titleBar, object args)
{
    TitleBar.Visibility = titleBar.IsVisible ? Visibility.Visible : Visibility.Collapsed;
}

And finally you may want to respond to layout metrics changes (like scale change, etc.):

coreTitleBar.LayoutMetricsChanged += CoreTitleBar_LayoutMetricsChanged;

private void CoreTitleBar_LayoutMetricsChanged(CoreApplicationViewTitleBar sender, object args)
{
    TitleBar.Height = sender.Height;
    RightMask.Width = sender.SystemOverlayRightInset;
}

And that’s it! You’re now ready to integrate your own UI into the title bar, like Edge does, for instance:

[Babylon.js] Open-sourcing the documentation

While working on Babylon.js 2.2, I have to admit that the framework is becoming a really important piece of software with a large number of APIs.

For David Rousset and me, the first priority when designing the API is to keep things simple. A good framework has to be easy to understand and extremely easy to use.

In the same way, I consider that a big part of a framework’s quality comes from the documentation itself. That’s why we spend a lot of time writing tutorials and articles for

However, and especially for an open source project, keeping the documentation up to date and accessible enough is a real challenge.

I often refrain from writing a new feature while the previous ones are not well documented (and I can assure you that this is tough).

The first quality of Babylon.js is not the code or the ease of the API, but its community. These folks do tremendous work helping developers on the forum ( and providing samples on the playground (

So we decided to rewrite our documentation site in order to let the community work on the documentation as well.

Why reinvent the wheel?

The first version of the documentation site was an autonomous site where we had to handle user rights, validation, history and so on. It was a failure, because we were not able to dedicate enough time to make it great.

On the other hand, we have GitHub, a fantastic site with integrated user management and all the great tools that we need to handle contributions.

So we decided to open source our documentation through GitHub. The basic idea was simple: people would be able to submit pull requests for .md files, which would then be processed to generate the HTML files used by the documentation site. Then, thanks to GitHub, we (as administrators) can comment on, merge, accept or reject changes.

Why markdown files?

We chose markdown files because the syntax is extremely simple, allowing you to focus on the content rather than the style. Presenting your content is the documentation site’s job; you do not have to worry about visual styles when writing a sample or a tutorial. The API documentation’s look should be very clean, because readability is the most important aspect of the look-and-feel.

Furthermore, due to their simplicity, markdown files can easily be processed to generate HTML files.
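
As an illustration of how mechanical that processing can be, here is a toy converter handling only headings and bold text; it is not the site's actual pipeline, which would use a full markdown parser:

```javascript
// Toy markdown-to-HTML converter: headings and bold only.
function convertMarkdown(md) {
    return md
        .split("\n")
        .map(function (line) {
            var heading = line.match(/^(#{1,6})\s+(.*)$/);
            if (heading) {
                var level = heading[1].length;
                return "<h" + level + ">" + heading[2] + "</h" + level + ">";
            }
            return line.replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>");
        })
        .join("\n");
}

var html = convertMarkdown("# Babylon.js\nThis is **important**.");
```

Even this naive line-by-line pass shows why markdown is so well suited to an automated publish pipeline: the transformation is local and predictable.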

Node.js, grunt and Azure

The documentation is hosted on a node.js server thanks to Express. Nothing really fancy here, and this is what I like. Node.js works on all operating systems, so developers can easily host the documentation on their computers while working on updates.

Because we wanted to work with .md files, we decided to rely on Grunt to provide a clean integration process:

  • After forking and cloning the repository, developers just have to run “grunt serve”
  • This will automatically launch a webserver and a watcher
  • The watcher will take care of recompiling everything when you update a .md file
  • So from the user’s point of view, you update a .md file, go to your browser and navigate to https://localhost:3000 to immediately see the result
  • You can use Visual Studio Code or SublimeText to edit .md files, but you can also edit them online directly on the GitHub site
  • When you’re done, simply submit a pull request from your fork to the main repository


After a pull request is validated on the repository, administrators can merge the master branch into the production branch to automatically publish to Azure. This feature is definitely huge: you can configure your Azure website to point to a specific GitHub branch and have it directly deployed:


As you can see, Azure integration with GitHub allowed us to have a complete and clean pipeline for the Babylon.js community to contribute to the documentation site. Thanks to Grunt, the editing process is straightforward and can be done on any operating system supported by Node.js.

If you want to try it, please go to and let us know what you think about it!

[Vorlon.js] Focus on DOM Explorer

This article is the first of a new series about Vorlon.js. The goal is to focus on a specific feature each time.

Today I would like to start with one of the biggest: the DOM Explorer:

Installing Vorlon.js

Just as a reminder, here is what you have to do to use Vorlon.js:

  • Install the Vorlon.JS server from npm:

    $ npm i -g vorlon

  • Once Vorlon.js is done installing, you can now run the server:

    $ vorlon
    The Vorlon server is running

  • With the server running, open https://localhost:1337 in your browser to see the Vorlon.js dashboard.

  • The last step is to enable Vorlon.js by adding this script tag to your app:
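The tag points at the vorlon.js script served by the Vorlon.js server (shown here for the default local setup; adjust host and port to match yours):

```html
<script src="https://localhost:1337/vorlon.js"></script>
```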

Now when you open your app you should see your client appear on the dashboard.

Using the DOM Explorer

By default, the DOM explorer is on, but if you need to enable it, go to [Vorlon folder]/Server/config.json and enable the plugin:

    {
        "useSSL": false,
        "includeSocketIO": true,
        "activateAuth": false,
        "username": "",
        "password": "",
        "plugins": [
            { "id": "DOM", "name": "Dom Explorer", "panel": "top", "foldername": "domExplorer", "enabled": true }
        ]
    }

Once enabled, you will be able to control almost everything related to the DOM through the plugin’s main window.

And here is what you’ll be able to do:

Selection overlay

By moving your mouse over any node, you will be able to see where this node belongs on the client side:

Live text editing

By double-clicking on any text inside the DOM explorer window, you have the ability to live edit it:

But you can also use the HTML section on the right pane to edit HTML text content:

This feature can also be reached by right-clicking on the node itself.

Attribute editing

Nodes’ attributes are also editable by just clicking on them:

But you can also right click on the node name itself to add a new attribute:

By right-clicking on an existing attribute, you will get even more options, such as updating its value or name, or deleting the attribute:

Search using CSS selector

When dealing with big HTML pages you may want to search for a specific node. This is why we introduced the “search node by CSS selector” feature.

Just enter your selector in the search box and you’re done!

Dynamic refresh

The DOM Explorer window can either be automatically refreshed when client DOM changes (beware as this could consume a lot of CPU power and network bandwidth even if we use delta updates) or can be refreshed on demand.
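To give an intuition of what a delta update buys (a conceptual sketch in plain JavaScript, not Vorlon.js's actual wire format): only entries that changed between two DOM snapshots need to travel over the network.

```javascript
// Conceptual delta update: compare two snapshots keyed by node,
// and keep only the entries whose serialized content changed.
function computeDelta(previous, current) {
    var delta = {};
    Object.keys(current).forEach(function (key) {
        if (previous[key] !== current[key]) {
            delta[key] = current[key];
        }
    });
    return delta;
}

var before = { "#title": "<h1>Hello</h1>", "#list": "<ul></ul>" };
var after = { "#title": "<h1>Hello world</h1>", "#list": "<ul></ul>" };

// Only the node that changed is part of the payload.
console.log(computeDelta(before, after));
```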

Auto refresh is controlled on the settings pane:

When auto refresh is off, the Refresh button can tell you if there are available updates on the client side (The little dot on the button will turn red):

In this case, just clicking the button will launch a complete refresh of the page.

Styles editor

When you click on a node, the Styles pane will present you all the styles explicitly defined for this node:

You can then use the “+” button to add a new style or click on existing ones to change their value:

To see ALL styles applied to a node (including implicit ones), you just have to use the Computed Styles pane:


Layout

Like browsers’ F12 tools, the Layout pane is here to help you understand the layout of every node that you select:

Finally, one last thing you may find useful: when a node has an ID, you can click the little button on the right of the node to have it linked directly in the interactive console, where you will be able to execute whatever code you want with it:

Going further

That’s a lot of features for a single plugin. I hope it will help you debug and fix your remote sites or web apps!

If you are interested in going further with Vorlon.js, you may find these articles interesting:

We are also looking for more contributors to help us create the most useful tool possible. So if you are interested in contributing, please visit our GitHub repository:

RangeSelector: A new control for your XAML for WinRT application

While working on UrzaGatherer v3.0, I found myself in need of a range selector control: something like the slider control, but with two thumbs.

Feel free to ping me on Twitter (@deltakosh) if you want to discuss this article.

Because this control is not part of the default library, I decided to create one. Feel free to download the complete solution here.

For flexibility reasons, I created a custom control named RangeSelector based on this template:

<Style TargetType="local:RangeSelector">
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="local:RangeSelector">
                <Grid Height="32">
                    <Grid.Resources>
                        <Style TargetType="Thumb">
                            <Setter Property="Template">
                                <Setter.Value>
                                    <ControlTemplate TargetType="Thumb">
                                        <Ellipse Width="32" Height="32" Fill="{TemplateBinding Background}"
                                                 Stroke="{TemplateBinding Foreground}" StrokeThickness="4"
                                                 RenderTransformOrigin="0.5 0.5">
                                            <Ellipse.RenderTransform>
                                                <TranslateTransform X="-16"/>
                                            </Ellipse.RenderTransform>
                                        </Ellipse>
                                    </ControlTemplate>
                                </Setter.Value>
                            </Setter>
                        </Style>
                    </Grid.Resources>
                    <Rectangle Height="8" Fill="{TemplateBinding Background}" Margin="12,0"/>
                    <Canvas x:Name="ContainerCanvas" Margin="16,0">
                        <Rectangle x:Name="ActiveRectangle" Fill="{TemplateBinding Foreground}" Height="8" Canvas.Top="12"/>
                        <Thumb x:Name="MinThumb" Background="{TemplateBinding Background}"/>
                        <Thumb x:Name="MaxThumb" Background="{TemplateBinding Background}"/>
                    </Canvas>
                </Grid>
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>

So we mainly have a canvas that hosts two thumbs and a rectangle.

The goal is to have this kind of rendering:

The C# control by itself is then based on a Control class with the following “plumbing” done to connect parts:

public sealed class RangeSelector : Control
{
    Rectangle ActiveRectangle;
    Thumb MinThumb;
    Thumb MaxThumb;
    Canvas ContainerCanvas;

    public RangeSelector()
    {
        DefaultStyleKey = typeof(RangeSelector);
    }

    protected override void OnApplyTemplate()
    {
        ActiveRectangle = GetTemplateChild("ActiveRectangle") as Rectangle;
        MinThumb = GetTemplateChild("MinThumb") as Thumb;
        MaxThumb = GetTemplateChild("MaxThumb") as Thumb;
        ContainerCanvas = GetTemplateChild("ContainerCanvas") as Canvas;

        MinThumb.DragCompleted += Thumb_DragCompleted;
        MinThumb.DragDelta += MinThumb_DragDelta;
        MinThumb.DragStarted += MinThumb_DragStarted;

        MaxThumb.DragCompleted += Thumb_DragCompleted;
        MaxThumb.DragDelta += MaxThumb_DragDelta;
        MaxThumb.DragStarted += MaxThumb_DragStarted;

        ContainerCanvas.SizeChanged += ContainerCanvas_SizeChanged;
    }

    private void ContainerCanvas_SizeChanged(object sender, SizeChangedEventArgs e)
    {
        SyncThumbs();
    }

Basically we need to connect to the drag events of our thumbs and provide a SyncThumbs() method to move the thumbs and the rectangle in sync with range values.

These range values are defined by regular dependency properties:

public static readonly DependencyProperty MinimumProperty =
    DependencyProperty.Register("Minimum", typeof(double), typeof(RangeSelector), new PropertyMetadata(0.0, null));

public static readonly DependencyProperty MaximumProperty =
    DependencyProperty.Register("Maximum", typeof(double), typeof(RangeSelector), new PropertyMetadata(1.0, null));

public static readonly DependencyProperty RangeMinProperty =
    DependencyProperty.Register("RangeMin", typeof(double), typeof(RangeSelector), new PropertyMetadata(0.0, null));

public static readonly DependencyProperty RangeMaxProperty =
    DependencyProperty.Register("RangeMax", typeof(double), typeof(RangeSelector), new PropertyMetadata(1.0, null));

The SyncThumbs() method is just a simple translation between the Minimum↔Maximum range and the canvas’ width:

public void SyncThumbs()
{
    if (ContainerCanvas == null)
        return;

    var relativeLeft = ((RangeMin - Minimum) / (Maximum - Minimum)) * ContainerCanvas.ActualWidth;
    var relativeRight = ((RangeMax - Minimum) / (Maximum - Minimum)) * ContainerCanvas.ActualWidth;

    Canvas.SetLeft(MinThumb, relativeLeft);
    Canvas.SetLeft(ActiveRectangle, relativeLeft);

    Canvas.SetLeft(MaxThumb, relativeRight);

    ActiveRectangle.Width = Canvas.GetLeft(MaxThumb) - Canvas.GetLeft(MinThumb);
}

The drag events are then responsible for doing the opposite transformation:

private void MinThumb_DragDelta(object sender, DragDeltaEventArgs e)
{
    RangeMin = DragThumb(MinThumb, 0, Canvas.GetLeft(MaxThumb), e);
}

private void MaxThumb_DragDelta(object sender, DragDeltaEventArgs e)
{
    RangeMax = DragThumb(MaxThumb, Canvas.GetLeft(MinThumb), ContainerCanvas.ActualWidth, e);
}

private double DragThumb(Thumb thumb, double min, double max, DragDeltaEventArgs e)
{
    var currentPos = Canvas.GetLeft(thumb);
    var nextPos = currentPos + e.HorizontalChange;

    nextPos = Math.Max(min, nextPos);
    nextPos = Math.Min(max, nextPos);

    Canvas.SetLeft(thumb, nextPos);

    return Minimum + (nextPos / ContainerCanvas.ActualWidth) * (Maximum - Minimum);
}

private void MinThumb_DragStarted(object sender, DragStartedEventArgs e)
{
    Canvas.SetZIndex(MinThumb, 10);
    Canvas.SetZIndex(MaxThumb, 0);
}

private void MaxThumb_DragStarted(object sender, DragStartedEventArgs e)
{
    Canvas.SetZIndex(MinThumb, 0);
    Canvas.SetZIndex(MaxThumb, 10);
}

Please note that I also set the ZIndex property to make sure the thumb being dragged stays on top of the canvas.

Using this control is then a piece of cake:

<StackPanel Orientation="Vertical" VerticalAlignment="Center">
    <TextBlock FontSize="20" Text="{Binding RangeMin, ElementName=RangeSelector}" HorizontalAlignment="Center"/>
    <local:RangeSelector x:Name="RangeSelector" Background="Gray" Foreground="Red" BorderThickness="4"
                         Minimum="0" Maximum="100" RangeMin="20" RangeMax="80"/>
    <TextBlock FontSize="20" Text="{Binding RangeMax, ElementName=RangeSelector}" HorizontalAlignment="Center"/>
</StackPanel>

I hope you will find this control useful!

Using Win2D to apply effects on your files

It’s been a long time since my last post about C# but I’m still using it, mainly for a personal project: UrzaGatherer 3.0.

Version 2.0 was done using WinJS and JavaScript but because I love discovering new things I decided that version 3.0 will be developed using C# and XAML for Windows 10.

One of the features I’m working on is a blurred lock screen background. Basically, the idea is to pick a card and use its picture as the lock screen background.

The main problem I was facing is that the card scans are in too low a resolution. So, to get rid of the inevitable aliasing produced by scaling my pictures up, I decided to add some Gaussian blur.

The first version of my blurred lock screen background used a kind of brute-force approach: going through all the pixels and applying my filter. On my desktop PC: no problem. But on my phone (remember, this is a Windows 10 universal application), the operation was too slow.

Enter Win2D!

Thanks to it, I was able to produce a method to blur my files that uses the GPU through DirectX: faster results and, at the same time, less battery consumption.

Even the code is pretty simple:

var file = await Package.Current.InstalledLocation.GetFileAsync("test.png");
using (var stream = await file.OpenAsync(FileAccessMode.Read))
{
    var device = new CanvasDevice();
    var bitmap = await CanvasBitmap.LoadAsync(device, stream);
    var renderer = new CanvasRenderTarget(device, bitmap.SizeInPixels.Width, bitmap.SizeInPixels.Height, bitmap.Dpi);

    using (var ds = renderer.CreateDrawingSession())
    {
        var blur = new GaussianBlurEffect();
        blur.BlurAmount = 8.0f;
        blur.BorderMode = EffectBorderMode.Hard;
        blur.Optimization = EffectOptimization.Quality;
        blur.Source = bitmap;
        ds.DrawImage(blur);
    }

    var saveFile = await ApplicationData.Current.LocalFolder.CreateFileAsync("temp.jpg", CreationCollisionOption.ReplaceExisting);

    using (var outStream = await saveFile.OpenAsync(FileAccessMode.ReadWrite))
    {
        await renderer.SaveAsync(outStream, CanvasBitmapFileFormat.Jpeg);
    }
}

So basically:

  • Open the file stream
  • Create a CanvasDevice and a CanvasRenderTarget to get offscreen rendering capabilities
  • Create the effect you want to use (GaussianBlurEffect here)
  • Apply the effect
  • Save your file

Insanely simple, right?



Win2D is a great library that you can find here: 

Documentation can be found here:

A series of posts you may find interesting about Win2D effects:

What’s new in Babylon.js v2.1

It is always a pleasure for me to write this kind of article. Talking about the great stuff we are filling Babylon.js with always makes me proud and happy. And this time, this is truer than ever, because this version is definitely the most community-oriented one yet.

What I mean by community-oriented is that the community helped David (Rousset) and me a lot to ship this version, by literally developing a big chunk of it.

As you can see in this chart of the top 6 contributors, David and I (we commit under the deltakosh account) clearly got help from other community members:

Thanks to all these wonderful people we were able to release a LOT of new features and improvements:

Among these features, I would like to highlight here some of my favorites.

Unity 5 exporter

Unity is an awesome tool to create games that can work on almost all operating systems out there. I love the Unity 5 WebGL exporter: it can export your whole game to a WebGL/ASM.JS/WebAudio website.

To complete this solution, if you want to export your meshes to a lighter alternative that can run without ASM.JS, you can now install the Babylon.js exporter:

Once installed, the exporter allows you to export a scene by going to the Babylon.js exporter menu:

After a few seconds, a .babylon file is generated alongside its associated textures:


You can now load this .babylon file from your JavaScript project or directly test it using the Babylon.js sandbox:


Decals

Decals are usually used to add details on 3D objects (bullet holes, local details, etc.). Internally, a decal is a mesh produced from a subset of another mesh, with a small offset so that it appears on top of it.

The offset can be seen like the zIndex property in CSS. Without it, you would see z-fighting issues when two 3D objects are at exactly the same place:
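That offset idea can be sketched in a few lines (plain JavaScript with a hypothetical helper, not Babylon.js's internal code): each decal vertex is pushed slightly along the face normal so the decal renders just above the underlying mesh.

```javascript
// Push a vertex a tiny epsilon along the surface normal so the decal
// renders just above the underlying mesh and avoids z-fighting.
function offsetAlongNormal(position, normal, epsilon) {
    return {
        x: position.x + normal.x * epsilon,
        y: position.y + normal.y * epsilon,
        z: position.z + normal.z * epsilon
    };
}

var vertex = { x: 1, y: 2, z: 3 };
var normal = { x: 0, y: 0, z: 1 }; // unit normal of the underlying face

console.log(offsetAlongNormal(vertex, normal, 0.5)); // { x: 1, y: 2, z: 3.5 }
```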


The code to create a new decal is this one:

var newDecal = BABYLON.Mesh.CreateDecal("decal", mesh, decalPosition, normal, decalSize, angle);

For instance, in the following demo, you can click on the cat to add some bullet holes to it:


SIMD.js support

Microsoft Edge, along with Firefox and Chrome, announced support for SIMD.js, an API that lets you use the raw power of your CPU’s vector units directly from your JavaScript code. This is especially useful for vectorizable operations like matrix multiplication.

We decided (with the great help of Intel) to integrate SIMD support directly into our math library.

And this, for instance, leads to evolving this kind of code (where the same operation is applied 4 times):


The main idea is to load the SIMD register with data and then execute only one instruction where multiple were required before.
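As an illustration, here is the scalar "same operation four times" pattern on one row of a 4x4 matrix product; the commented calls sketch how the four lanes would collapse into single instructions (the SIMD.js names follow the draft proposal and are shown for illustration only):

```javascript
// Scalar version: the same multiply-accumulate runs four times per row.
function transformRowScalar(row, matrix) {
    var result = new Float32Array(4);
    for (var i = 0; i < 4; i++) {
        result[i] = row[0] * matrix[i] +
                    row[1] * matrix[4 + i] +
                    row[2] * matrix[8 + i] +
                    row[3] * matrix[12 + i];
    }
    return result;
}

// With SIMD.js, the four lanes are processed by single register-wide ops:
//   var r = SIMD.Float32x4.add(
//       SIMD.Float32x4.mul(SIMD.Float32x4.splat(row[0]), m0), ...);
// i.e. one mul/add per register instead of four scalar operations.

var identity = new Float32Array([1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1]);
console.log(transformRowScalar([1, 2, 3, 4], identity)); // the identity leaves the row unchanged
```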

You can try it now directly on our site:

This demo tries to keep a constant framerate (50fps by default) while adding a new dancer every second. This leads to a huge number of matrix multiplications for animating the skeletons used by the dancers.

If your browser supports SIMD, you can enable it and see the performance boost (please note that, for now, Microsoft Edge supports SIMD only inside ASM.js code, but this limitation will be removed in a future version).

Collisions webworkers

Raanan Weber (a top contributor to Babylon.js) did tremendous work to greatly improve the collisions engine, by allowing Babylon.js to compute collisions on a dedicated web worker.

Before this, if you wanted to enable collisions on a scene, you ended up adding invisible impostors around your objects to reduce the computations required. This is still valid, but because the computations are no longer done on the main thread, you can easily address much more complicated scenes.

For instance, let’s take this scene, where we have a pretty decent mesh (a beautiful skull) with collisions enabled on the camera (which means that if you use the mouse wheel, you won’t be able to go through the skull). This demo does not use an impostor for the collisions but the real mesh itself, which has more than 41,000 vertices to check.

With regular collisions, the main thread has to work on rendering the scene AND also compute collisions.

With web workers enabled, the main thread does not have to care about collisions because a web worker (that is, another thread) handles them. As almost all CPUs have at least 2 cores nowadays, this is a really awesome optimization.

To enable collisions on a web worker, you just have to execute this code:

scene.workerCollisions = true|false;

To know more about collisions:

Raanan also wrote two great articles on this topic:



New shadows engine

Adding shadows to a scene always gives a boost to realism. The previous version of the shadows engine was only able to process dynamic shadows for directional lights. The new version adds support for spot lights, as well as two new filters to produce very good-looking soft shadows, as you can see in this demo:

This other demo shows you the various options you now have to cast dynamic shadows:

To go further with shadows please read associated documentation:

Parametric shapes

Jerome Bousquie (another top contributor) added an insane number of new meshes based on parametric shapes.

The basic meshes you’ve seen up until now with Babylon.js have an expected shape: when you create a sphere mesh, you expect to see a spherical shape. The same goes for a box mesh, a torus, a cylinder, etc.

There is another kind of mesh whose final shapes aren’t fixed. Their final shape depends upon some parameters used by a specific function. So we call these meshes “Parametric Shapes”.
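The idea can be sketched as follows (a hypothetical example, not Babylon.js code): the points of the shape come entirely from the function you supply over a parameter range.

```javascript
// A parametric shape: the final geometry depends entirely on the
// function you pass in, sampled over the parameter t in [0, 1].
function samplePath(fn, steps) {
    var points = [];
    for (var i = 0; i <= steps; i++) {
        points.push(fn(i / steps));
    }
    return points;
}

// A helix-like path; change the function and you change the shape.
var path = samplePath(function (t) {
    return {
        x: Math.cos(t * Math.PI * 2),
        y: t * 5,
        z: Math.sin(t * Math.PI * 2)
    };
}, 100);

console.log(path.length); // 101 sampled points
```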

Using these parametric shapes, Jerome added the following shapes to the out-of-the-box list of meshes:


If you want to know more about parametric shapes:

Jerome also created a tutorial to better understand ribbons:

New lens effect

Jahow (guess what? another top contributor!) used the post-process rendering pipeline of Babylon.js to let you achieve photograph-like realism.

Two post-processes are used in the pipeline:

· First, a ‘chromatic aberration’ shader, which shifts the red, green and blue channels very slightly on screen. This effect is stronger on the edges.

· Second, a ‘depth-of-field’ shader, which actually does a bit more than that:

  • Blur on the edge of the lens
  • Lens distortion
  • Depth-of-field blur & highlights enhancing
  • Depth-of-field ‘bokeh’ effect (shapes appearing in blurred areas)
  • Grain effect (noise or custom texture)

You can play with a live demo in the playground:

And as always, if you want to go further:

And so many more things

As I mentioned before, this is just an extract of all the features we added. So please feel free to test it by yourself using the following links:

· Main website:

· GitHub repo:

· Learn by experimenting with Playground:

· Documentation:

Why we made vorlon.js and how to use it to debug your JavaScript remotely

Today at //BUILD/ 2015 we announced vorlon.js – an open source, extensible, platform-agnostic tool for remotely debugging and testing your JavaScript. I had the opportunity to create vorlon.js with the help of some talented engineers and tech evangelists at Microsoft (the same team that brought you Babylon.js).

Vorlon.js is powered by Node.js, socket.io, and late-night coffee. I would like to share with you why we made it, how to incorporate it into your own testing workflow, and also share some more details about the art of building a JS library like it.

Why Vorlon.js?

Vorlon.js helps you remotely load, inspect, test and debug JavaScript code running on any device with a web browser. Whether it is a game console, a mobile device, or even an IoT-connected refrigerator, you can remotely connect up to 50 devices and execute JavaScript in each or all of them. The idea here is that dev teams can also debug together – each person can write code and the results are visible to all. We had a simple motto for this project: no native code, no dependency on a specific browser; only JavaScript, HTML and CSS running on the devices of your choice.

Vorlon.js itself is a small web server you can run from your local machine, or install on a server for your team to access, that serves the Vorlon.js dashboard (your command center) and communicates with the remote devices. Installing the Vorlon.js client in your web site or app is as easy as adding a single script tag. It’s also extensible where devs can write plug-ins that add features to both the client and the dashboard, for example: feature detection, logging, and exception tracking.

So why the name? There are actually two reasons. The first one is that I am just crazy about Babylon 5 (the TV show). The second is that the Vorlons are one of the wisest and most ancient races of the universe, and thus helpful as diplomats between younger races. Their helpfulness is what inspired us: for web devs, it is still just too hard to write JavaScript that works reliably on the various devices and browsers. Vorlon.js seeks to make it just a little easier.

You mentioned Vorlon.js has plug-ins?

Vorlon.js has been designed so that you can easily extend the dashboard and client application by writing or installing additional plug-ins. You can resize or add extra panes to the dashboard, which can communicate bidirectionally with the client application. There are three plug-ins to begin with:

Logging: The console tab will stream console messages from the client to the dashboard that you can use for debugging. Anything logged with console.log(), console.warn() or console.error() will appear in the dashboard.

Interactivity: You can also interact with the remote webpage by typing code into the input. Code entered will be evaluated in the context of the page.

DOM Explorer: The DOM inspector shows you the DOM of the remote webpage. You can inspect the DOM: clicking on nodes will highlight them in the host webpage, and if you select one you can also view and modify its CSS properties.

Modernizr: The Modernizr tab will show you the supported browser features as reported by Modernizr. You can use this to determine which features are actually available. This might be particularly useful on unusual mobile devices, or on things like game consoles.


How do I use it?

From your node command line, just execute this:

$ npm i -g vorlon

$ vorlon

Now you have a server running on your localhost on port 1337.
To get access to the dashboard, just navigate to https://localhost:1337/dashboard/SESSIONID, where SESSIONID is the id for the current dashboard session. This can be any string you want.

You then have to add a single reference to your client project:

<script src="https://localhost:1337/vorlon.js/SESSIONID"></script>

Please note that SESSIONID can be omitted; in that case, it will automatically be replaced by “default”.
And that’s it! Now your client will send debug information to your dashboard seamlessly. Let’s now have a look at an example using a real site. 

Debugging using vorlon.js

Let’s use the Babylon.js site for our example. First, I have to launch my server (using node start.js inside the /server folder). Then, I just have to add this line to my client site:

<script src="https://localhost:1337/vorlon.js"></script>

Because I am not defining a SESSIONID, I can just go to https://localhost:1337/dashboard. The dashboard looks like this:

Sidenote: the browser shown above is Project Spartan, Microsoft’s new browser for Windows 10. You can also test your web apps for it remotely on your Mac, iOS, Android, or Windows device @ https://modern.IE. Or try vorlon.js too.

Back to it: I can see console messages for instance, which is useful when I debug babylon.js on mobile devices (like iOS, Android or Windows Phone).
I can click on any node on the DOM Explorer to get info about CSS properties:


On the client side, the selected node is highlighted with a red border:


Moreover, I can switch to Modernizr tab to see capabilities of my specific device:


On the left side, you can see the list of currently connected clients, and you can use the [Identify a client] button to display a number on every connected device.

A little more on how we built vorlon.js

From the very beginning, we wanted to make sure that vorlon.js remains as mobile-first and platform-agnostic as possible, so we decided to use open source tech that works across the broadest number of environments.

Our dev environment was Visual Studio Community which you can get for free now. We used the Node.JS tools for Visual Studio and Azure for the back-end. Our front-end was JavaScript and TypeScript. If you’re not familiar with TypeScript, you can learn why we’ve built babylon.js with it here. Recently Angular 2 has been built with TypeScript. But you don’t have to know it to use vorlon.js.

Here’s a global schema of how it works:


Here are the parts it is built with:

  • A node.js server is hosting a dashboard page (served using express) and a service

  • The service uses socket.io to establish a direct connection with both the dashboard and the various devices

  • Devices have to reference a simple vorlon.js page served by the server. It contains all the plugins’ client code, which interacts with the client device and communicates with the dashboard through the server.

  • Every plug-in is split into two parts:

    • The client side, used to capture information and to interact with the device

    • The dashboard side, used to generate a command panel for the plugin inside the dashboard

For instance, the console plugin works this way:

  • The client side generates a hook on top of console.log(), console.warn() or console.error(). This hook is used to send the parameters of these functions to the dashboard. It can also receive orders from the dashboard side that it will evaluate

  • The dashboard side gathers these parameters and displays them on the dashboard
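The client-side hook can be sketched like this (simplified; `sendToDashboard` is a stand-in for the real socket.io emit, and the actual plugin also serializes complex objects and handles console.warn/console.error the same way):

```javascript
// Wrap console.log so every call is both executed normally
// and forwarded to the dashboard.
var forwarded = [];

function sendToDashboard(message) {
    // Stand-in: the real plugin emits this through socket.io to the server.
    forwarded.push(message);
}

var originalLog = console.log;
console.log = function () {
    var args = Array.prototype.slice.call(arguments);
    sendToDashboard({ type: "log", messages: args });
    originalLog.apply(console, args); // keep the local behavior intact
};

console.log("Hello from the device");
// forwarded now holds one { type: "log", ... } entry for the dashboard
```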

The result is simply a remote console:


You can get an even better understanding of vorlon.js extensibility including how to build your own plug-ins here.

What’s next?

Vorlon.js is built on the idea of extensibility. We encourage you to contribute! And we’re already thinking about how we might integrate vorlon.js into browser dev tools, as well as Web Audio debugging.

If you want to try it, you are just one click away:
And the more technical docs are here on our GitHub.

And before closing this article, I just want to give credit to the whole core team: