Kinect Keyboard Simulator & Kinect Sabre for Kinect For Windows SDK 1.0

Following the official release of the Kinect for Windows SDK 1.0 and the Kinect Toolbox, I’m pleased to share with you two (hopefully useful) samples I wrote:

https://www.catuhe.com/msdn/kinecttools.zip

Kinect Keyboard Simulator

This tool allows you to send keys to a specified application when gestures are detected. An obvious usage is to change slides in PowerPoint when you swipe to the left or to the right.
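
Here is a minimal sketch of the idea, assuming the Kinect Toolbox SwipeGestureDetector and its OnGestureDetected event (the tracked hand position must be fed into the detector on each skeleton frame, and the real tool targets a specific window rather than the foreground one):

// Sketch: turn detected swipes into key presses for the foreground application.
var swipeDetector = new SwipeGestureDetector();
swipeDetector.OnGestureDetected += gesture =>
{
    // Gesture names follow the Kinect Toolbox convention.
    if (gesture == "SwipeToRight")
        System.Windows.Forms.SendKeys.SendWait("{RIGHT}"); // next slide
    else if (gesture == "SwipeToLeft")
        System.Windows.Forms.SendKeys.SendWait("{LEFT}");  // previous slide
};
// On each skeleton frame, feed the tracked hand position into swipeDetector.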

Kinect Sabre

Kinect Sabre is THE mandatory tool for your Kinect!! It creates an augmented reality view of yourself with a LIGHT SABER in your left hand!!! (Warning: you need to install XNA Game Studio 4.0 to use Kinect Sabre.)

 

Hope you like these tools!

MishraReader 1.0.2.0 (beta 2) is out!

 

After several months of coding, we are proud to announce beta 2 of MishraReader. This version is a full rewrite of beta 1 to embrace the MVVM and dependency injection design patterns.

But the major feature of this version is the reshaping of the user interface to follow the Metro guidelines:

You can grab it freely here (MishraReader is available in French and English):

https://mishrareader.codeplex.com

To use it, just follow these instructions:

Connection

On this screen, you just have to type your Google Reader account information and click on [SIGN IN]:

Using MishraReader

Once connected, you are able to see unread, starred, or all posts:

Using the [SHOW] dropdown menu, you can select only the subscription you want to read:

When you are reading a post, you can:

  • Open the post inside your favorite browser by clicking on the post title or by using the icon
  • Mark a post as starred with the icon
  • Mark a post as read by using the icon
  • Share a post on Facebook and Twitter (new services will be available in beta 3) with the icon
  • Bookmark a post with the icon (currently no bookmark services are available, but we will add new ones in beta 3)

Configuring MishraReader

Using the settings menu, you can configure different options (don’t forget to click on [SAVE CHANGES]):

Account

In the account screen, you are able to disconnect the current account and decide if you want to automatically mark items as read when you select them:

Sharing Services & Bookmark Services

With this screen, you can configure sharing and bookmark services.

Display

MishraReader can use 8 different accent colors that you can select using the Display screen.

You can also choose to:

  • Show only post summaries (instead of using a web view of the full post), which are quicker to load
  • Use a notification icon and, in this case, show or hide the main window in the taskbar

Network

The network screen allows you to select:

  • the number of items downloaded per request (between 10 and 500)
  • the automatic refresh interval (between 1 minute and 1 hour)

Conclusion

I hope you will like this version, as we worked hard to make it the best feed reader available!

Do not hesitate to give us your feedback using the https://mishrareader.codeplex.com site!

Official Kinect for Windows SDK and Kinect Toolbox 1.1.1 are out!


The official Kinect for Windows SDK is out and you can grab it here:

https://www.microsoft.com/en-us/kinectforwindows/develop/overview.aspx

 

 

The key points are:

  • As long as you use a Kinect for Windows sensor (not the XBox360 one) and the official SDK, you can develop commercial applications using Kinect technologies.
  • New near mode for depth values (no skeleton tracking in this version), which enables the depth camera to see objects as close as 40 centimeters (see the sketch after this list)
  • Up to 4 sensors can be connected to the same computer
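
For example, here is a minimal sketch of enabling near mode with the SDK 1.0 API (the DepthRange.Near setting only works on Kinect for Windows hardware):

// Enable near mode on the depth stream (Kinect for Windows sensor only).
KinectSensor sensor = KinectSensor.KinectSensors[0];
sensor.DepthStream.Enable();
sensor.DepthStream.Range = DepthRange.Near; // see objects from ~40 cm
sensor.Start();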

 

Alongside the SDK, a new sensor is available. If you want to buy it ($249.99), you can go there:

https://www.microsoft.com/en-us/kinectforwindows/purchase/

 

Of course the Kinect Toolbox 1.1.1 is also out and supports the final version of Kinect for Windows SDK:

https://kinecttoolbox.codeplex.com/

The NuGet package can be found there:

https://nuget.org/List/Packages/KinectToolbox

Use the power of Azure to create your own raytracer

The power available in the cloud is growing every day. So I decided to use this raw CPU power to write a small raytracer.

I’m certainly not the first one to have had this idea: Pixar and GreenButton, for example, already use Azure to render pictures.

In this article, we will see how to write our own rendering system using Azure, in order to be able to create your own 3D rendered movie.

The article is organized around the following axes:

  1. Prerequisites
  2. Architecture
  3. Deploying to Azure
  4. Defining a scene
  5. Web server and worker roles
  6. How it works
  7. JavaScript client
  8. Conclusion
  9. To go further

The final solution can be downloaded here and if you want to see the final result, please go there: https://azureraytracer.cloudapp.net/


You can use a default scene or create your own scene definition (we will see later how to do that).

The rendered pictures are limited to a 512×512 resolution (you can of course change this setting).

Prerequisites

To be able to use the project, you must have:

You will also need an Azure account. You can get a free one just there: https://www.windowsazure.com/en-us/pricing/free-trial/

Architecture

Our architecture can be defined using the following schema:


The client connects to a web server composed of one or more web roles (in my case, there are 2 web roles). The web roles provide the web pages and a web service used to get the status of a request. When a user wants to render a picture, the associated web role writes a render message to an Azure queue. A farm of worker roles reads the same queue and processes any incoming render message. Azure queues are transactional and atomic, so only one worker role will grab the order: the first available worker reads and removes the message. As queues are transactional, if a worker role crashes, the render message is reintroduced in order to avoid losing your work.

In our sample, I decided to use a semaphore in order to limit the maximum number of requests executed concurrently. Indeed, I prefer not to overload my workers in order to give maximum CPU power to each render task.
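
Here is a sketch of how such a semaphore can be set up (the limit of 2 is an arbitrary value for illustration; pick it according to the size of your worker VMs):

// Allow at most maxConcurrentRenders renders at the same time.
const int maxConcurrentRenders = 2;
Semaphore semaphore = new Semaphore(maxConcurrentRenders, maxConcurrentRenders);
// The worker calls semaphore.WaitOne() before taking a message from the queue
// and semaphore.Release() when the render finishes (see the worker code below).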

Deploying to Azure

After opening the solution, you will be able to launch it directly from Visual Studio inside the Azure Emulator. You will thus be able to debug and fine-tune your code before sending it to the production stage.

Once you’re ready, you can deploy your package on your Azure account using the following procedure:

  • Open the “AzureRaytracer.sln” solution inside Visual Studio
  • Configure your Azure account: to do so, right-click on the “AzureRaytracer” project and choose the “Publish” menu. You will get the following screen:


  • Using this screen, please choose the “Sign in to download credentials” option, which will let you download an automatic configuration file for your Azure account:


  • Once the file is downloaded, we will import it inside the publish wizard:


  • After importing the information, Visual Studio will ask you to give a name for the service:


  • The next screen will present a summary of all selected options:


  • Before publishing, we must change some parameters to prepare our package for the production stage. First of all, we have to go to the Azure portal: https://windows.azure.com. Go to the storage accounts tab to grab the required information:


  • On the right pane, you can get the primary access key:


  • With this information, you can go to your project:


  • On every role, you have to go to the settings menu in order to define the Azure connection string (you will use here the information grabbed on the Azure portal):


  • You must change the “AzureStorage” value using the “…” button:


  • In the Configuration tab, you can change the instance count for each role:


Your raytracer is now ONLINE!!! We will now see how to use it.

Defining a scene

To define a scene, you have to specify it using an XML file. Here is a sample scene:






<?xml version="1.0" encoding="utf-8" ?>
<scene FogStart="5" FogEnd="20" FogColor="0, 0, 0" ClearColor="0, 0, 0" AmbientColor="0.1, 0.1, 0.1">
  <objects>
    <sphere Name="Red Sphere" Center="0, 1, 0" Radius="1">
      <defaultShader Diffuse="1, 0, 0" Specular="1, 1, 1" ReflectionLevel="0.6"/>
    </sphere>
    <sphere Name="Transparent Sphere" Center="-3, 0.5, 1.5" Radius="0.5">
      <defaultShader Diffuse="0, 0, 1" Specular="1, 1, 1" OpacityLevel="0.4" RefractionIndex="2.8"/>
    </sphere>
    <sphere Name="Green Sphere" Center="-3, 2, 4" Radius="1">
      <defaultShader Diffuse="0, 1, 0" Specular="1, 1, 1" ReflectionLevel="0.6" SpecularPower="10"/>
    </sphere>
    <sphere Name="Yellow Sphere" Center="-0.5, 0.3, -2" Radius="0.3">
      <defaultShader Diffuse="1, 1, 0" Specular="1, 1, 1" Emissive="0.3, 0.3, 0.3" ReflectionLevel="0.6"/>
    </sphere>
    <sphere Name="Orange Sphere" Center="1.5, 2, -1" Radius="0.5">
      <defaultShader Diffuse="1, 0.5, 0" Specular="1, 1, 1" ReflectionLevel="0.6"/>
    </sphere>
    <sphere Name="Gray Sphere" Center="-2, 0.2, -0.5" Radius="0.2">
      <defaultShader Diffuse="0.5, 0.5, 0.5" Specular="1, 1, 1" ReflectionLevel="0.6" SpecularPower="1"/>
    </sphere>
    <ground Name="Plane" Normal="0, 1, 0" Offset="0">
      <checkerBoard WhiteDiffuse="1, 1, 1" BlackDiffuse="0.1, 0.1, 0.1" WhiteReflectionLevel="0.1" BlackReflectionLevel="0.5"/>
    </ground>
  </objects>
  <lights>
    <light Position="-2, 2.5, -1" Color="1, 1, 1"/>
    <light Position="1.5, 2.5, 1.5" Color="0, 0, 1"/>
  </lights>
  <camera Position="0, 2, -6" Target="-0.5, 0.5, 0" />
</scene>




The file structure is the following:

  • A [scene] tag is used as the root tag and allows you to define the following parameters:
    • FogStart / FogEnd : Define the range of the fog from the camera
    • FogColor : RGB color of the fog
    • ClearColor : Background RGB color
    • AmbientColor : Ambient RGB color
  • An [objects] tag which contains the objects list
  • A [lights] tag which contains the lights list
  • A [camera] tag which defines the scene camera. It is our point of view, defined by the following parameters:
    • Position : Camera position (X, Y, Z)
    • Target : Camera target (X, Y, Z)

All objects are defined by a name and can be of one of the following types:

  • sphere : Sphere defined by its center and radius
  • ground : Plane representing the ground, defined by its offset from 0 and the direction of its normal
  • mesh : Complex object defined by a list of vertices and faces. It can be manipulated with three vectors: Position, Rotation and Scaling:





<mesh Name="Box" Position="-3, 0, 2" Rotation="0, 0.7, 0">
  <vertices count="24">-1, -1, -1, -1, 0, 0,-1, -1, 1, -1, 0, 0,-1, 1, 1, -1, 0, 0,-1, 1, -1, -1, 0, 0,-1, 1, -1, 0, 1, 0,-1, 1, 1, 0, 1, 0,1, 1, 1, 0, 1, 0,1, 1, -1, 0, 1, 0,1, 1, -1, 1, 0, 0,1, 1, 1, 1, 0, 0,1, -1, 1, 1, 0, 0,1, -1, -1, 1, 0, 0,-1, -1, 1, 0, -1, 0,-1, -1, -1, 0, -1, 0,1, -1, -1, 0, -1, 0,1, -1, 1, 0, -1, 0,-1, -1, 1, 0, 0, 1,1, -1, 1, 0, 0, 1,1, 1, 1, 0, 0, 1,-1, 1, 1, 0, 0, 1,-1, -1, -1, 0, 0, -1,-1, 1, -1, 0, 0, -1,1, 1, -1, 0, 0, -1,1, -1, -1, 0, 0, -1,</vertices>
  <indices count="36">0,1,2,2,3,0,4,5,6,6,7,4,8,9,10,10,11,8,12,13,14,14,15,12,16,17,18,18,19,16,20,21,22,22,23,20,</indices>
</mesh>




Faces are indices into the vertices list. A face contains 3 vertices, and each vertex is defined by two vectors: position (X, Y, Z) and normal (Nx, Ny, Nz). For example, the first vertex record of the box above, "-1, -1, -1, -1, 0, 0", is a vertex at position (-1, -1, -1) whose normal is (-1, 0, 0).

Objects can have a child node used to define the applied materials:

  • defaultShader : Default material defined by:
    • Diffuse : Base RGB color
    • Ambient : Ambient RGB color
    • Specular : Specular RGB color
    • Emissive : Emissive RGB color
    • SpecularPower : Sharpness of the specular highlight
    • RefractionIndex : Refraction index (you must also define OpacityLevel to use it)
    • OpacityLevel : Opacity level (you must also define RefractionIndex to use it)
    • ReflectionLevel : Reflection level (0 = no reflection)
  • checkerBoard : Material defining a checkerboard with the following properties:
    • WhiteDiffuse : “White” square diffuse color
    • WhiteAmbient : “White” square ambient color
    • WhiteReflectionLevel : “White” square reflection level
    • BlackDiffuse : “Black” square diffuse color
    • BlackAmbient : “Black” square ambient color
    • BlackReflectionLevel : “Black” square reflection level

Lights are defined via the [light] tag, which can have Position and Color attributes. Lights are omnidirectional.

Finally, if we use this scene file:






<?xml version="1.0" encoding="utf-8" ?>
<scene FogStart="5" FogEnd="20" FogColor="0, 0, 0" ClearColor="0, 0, 0" AmbientColor="1, 1, 1">
  <objects>
    <ground Name="Plane" Normal="0, 1, 0" Offset="0">
      <defaultShader Diffuse="0.4, 0.4, 0.4" Specular="1, 1, 1" ReflectionLevel="0.3" Ambient="0.5, 0.5, 0.5"/>
    </ground>
    <sphere Name="Sphere" Center="-0.5, 1.5, 0" Radius="1">
      <defaultShader Diffuse="0, 0, 1" Specular="1, 1, 1" ReflectionLevel="0" Ambient="1, 1, 1"/>
    </sphere>
  </objects>
  <lights>
    <light Position="-0.5, 2.5, -2" Color="1, 1, 1"/>
  </lights>
  <camera Position="0, 2, -6" Target="-0.5, 0.5, 0" />
</scene>




We will obtain the following picture:


Web server and worker roles

The web server runs under ASP.NET and provides two functionalities:

  • Connection to worker roles using the queue in order to launch a rendering:





void Render(string scene)
{
    try
    {
        InitializeStorage();
        var guid = Guid.NewGuid();

        CloudBlob blob = Container.GetBlobReference(guid + ".xml");
        blob.UploadText(scene);

        blob = Container.GetBlobReference(guid + ".progress");
        blob.UploadText("-1");

        var message = new CloudQueueMessage(guid.ToString());
        queue.AddMessage(message);

        guidField.Value = guid.ToString();
    }
    catch (Exception ex)
    {
        System.Diagnostics.Trace.WriteLine(ex.ToString());
    }
}




As you can see, for each request the web server generates a GUID to identify the rendering job. The description of the scene (the XML file) is then copied to a blob (with the GUID as its name) so that the worker roles can access it. Finally, a message is sent to the queue and a blob is created to report the progress of the request.

  • Publishing a web service to expose request progress:





[OperationContract]
[WebGet]
public string GetProgress(string guid)
{
    try
    {
        CloudBlob blob = _Default.Container.GetBlobReference(guid + ".progress");
        string result = blob.DownloadText();

        if (result == "101")
            blob.Delete();

        return result;
    }
    catch (Exception ex)
    {
        return ex.Message;
    }
}




The web service will get the content of the blob and return the result. If the request is queued, the value will be -1, and if the request is finished, the value will be 101 (in this case, the blob is deleted).

The worker roles read the content of the queue, and when a message is available, a worker grabs and handles it:






while (true)
{
    CloudQueueMessage msg = null;
    semaphore.WaitOne();
    try
    {
        msg = queue.GetMessage();
        if (msg != null)
        {
            queue.DeleteMessage(msg);
            string guid = msg.AsString;
            CloudBlob blob = container.GetBlobReference(guid + ".xml");
            string xml = blob.DownloadText();

            CloudBlob blobProgress = container.GetBlobReference(guid + ".progress");
            blobProgress.UploadText("0");

            WorkingUnit unit = new WorkingUnit();

            unit.OnFinished += () =>
                                   {
                                       blob.Delete();
                                       unit.Dispose();
                                       semaphore.Release();
                                   };

            unit.Launch(guid, xml, container);
        }
        else
        {
            semaphore.Release();
        }
        Thread.Sleep(1000);
    }
    catch (Exception ex)
    {
        semaphore.Release();
        if (msg != null)
        {
            CloudQueueMessage newMessage = new CloudQueueMessage(msg.AsString);
            queue.AddMessage(newMessage);
        }
        Trace.WriteLine(ex.ToString());
    }
}




Once the scene is loaded, the worker updates the progress state (using the associated blob) and creates a WorkingUnit, which is in charge of asynchronously producing the picture. It raises an OnFinished event when the render is done, in order to clean up and dispose of all associated resources.

We can also see here the usage of the semaphore in order to limit the number of concurrent renders.

The WorkingUnit is mainly defined like this:






public void Launch(string guid, string xml, CloudBlobContainer container)
{
    try
    {
        XmlDocument xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(xml);
        XmlNode sceneNode = xmlDocument.SelectSingleNode("/scene");

        Scene scene = new Scene();
        scene.Load(sceneNode);

        ParallelRayTracer renderer = new ParallelRayTracer();

        resultBitmap = new Bitmap(RenderWidth, RenderHeight, PixelFormat.Format32bppRgb);

        bitmapData = resultBitmap.LockBits(new Rectangle(0, 0, RenderWidth, RenderHeight), ImageLockMode.WriteOnly, PixelFormat.Format32bppRgb);
        int bytes = Math.Abs(bitmapData.Stride) * bitmapData.Height;
        byte[] rgbValues = new byte[bytes];
        IntPtr ptr = bitmapData.Scan0;

        renderer.OnAfterRender += (obj, evt) =>
                                      {
                                          System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes);

                                          resultBitmap.UnlockBits(bitmapData);
                                          using (MemoryStream ms = new MemoryStream())
                                          {
                                              resultBitmap.Save(ms, ImageFormat.Png);
                                              ms.Position = 0;
                                              CloudBlob finalBlob = container.GetBlobReference(guid + ".png");
                                              finalBlob.UploadFromStream(ms);
                                              CloudBlob blob = container.GetBlobReference(guid + ".progress");
                                              blob.UploadText("101");
                                          }
                                          OnFinished();
                                      };

        int previousPercentage = -10;
        renderer.OnLineRendered += (obj, evt) =>
                                       {
                                           if (evt.Percentage - previousPercentage < 10)
                                               return;
                                           previousPercentage = evt.Percentage;
                                           CloudBlob blob = container.GetBlobReference(guid + ".progress");
                                           blob.UploadText(evt.Percentage.ToString());
                                       };

        renderer.Render(scene, RenderWidth, RenderHeight, (x, y, color) =>
        {
            var offset = x * 4 + y * bitmapData.Stride;
            rgbValues[offset] = (byte)(color.B * 255);
            rgbValues[offset + 1] = (byte)(color.G * 255);
            rgbValues[offset + 2] = (byte)(color.R * 255);
        });
    }
    catch (Exception ex)
    {
        CloudBlob blob = container.GetBlobReference(guid + ".progress");
        blob.DeleteIfExists();
        blob = container.GetBlobReference(guid + ".png");
        blob.DeleteIfExists();
        Trace.WriteLine(ex.ToString());
    }
}




The WorkingUnit works according to the following algorithm:

  • Load the scene
  • Create the raytracer
  • Prepare the bitmap and access its byte array
  • Register a callback so that, when the picture is rendered, it is saved in a blob and the job progress state is updated
  • Launch the render

The raytracer

The raytracer is entirely written in C# 4.0 and uses the TPL (Task Parallel Library) to enable parallel code execution.

The following functionalities are supported (but as Yoda said “Obvious is the code”, so do not hesitate to browse the code):

  • Fog
  • Diffuse
  • Ambient
  • Transparency
  • Reflection
  • Refraction
  • Shadows
  • Complex objects
  • Unlimited light sources
  • Antialiasing
  • Parallel rendering
  • Octrees

The interesting point with a raytracer is that it is a massively parallelizable process. Indeed, a raytracer will execute strictly the same code for each pixel of the screen.

So the central point of the raytracer is:






Parallel.For(0, RenderHeight, y => ProcessLine(scene, y));




So for each line, we will execute the following method in parallel on all CPU cores of the computer:






void ProcessLine(Scene scene, int line)
{
    for (int x = 0; x < RenderWidth; x++)
    {
        if (!renderInProgress)
            return;
        RGBColor color = RGBColor.Black;

        if (SuperSamplingLevel == 0)
        {
            color = TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x, line, scene.Camera) }, scene, 0);
        }
        else
        {
            int count = 0;
            double size = 0.4 / SuperSamplingLevel;

            for (int sampleX = -SuperSamplingLevel; sampleX <= SuperSamplingLevel; sampleX += 2)
            {
                for (int sampleY = -SuperSamplingLevel; sampleY <= SuperSamplingLevel; sampleY += 2)
                {
                    color += TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x + sampleX * size, line + sampleY * size, scene.Camera) }, scene, 0);
                    count++;
                }
            }

            if (SuperSamplingLevel == 1)
            {
                color += TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x, line, scene.Camera) }, scene, 0);
                count++;
            }

            color = color / count;
        }

        color.Clamp();

        storePixel(x, line, color);
    }

    // Report progress
    lock (this)
    {
        linesProcessed++;
        if (OnLineRendered != null)
            OnLineRendered(this, new LineRenderedEventArgs { Percentage = (linesProcessed * 100) / RenderHeight, LineRendered = line });
    }
}




The main part is the TraceRay method which will cast a ray for each pixel of a line:






private RGBColor TraceRay(Ray ray, Scene scene, int depth, SceneObject excluded = null)
{
    List<Intersection> intersections;

    if (excluded == null)
        intersections = IntersectionsOrdered(ray, scene).ToList();
    else
        intersections = IntersectionsOrdered(ray, scene).Where(intersection => intersection.Object != excluded).ToList();

    return intersections.Count == 0 ? scene.ClearColor : ComputeShading(intersections, scene, depth);
}




If the ray intersects no object, then the color of the background is returned (ClearColor). Otherwise, we have to evaluate the color of the intersected object:






private RGBColor ComputeShading(List<Intersection> intersections, Scene scene, int depth)
{
    Intersection intersection = intersections[0];
    intersections.RemoveAt(0);

    var direction = intersection.Ray.Direction;
    var position = intersection.Position;
    var normal = intersection.Normal;
    var reflectionDirection = direction - 2 * Vector3.Dot(normal, direction) * normal;

    RGBColor result = GetBaseColor(intersection.Object, position, normal, reflectionDirection, scene, depth);

    // Opacity
    if (IsOpacityEnabled && intersections.Count > 0)
    {
        double opacity = intersection.Object.Shader.GetOpacityLevelAt(position);
        double refractionIndex = intersection.Object.Shader.GetRefractionIndexAt(position);

        if (opacity < 1.0)
        {
            if (refractionIndex == 1 || !IsRefractionEnabled)
                result = result * opacity + ComputeShading(intersections, scene, depth) * (1.0 - opacity);
            else
            {
                // Refraction
                result = result * opacity + GetRefractionColor(position, Utilities.Refract(direction, normal, refractionIndex), scene, depth, intersection.Object) * (1.0 - opacity);
            }
        }
    }

    if (!IsFogEnabled)
        return result;

    // Fog
    double distance = (scene.Camera.Position - position).Length;

    if (distance < scene.FogStart)
        return result;

    if (distance > scene.FogEnd)
        return scene.FogColor;

    double fogLevel = (distance - scene.FogStart) / (scene.FogEnd - scene.FogStart);

    return result * (1.0 - fogLevel) + scene.FogColor * fogLevel;
}




The ComputeShading method computes the base color of the object (taking into account all light sources). If the object is transparent or uses refraction or reflection, a new ray must be cast to compute the induced color.

At the end, the fog is added and the final color is returned.

As you can see, computing each pixel is really resource-intensive. So having huge raw power available can drastically improve the rendering speed.

The client

The front-end client is written in HTML with a small amount of JavaScript to make it a bit more dynamic:






var checkState = function () {
    $.getJSON("RenderStatusService.svc/GetProgress", { guid: guid, noCache: Math.random() }, function (result) {
        var percentage = result.d;
        var percentageAsNumber = parseInt(percentage);

        if (percentage == "-1") {
            $("#progressMessage").text("Request queued");
            setTimeout(checkState, 1000);
            return;
        }

        if (isNaN(percentageAsNumber)) {
            window.localStorage.removeItem("currentGuid");
            restartUI();
            return;
        }

        if (percentageAsNumber != 101) {
            $("#progressBar").progressbar({ value: percentageAsNumber });
            $("#progressMessage").text("Rendering in progress…" + result.d + "%");
            setTimeout(checkState, 1000);
        }
        else {
            $("#renderInProgressDiv").slideUp("fast");
            $("#final").slideDown("fast");
            $("#imageLoadingMessage").slideDown("fast");
            $.getJSON("RenderStatusService.svc/GetImageUrl", { guid: guid, noCache: Math.random() }, function (url) {
                finalImage.src = url.d;
                document.getElementById("imageHref").href = url.d;
            });
            window.localStorage.removeItem("currentGuid");
        }
    });
};




If the web service returns -1, the request is queued. If the returned value is between 0 and 100, we update the progress bar, and if the value is 101, we retrieve and display the rendered picture.

Conclusion

As we can see, Azure gives us all the required tools to develop and debug for the cloud.

I sincerely invite you to install the SDK and develop your own raytracer!

To go further

Some useful links:

Silverlight 5 is out!


It is a real pleasure for me to announce that Silverlight 5 is finally available:

https://www.microsoft.com/silverlight/

Links

The Silverlight 5 Toolkit was also updated to support the RTM: https://silverlight.codeplex.com/releases/view/78435 

And don’t forget to have a look at my blog post about all the new features of the toolkit: https://blogs.msdn.com/b/eternalcoding/archive/2011/12/10/silverlight-toolkit-september-2011-for-silverlight-5-what-s-new.aspx

 

And of course, Babylon was updated for the RTM too: https://code.msdn.microsoft.com/Babylon-3D-engine-f0404ace

 

For all the downloads and the features list, please go to: https://www.silverlight.net/learn/overview/what’s-new-in-silverlight-5

Security and 3D

First of all, please read this article: https://blogs.msdn.com/b/eternalcoding/archive/2011/10/18/some-reasons-why-my-3d-is-not-working-with-silverlight-5.aspx

However, you may experience security errors with Silverlight 5 RTM when you want to use the wonderful new 3D feature. In fact, some graphics drivers may allow malicious code to execute, which may lead to an unwanted hard reset or a blue screen.

Starting with the beta version, to protect users from this kind of trouble, a first scenario was put in place where all Windows XP Display Driver Model (XPDM) drivers on Windows XP, Windows Vista, and Windows 7 are blocked by default. Permission is granted automatically in elevated-trust scenarios, and Windows Display Driver Model (WDDM) drivers do not require user consent at run time.

But as always, features, including security features, continue to be refined and added during post-beta development.

And for the RTM version, a number of approaches were considered to further improve security and stability, but blocking 3D in partial trust by default was the best option for this release. Permission is still granted automatically in elevated-trust scenarios.

To grant 3D permissions, you just have to right-click on your Silverlight plugin, go to the Permissions tab and allow your application:

 

You can of course help your users detect and understand this by using the following code in order to tailor a good user experience:






if (GraphicsDeviceManager.Current.RenderMode != RenderMode.Hardware)
{
    switch (GraphicsDeviceManager.Current.RenderModeReason)
    {
        case RenderModeReason.GPUAccelerationDisabled:
            throw new Exception(Strings.NoGPUAcceleration);
        case RenderModeReason.SecurityBlocked:
            throw new Exception(Strings.HardwareAccelerationBlockedBySecurityReason);
        case RenderModeReason.Not3DCapable:
            throw new Exception(Strings.HardwareAccelerationNotAvailable);
        case RenderModeReason.TemporarilyUnavailable:
            throw new Exception(Strings.HardwareAccelerationNotAvailable);
    }
}




It is really important to explain to your users why 3D is deactivated. As there is a potential security hole, it is their responsibility to allow the 3D experience.

Support and lifecycle

The support status for Silverlight is now updated for SL5:

https://support.microsoft.com/gp/lifean45#sl5

Here is the extract for Silverlight 5:

“Silverlight 5 – Microsoft will provide assisted and unassisted no charge support for customers using versions of Silverlight 5. Paid support options are available to customers requiring support with issues beyond install and upgrade issues. Microsoft will continue to ship updates to the Silverlight 5 runtime or Silverlight 5 SDK, including updates for security vulnerabilities as determined by the MSRC. Developers using the Silverlight 5 development tools and developing applications for Silverlight 5 can use paid assisted-support options to receive development support.

Silverlight 5 will support the browser versions listed on this page through 10/12/2021, or through the support lifecycle of the underlying browsers, whichever is shorter. As browsers evolve, the support page will be updated to reflect levels of compatibility with newer browser versions.”

Silverlight Toolkit (December 2011) for Silverlight 5 – What’s new?

The new version of the Silverlight Toolkit (December 2011) for Silverlight 5 is out and you can grab it here:

https://silverlight.codeplex.com/releases/view/78435

Update: Babylon Engine now uses Silverlight 5 Toolkit: https://code.msdn.microsoft.com/Babylon-3D-engine-f0404ace

I had the pleasure of working on this version and I’m pleased to write this article to help you discover how the Toolkit enhances Silverlight 5 with the following features:

  1. Seamless integration of 3D models and other assets with the Content Pipeline
  2. New Visual Studio templates for creating:
    1. Silverlight 3D Application
    2. Silverlight 3D Library
    3. Silverlight Effect
  3. New samples to demo these features

Seamless integration with the Content Pipeline

The toolkit comes with a new assembly: Microsoft.Xna.Framework.Content.dll. This assembly allows you to load assets from the .xnb file format (produced by the Content Pipeline).

Using the new Visual Studio templates (which I will describe later), you can now easily port existing 3D projects directly to Silverlight 5!

The Microsoft.Xna.Framework.Content.dll assembly will add the following classes to Silverlight 5:

  • ContentManager
  • Model
  • SpriteFont and SpriteBatch

The toolkit also comes with the Microsoft.Xna.Framework.Toolkit.dll assembly, which will add the following classes to Silverlight 5:

  • SilverlightEffect
  • Mouse, MouseState
  • Keyboard, KeyboardState

ContentManager

The documentation for this class can be found here:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.content.contentmanager.aspx

The ContentManager class is the representative for the Content Pipeline inside your code. It is responsible for loading objects from .xnb files.

To create a ContentManager you just have to call the following code:






ContentManager contentManager = new ContentManager(null, "Content");




There are restrictions for this class: the ContentManager for Silverlight can only support one Content project, and the RootDirectory must be set to “Content”.

Using it is really simple because it provides a simple Load method which can be used to create your objects:






// Load fonts
hudFont = contentManager.Load<SpriteFont>("Fonts/Hud");

// Load overlay textures
winOverlay = contentManager.Load<Texture2D>("Overlays/you_win");

// Music
backgroundMusic = contentManager.Load<SoundEffect>("Sounds/Music");




Model

The documentation for this class can be found here:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.model.aspx

The Model class has the same API as in XNA 4, and it will allow you to load and render 3D models from XNB files:






// Draw the model.
Model tankModel = content.Load<Model>("tank");
tankModel.Draw();




You can also use bones if your model supports them:






Model tankModel = content.Load<Model>("tank");
tankModel.Root.Transform = world;
tankModel.CopyAbsoluteBoneTransformsTo(boneTransforms);

// Draw the model.
foreach (ModelMesh mesh in tankModel.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.World = boneTransforms[mesh.ParentBone.Index];
        effect.View = view;
        effect.Projection = projection;

        effect.EnableDefaultLighting();
    }

    mesh.Draw();
}




You can import models using the .x or .fbx formats.

And thanks to the FBX importer, you can also import .3ds, .obj, .dxf and even Collada.

SpriteFont & SpriteBatch

The documentation for these classes can be found here:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.spritebatch.aspx
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.spritefont.aspx

The SpriteBatch class is used to display 2D textures on top of the render. You can use them for displaying a UI or sprites.






SpriteBatch spriteBatch = new SpriteBatch(graphicsDevice);

spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);

spriteBatch.Draw(texture, new Rectangle(0, 0, width, height), Color.White);

spriteBatch.End();




As you can see, SpriteBatch only needs a texture to display.

SpriteFont allows you to use sprites to display text.






SpriteFont hudFont = contentManager.Load<SpriteFont>("Fonts/Hud");
spriteBatch.DrawString(hudFont, value, position + new Vector2(1.0f, 1.0f), Color.Black);
spriteBatch.DrawString(hudFont, value, position, color);




SpriteFont relies on SpriteBatch to draw, and it needs a font definition loaded from the ContentManager.

SilverlightEffect

The toolkit introduces a new class called SilverlightEffect which can be used to apply .fx files.

It also supports .slfx, which is the default extension. There is no difference between .slfx and .fx, but as the XNA Effect Processor is already associated with .fx, the Silverlight Content Pipeline had to select another extension.

You can now define a complete effect inside a Content project and use it for rendering your models.

To do so:

  • Create a .fx file with at least one technique
  • Shader entry points must be parameterless
  • Define render states

For example here is a simple .fx file:






float4x4 WorldViewProjection;
float4x4 World;
float3 LightPosition;

// Structs
struct VS_INPUT
{
    float4 position      : POSITION;
    float3 normal        : NORMAL;
    float4 color         : COLOR0;
};

struct VS_OUTPUT
{
    float4 position      : POSITION;
    float3 normalWorld   : TEXCOORD0;
    float3 positionWorld : TEXCOORD1;
    float4 color         : COLOR0;
};

// Vertex Shader
VS_OUTPUT mainVS(VS_INPUT In)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;

    // Compute projected position
    Out.position = mul(In.position, WorldViewProjection);

    // Compute world normal
    Out.normalWorld = mul(In.normal, (float3x3)WorldViewProjection);

    // Compute world position
    Out.positionWorld = (mul(In.position, World)).xyz;

    // Transmit vertex color
    Out.color = In.color;

    return Out;
}

// Pixel Shader
float4 mainPS(VS_OUTPUT In) : COLOR
{
    // Light equation
    float3 lightDirectionW = normalize(LightPosition - In.positionWorld);
    float ndl = max(0, dot(In.normalWorld, lightDirectionW));

    // Final color
    return float4(In.color.rgb * ndl, 1);
}

// Technique
technique MainTechnique
{
    pass P0
    {
        VertexShader = compile vs_2_0 mainVS(); // Must be a parameterless entry point
        PixelShader = compile ps_2_0 mainPS(); // Must be a parameterless entry point
    }
}




The Toolkit will add the required processors to the Content Pipeline in order to create the .xnb file for this effect.

To use this effect, you just have to instantiate a new SilverlightEffect inside your code:






mySilverlightEffect = scene.ContentManager.Load<SilverlightEffect>("CustomEffect");




Then, you can retrieve the effect’s parameters:






worldViewProjectionParameter = mySilverlightEffect.Parameters["WorldViewProjection"];
worldParameter = mySilverlightEffect.Parameters["World"];
lightPositionParameter = mySilverlightEffect.Parameters["LightPosition"];




To render an object with your effect, it is the same code as in XNA 4:






worldParameter.SetValue(Matrix.CreateTranslation(1, 1, 1));
worldViewProjectionParameter.SetValue(WorldViewProjection);
lightPositionParameter.SetValue(LightPosition);
foreach (var pass in mySilverlightEffect.CurrentTechnique.Passes)
{
    // Apply pass
    pass.Apply();

    // Set vertex buffer and index buffer
    graphicsDevice.SetVertexBuffer(vertexBuffer);
    graphicsDevice.Indices = indexBuffer;

    // The shaders are already set so we can draw primitives
    graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, VerticesCount, 0, FaceCount);
}




Texture2D, TextureCube & SoundEffect

Silverlight 5 provides Texture2D, TextureCube and SoundEffect classes. With the Toolkit, you will be able to load them from the ContentManager:






// Load overlay textures
winOverlay = contentManager.Load<Texture2D>("Overlays/you_win");

// Music
backgroundMusic = contentManager.Load<SoundEffect>("Sounds/Music");




Mouse and Keyboard

In order to facilitate porting existing 3D applications and to accommodate polling-input application models, we also added partial support for the Microsoft.Xna.Framework.Input namespace.

So you will be able to request MouseState and KeyboardState everywhere you want:






public MainPage()
{
    InitializeComponent();

    Mouse.RootControl = this;
    Keyboard.RootControl = this;
}




However, there is a slight difference from original XNA on other endpoints: you have to register the root control which will provide the events for Mouse and Keyboard. The MouseState positions will be relative to the upper left corner of this control:






private void myDrawingSurface_Draw(object sender, DrawEventArgs e)
{
    // Render scene
    scene.Draw();

    // Let's go for another turn!
    e.InvalidateSurface();

    // Get mouse and keyboard state
    MouseState mouseState = Mouse.GetState();
    KeyboardState keyboardState = Keyboard.GetState();

    …
}




The MouseState and KeyboardState structures are similar to their XNA counterparts.
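
For instance, here is a hypothetical polling check (Keys and ButtonState are the standard XNA input types):

MouseState mouseState = Mouse.GetState();
KeyboardState keyboardState = Keyboard.GetState();

if (keyboardState.IsKeyDown(Keys.Space))
{
    // React to the space bar.
}

if (mouseState.LeftButton == ButtonState.Pressed)
{
    // mouseState.X and mouseState.Y are relative to the
    // upper left corner of the RootControl.
}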

Extensibility

Silverlight Content Pipeline can be extended the same way as the XNA Content Pipeline on other endpoints. You can provide your own implementation for loading assets from elsewhere than the embedded .xnb files.

For example, you can write a class that will stream .xnb files from the network. To do so, you have to inherit from ContentManager and provide your own implementation of OpenStream:






public class MyContentManager : ContentManager
{
    public MyContentManager() : base(null)
    {

    }

    protected override System.IO.Stream OpenStream(string assetName)
    {
        return base.OpenStream(assetName);
    }
}




You can also provide your own type reader. Here is, for example, the custom type reader for SilverlightEffect:






/// <summary>
/// Read SilverlightEffect.
/// </summary>
public class SilverlightEffectReader : ContentTypeReader<SilverlightEffect>
{
    /// <summary>
    /// Read and create a SilverlightEffect
    /// </summary>
    protected override SilverlightEffect Read(ContentReader input, SilverlightEffect existingInstance)
    {
        int techniquesCount = input.ReadInt32();
        EffectTechnique[] techniques = new EffectTechnique[techniquesCount];

        for (int techniqueIndex = 0; techniqueIndex < techniquesCount; techniqueIndex++)
        {
            int passesCount = input.ReadInt32();
            EffectPass[] passes = new EffectPass[passesCount];

            for (int passIndex = 0; passIndex < passesCount; passIndex++)
            {
                string passName = input.ReadString();

                // Vertex shader
                int vertexShaderByteCodeLength = input.ReadInt32();
                byte[] vertexShaderByteCode = input.ReadBytes(vertexShaderByteCodeLength);
                int vertexShaderParametersLength = input.ReadInt32();
                byte[] vertexShaderParameters = input.ReadBytes(vertexShaderParametersLength);

                // Pixel shader
                int pixelShaderByteCodeLength = input.ReadInt32();
                byte[] pixelShaderByteCode = input.ReadBytes(pixelShaderByteCodeLength);
                int pixelShaderParametersLength = input.ReadInt32();
                byte[] pixelShaderParameters = input.ReadBytes(pixelShaderParametersLength);

                MemoryStream vertexShaderCodeStream = new MemoryStream(vertexShaderByteCode);
                MemoryStream pixelShaderCodeStream = new MemoryStream(pixelShaderByteCode);
                MemoryStream vertexShaderParametersStream = new MemoryStream(vertexShaderParameters);
                MemoryStream pixelShaderParametersStream = new MemoryStream(pixelShaderParameters);

                // Instantiate pass
                SilverlightEffectPass currentPass = new SilverlightEffectPass(passName, GraphicsDeviceManager.Current.GraphicsDevice, vertexShaderCodeStream, pixelShaderCodeStream, vertexShaderParametersStream, pixelShaderParametersStream);
                passes[passIndex] = currentPass;

                vertexShaderCodeStream.Dispose();
                pixelShaderCodeStream.Dispose();
                vertexShaderParametersStream.Dispose();
                pixelShaderParametersStream.Dispose();

                // Render states
                int renderStatesCount = input.ReadInt32();

                for (int renderStateIndex = 0; renderStateIndex < renderStatesCount; renderStateIndex++)
                {
                    currentPass.AppendState(input.ReadString(), input.ReadString());
                }
            }

            // Instantiate technique
            techniques[techniqueIndex] = new EffectTechnique(passes);
        }

        return new SilverlightEffect(techniques);
    }
}




 

New Visual Studio templates

The toolkit will install two new project templates and a new item template:

Silverlight3DApp

This template will produce a full working Silverlight 3D application.

The new solution will be composed of 4 projects:

  • Silverlight3DApp : The main project
  • Silverlight3DAppContent : The content project attached with the main project
  • Silverlight3DWeb : The web site that will display the main project
  • Silverlight3DWebContent : A content project attached to the website if you want to stream your .xnb files from the website instead of using embedded ones. This will allow you to distribute a smaller .xap.

The main project (Silverlight3DApp) is built around two objects:

  • A Scene object, which:
    • Creates the ContentManager
    • Handles the DrawingSurface Draw event
  • A Cube object, which:
    • Creates a vertex buffer and an index buffer
    • Uses the ContentManager to retrieve a SilverlightEffect (CustomEffect.slfx) from the content project
    • Configures and uses the SilverlightEffect to render

Silverlight3DLib

This template will produce a Silverlight library without any content but with all the Microsoft.Xna.Framework references set.


SilverlightEffect

This item template can be used inside a Content project to add a custom .slfx file that will work with the SilverlightEffect class.

The file content will be the following:






float4x4 World;
float4x4 View;
float4x4 Projection;

// TODO: add effect parameters here.

struct VertexShaderInput
{
    float4 Position : POSITION0;

    // TODO: add input channels such as texture
    // coordinates and vertex colors here.
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;

    // TODO: add vertex shader outputs such as colors and texture
    // coordinates here. These values will automatically be interpolated
    // over the triangle, and provided as input to your pixel shader.
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    // TODO: add your vertex shader code here.

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // TODO: add your pixel shader code here.

    return float4(1, 0, 0, 1);
}

technique Technique1
{
    pass Pass1
    {
        // TODO: set renderstates here.

        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}




New samples to demo these features

Finally, to help you discover and learn all these features, we added some cool samples:

Bloom

This sample shows you how to use sprites to accomplish post-processing effects such as “bloom”. It also uses the Content Pipeline to import a tank model from a .fbx file.

CustomModelEffect

This sample shows you how custom effects can be applied to a model using the Content Pipeline.

Generated geometry

This sample shows how 3D models can be generated by code during the Content Pipeline build process.

Particles

This sample introduces the concept of a particle system and shows how to draw particle effects using SpriteBatch. Two particle effects are demonstrated: an explosion and a rising plume of smoke.

Primitives3D

This sample provides easily reusable code for drawing basic geometric primitives.

Platformer

This sample is a complete game with 3 levels provided (you can easily add yours). It shows the usage of SpriteBatch, SpriteFont and SoundEffect inside a platform game. It also uses the Keyboard class to control the player.

SimpleAnimation

This sample shows how to apply program-controlled rigid-body animation to a 3D model loaded with the ContentManager.

Skinning

This sample shows how to process and render a skinned character model using the Content Pipeline.

Conclusion

As you have noticed, all these new additions to the Silverlight Toolkit are designed to make it easy to get started with the new Silverlight 3D features by providing developer tools that improve usability and productivity.

You can now easily start a new project that leverages the concepts of both XNA and Silverlight. It becomes easy to work with 3D concepts and resources like shaders, models, sprites, effects, and so on.

We have also tried to reduce the effort required to port existing 3D applications to Silverlight.

So now it’s up to you to discover the wonderful world of 3D using Silverlight 5!

Silverlight 5 Toolkit Compile error :-2147024770 (0, 0): error : Unknown compile error (check flags against DX version)

(Woow what a funny title!)

Some Silverlight 5 Toolkit users have sent me mails about this error message:

Error 1 Compile error -2147024770
(0, 0): error : Unknown compile error (check flags against DX version) (myfile.slfx)

 

To correct the problem, you just have to install the latest DirectX Runtime:

https://www.microsoft.com/download/en/details.aspx?id=8109

 

This error is generated by the Silverlight effect file compiler, which wants to use the DirectX Effect compiler but does not manage to locate it.

The DirectX Effect compiler is located in a library called d3dx9_xx.dll. The “xx” part may vary according to the version of the DirectX SDK used (in the case of the Silverlight 5 Toolkit, we used the June 2010 version, which relies on d3dx9_43.dll).

The problem is that this library is not installed by default on Windows (but a lot of applications install it). That’s why you may be required to install the DirectX Runtime, which comes with all versions of the library.

Kinect for Windows beta 2 is out

The new site for Kinect for Windows and the new beta of the SDK are out!


This new version focuses on stability and performance:

  • Faster and improved skeletal tracking
  • Status change support
  • Improved joint tracking
  • 64-bit support
  • Audio can be used from UI thread

We also announced that the commercial version will be released in early 2012.

Faster and improved skeletal tracking

With updates to the multi-core exemplar, Kinect for Windows is now 20% faster than it was in the last release (beta 1 refresh). Also, the accuracy rate of skeletal tracking and joint recognition has been substantially improved.

When using two Kinects, you can now specify which one is used for skeletal tracking.

Status change support

You can now plug and unplug your Kinect without losing work. The API supports detecting and managing device status changes, such as device unplugged, device plugged in, power unplugged, and so on. Apps can reconnect to the Kinect device after it is plugged in, after the computer returns from suspend, and so on.
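
Here is a minimal sketch of the idea; the exact names (Runtime.Kinects and its StatusChanged event) are my recollection of the beta 2 Microsoft.Research.Kinect.Nui API, so treat them as an assumption:

// Assumed beta 2 API: react to devices being plugged/unplugged.
Runtime.Kinects.StatusChanged += (sender, e) =>
{
    switch (e.Status)
    {
        case KinectStatus.Connected:
            // (Re)initialize the runtime for this device.
            break;
        case KinectStatus.Disconnected:
            // Release resources and wait for the device to come back.
            break;
    }
};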

Improved joint tracking

The accuracy rate of joint recognition and tracking has been substantially improved.

64-bit support

The SDK can be used to build 64-bit applications. Previously, only 32-bit applications could be built.

Audio can be used from UI thread

Developers using audio within WPF no longer need to access the DMO from a separate thread: you can create the KinectAudioSource on the UI thread and simplify your code.
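
A minimal sketch, assuming the beta 2 KinectAudioSource API from Microsoft.Research.Kinect.Audio:

// With beta 2 this can run directly on the WPF UI thread.
KinectAudioSource audioSource = new KinectAudioSource();
System.IO.Stream audioStream = audioSource.Start(); // audio capture stream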

Additional information

Furthermore, this new version now supports Windows ”8” (desktop side).

The new site and the sdk can be found here:

https://www.kinectforwindows.org

To use this new version, you only need to recompile your code, as no breaking changes were introduced.

Kinect Toolbox

The Kinect Toolbox was obviously updated to support the new SDK:

https://kinecttoolbox.codeplex.com/

The NuGet package can be found here:

https://nuget.org/List/Packages/KinectToolbox

Some reasons why my 3D is not working with Silverlight 5

The aim of this post is to give you some tricks to enable 3D experiences with Silverlight 5 for your applications.

But first of all, let’s see how you can activate accelerated 3D support inside a Silverlight 5 project.

Standard way

To activate accelerated 3D support, the host of the Silverlight plugin must enable it using a param named “enableGPUAcceleration”:
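
For example, in the hosting HTML page (the .xap path below is a placeholder):

<object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
  <param name="source" value="ClientBin/MyApplication.xap" />
  <param name="enableGPUAcceleration" value="true" />
</object>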

By doing this, you will allow Silverlight to use the graphics card’s power to render XAML elements. And at the same time, you will activate the accelerated 3D support.

Troubleshooting

You can detect if 3D is activated or not in your code through the GraphicsDeviceManager class:






// Check if GPU is on
if (GraphicsDeviceManager.Current.RenderMode != RenderMode.Hardware)
{
    MessageBox.Show("Please activate enableGPUAcceleration=true on your Silverlight plugin page.", "Warning", MessageBoxButton.OK);
}




The RenderMode property will be set to Unavailable when 3D is not activated.

In this case, the property GraphicsDeviceManager.Current.RenderModeReason will be set to one of these values:

  • Not3DCapable
  • GPUAccelerationDisabled
  • SecurityBlocked
  • TemporarilyUnavailable

 

Not3DCapable

You will get this reason when your graphics card is too old to support the required 3D features, such as Shader Model 2.0.

GPUAccelerationDisabled

You forgot to set enableGPUAcceleration to true in the hosting HTML page.

SecurityBlocked

When you launch your application, a specific domain entry for 3D can be added to the Permissions tab of the Silverlight Configuration panel.

You will not see this entry unless your domain group policy sets it (in this case, you have to ask your administrator for permission) or you run your Silverlight application under Windows XP.

When you run a Silverlight application under Windows XP, the following behavior happens:

  • 3D is enabled automatically in elevated trust (out of browser)
  • The first time a user runs a non-elevated 3D application, a domain entry set to Deny is added to the Permissions tab of the Silverlight Configuration panel

To use a 3D application under Windows XP in a non-elevated context, you must change the pre-created Deny entry to Allow.

TemporarilyUnavailable

This happens when the device is lost (for example, under the lock screen on Windows XP; it doesn’t happen much with WDDM drivers), and Silverlight expects the rendering surface to return at some point.

Additional tips

One other important point to know is that 3D won’t work correctly in windowless mode. In this case, the draw event is driven from the UI thread and so is fired only during UI events, such as a page scroll.

Conclusion

I hope this post was useful to help you use the wonderful accelerated 3D experience of Silverlight 5.

As you can see, the best solution to handle 3D support is to use GraphicsDeviceManager.Current.RenderModeReason.

Useful links

New version of Babylon engine for Silverlight 5 and Silverlight 5 Toolkit

I’ve just updated the source of the Babylon engine. You can grab the bits here:

https://code.msdn.microsoft.com/silverlight/Babylon-3D-engine-f0404ace

This new version uses the new content pipeline of the toolkit and is compiled using Silverlight 5 RC.

You can play with the exposed shaders using the new SilverlightEffect class.

It is now time to unleash the power of accelerated 3D!!