OpenDepth—Kinect to Web Browser

  • General discussion

  • Hello,

    We did a Kinect-to-web integration during a hackathon last week. The server is written in C# and is based on the Kinect SDK 1.5 tutorials from Channel 9. The client is written in JavaScript, uses Three.js for WebGL rendering, and works in modern browsers that support WebGL and HTML5 WebSockets.
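
    A minimal sketch of the client side of that wiring is shown below; the port, path, and JSON message shape are assumptions for illustration, not the exact format our server uses:

        // Minimal sketch: connect to a local Kinect WebSocket server from the browser.
        // The port, path, and message layout below are assumed for illustration.
        var socket = new WebSocket('ws://localhost:8181/');

        socket.onopen = function () {
            console.log('Connected to the Kinect server');
        };

        socket.onmessage = function (event) {
            // Each message is assumed to be one JSON-encoded frame from the server.
            var frame = JSON.parse(event.data);
            console.log('Received frame of type: ' + frame.type);
        };

        socket.onclose = function () {
            console.log('Connection closed');
        };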

    You can have a look at http://opendepth.net/. There are two pre-canned demos: one is a point-cloud snapshot and the other is skeleton tracking.

    So far the app has two functions:

    1. Multiple skeleton tracking and live streaming. The data is sent to the browser via HTML5 WebSockets. It tracks two players and supports bone rotation (see the first sketch after this list).

    2. Point cloud snapshot. We send 640×480 points containing XYZ and RGB data to the browser and render them in 3D space with Three.js (see the second sketch after this list). It's quite a cool effect to take a picture of yourself and navigate through it.
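
    As a first sketch of how the skeleton stream can be consumed on the client, each joint position can be mapped onto a small Three.js mesh. The message layout and joint fields here are assumptions for illustration, and the scene and socket variables are assumed to exist from the page setup:

        // Sketch: render streamed skeleton joints as small spheres in Three.js.
        // The frame layout ({ skeletons: [{ joints: [{ name, x, y, z }] }] }) is an
        // assumption for illustration; the real payload may differ.
        var jointMeshes = {};

        function updateSkeletons(scene, frame) {
            frame.skeletons.forEach(function (skeleton, skeletonIndex) {
                skeleton.joints.forEach(function (joint) {
                    var key = skeletonIndex + '-' + joint.name;
                    var mesh = jointMeshes[key];
                    if (!mesh) {
                        // Create one small sphere per joint the first time it is seen.
                        mesh = new THREE.Mesh(
                            new THREE.SphereGeometry(0.03, 8, 8),
                            new THREE.MeshBasicMaterial({ color: 0x00ff00 })
                        );
                        jointMeshes[key] = mesh;
                        scene.add(mesh);
                    }
                    mesh.position.set(joint.x, joint.y, joint.z);
                });
            });
        }

        socket.onmessage = function (event) {
            var frame = JSON.parse(event.data);
            if (frame.skeletons) {
                updateSkeletons(scene, frame);
            }
        };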

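    A second sketch shows how a 640×480 point-cloud snapshot can be turned into a Three.js point cloud. The flat XYZ and RGB arrays and the buildPointCloud helper are assumed for illustration, and the snippet uses the current three.js buffer-geometry API (the 2012 API differs slightly):

        // Sketch: build a Three.js point cloud from flat XYZ and RGB arrays.
        // positions is assumed to hold 640 * 480 * 3 floats (x, y, z per point) and
        // colors 640 * 480 * 3 floats in the 0..1 range (r, g, b per point).
        function buildPointCloud(positions, colors) {
            var geometry = new THREE.BufferGeometry();
            geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
            geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));

            var material = new THREE.PointsMaterial({
                size: 0.01,
                vertexColors: true   // use the per-point RGB data from the Kinect
            });

            return new THREE.Points(geometry, material);
        }

        // Usage, once a snapshot frame has arrived:
        // scene.add(buildPointCloud(frame.positions, frame.colors));
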
    We wish to keep developing this and bring the Kinect closer to the web. There are many opportunities in having this technology connected to the web.

    We are looking for feedback; please let us know what you would like to see. Here are a few things we are thinking of adding in the near future:

    • Making the server available as a browser extension
    • Skeleton and point cloud data recording and saving
    • Mesh creation from point cloud
    • Face tracking
    • Rigged 3d characters
    • Iterative Closest Point (ICP) algorithm for scanning 3D objects
    • More Kinect SDK methods and data exposed

    You can check it out at http://opendepth.net/; the code is at https://github.com/mirceapiturca/OpenDepth.

    Thank you.

    Monday, July 2, 2012 9:52 AM

All replies