Scientific Paper Behind Samples

  • Question

Can I get the scientific papers behind two samples in the MRDS installation folder?

1- ExplorerSim

    2- SimpleVision

    Thursday, May 17, 2012 10:04 PM

Answers

The best way to understand the techniques used in the samples is to look at the code; the source code is provided for all of the samples. The Obstacle Avoidance sample also has some additional documentation in the documentation folder.

For "object detection", do you mean the "blob tracking" sample? There is also a "Color Segmentation" sample. How these samples are used is described in the documentation, and the source code is available, but there is no documentation describing the technique or algorithm used. It is not hard to follow if you can read C#. Basically, there is a training step where the code determines an upper and lower threshold for each color channel based on all the pixels in the training region; then each frame of video is evaluated, looking for adjacent pixels whose color channel values are within the prescribed thresholds.
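The train-then-threshold steps described above can be sketched as follows. This is an illustrative sketch in Python (the actual MRDS sample is written in C#), and the function names and tuple layout are my own, not those used in the sample; grouping matching pixels into blobs (connected components) is left out for brevity:

```python
def train_thresholds(training_pixels):
    """Training step: derive per-channel lower/upper thresholds.

    training_pixels: a list of (r, g, b) tuples taken from the
    user-selected training region of the image.
    """
    lower = tuple(min(p[c] for p in training_pixels) for c in range(3))
    upper = tuple(max(p[c] for p in training_pixels) for c in range(3))
    return lower, upper

def pixel_matches(pixel, lower, upper):
    """A pixel matches when every channel lies within its thresholds."""
    return all(lower[c] <= pixel[c] <= upper[c] for c in range(3))

def segment_frame(frame, lower, upper):
    """Evaluate one video frame: mark each pixel as in/out of range.

    frame is a rows x cols grid of (r, g, b) tuples; the result is a
    same-shaped grid of booleans.  The real sample would then group
    adjacent True pixels into blobs.
    """
    return [[pixel_matches(px, lower, upper) for px in row] for row in frame]
```

A usage sketch: train on a patch of the target color, then segment incoming frames with the resulting thresholds.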

For Obstacle Avoidance, there is a more detailed description provided as a PDF file in the documentation folder. The algorithm is not described in detail, but again, the source code is provided. The basic steps are to start with the depth frame provided by the Kinect camera, set any pixels that belong to the floor to "far" (done by comparing against a synthesized floor created at startup), then collapse the depth frame into a one-pixel-high depth profile that represents the nearest distance across the field of view. Combine this profile with data collected from the sonar and IR sensors, then determine the best direction to move by finding the widest projected open space in the profile.
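The floor-removal, profile-collapse, and widest-gap steps can be sketched like this. Again, this is an illustrative Python sketch, not the sample's C# code; the distance values, the floor tolerance, and the clearance threshold are assumptions chosen for the example, and merging in the sonar/IR readings is omitted:

```python
FAR = 4000  # assumed "no obstacle" distance, in millimeters

def collapse_depth(depth, floor, floor_tol=50):
    """Collapse a depth frame into a one-pixel-high nearest-distance profile.

    depth and floor are rows x cols grids of distances in mm; floor is the
    synthesized floor created at startup.  Pixels within floor_tol of the
    floor are set to FAR, then each column keeps its minimum (nearest) value.
    """
    cols = len(depth[0])
    profile = [FAR] * cols
    for r, row in enumerate(depth):
        for c, d in enumerate(row):
            if abs(d - floor[r][c]) < floor_tol:
                d = FAR  # floor pixel: not an obstacle
            profile[c] = min(profile[c], d)
    return profile

def best_direction(profile, min_clear=1000):
    """Return the column at the center of the widest clear run.

    A column is "clear" when its profile distance is at least min_clear mm;
    the widest contiguous run of clear columns is the widest projected
    open space, and its center is the chosen heading.
    """
    best_start, best_len, start = 0, 0, None
    for c, d in enumerate(profile + [0]):  # sentinel closes the last run
        if d >= min_clear:
            if start is None:
                start = c
        elif start is not None:
            if c - start > best_len:
                best_start, best_len = start, c - start
            start = None
    return best_start + best_len // 2
```

In the real sample, the sonar and IR readings would be folded into the profile (taking the minimum of each sensor's projected distance and the depth-derived value) before the widest-gap search runs.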

    Hope that helps.

    Gershon

    • Proposed as answer by Gershon Parent Thursday, May 24, 2012 8:00 PM
    • Marked as answer by Fatoma90 Friday, May 25, 2012 8:52 PM
    Thursday, May 24, 2012 8:00 PM

All replies

  • Hi,

    I am not aware of any scientific papers behind those samples. What made you think there are any?

Are you referring to documentation, perhaps? Those samples are described in the book "Professional Microsoft Robotics Developer Studio" by Kyle Johns and Trevor Taylor.

    Friday, May 18, 2012 1:02 AM
    Moderator
Hi, and thanks for the reply.

    Yes, I want them for documentation. That's right!

    I know this book, but are you sure I will find material that will help with the documentation? What I found only describes how to make the simulation work.

    Thanks again for the help.

    Friday, May 18, 2012 9:42 PM
It depends on what you are looking for. If it's just to make those samples work, then the book describes that, but it does not go in depth into many of the details.

Can you explain what you are trying to accomplish? That will help in understanding how best to help you.

    Monday, May 21, 2012 8:35 AM
    Moderator
  • Hi,

I need to use those examples in my graduation project, so I need to explain in the project documentation how the samples perform object detection and skin detection, and how the robot avoids obstacles in its way.

    Thanks

    Wednesday, May 23, 2012 6:23 PM