Preprocessing depth data and dogs

  • Question

  • Hi, 
    It may be a redundant question, but it is not clear to me whether it is possible to preprocess the depth data before it is processed by the skeleton generator. I want to do the following:
    The sensor is mounted on a robot, which knows the room for the most part, so we can subtract those known parts from the point cloud. We will also have a vague idea of where a human will be (overhead cameras, etc.), so there is no need to process the entire room. The reason I want to do this is that the robot will probably carry 3 Kinect sensors and will have to handle various tasks, so we will need all the processing power we can save. Also, from the sensors' output we could create one big point cloud instead of processing 3 small ones. 
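    To illustrate the kind of preprocessing I mean, here is a rough sketch (hypothetical names, not SDK calls), assuming each depth frame arrives as a 2D NumPy array in millimetres and we have a prior background scan of the mostly static room:

    ```python
    import numpy as np

    def preprocess_depth(frame, background, roi, tolerance=50):
        """Zero out known room geometry and crop to a region of interest.

        frame, background: 2D uint16 depth images in mm.
        roi: (row_start, row_end, col_start, col_end) from the overhead cameras.
        """
        out = frame.copy()
        # Suppress pixels that match the known background within a tolerance.
        static = np.abs(frame.astype(np.int32) - background.astype(np.int32)) < tolerance
        out[static] = 0
        # Keep only the region where a person is expected to be.
        keep = np.zeros_like(out, dtype=bool)
        r0, r1, c0, c1 = roi
        keep[r0:r1, c0:c1] = True
        out[~keep] = 0
        return out
    ```

    The idea being that the (much cheaper) subtraction and cropping run first, so the expensive skeleton tracking only ever sees the small remaining region.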

    My other question is a bit related: the robot will encounter dogs in the lab where it will operate (an ethology lab), and we want it to be able to recognize dogs to a certain extent. Is it possible to fiddle with the skeleton generator's parameters? I'm not sure how the algorithm works, though I suspect it expects users to have certain body proportions: legs longer than arms, an upright posture, etc. It could probably find a skeleton if it assumed the subject were a person on all fours with short legs :).  

    Thanks for the help
    Wednesday, June 13, 2012 8:42 PM