
Output very noisy, ideas?

Oct 23, 2012 at 8:19 PM

I'm currently starting to use the framework heavily for a school project, and I have to say, my first impressions are very good! The code is very clean, nicely structured, and has well-defined interfaces.

My only problem so far is that the output is very noisy: the detected points change rapidly. I understand that some of this is inevitable, but has anybody thought about ways to handle it? Maybe some kind of smoothing?


Oct 24, 2012 at 6:53 PM
Edited Oct 24, 2012 at 6:54 PM

Hi Piedone

It helps to roll up your sleeves. You can also try increasing the FramesForNewFingerPoint setting (in HandDataSourceSettings).
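Conceptually, a setting like FramesForNewFingerPoint acts as a per-point debounce: a candidate fingertip is only reported once it has been seen for that many consecutive frames, so short-lived noise never makes it into the output. Here is a generic sketch of that idea (not Candescent's actual implementation; the class and method names are made up for illustration):

```python
class FingerPointDebouncer:
    """Only accept a candidate finger point after it has been seen for
    `frames_for_new_point` consecutive frames -- a generic sketch of what a
    FramesForNewFingerPoint-style setting could do."""

    def __init__(self, frames_for_new_point=5):
        self.frames_for_new_point = frames_for_new_point
        self.seen_counts = {}  # candidate id -> consecutive frames seen

    def update(self, candidate_ids):
        """Feed the candidate ids detected in one frame; return the ids
        that have been stable for enough frames to report."""
        # Increment counters for candidates present in this frame and
        # drop counters for candidates that disappeared (resetting them).
        self.seen_counts = {
            cid: self.seen_counts.get(cid, 0) + 1 for cid in candidate_ids
        }
        return [
            cid for cid, count in self.seen_counts.items()
            if count >= self.frames_for_new_point
        ]
```

With a threshold of 3, a point that flickers in and out never reaches the threshold, while a steady point is reported from its third frame onward.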

The depth data the Kinect provides is quite noisy. Smoothing it would probably help, but that's not something that's done quickly.
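As an illustration of the kind of smoothing meant here (this is not something the framework provides), one could run an exponential moving average over each tracked point:

```python
def smooth_point(previous, current, alpha=0.3):
    """Exponentially smooth a 2D point. A lower alpha gives smoother but
    laggier output. `previous` is the last smoothed value (None on the
    first frame); `current` is the raw detection."""
    if previous is None:
        return current
    px, py = previous
    cx, cy = current
    # Move a fraction `alpha` of the way from the old value to the new one.
    return (px + alpha * (cx - px), py + alpha * (cy - py))
```

Applied per frame to each fingertip, this damps jitter at the cost of a little latency.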

- Stefan

Oct 24, 2012 at 10:32 PM


Thank you very much! You nailed it: my sweater's sleeves accounted for a bunch of the errors. FramesForNewFingerPoint and FramesForDiscontinuedFingerPoint do reduce the noise to an extent.

Do you advise experimenting with the other options, too, even in ClusterDataSourceSettings? In particular, I'm curious what MaximumDepthThreshold means. Is it possible with this option (and other similar ones) to extend the depth range in which hands are recognized? Currently there seems to be only a ca. 20 cm depth interval where hands are detected at all, and maybe a 10 cm interval where the fingers are properly recognized.

Oct 25, 2012 at 6:41 PM

Yes, Minimum- and MaximumDepthThreshold control the depth segment that is searched for hands and fingers. The default values are 500 to 800 if you use OpenNI or the Kinect for Windows hardware; it's 800 to 1000 or so for the Kinect SDK with the Kinect for Xbox.
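In effect, such thresholds mask the depth image before any clustering happens: only pixels inside the configured segment are considered. A minimal sketch of that idea (a generic illustration with depths in millimetres, not the library's code):

```python
def segment_depth(depth_mm, min_threshold=500, max_threshold=800):
    """Keep only pixels whose depth (in mm) lies inside the segment that
    is searched for hands; everything else is zeroed out, mimicking what
    Minimum-/MaximumDepthThreshold select."""
    return [
        d if min_threshold <= d <= max_threshold else 0
        for d in depth_mm
    ]
```

Widening the [min, max] segment enlarges the volume in which hands can be found, though detection quality at greater distances is still limited by the sensor's resolution.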

You can also play with the other settings, but I'm not sure you'll find better values. I once wanted to collect a lot of test data and search for the best parameters algorithmically, but I didn't have enough time.

Oct 26, 2012 at 1:49 PM

Thank you, I'll experiment with the depth thresholds, because I find the detection depth interval too narrow.

Oct 26, 2012 at 10:59 PM

Also, on second thought: I see that a lot of the noise also comes from Candescent's algorithm detecting fingers and hands where they clearly aren't (like on my stomach). However, the Kinect SDK's built-in skeletal tracking gives you hand joints that are quite accurate and apparently not bothered by the same interference that forces Candescent to detect a misplaced hand (I tried). Wouldn't it be possible to use the SDK's data to throw away detections that are far off?

Oct 28, 2012 at 9:36 AM

The problem is that the skeleton tracking algorithm works at 2 meters or farther, while the finger detection requires more detail and only works up to 1.5 meters at most. So with the current resolution, combining the two is not an option. I'm not sure whether the newer Kinect SDK offers modes that track only the upper body.

OpenNI has a hand tracking feature which I'm using to do what you describe: see Hand Tracking (NITE) in the samples project.

Oct 29, 2012 at 1:25 PM

I see, thanks! For now I've implemented a simple filter: if there are more than five fingers, the longest ones (the ones farthest from the averaged hand center, i.e. the average of the hand position and the palm position) are purged.
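The filter described above could look roughly like this (a sketch reconstructed from the description, not the poster's actual code; the function name and point representation are made up):

```python
import math

def purge_extra_fingers(fingers, hand_position, palm_position, max_fingers=5):
    """If more than `max_fingers` finger points were detected, drop the
    ones farthest from the averaged hand center (the midpoint of the hand
    position and the palm position), keeping the closest `max_fingers`."""
    center = (
        (hand_position[0] + palm_position[0]) / 2,
        (hand_position[1] + palm_position[1]) / 2,
    )
    if len(fingers) <= max_fingers:
        return list(fingers)
    # Sort by distance from the averaged center and keep the closest ones.
    return sorted(fingers, key=lambda f: math.dist(f, center))[:max_fingers]
```

A spurious detection far from the hand (e.g. on the torso) ends up with the greatest distance from the center and is the first to be discarded.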