
Re: [Video] Alternative Kinect Control (using depth images)

Posted: Tue Jul 08, 2014 5:36 am
by omwdunkley
Veerachart wrote:Some more investigation showed me that screencasting uses considerable resources and causes some delays. Without it (and a few modifications to the numpy matrix manipulations - not sure how much those help), I experience less delay; most of the time there is no problem, but sometimes it just comes out of nowhere.

About the detection size: in my case I use the Kinect to control the flie from take-off onward. To make it detectable, I have to put a box on the floor as the take-off stand, so that the difference from the background is large enough. Also, in my case the detected size is smaller while the flie is on the box, because the part closest to the box differs too little from it to be detected. Setting a wider range for the detection size already solved this problem (or at least seems to). (I also found a small problem with part of my lab's wall: it somehow reflects the IR light oddly, causing a lot of noise blobs around the same size as the flie, so they get detected. Anyway, I solved this by using partitions to block those materials, and everything is peaceful again :) )

Changing from freenect to OpenNI also works well; you just need to change the rosdep in the manifest file.
Thanks Veer for your comments! I'm going to recode most of it in C++ to get rid of that latency. I just moved to the US and don't have a Kinect yet, so it might take some days for me to get started. I couldn't get it working in IR-light-rich environments either. I wanted to measure the detection accuracy using a motion capture system... but no chance. Too much IR reflection. Let me know if you think any new features should be added before I start implementing the C++ version.

Cheers

Re: [Video] Alternative Kinect Control (using depth images)

Posted: Thu Jul 17, 2014 2:48 am
by Veerachart
Sorry for taking so long to reply. I've been busy with my research lately.
tobias wrote:Any pictures of your setup? Would be really nice to see!
Actually it's just a normal setup, with the Kinect on a shelf (1.41 m above the floor) and an empty area. As I read from the code, the detected object has to be farther from the background than the "Foreground Distance" value, so placing it on the floor gives only around 5 cm of distance, which is too small. So I add a small box on the floor and put the flie on top; that way those pixels take the background value of the far-away floor, or maybe the wall.
Box.jpg
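To illustrate why the box helps: a minimal sketch of the kind of foreground test described above, using hypothetical names (`FOREGROUND_DIST`, `foreground_mask` are my stand-ins, not the actual variables in trackManager.py):

```python
import numpy as np

FOREGROUND_DIST = 0.25  # meters; stand-in for the GUI's "Foreground Distance" value

def foreground_mask(depth, background, min_dist=FOREGROUND_DIST):
    """Mark pixels at least min_dist closer to the camera than the background."""
    diff = background - depth        # positive where something sits in front
    return diff > min_dist           # small differences (e.g. ~5 cm) are rejected

background = np.full((4, 4), 2.0)    # flat floor 2 m from the Kinect
depth = background.copy()
depth[1, 1] = 1.95                   # flie directly on the floor: 5 cm difference
depth[2, 2] = 1.60                   # flie raised on a box: 40 cm difference
mask = foreground_mask(depth, background)
```

Only the pixel with the flie on the box survives the threshold, which matches the behaviour described: on the floor the 5 cm difference is filtered out, on the box the difference is large enough.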
omwdunkley wrote:Thanks Veer for your comments! I'm going to recode most of it in C++ to get rid of that latency. I just moved to the US and don't have a Kinect yet, so it might take some days for me to get started. I couldn't get it working in IR-light-rich environments either. I wanted to measure the detection accuracy using a motion capture system... but no chance. Too much IR reflection. Let me know if you think any new features should be added before I start implementing the C++ version.

Cheers
Now I think I experience less latency when just controlling the flie. But since I'm also doing some processing of the depth images from the Kinect, when these two come together the delay occurs and makes the quadrotor unstable (growing oscillation in the z direction until it leaves the frame).

Some more findings:
I found that the quad's position is requested at 100 Hz, in the getErrorToGoal function which is called from the new_joydata callback. However, as I understand it, this position is only updated at around 30 Hz (the Kinect frame rate). Looking at the results also shows the position updating about every 3 samples (sometimes there are 2-3 consecutive changes). My friend suggested I do some profiling on the code to see where the load is. (I may try that later, but for now I may just change the control loop rate to 30 Hz and see if it works.)
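A sketch of the rate-matching idea, in plain Python rather than the actual driver code (`run_control_loop` and its arguments are made-up names for illustration):

```python
import time

CONTROL_RATE = 30.0  # Hz, matched to the Kinect frame rate instead of 100 Hz

def run_control_loop(get_position, send_command, iterations, rate_hz=CONTROL_RATE):
    """Issue one command per period; polling faster than the sensor (e.g. 100 Hz)
    would just re-process the same stale position sample ~3 times."""
    period = 1.0 / rate_hz
    sent = []
    for _ in range(iterations):
        start = time.time()
        sent.append(send_command(get_position()))
        # sleep off the remainder of the period (rospy.Rate does this
        # bookkeeping in a real ROS node)
        time.sleep(max(0.0, period - (time.time() - start)))
    return sent

samples = iter([0.10, 0.12, 0.15])  # fake 30 Hz position estimates
commands = run_control_loop(lambda: next(samples), lambda p: p, iterations=3)
```

Each loop iteration consumes exactly one fresh sample, so no command is computed from a repeated measurement.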

I also found that at line 471 in trackManager.py, the output of the clip function is not kept in img. It should be

Code: Select all

np.clip(img, 0.0, 1.0, out=img)
This also helps reduce false detection of the background.
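For anyone unfamiliar with the `out=` parameter, a tiny standalone demonstration of why the original line had no effect:

```python
import numpy as np

img = np.array([-0.5, 0.2, 1.7])

# Without out=, the clipped result is only returned; img itself is unchanged:
clipped = np.clip(img, 0.0, 1.0)

# With out=img, the clipping is written back in place, so img is updated:
np.clip(img, 0.0, 1.0, out=img)
```

If the return value is discarded, as in the original line 471, the first form silently does nothing to `img`.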
Also, I think line 522 is not necessary.

Does DistFunc in the CrazyflieDriver do anything? I can't find it used in the get_command function.

I also did an accuracy test for the Kinect, by detecting the flie placed at different known points.
Truth.png
The points are 0.5 m apart, the big square is the Kinect position at (-2, 0, 1.41), and each quadrotor point is 0.651 m above the floor.

However I got something like this
Top view
View from the Kinect
There seems to be no problem along the y axis, and the curved x axis is not so unexpected (the curvature of the depth field), but it needs to be fixed anyway.
The error along the z axis is quite large (both bias and spread), but so far the control in z is the best of the 3 axes, maybe because it is only the thrust that affects it.

To be honest, I never synced my code with yours after the first time I pulled it. Now I don't even dare to pull because there would be so many differences. :D

Re: [Video] Alternative Kinect Control (using depth images)

Posted: Fri Jul 18, 2014 7:20 am
by Veerachart
Well, I tried 30 Hz and things seem to go well. Even manual control through the GUI seems better! ;)

Re: [Video] Alternative Kinect Control (using depth images)

Posted: Mon Jul 21, 2014 2:59 am
by omwdunkley
Veerachart wrote:Well, I tried 30 Hz and things seem to go well. Even manual control through the GUI seems better! ;)

Really like your work on this - awesome! Thanks for sharing.
So you run the joystick polling frequency at 30 Hz and it's better?
What sort of PID values are you using?

Cheers!

Re: [Video] Alternative Kinect Control (using depth images)

Posted: Mon Jul 21, 2014 3:58 am
by omwdunkley
Veerachart wrote: There seems to be no problem along the y axis, and the curved x axis is not so unexpected (the curvature of the depth field), but it needs to be fixed anyway.
The error along the z axis is quite large (both bias and spread), but so far the control in z is the best of the 3 axes, maybe because it is only the thrust that affects it.
Thanks for pointing out the "curvature". You can fix it by updating the following function to:

Code: Select all

def projectRay(self, u, v, d):
    (x, y, z) = self.cameraModel.projectPixelTo3dRay((u, v))
    return [x*d/z, y*d/z, d]
Pretty stupid bug:
The c++ version documentation states: "In 1.4.x, the vector has z = 1.0. Previously, this function returned a unit vector"
The python version doc string reads: "Returns the unit vector which passes from the camera center to through rectified pixel"

I just assumed they were the same, my bad :)

Anyway - I hope that fixes it. If you could do a quick test, that would be awesome!
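For reference, the geometry behind the fix: since the Python projectPixelTo3dRay returns a unit ray (x, y, z), the 3D point at measured depth d has to be the ray scaled by d/z so that its z-component equals d. A standalone sketch with a made-up ray (`project_ray` is just an illustrative name):

```python
import math

def project_ray(x, y, z, d):
    """Scale a unit ray (x, y, z) so its z-component equals the depth reading d."""
    return [x * d / z, y * d / z, d]

# A unit ray 30 degrees off the optical axis, in the x-z plane:
ray = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
point = project_ray(*ray, 2.0)  # depth reading of 2 m
# Naively scaling the unit ray by d would instead give z = 2*cos(30) ~ 1.73 m,
# an error that grows toward the image edges -- the observed "curvature".
```

With the d/z scaling, the recovered point has z exactly equal to the depth reading, independent of how far off-axis the pixel is.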

Re: [Video] Alternative Kinect Control (using depth images)

Posted: Tue Jul 22, 2014 3:45 am
by Veerachart
omwdunkley wrote: Really like your work on this - awesome! Thanks for sharing.
So you run the joystick polling frequency at 30 Hz and it's better?
What sort of PID values are you using?

Cheers!
Yes, I just changed the frequency of the joy publisher to 30 Hz. It might be even better if it were triggered by the arrival of the Kinect's data instead.
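The "triggered by the Kinect's data" idea sketched in plain Python (class and method names are made up; in ROS this logic would live in the depth-image subscriber callback):

```python
class FrameDrivenController:
    """Compute one control command per incoming Kinect frame, instead of
    polling on a fixed-rate timer that can drift out of phase with the sensor."""

    def __init__(self, send_command):
        self.send_command = send_command
        self.frames_seen = 0

    def on_depth_frame(self, position_estimate):
        # Called once per Kinect frame (~30 Hz), e.g. from a ROS subscriber
        # callback, so every command uses a fresh position estimate.
        self.frames_seen += 1
        self.send_command(position_estimate)

sent = []
ctrl = FrameDrivenController(sent.append)
for pos in [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0)]:
    ctrl.on_depth_frame(pos)
```

This removes the sampling mismatch entirely: exactly one command per frame, no repeated or skipped measurements.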
omwdunkley wrote:
Thanks for pointing out the "curvature". You can fix it by updating the following function to:

Code: Select all

def projectRay(self, u, v, d):
    (x, y, z) = self.cameraModel.projectPixelTo3dRay((u, v))
    return [x*d/z, y*d/z, d]
Pretty stupid bug:
The c++ version documentation states: "In 1.4.x, the vector has z = 1.0. Previously, this function returned a unit vector"
The python version doc string reads: "Returns the unit vector which passes from the camera center to through rectified pixel"

I just assumed they were the same, my bad :)

Anyway - I hope that fixes it. If you could do a quick test, that would be awesome!
I think I have tried printing the z value in that function, and it's not exactly one: close to one in the middle, but a bit less when the quadrotor is to the left/right. Anyway, thanks for the suggested fix. I will try it later, maybe much later. I'm working on a paper and some reports for my coursework here, and next month I'm going on vacation back to my home country.

Re: [Video] Alternative Kinect Control (using depth images)

Posted: Wed Aug 20, 2014 6:59 pm
by desertdiver
Hi, I've been trying to set this up with ROS but ran into problems. First of all, I have Ubuntu 14.04 and installed ROS Indigo.
I didn't understand where to clone the crazyflieROS repository. I made a catkin workspace (~/catkin_ws) and cloned crazyflieROS into ~/catkin_ws/src. Then I ran catkin_make in the workspace. Is this how it's supposed to be done, or do I have to do it some other way?

edit:
When I try to run

Code: Select all

rosrun crazyflieROS driver.py
I get this error message:

Code: Select all

    from crazyflieROS import msg as msgCF
ImportError: cannot import name msg

Re: [Video] Alternative Kinect Control (using depth images)

Posted: Thu Aug 21, 2014 8:02 am
by Veerachart
desertdiver wrote:Hi, I've been trying to set this up with ROS but ran into problems. First of all, I have Ubuntu 14.04 and installed ROS Indigo.
I didn't understand where to clone the crazyflieROS repository. I made a catkin workspace (~/catkin_ws) and cloned crazyflieROS into ~/catkin_ws/src. Then I ran catkin_make in the workspace. Is this how it's supposed to be done, or do I have to do it some other way?

edit:
When I try to run

Code: Select all

rosrun crazyflieROS driver.py
I get this error message:

Code: Select all

    from crazyflieROS import msg as msgCF
ImportError: cannot import name msg
I'm not sure if there has been any update, but the package I cloned is not catkinized; it still uses the older rosbuild file system.

Try going into the cloned directory (crazyflieROS) and run

Code: Select all

rosmake
You may need to source the setup file after that. Then, try running it again.
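For concreteness, the rosbuild-style sequence might look like this (paths are examples; adjust to wherever you cloned the package):

```shell
cd ~/catkin_ws/src/crazyflieROS   # the directory you cloned into
rosmake                           # rosbuild-style build, not catkin_make
source ~/.bashrc                  # or re-source your ROS setup file
rosrun crazyflieROS driver.py
```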

Re: [Video] Alternative Kinect Control (using depth images)

Posted: Thu Aug 21, 2014 12:27 pm
by desertdiver
Thanks =] I used rosmake to build it and it worked. I made a new folder ~/indigo_ws/sandbox (like the tutorial on the ROS wiki says) and git cloned there.
Now I'm having some trouble installing the freenect part (in the end I want to try using the Kinect to control the flie). I built freenect_stack just like I did crazyflieROS, but I get this error:

Code: Select all

ResourceNotFound: rgbd_launch
Also, I don't have a PS3 controller, only an Xbox one. Is there any way to get that to work?

Re: [Video] Alternative Kinect Control (using depth images)

Posted: Fri Aug 22, 2014 8:08 pm
by DerP0LE
desertdiver wrote:Thanks =] I used rosmake to build it and it worked. I made a new folder ~/indigo_ws/sandbox (like the tutorial on the ROS wiki says) and git cloned there.
Now I'm having some trouble installing the freenect part (in the end I want to try using the Kinect to control the flie). I built freenect_stack just like I did crazyflieROS, but I get this error:

Code: Select all

ResourceNotFound: rgbd_launch
Also, I don't have a PS3 controller, only an Xbox one. Is there any way to get that to work?
I'm at exactly the same point. I'm working with an Xbox controller too, and got the same error :(