How do I get started with Point Grey Bumblebee 2/XB3 Stereo Cameras?

Chances are you stumbled across this article having bought a stereo camera from Point Grey, either the XB3 or the Bumblebee 2, and are perhaps a little overwhelmed about where to start.

I personally think starting out with a Point Grey stereo camera gave me a good introduction to 3D image processing. I’m still only a beginner, but I was able to jump straight to point clouds by using the capabilities of the supplied Triclops library (without having to worry about rectification or stereo processing myself). Eventually, as I get more time, I plan to play around with camera calibration and write my own stereo algorithm.

Triclops SDK? FlyCapture SDK? Which one do I use? Why isn’t there just a single Point Grey SDK? If that is what you’ve been asking too, let’s simplify things:

Disclaimer: Some of the content in this article is drawn from the Point Grey website and their SDK manuals/documentation.

The comments in this article reflect my own experience and might not necessarily be technically sound, so please use them with discretion and a grain of salt!

Connecting your Hardware 

Connect your Bumblebee camera to your PC/laptop via its FireWire port. Once connected successfully, you will see a green status LED (located between the lenses) turn on.

Figure 1: FireWire port on the Bumblebee 2

Figure 2: Bumblebee 2 with the green status LED

Connecting to your Software

You will have to install both the FlyCapture and Triclops SDKs, matching your operating system and the camera you have. Head over to Point Grey’s downloads section (http://www.ptgrey.com/support/downloads/downloads_admin/index.aspx) and create a login. From there it is as simple as selecting your:

  • Camera Family
  • Model Number
  • Operating System

You will get a few results; head over to the Software tab and download the version that matches your system (32- or 64-bit). Next, install both SDKs.

triclopsDemo.exe

Now that you’ve connected your camera, you might be wondering how to get images out of the setup. The quickest way initially is to use one of the supplied demo programs, triclopsDemo.exe. You can access it by going to Start -> Programs -> Point Grey Research -> Triclops Stereo Vision SDK -> Examples -> triclopsDemo

If you have no camera connected, this is what you should be seeing:

Figure 3: No camera connected for triclopsDemo.exe

Assuming you followed the above steps correctly, you should see your device in the list as follows:

Figure 4: Successfully connected a camera for triclopsDemo.exe

As you can see from the above figure (Figure 4), our camera was detected as a connected device. The dialog shows your model type (BB2-08S2M in our case), the resolution, and whether the camera is color or B/W. The last letter of the model number indicates this: M means monochrome (B/W), C means the camera records color. Hit OK!

Figure 5: Left/Right images from triclopsDemo.exe

Now you are streaming raw images from the left and right cameras simultaneously. To generate a 3D view of the scene, go to Window -> New 3D Window. The figure below (Figure 6) shows the 3D point cloud generated by the software:

Figure 6: Point Cloud generated

As you can see, the point cloud is quite poor and not a good representation of the scene. I am merely a beginner in stereo vision and still trying to figure out how to get a better image, but I think this comes down to a few factors:

  • The point cloud is generated using a simple algorithm called SAD (Sum of Absolute Differences) to accomplish the matching between the two images. While SAD is fast and well suited to real-time applications, there are other (slower but more accurate) ways of accomplishing this. More information at: http://www.ptgrey.com/support/kb/index.asp?a=4&q=48&ST= (a minimal sketch of SAD matching follows this list).
  • The scene I have imaged lacks the texture/variation needed to make strong matches.
  • The default stereo parameters are not tuned for what I am currently imaging.
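
To make the SAD idea concrete, here is a minimal, illustrative sketch of block matching with a sum-of-absolute-differences cost. This is not Triclops’ implementation; the window size and disparity range are arbitrary assumptions, and a real implementation would be heavily optimized.

    #include <climits>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Illustrative SAD block matching over a pair of rectified grayscale
    // images stored row-major. For each left-image pixel, slide a window
    // along the same row of the right image and keep the disparity with
    // the lowest sum of absolute differences.
    std::vector<int> sadDisparity(const std::vector<uint8_t>& left,
                                  const std::vector<uint8_t>& right,
                                  int width, int height,
                                  int window = 5, int maxDisparity = 64)
    {
        std::vector<int> disparity(width * height, 0);
        const int half = window / 2;
        for (int y = half; y < height - half; ++y) {
            for (int x = half; x < width - half; ++x) {
                int bestDisp = 0;
                long bestCost = LONG_MAX;
                // Each candidate disparity shifts the window leftwards in
                // the right image (the cameras sit side by side).
                for (int d = 0; d <= maxDisparity && x - d >= half; ++d) {
                    long cost = 0;
                    for (int dy = -half; dy <= half; ++dy)
                        for (int dx = -half; dx <= half; ++dx)
                            cost += std::abs(left[(y + dy) * width + (x + dx)] -
                                             right[(y + dy) * width + (x - d + dx)]);
                    if (cost < bestCost) { bestCost = cost; bestDisp = d; }
                }
                disparity[y * width + x] = bestDisp;
            }
        }
        return disparity;
    }

Once a pixel’s disparity d is known, its depth follows by triangulation as Z = f·B/d, where f is the focal length in pixels and B is the stereo baseline; that is essentially what the 3D window is visualizing. The sketch also shows why texture matters: in a flat, uniform region every candidate window produces nearly the same cost, so the minimum is meaningless.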
I’ve noticed significantly improved results from varying the stereo parameters. Here is how to go about it: on the top left of the screen you will notice a button called Stereo Params. Click it and a window will pop up with a multitude of adjustable settings:

Figure 7: Stereo Parameters

The variations below are just suggestions; the best values will depend on what you are imaging:

  1. I noticed that removing the validation masks (e.g. Surface, Texture, Back-forth and Uniqueness) gives a lot more information (although each pixel is now less reliable). More information on what each mask does can be found at: http://www.ptgrey.com/support/kb/index.asp?a=4&q=53
  2. I noticed that increasing the Stereo Mask and Edge Mask size to the maximum (23 and 11 respectively) helped give a better image.

The figure below shows the improved 3D image generated after modifying the above settings. As can be seen, the lower view has significantly more detail and texture information than the one above (with the default settings).

Figure 8: After adjusting the Stereo Parameters (removing validation masks)

  3. The next thing that makes a difference is adjusting the Disparity Range. Again, the right values depend a lot on what you are imaging. For my set-up, I found that keeping the minimum disparity at 0 and changing the maximum disparity to 240 gave me the best results.

Figure 9: After adjusting the Stereo Parameters (disparity range)

As the figure above shows, setting an appropriate disparity range yields a noticeably better image.
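
For reference, everything we just did in the Stereo Params window can also be done programmatically through the Triclops API. Below is a sketch of how I’d expect those settings to map onto API calls; the calibration filename is a placeholder, and exact function names/signatures can vary between Triclops SDK versions, so check the headers and examples that ship with your install.

    #include <cstdio>
    #include <triclops.h>

    int main()
    {
        TriclopsContext context;

        // Load a calibration/context file ("bumblebee.cal" is a placeholder;
        // a context can also be obtained from the camera itself, as the
        // grabstereo example shows).
        if (triclopsGetDefaultContextFromFile(&context, (char*)"bumblebee.cal")
            != TriclopsErrorOk)
        {
            printf("Failed to load calibration file\n");
            return 1;
        }

        // Step 1: turn off the validation masks, equivalent to unchecking
        // Surface, Texture, Back-forth and Uniqueness in triclopsDemo.
        triclopsSetSurfaceValidation(context, 0);
        triclopsSetTextureValidation(context, 0);
        triclopsSetBackForthValidation(context, 0);
        triclopsSetUniquenessValidation(context, 0);

        // Step 2: maximum stereo and edge mask sizes (23 and 11).
        triclopsSetStereoMask(context, 23);
        triclopsSetEdgeCorrelation(context, 1);
        triclopsSetEdgeMask(context, 11);

        // Step 3: the disparity range that worked for my scene.
        triclopsSetDisparity(context, 0, 240);

        triclopsDestroyContext(context);
        return 0;
    }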

The demo program is easy to use in the sense that it lets you see different views with minimal effort. For example, you can also generate a rectified, disparity or edge image in real time.

All you have to do is:

  • select the window where you see the left/right image to make it active
  • change the Raw option in the drop-down to whichever view you are interested in
  • optionally, change the resolution the camera captures at; note that a higher resolution will mean slower processing times

Figure 10 below shows the effect of rectification on the right image; it can clearly be seen that the algorithm corrected for the distortion introduced by the camera (see the curvature of the top of the mug).

  • Top image:  non-rectified
  • Bottom image: rectified

Figure 10: Rectified image

Figure 11 below shows the edge image of the object.

Figure 11: Edge image

Next, if for whatever reason you wish to save your camera calibration file, the triclopsDemo program offers that option too. You can do so by going to: Save/Load -> Calibration File -> Save Current

Just specify the filename and a .cal file will be generated that holds all the camera parameters as well as the stereo settings that were used (edge mask size, stereo mask size, resolution, baseline, focal length etc.). More details about the contents of the file can be found here: http://www.ptgrey.com/support/kb/index.asp?a=4&q=243&ST=calibration+file 
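
If you would rather do the same thing from code, the Triclops API appears to include a matching write call. Here is a minimal sketch, assuming the function name below from my reading of triclops.h (verify it against your SDK version; both filenames are placeholders):

    #include <triclops.h>

    int main()
    {
        TriclopsContext context;

        // Load an existing context, then write it back out; the saved .cal
        // file captures the calibration plus the stereo settings currently
        // set on the context.
        triclopsGetDefaultContextFromFile(&context, (char*)"input.cal");
        triclopsWriteCurrentContextToFile(context, (char*)"saved.cal");
        triclopsDestroyContext(context);
        return 0;
    }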

C++ Development 

Now that you’ve got your hands a little dirty and understand how things work, you’d probably like to create something of your own in terms of image/point-cloud manipulation.

If you are wondering how to go about this, my suggestion is to start with the supplied example programs. They are written in C++, so that is the language I’d recommend starting out with. Working through them teaches you what the FlyCapture and Triclops libraries are practically capable of doing, and the examples are very well commented.

  • Browse to: Program Files -> Point Grey Research -> Triclops Stereo Vision SDK -> src -> examples -> win32
  • I would recommend starting out with grabstereo and stereoto3dpoints. Both example programs are an excellent source for understanding how the image-grabbing, rectification, depth-map and 2D-to-3D conversion API calls are meant to be used (a stripped-down sketch follows below).
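
To give a flavour of what stereoto3dpoints walks you through, here is a stripped-down sketch of its processing flow, with the FlyCapture grabbing code omitted (grabstereo covers that part). It follows the classic Triclops C API as I remember it from the examples; names and details may differ between SDK versions, so treat the shipped examples as the authority.

    #include <cstdio>
    #include <triclops.h>

    // Sketch of the core stereoto3dpoints flow: rectify the raw input,
    // run stereo, read back the disparity image, and convert each valid
    // disparity to a 3D point.
    void processFrame(TriclopsContext context, TriclopsInput* input)
    {
        // 16-bit (subpixel) disparities; needed for triclopsGetImage16.
        triclopsSetSubpixelInterpolation(context, 1);

        // Rectification corrects lens distortion and aligns the epipolar
        // lines (the effect shown in Figure 10).
        triclopsRectify(context, input);

        // Run the SAD correlation to produce a disparity image.
        triclopsStereo(context);

        TriclopsImage16 depth;
        triclopsGetImage16(context, TriImg16_DISPARITY, TriCam_REFERENCE, &depth);

        // Walk the disparity image; rowinc is in bytes, data is 16-bit.
        for (int i = 0; i < depth.nrows; ++i)
        {
            unsigned short* row = depth.data + i * depth.rowinc / 2;
            for (int j = 0; j < depth.ncols; ++j)
            {
                unsigned short disparity = row[j];
                if (disparity < 0xFF00)  // values >= 0xFF00 mark invalid pixels
                {
                    float x, y, z;
                    triclopsRCD16ToXYZ(context, i, j, disparity, &x, &y, &z);
                    printf("%f %f %f\n", x, y, z);  // one 3D point per line
                }
            }
        }
    }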

Good luck!
