Webcam Performance with Stage3D – Part I (desktop/mobile)

This article is all about performance when using a webcam with Stage3D, e.g. for augmented reality applications. If Stage3D is not necessary, you should consider using StageVideo instead. Unfortunately, StageVideo and Stage3D can’t be layered today, and I doubt that this will change anytime soon.
edit [2013-06-14]: Today at the Flash Player roadmap online meeting, product manager Bill Howard said that it is absolutely doable and that he might get it back on the list.

Problems

Using the camera in Stage3D projects can be quite something, especially when it comes to performance, and if you have no experience with GPUs it might give you some headaches. The vast number of supported resolutions and the variety in CPU/GPU power make this quite difficult, especially when targeting mobile devices.

First of all, as you might know, textures have to be a power of two in width and height. The image of your camera most likely isn’t, and I guess you rarely want it to be. The next thing is that you have to upload it. Uploading textures is a bottleneck: once the texture is on the GPU everything is fine, but uploading takes quite some time. You know that if you like games; even after all those years, the preloaders when entering a map are still there. The continuous uploading is the real problem here.
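
To see what the power-of-two rule means in numbers, here is a tiny helper (my own sketch, the name is arbitrary) that computes the texture size a given camera resolution forces on you:

    // Smallest power of two that can hold the given dimension.
    function nextPowerOfTwo(value:int):int
    {
        var result:int = 1;
        while (result < value)
            result <<= 1;
        return result;
    }

    // A 1280x720 camera frame already needs a 2048x1024 texture:
    trace(nextPowerOfTwo(1280), nextPowerOfTwo(720)); // 2048 1024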

Possible Methods

If you read Jackson Dunstan’s blog, you might know his upload speed tester. The results suggest using BitmapData (with alpha, which a webcam image doesn’t have) or ByteArrays. So there are several options that come to mind for getting your camera image onto the GPU. I tested the following (a minimal sketch of the plain BitmapData path follows the list):

  • camera.drawToBitmapData() w/ alpha -> uploadFromBitmapData
  • camera.drawToBitmapData() w/o alpha -> uploadFromBitmapData
  • camera.drawToByteArray() -> uploadFromByteArray
  • bitmapData.draw() w/ alpha -> uploadFromBitmapData
  • bitmapData.draw() w/o alpha -> uploadFromBitmapData
  • bitmapData.draw() w/ alpha -> copyToByteArray -> uploadFromByteArray
  • bitmapData.draw() w/o alpha -> copyToByteArray -> uploadFromByteArray
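
To make the first path concrete, here is a minimal sketch of capture and upload. It assumes AIR (for camera.drawToBitmapData()); the 640×480 mode, the 1024×512 texture and all variable names are placeholders for illustration, not my actual test code:

    import flash.display.BitmapData;
    import flash.display3D.Context3D;
    import flash.display3D.Context3DTextureFormat;
    import flash.display3D.textures.Texture;
    import flash.geom.Point;
    import flash.media.Camera;

    // context3D is assumed to come from your usual Stage3D setup.
    var camera:Camera = Camera.getCamera();
    camera.setMode(640, 480, 30);

    // Created once and reused for every frame (see "Reuse!" below).
    var camBmd:BitmapData = new BitmapData(camera.width, camera.height, false, 0);
    var texBmd:BitmapData = new BitmapData(1024, 512, false, 0);    // next power of two
    var cameraTexture:Texture = context3D.createTexture(
        1024, 512, Context3DTextureFormat.BGRA, false);
    const POINT_ZERO:Point = new Point();

    function uploadCameraFrame():void
    {
        camera.drawToBitmapData(camBmd);                     // grab the current frame (AIR only)
        texBmd.copyPixels(camBmd, camBmd.rect, POINT_ZERO);  // pad it into the 2^n bitmap
        cameraTexture.uploadFromBitmapData(texBmd);          // the actual GPU upload
    }

The ByteArray variants only swap the last step for a copy into a ByteArray plus uploadFromByteArray().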

Test Results

Here is what I got, capturing the camera image and uploading it to the GPU. So far, I have tested my MBP (twice), a Samsung Galaxy S III and an iPhone 4. More may follow. The tests do not include rendering time. For more precision, I used 1000 loops on desktop and 10 on mobile.

I hope you can see that I tested every possible resolution, each four times (without cropping, with horizontal cropping, with vertical cropping, and with both). Unfortunately, in the JS library I’ve chosen it seems to be impossible to add multiline text at the bottom of the bars, so I could only label them with the webcam resolution and couldn’t add the resulting texture size. If someone knows a better library with grouping, stacking etc., just give me a hint. Until then, I hope you can read it anyway.


[Interactive chart: capture + upload time per webcam resolution (uncropped, cropped horizontally, cropped vertically, both) for each tested device]

Conclusion

As you can see, there is no clear winner. All I can say is that it is very difficult to make a good choice without testing on the real device. Maybe it is best to run such a test automatically the first time the app starts. But regardless of the chosen upload method, keep the following things in mind.
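
A rough sketch of such a first-run test, reusing uploadCameraFrame() from the sketch above (getTimer()-based, loop count arbitrary):

    import flash.utils.getTimer;

    // Average milliseconds for one capture + upload via the BitmapData path.
    function benchmarkBitmapDataPath(loops:int = 10):Number
    {
        var start:int = getTimer();
        for (var i:int = 0; i < loops; i++)
        {
            uploadCameraFrame();
        }
        return (getTimer() - start) / loops;
    }

You could run one of these per upload method at first start and simply remember the fastest one.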

Dimensions

The dimensions of webcam images and textures differ very much in most cases. For example, if your camera has a maximum resolution of 1280×720 (921,600 pixels) which you would like to use, the texture would have to be 2048×1024 (2,097,152 pixels). That’s 228 % of your webcam data. The excess can of course be cropped away via the UV data, but it’s a huge waste of uploaded bytes and therefore bad for performance.

While this might be no problem on most desktop computers, it’s crucial to avoid such unnecessary uploads on mobile. Cropping the camera image beforehand helps you avoid wasted uploads. Use a smaller texture like 1024×1024 (1,048,576 pixels) and you lose 20 % of your camera width, which seems like a lot but is only 10 % on each side if you use a matrix to keep the image centered. That way your upload is only about 13 % larger than the original camera frame. The catch is that you might have to use draw() with a matrix to keep the center in its place, or translate the bitmapData. Another option is to use copyToByteArray() and set the rectangle to the center. (Note: with this method you cannot avoid cropping, unless your camera image is exactly the size of the texture.)
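
Here is a hedged sketch of both options (my own illustration, assuming a 1280×720 camera; video is a flash.media.Video sized 1280×720 with the camera attached, and squareBmd/squareTexture/halfTexture are reused 1024×1024 and 1024×512 objects created elsewhere):

    import flash.geom.Matrix;
    import flash.geom.Rectangle;
    import flash.utils.ByteArray;
    import flash.utils.Endian;

    // Option A: draw the camera's Video object into a 1024x1024 BitmapData,
    // shifted left by half the overlap so the crop stays centered.
    var cropMatrix:Matrix = new Matrix();
    cropMatrix.translate(-(1280 - 1024) / 2, 0);        // -128 px = 10 % off each side
    squareBmd.draw(video, cropMatrix);                  // squareBmd: reused 1024x1024 BitmapData
    squareTexture.uploadFromBitmapData(squareBmd);

    // Option B (AIR): copy a centered, texture-sized region straight off the camera,
    // so every uploaded byte is camera data (at the cost of cropping the height too).
    var centerRect:Rectangle = new Rectangle((1280 - 1024) / 2, (720 - 512) / 2, 1024, 512);
    var frameBytes:ByteArray = new ByteArray();
    frameBytes.endian = Endian.LITTLE_ENDIAN;           // assumption: matches the BGRA texture layout
    camera.copyToByteArray(centerRect, frameBytes);
    halfTexture.uploadFromByteArray(frameBytes, 0);     // halfTexture: reused 1024x512 BGRA texture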

Even better: if you choose a camera resolution of 1024×576 and a 1024×512 texture, you’ll surely lose some camera quality and 12 % of the height, but on the other hand 100 % of your upload will be useful. And since your image will only be cropped vertically, using a matrix will most likely be unnecessary.

Choose the resolution very wisely.

Framerate

The 24 fps standard, used in the film industry for decades, works great thanks to the motion blur that results from exposure time. Most webcams have framerates of 15–30 fps. That is still quite enough for a sense of motion.

On the other hand, adding realistic motion blur to a rendered scene is quite a heavy task, even today. That’s why most games these days target a framerate of 60 fps. The problem is obvious: if your main loop updates the camera image on every frame at 60 fps, regardless of whether it has changed or not (as away3d’s standard WebcamTexture class does), you are wasting a lot of performance. Make use of Event.VIDEO_FRAME instead.
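
A minimal sketch of that idea, reusing uploadCameraFrame() from above (the render loop itself is whatever your framework gives you):

    import flash.events.Event;

    var frameDirty:Boolean = false;

    // Dispatched by the Camera only when a new frame is actually available.
    camera.addEventListener(Event.VIDEO_FRAME, onVideoFrame);

    function onVideoFrame(e:Event):void
    {
        frameDirty = true;
    }

    function onEnterFrame(e:Event):void    // your 60 fps main loop
    {
        if (frameDirty)
        {
            frameDirty = false;
            uploadCameraFrame();           // draw + upload, as sketched earlier
        }
        // render the scene every frame as usual
    }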

Given that your app targets 60 fps and your webcam delivers 15 fps, your app would have a difficult job every 4th frame. So you could even think of splitting the necessary steps (drawing the camera image, possibly some preprocessing, uploading) and executing them frame by frame. That way your image will be updated 16–33 ms later, but it also evens out the workload and may give you some desperately needed extra milliseconds every frame.
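
Sketched with the same placeholder names as above, the split could look like this:

    // Frame n: CPU-side capture only; frame n+1: GPU upload only.
    var uploadPending:Boolean = false;

    function onEnterFrameSplit(e:Event):void
    {
        if (uploadPending)
        {
            cameraTexture.uploadFromBitmapData(texBmd);
            uploadPending = false;
        }
        else if (frameDirty)
        {
            frameDirty = false;
            camera.drawToBitmapData(camBmd);
            texBmd.copyPixels(camBmd, camBmd.rect, POINT_ZERO);
            uploadPending = true;
        }
        // render every frame; the camera texture lags by at most one frame
    }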

Taking this thought further, it might not be the worst idea to split the image into more than one texture. This way you can split the work even further, which is the only way to make the upload process somewhat “asynchronous”. (But before you do, check out the next article about the new AS3 RectangleTexture. ;))
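
For example (again just a sketch with placeholder names; squareBmd is the 1024×1024 source bitmap from the cropping sketch above), a 1024×1024 camera area could live in two 1024×512 textures, with only one half touched per call:

    var currentHalf:int = 0;

    function uploadOneHalf():void
    {
        var sourceRect:Rectangle = new Rectangle(0, currentHalf * 512, 1024, 512);
        halfBmd.copyPixels(squareBmd, sourceRect, POINT_ZERO);   // halfBmd: reused 1024x512 BitmapData
        var target:Texture = (currentHalf == 0) ? topTexture : bottomTexture;
        target.uploadFromBitmapData(halfBmd);
        currentHalf = 1 - currentHalf;    // the two halves are rendered as two quads
    }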

Mipmapping

GPU textures often use mipmaps: several versions of the same image, each half the size of the previous one. This has huge benefits when rendering in 3D space or rendering scaled images. But if you are creating an augmented reality application, your webcam image will most likely be a fixed background without any transformations. Creating mipmaps would be very counterproductive, since this task is done on the CPU and it increases the amount of data that has to be uploaded to the GPU. Make sure to avoid it if possible.
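
In practice that just means uploading mip level 0 only and telling the AGAL sampler not to expect mipmaps (a sketch; the sampler flags are the relevant part):

    cameraTexture.uploadFromBitmapData(texBmd, 0);   // level 0 only, no mip chain

    // Fragment shader: "mipnone" so the missing levels are never sampled.
    var fragmentAgal:String =
        "tex ft0, v0, fs0 <2d,linear,mipnone,clamp>\n" +
        "mov oc, ft0";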

And of course: Reuse!

It’s obvious, but always reuse your objects, regardless of whether it’s a ByteArray, a BitmapData or a texture on your GPU! In some frameworks this might be a little bit tricky, but not reusing them will slow down your app tremendously.

So far so good. I hope this is of help to someone. Don’t hesitate to drop a comment if you have questions or suggestions.

PS: I couldn’t find any data about the performance of Eugene Zatepyakin’s CaptureDevice ANE. If you have some facts about it, please let me know.
