Mission 1: Integrate Blender and GTK3
The first requirement of this project was to integrate Blender and GTK3 without any compiled Python modules or modifications to the Blender C source code. To achieve this, only Python and ctypes are used. The GTK3 ctypes wrappers were generated by Rpythonic; GObject Introspection is not used or required.
Rpythonic is also used to generate wrappers for the Blender C API. This allows us to control the Blender main loop from Python and integrate it with our own custom main loop.
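The shape of that combined loop can be sketched as follows. Both step functions here are hypothetical stand-ins: in the real code, gtk_iteration would be Gtk's non-blocking main_iteration and blender_step the wrapped Blender draw/update entry point.

```python
# a minimal sketch of a cooperative main loop; the two step functions
# below are placeholders for the real wrapped Gtk and Blender calls
calls = []

def gtk_iteration():      # stand-in for gtk.main_iteration(block=False)
    calls.append("gtk")

def blender_step():       # stand-in for the wrapped Blender draw/update call
    calls.append("blender")

def mainloop(iterations):
    for _ in range(iterations):
        gtk_iteration()   # let Gtk process any pending events
        blender_step()    # then advance Blender one frame

mainloop(3)
print(calls)
```

Because our code owns the loop, each iteration can also do extra work, such as the Region polling described below.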
The XEmbed protocol is used to embed Blender's window into a GtkSocket. The Blender window is placed on a GtkFixed canvas as the bottom layer, and Gtk widgets can be drawn on top of it by wrapping them in a GtkEventBox. The Properties sub-window is replaced with a Gtk widget by checking the location and size of the Area->Region struct on each main-loop iteration. The bpy API exposes the width and height of this struct, but not its location in window space, so the ctypes wrapper to Blender (libblender) is used to read the location:
# the address of any bpy object's underlying C struct can be read with as_pointer()
addr = reg.as_pointer()
ptr = ctypes.POINTER(ctypes.c_void_p).from_address( addr )
# wrap the raw pointer in the Rpythonic-generated ARegion struct
creg = libblender.ARegion( pointer=ctypes.pointer(ptr), cast=True )
rect = creg.winrct  # region rectangle in Blender window space
print( rect.xmin, rect.xmax, rect.ymin, rect.ymax )
Mission 2: Integrate Blender and Webcamera Streaming
OpenCV provides interesting effects and an interface for reading data from a webcam via the highgui.QueryFrame function. QueryFrame is slow and blocks until the next frame is ready, which is not something we can allow to stall the main loop. Python multi-threading works very well when combined with ctypes: the blocking foreign call releases the GIL and allows other threads to continue. We only need to lock and release around the call that writes the final image data to the pixel buffer that Gtk displays.
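A minimal sketch of that capture thread is shown below. The grab_frame callable is a hypothetical stand-in for the blocking QueryFrame call; in the real code it would be the ctypes call that releases the GIL while waiting for the camera.

```python
import threading
import time

class WebcamThread(threading.Thread):
    """Runs a blocking capture call off the main loop; only the final
    buffer write is guarded by the lock."""
    def __init__(self, grab_frame):
        super().__init__(daemon=True)
        self.grab_frame = grab_frame   # hypothetical QueryFrame stand-in
        self.lock = threading.Lock()
        self.buffer = None
        self.running = True

    def run(self):
        while self.running:
            frame = self.grab_frame()  # blocks; a real ctypes call releases the GIL here
            with self.lock:            # lock held only for the buffer swap
                self.buffer = frame

    def latest(self):
        with self.lock:
            return self.buffer

# simulated blocking capture: produces a new "frame" every few milliseconds
def fake_grab():
    time.sleep(0.01)
    return b"frame-data"

cam = WebcamThread(fake_grab)
cam.start()
time.sleep(0.1)            # let the thread produce at least one frame
print(cam.latest())
cam.running = False
```

The main loop then only ever calls latest(), so a slow camera can drop frames without ever blocking Gtk or Blender.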
To display the webcam buffer as a texture in the Blender viewport, OpenGL is used directly. I first tried BGL, but its glTexImage2D requires a BGL.Buffer object that wraps a Python list, and converting the raw image data to a Python list would be another speed hit. Instead, I used a pure ctypes wrapper to OpenGL, where raw data pointers can be passed directly. To get OpenGL over ctypes working inside Blender, I found that the library cannot be loaded from an external DLL; the trick is to call ctypes.CDLL("") with an empty string, which forces ctypes to resolve the symbols from the current process.
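The same current-process trick can be demonstrated with any symbol already loaded into the running interpreter, for example getpid from the C runtime. This sketch uses ctypes.CDLL(None), the spelling documented for POSIX systems; both it and the empty string request a handle to the current process rather than to an external library:

```python
import ctypes
import os

# handle to the current process: every symbol already loaded into it
# (libc here; inside Blender, the GL symbols) is resolvable through it
this_process = ctypes.CDLL(None)

this_process.getpid.restype = ctypes.c_int
pid = this_process.getpid()
print(pid)  # matches os.getpid(), proving we are calling into our own process
```

Inside Blender the GL library is already linked into the executable, which is why resolving symbols from the process works while loading an external copy of the DLL does not.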
The main loop checks the 'webcam' bpy Image each frame to see if its OpenGL "bindcode" is active, meaning that Blender has cached the image for display in the viewport. Using the bindcode, the texture can then be updated dynamically:
img = bpy.data.images['webcam']
bind = img.bindcode
if bind:  # non-zero once Blender has uploaded the image to OpenGL
    glBindTexture(GL_TEXTURE_2D, bind)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)
    ptr = self.webcam.preview_image.imageData  # raw pixel pointer from OpenCV
    glTexImage2D(
        GL_TEXTURE_2D, 0, GL_RGB, 320, 240, 0,
        GL_RGB, GL_UNSIGNED_BYTE, ptr )