Friday, December 16, 2011

Pyppet2 - Biped Solver - Part1



http://pyppet.googlecode.com/files/pyppet-1.9.2f.tar.bz2


def update(self, context):
    # requires module-level: import math; from random import random
    Ragdoll.update(self, context)

    # tilt: average absolute X/Y lean of the pelvis shadow, in degrees
    loc,rot,scl = self.pelvis.shadow.matrix_world.decompose()
    euler = rot.to_euler()
    tilt = sum([abs(math.degrees(euler.x)), abs(math.degrees(euler.y))]) / 2.0

    # move the pelvis shadow to the midpoint between pelvis and head
    x1,y1,z1 = self.pelvis.get_location()
    current_pelvis_height = z1
    x2,y2,z2 = self.head.get_location()
    x = (x1+x2)/2.0
    y = (y1+y2)/2.0
    ob = self.pelvis.shadow
    ob.location = (x,y,0)
    loc,rot,scale = ob.matrix_world.decompose()
    euler = rot.to_euler()

    # left foot target: one unit to the side of the facing direction;
    # retarget randomly on roughly ten percent of updates
    rad = euler.z - math.radians(90)
    cx = math.sin( -rad )
    cy = math.cos( -rad )
    if not self.left_foot_loc or random() > 0.9:
        v = self.left_foot.shadow.location
        v.x = x+cx
        v.y = y+cy
        v.z = .0
        self.left_foot_loc = v

    # right foot target: mirrored to the other side
    rad = euler.z + math.radians(90)
    cx = math.sin( -rad )
    cy = math.cos( -rad )
    if not self.right_foot_loc or random() > 0.9:
        v = self.right_foot.shadow.location
        v.x = x+cx
        v.y = y+cy
        v.z = .0
        self.right_foot_loc = v

The update method measures the tilt of the pelvis and moves the foot targets to either side of the head/pelvis midpoint.



    ## falling ##
    if current_pelvis_height < self.pelvis.rest_height * (1.0-self.standing_height_threshold):

        for target in self.foot_solver_targets: # reduce foot step force
            target.weight *= 0.9

        for target in self.hand_solver_targets: # increase hand plant force
            if target.weight < self.when_falling_hand_target_goal_weight:
                target.weight += 1

        for hand in (self.left_hand, self.right_hand):

            # curl the head forward while falling
            self.head.add_local_torque( -self.when_falling_head_curl, 0, 0 )

            # pull the hands down harder the more the body is tilted
            u = self.when_falling_pull_hands_down_by_tilt_factor * tilt
            hand.add_force( 0,0, -u )

            x,y,z = hand.get_location()
            if z < 0.1:
                # hand has reached the ground: lift the head to sit up
                self.head.add_force(
                    0,
                    0,
                    tilt * self.when_falling_and_hands_down_lift_head_by_tilt_factor
                )
                hand.add_local_force( 0, -10, 0 )
            else:
                hand.add_local_force( 0, 3, 0 )

If falling, pull the hands down to break the fall, then try to sit up.




    else: # standing
        for target in self.foot_solver_targets: # increase foot step force
            if target.weight < self.when_standing_foot_target_goal_weight:
                target.weight += 1

        for target in self.hand_solver_targets: # reduce hand plant force
            target.weight *= 0.9

        ## lift feet ##
        head_lift = self.when_standing_head_lift

        foot = self.left_foot
        v1 = foot.get_location().copy()
        if v1.z < 0.1: self.head.add_force( 0,0, head_lift )
        v2 = self.left_foot_loc.copy()
        v1.z = .0; v2.z = .0
        dist = (v1 - v2).length # horizontal distance to the foot target
        if dist > 0.5: # far from target: lift the foot to step
            foot.add_force( 0, 0, self.when_standing_foot_step_far_lift )
            #self.pelvis.add_force( 0,0, -head_lift*0.25 )
        elif dist < 0.25: # near target: pull the foot down to plant it
            foot.add_force( 0, 0, -self.when_standing_foot_step_near_pull )
            #self.head.add_force( 0,0, head_lift )

        foot = self.right_foot
        v1 = foot.get_location().copy()
        if v1.z < 0.1: self.head.add_force( 0,0, head_lift )
        v2 = self.right_foot_loc.copy()
        v1.z = .0; v2.z = .0
        dist = (v1 - v2).length
        if dist > 0.5:
            foot.add_force( 0, 0, self.when_standing_foot_step_far_lift )
            #self.pelvis.add_force( 0,0, -head_lift*0.25 )
        elif dist < 0.25:
            foot.add_force( 0, 0, -self.when_standing_foot_step_near_pull )
            #self.head.add_force( 0,0, head_lift )




If standing, take a step toward the next foot target, and lift the head while a foot is touching the ground.

Saturday, December 10, 2011

pyppet2 - breakable ragdoll



Using ODE joint feedback, the stress on each joint can be measured; if it exceeds the user-defined threshold, Pyppet will break the joint.
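
A minimal sketch of the idea, written against ODE's C joint-feedback API over plain ctypes (the library name, the double-precision dReal, and using the force magnitude on the first body as the stress measure are all assumptions here; Pyppet itself goes through its Rpythonic-generated ODE wrapper):

import ctypes, math

ode = ctypes.CDLL('libode.so')  # assumed library name
dReal = ctypes.c_double         # assumes ODE was built with double precision
dVector3 = dReal * 4            # ODE pads vectors to four components

class dJointFeedback( ctypes.Structure ):
    # force and torque applied to each of the two connected bodies
    _fields_ = [
        ('f1', dVector3), ('t1', dVector3),
        ('f2', dVector3), ('t2', dVector3),
    ]

ode.dJointGetFeedback.restype = ctypes.POINTER(dJointFeedback)

def attach_feedback( joint ):
    # ODE fills this struct on every world step once it is attached;
    # keep a reference to it alive for as long as the joint exists
    fb = dJointFeedback()
    ode.dJointSetFeedback( joint, ctypes.byref(fb) )
    return fb

def break_if_overstressed( joint, fb, threshold ):
    # stress measure: magnitude of the force applied to the first body
    stress = math.sqrt( fb.f1[0]**2 + fb.f1[1]**2 + fb.f1[2]**2 )
    if stress > threshold:
        ode.dJointDestroy( joint )
        return True
    return False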

Joint Damage Test


Tuesday, December 6, 2011

RPythonic 0.4.4



Rpythonic has reached another major milestone: it can now generate PyPy-compatible ctypes bindings to C libraries. The two screenshots show OpenCV, Gtk3, and libfreenect running under PyPy 1.7.
The Rpythonic download contains pre-generated wrappers for the following libraries (a usage sketch follows the list):

  • SDL

  • OpenAL

  • OpenCV

  • Emokit

  • Fluidsynth

  • GTK3

  • libfreenect

  • ODE

  • OpenGL

  • OpenJPEG

  • Wiiuse
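
Because the generated wrappers are plain ctypes, the same module runs unchanged under CPython and PyPy. A hypothetical usage sketch (the module name SDL and the flat C-style function names are assumptions about what the generated bindings look like):

import SDL  # hypothetical pre-generated wrapper module

SDL.SDL_Init( SDL.SDL_INIT_VIDEO )
screen = SDL.SDL_SetVideoMode( 640, 480, 32, 0 )  # SDL 1.2-era API
SDL.SDL_Quit()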

Monday, December 5, 2011

Pyppet2



Changes from Pyppet1 to Pyppet2:



  • ported from Blender 2.49 to Blender 2.6

  • WMD replaced with R. Pavlik's enhanced Wiiuse

  • Pygame replaced with ctypes-SDL

  • PyGTK2 replaced with ctypes-GTK3

  • PyODE replaced with ctypes-ODE





New in Pyppet2:



  • Kinect streaming using libfreenect

  • Web-camera streaming using OpenCV

  • Drag'n'Drop device config

  • Multi-threaded



ODE Joint Physics




http://pyppet.googlecode.com/files/pyppet-1.9.0.tar.bz2

Thursday, December 1, 2011

Pyppet2 Update



Mission1: Integrate Blender and GTK3


The first requirement of this project was to integrate Blender and GTK3 without any compiled Python modules or modifications to the Blender C source code. To achieve this, only Python and ctypes are used. The GTK3 ctypes wrappers were generated by Rpythonic. GObject Introspection is not used or required.
Rpythonic is also used to generate wrappers for the Blender C API. This allows us to control the Blender main-loop from Python and integrate it with our own custom main-loop.
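
The combined loop then looks something like the following sketch, assuming a gtk module that mirrors the GTK3 C API over ctypes, with blender_step() standing in for whatever single-iteration entry point the libblender wrapper exposes:

while True:
    # drain pending GTK events without blocking
    while gtk.gtk_events_pending():
        gtk.gtk_main_iteration()
    # then let Blender run one iteration of its own main loop
    blender_step()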

XEmbed:


The XEmbed protocol is used to embed Blender's window into a GtkSocket. The Blender window is placed on a GtkFixed canvas as the bottom layer, and Gtk widgets can be drawn on top of it by wrapping them in a GtkEventBox. The Properties sub-window is replaced with a Gtk widget by checking the location and size of the Area->Region object on each main-loop iteration. Bpy provides the width and height of this struct, but not its location in window space. The ctypes wrapper to Blender (libblender) is used to get the location:

# the pointer of any bpy object can be read with "as_pointer()" #
# reg is the bpy.types.Region of the Properties sub-window
addr = reg.as_pointer()
ptr = ctypes.POINTER(ctypes.c_void_p).from_address( addr )
creg = libblender.ARegion( pointer=ctypes.pointer(ptr), cast=True )
rect = creg.winrct # region rectangle in Blender window space
print( rect.xmin, rect.xmax, rect.ymin, rect.ymax )
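
As for the embedding itself, the flow is roughly the following (a sketch using the C-style GTK3 calls a ctypes wrapper would expose; blender_xid, the X window id of the Blender window, is obtained elsewhere and assumed here):

window = gtk.gtk_window_new( gtk.GTK_WINDOW_TOPLEVEL )
canvas = gtk.gtk_fixed_new()
gtk.gtk_container_add( window, canvas )

sock = gtk.gtk_socket_new()
gtk.gtk_fixed_put( canvas, sock, 0, 0 )    # Blender sits on the bottom layer
gtk.gtk_widget_show_all( window )
gtk.gtk_socket_add_id( sock, blender_xid ) # XEmbed the Blender window

# widgets drawn over Blender are wrapped in a GtkEventBox
overlay = gtk.gtk_event_box_new()
gtk.gtk_fixed_put( canvas, overlay, 10, 10 )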

Mission2: Integrate Blender and Webcamera Streaming


OpenCV provides interesting effects and an interface for reading data from a web-camera through the highgui.QueryFrame function. QueryFrame is slow and blocks until the next frame is ready, which is not something we can allow to stall the main loop. Python multi-threading works very well when combined with ctypes, because a blocking ctypes call releases the GIL and lets other threads continue. We only need to lock and release around the call that writes the final image data to the pixel buffer that Gtk displays.
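
The capture-thread pattern looks roughly like this (a minimal sketch; the capture object is created elsewhere, and everything apart from highgui.QueryFrame is my own scaffolding):

import threading

class WebcamThread( threading.Thread ):
    def __init__(self, capture):
        threading.Thread.__init__(self)
        self.capture = capture  # capture device, created elsewhere
        self.lock = threading.Lock()
        self.frame = None
        self.active = True

    def run(self):
        while self.active:
            # blocks until the next frame is ready, but the ctypes
            # call releases the GIL, so the main loop keeps running
            frame = highgui.QueryFrame( self.capture )
            self.lock.acquire()
            self.frame = frame  # publish the frame for the Gtk side
            self.lock.release()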
To display the webcam buffer as a texture in the Blender viewport, OpenGL is used directly. I first tried BGL, but it seems that its glTexImage2D requires a BGL.Buffer object, which wraps a Python list; converting the raw image data to a Python list would be another speed hit. Instead, I used a pure ctypes wrapper to OpenGL, where raw data pointers can be used directly. To get OpenGL over ctypes to work in Blender, I found that the library cannot be loaded as an external DLL; the magic trick is to call ctypes.CDLL("") with an empty string, which forces ctypes to load the library from the current process.
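
In code the trick is only a couple of lines (a sketch; ctypes' default int argument handling is fine for the GL calls used below):

import ctypes

# an empty string makes ctypes resolve symbols from the current
# process, where Blender has already loaded OpenGL
gl = ctypes.CDLL('')
glBindTexture = gl.glBindTexture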
The main loop checks the 'webcam' bpy Image each frame to see if its OpenGL "bindcode" is active, meaning that Blender has cached the image for display in the view. Using the bindcode, the texture can be updated dynamically:

img = bpy.data.images['webcam']
if img.bindcode:
    bind = img.bindcode
    glBindTexture(GL_TEXTURE_2D, bind)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)
    ptr = self.webcam.preview_image.imageData
    glTexImage2D(
        GL_TEXTURE_2D, 0, GL_RGB, 320, 240, 0,
        GL_RGB, GL_UNSIGNED_BYTE, ptr
    )


http://pyppet.googlecode.com/files/pyppet-dec2-2011.tar.bz2