Sunday, August 22, 2010

iTaSC Pole Vector Free Rigging

Blender now has a very powerful set of IK solvers: Spline IK, and simulation-based IK (iTaSC). When iTaSC is used in `Simulation` mode it becomes very difficult to make an IK chain flip. This could greatly simplify the rigging and animation process if it lets us remove the pole-vector object normally used to control knee/elbow tilt. One of the problems with a pole-vector is that it imposes a non-ideal flipping point: an arm or leg has a large range of possible motion, so flipping becomes unavoidable past 180 degrees. Animators may have to resort to scripting to control the pole-vector or bone roll directly, but these hacks complicate animation blending in the NLA or are cumbersome to set up and use.

Using iTaSC, no hacks are required to reach almost 360 degrees of rotation. Two IK constraints, one for position and the other for rotation, are all that is needed for full elbow or knee tilt while keeping the optimal flipping point at 360 degrees. The interface does not lead us directly to this simple solution; the old pole-vector option remains available even though it is not very useful combined with iTaSC. The correct steps are as follows:
1. From the Armature panel -> iTaSC options -> select `Simulation` mode (keep all the default options)
2. Go into pose-mode, create a position IK for the ankle, do not assign a pole-vector target
3. Pick the thigh bone, create another IK constraint, pick the ankle IK handle as the target
4. Set the constraint to use rotation, and only in the Y, and turn off position.
5. If your model faces -Y (the default in Blender) your leg will have flipped; rotate the IK handle 180 degrees on the Y axis.
6. Test the leg by moving the handle; you should see it flip when moving forward and back.
This is because the X rotation of the handle defines where the monopole is located that iTaSC will flip around.
By relocating the monopole to 180 degrees from the leg's natural rest position,
we gain almost 360 degrees of flip-free movement.
7. Adjust the rotation of the handle in X to minimize the flipping;
somewhere between 45 and 90 degrees should give the best results.
8. After you have found the optimal X rotation of the handle, go back to the constraint options and set the weight to 0.5;
this helps keep the response of the IK chain smooth when nearing 360 degrees.

Torso and Shoulder Setup:
Shoulder rigging and weighting is another area where many techniques have been tried, but most of these hacks are hard to set up, so the shoulder becomes a serious bottleneck when creating digital characters. The interaction of all the ribs, muscles, clavicle, and shoulder blades is far too complex to be modeled by a simple single-chain spine with arms branching directly from it. The following approach instead uses IK to model the shoulder blades and stretch-to constraints to model the shoulders and chest. This setup can only work with bone-to-bone constraints because of the way Blender computes its DAG node graph; in other words, you will get cyclic dependency errors trying to use bone-to-empty constraints.

. Even if bone tips are perfectly snapped to the heads of their stretch-to targets, they will still move slightly when the constraint is applied.
. IK bone stretch scales opposite to the stretch-to constraint: IK stretch gets larger as distance grows, while stretch-to maintains volume and thins.
. The stretch-to constraint, when using a bone target, allows for slider control between head and tail. IK lacks this option.
. iTaSC treats bone stiffness slightly differently from legacy IK; a stiffness of 0.99 is considered absolute and locked.
. iTaSC bone stretch is disabled if two IK constraints have overlapping bone influence.
. iTaSC is not as compatible with other constraints as the legacy IK was. (locked-track was compatible with legacy IK)
. iTaSC has its own set of internal IK types: copy-pose and limit distance, with more coming in the future!
. Too many bones in an IK chain with stiffness of 0.99 can lead to jerking and unstable response.
. Bone shake due to overlapping IK is usually solved by lowering the constraint weights; good ranges are from 0.5 to 1.0.

Sunday, August 15, 2010

RPython callbacks from C

It is possible to define an RPython function that is passed to a C function as a callback argument. The RPython function will then be invoked from C whenever an event triggers it. PyAudio never implemented the PortAudio async API, because it requires callback functionality, and that may take a great amount of effort through the Python C API. From RPython it is rather easy once we know the steps:
1. import llhelper from pypy.rpython.annlowlevel
2. define the function signature (argument types, return type) using lltype.FuncType
3. create a pointer to the signature lltype.Ptr( mysignature )
4. pass the pointer as an argument to rffi.llexternal when defining the wrapper
5. within the entrypoint use llhelper:
mycallback = llhelper(lltype.Ptr( mysignature ), my_rpython_function )
6. pass mycallback as an argument to the external C function
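The pattern in the steps above may be easier to see in its CPython ctypes form, which is only an analogue for illustration (ctypes is not RPython): declare a function-pointer type for the signature, wrap a Python function in it, and pass the result to a C function, here libc's qsort.

```python
import ctypes

libc = ctypes.CDLL(None)  # dlopen the C library

# signature: int cmp(const int*, const int*)
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def py_cmp(a, b):
    # dereference the two int pointers and compare
    return a[0] - b[0]

arr = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort.restype = None
# qsort calls back into py_cmp through the CMPFUNC wrapper
libc.qsort(arr, len(arr), ctypes.sizeof(ctypes.c_int), CMPFUNC(py_cmp))
print(list(arr))  # [1, 2, 3, 4, 5]
```

The llhelper/FuncType pair plays the same role in RPython that CFUNCTYPE plays here.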

This excerpt is from RpyPortAudio 0.3:

mult16bits = 2**15
def stream_callback( ibuf, obuf, frameCount, info, flags, user ):
    t = time.time()
    for i in range(512):
        raw = ibuf[i]
        x = rffi.cast(lltype.Signed, raw)
        samp = math.sin( t + (float(i)/512.0) ) * mult16bits
        samp = int(samp) + x
        obuf[i] = rffi.cast(rffi.INT, samp)
    return rffi.cast(rffi.INT, 0) # 0=continue, 1=complete, 2=abort

StreamCallbackTimeInfoPtr.TO.become( StreamCallbackTimeInfo )
stream_cb_signature = lltype.FuncType([RBufferPtr, RBufferPtr, rffi.INT, StreamCallbackTimeInfoPtr, rffi.INT, rffi.VOIDP], rffi.INT)
stream_callback_ptr = lltype.Ptr( stream_cb_signature )

OpenDefaultStream = rffi.llexternal( 'Pa_OpenDefaultStream',
    [StreamRefPtr,          # PaStream**
     rffi.INT,              # numInputChannels
     rffi.INT,              # numOutputChannels
     rffi.INT,              # sampleFormat
     rffi.DOUBLE,           # double sampleRate
     rffi.INT,              # unsigned long framesPerBuffer
     stream_callback_ptr,   # PaStreamCallback *streamCallback
     rffi.VOIDP],           # void *userData
    rffi.INT )              # return PaError

def entrypoint():
    streamptr = lltype.malloc(StreamRefPtr.TO, 1, flavor='raw') # must have length 1
    userdata = lltype.nullptr(rffi.VOIDP.TO)
    callback = llhelper(lltype.Ptr( stream_cb_signature ), stream_callback)
    ok = OpenDefaultStream( streamptr, 2, 2, Int16, 22050.0, 512, callback, userdata )
    stream = streamptr[0]
    startok = StartStream( stream )

Wednesday, August 11, 2010

Interfacing RPython with C

The documentation on how to interface RPython with C is rather limited; the extending doc mentions that `MixedModules` using rffi is the most advanced method available, and the rffi document only provides a quick glance at how to use rffi.llexternal. The best place to get started is the source code in rlib/rsdl/, which binds RPython to SDL. What should be clear after reading it is that PyPy provides a very direct and easy-to-understand interface to C, one that should also provide the highest possible performance.

The source code of rffi_platform (found in pypy/rpython/tool) is very interesting: it generates and compiles dynamic C code to get information about the C code we are trying to wrap; for example, if you had used the wrong name in a struct, it will generate an error. Using rffi_platform, different C types can be defined, such as structs and constant integers; these are put into a special container class called CConfig, which is then parsed by rffi_platform.configure(CConfig), and wrappers are returned. After we have the wrappers we must tell any pointers we had previously used in the CConfig that they `become` those objects: MyPointer.TO.become(MyStruct).
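For a rough point of comparison, here is the same idea of declaring a struct layout, sketched with CPython's ctypes instead of rffi_platform; the field names and types below are guesses for illustration, not taken from SDL's headers, and unlike rffi_platform, ctypes simply trusts the declared layout rather than verifying it against compiled C code.

```python
import ctypes

# hypothetical layout loosely mirroring an SDL_JoyAxisEvent-like struct;
# field types here are assumptions, purely for illustration
class JoyAxisEvent(ctypes.Structure):
    _fields_ = [('type',  ctypes.c_ubyte),
                ('which', ctypes.c_ubyte),
                ('axis',  ctypes.c_ubyte),
                ('value', ctypes.c_short)]

ev = JoyAxisEvent(type=2, which=0, axis=1, value=-32768)
print(ev.axis, ev.value)  # 1 -32768
```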

The RSDL that is included in PyPy is incomplete and lacks wrappers for Joystick; the code below wraps SDL Joystick. Full source code is available here.

eci = get_rsdl_compilation_info()
## wrapper for rffi.llexternal just to shorten the call
def external(name, args, result): return rffi.llexternal(name, args, result, compilation_info=eci)

JoystickPtr = lltype.Ptr(lltype.ForwardReference())
JoyAxisEventPtr = lltype.Ptr(lltype.ForwardReference())
JoyBallEventPtr = lltype.Ptr(lltype.ForwardReference())
JoyButtonEventPtr = lltype.Ptr(lltype.ForwardReference())
JoyHatEventPtr = lltype.Ptr(lltype.ForwardReference())

class CConfig:
    _compilation_info_ = eci
    Joystick = platform.Struct('SDL_Joystick', []) # just an ID, struct contains nothing
    # rsdl/ already defines SDL_JOYAXISMOTION, SDL_JOYBALLMOTION, etc..
    JoyAxisEvent = platform.Struct('SDL_JoyAxisEvent',
        [('type', rffi.INT),
         ('which', rffi.INT),
         ('axis', rffi.INT),
         ('value', rffi.INT)])

for name in CONSTS.split():
    name = name.strip()
    if name:
        ci = platform.ConstantInteger('SDL_%s' %name)
        setattr( CConfig, name, ci )


JoystickUpdate = external('SDL_JoystickUpdate', [], lltype.Void)
NumJoysticks = external('SDL_NumJoysticks', [], rffi.INT)
## CCHARP seems to stand for C char pointer ##
JoystickName = external('SDL_JoystickName', [rffi.INT], rffi.CCHARP)
JoystickOpen = external('SDL_JoystickOpen', [rffi.INT], JoystickPtr)
JoystickOpened = external('SDL_JoystickOpened', [rffi.INT], rffi.INT)

JoystickEventState = external('SDL_JoystickEventState', [rffi.INT], rffi.INT)

def handle_event( etype, event ):
    p = rffi.cast( JoyAxisEventPtr, event )
    axis = rffi.getintfield(p, 'c_axis')
    value = rffi.getintfield(p, 'c_value')
    print 'axis: %s value: %s' %(axis, value)

def poll(loops=1000):
    event = lltype.malloc(RSDL.Event, flavor='raw')
    try:
        i = 1
        while i < loops:
            ok = RSDL.PollEvent(event)
            ok = rffi.cast(lltype.Signed, ok)
            assert ok >= 0
            if ok > 0:
                c_type = rffi.getintfield(event, 'c_type')
                handle_event( c_type, event )
            i += 1
    finally:
        lltype.free(event, flavor='raw')

def test():
    num = NumJoysticks(); print 'number of joysticks/gamepads: %s' %num
    JoystickEventState( RSDL.ENABLE )
    if num:
        joy = JoystickOpen( 0 )
        numaxes = JoystickNumAxes( joy ); print 'number of axes: %s' %numaxes
        numbut = JoystickNumButtons( joy ); print 'number of buttons: %s' %numbut

if __name__ == '__main__':
    from pypy.translator.interactive import Translation
    t = Translation( test )
    t.annotate(); t.rtype()
    entrypoint = t.compile_c()

Tuesday, August 10, 2010

RPython Struct

There is no drop-in replacement for the struct module in RPython. In rlib there is an "rstruct" folder, but only "unpack" is implemented, with a warning in the header that it is incomplete; what it apparently lacks is proper endian support. There was never any pure-RPython example of how to implement "pack"; there is, however, one example of how to implement pack/unpack at the interpreter level. Looking at those examples it is clear we need to use the FormatIterator found in rlib/rstruct. The runpack example uses meta-programming to generate tuples for returning different types, but this seems like overkill, at least in the case of OSC where we know exactly what types we are dealing with. The two classes below (RStructUnpacker and RStructPacker) are simple and restricted RPython replacements for the struct module. They can not be used as drop-in replacements; the Packer expects to be packing the same type, and the Unpacker does not return what it has unpacked, instead sorting it into lists by type.
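To make the sorted-by-type behavior concrete, here is a rough sketch of the same idea using CPython's ordinary struct module (plain illustration only, not RPython; the helper name is made up):

```python
import struct

def unpack_by_type(fmt, data):
    # mimic the Unpacker: sort the unpacked values into per-type lists
    # instead of returning them in order
    strings, ints, floats = [], [], []
    for ob in struct.unpack(fmt, data):
        if isinstance(ob, bytes):
            strings.append(ob)
        elif isinstance(ob, int):
            ints.append(ob)
        elif isinstance(ob, float):
            floats.append(ob)
    return strings, ints, floats

data = struct.pack('>i f 4s', 7, 2.5, b'abcd')
print(unpack_by_type('>i f 4s', data))  # ([b'abcd'], [7], [2.5])
```

The caller must then know which list to look in, which is exactly the restriction described above.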

class RStructUnpacker( pypy.rlib.rstruct.formatiterator.FormatIterator ):
    def __init__( self, fmt, data ):
        self.strings = []
        self.ints = []
        self.floats = []
        self.longs = []
        self.input = data
        self.inputpos = 0
        self.interpret( fmt )

    def operate(self, fmtdesc, repetitions):
        if fmtdesc.needcount:
            fmtdesc.unpack(self, repetitions)
        else:
            for i in range(repetitions):
                fmtdesc.unpack(self)
    operate._annspecialcase_ = 'specialize:arg(1)'
    _operate_is_specialized_ = True

    def align(self, mask):
        self.inputpos = (self.inputpos + mask) & ~mask

    def read(self, count):
        end = self.inputpos + count
        if end > len(self.input): raise SyntaxError
        s = self.input[self.inputpos : end]
        self.inputpos = end
        return s

    def appendobj( self, ob ):
        if isinstance(ob, str): self.strings.append( ob )
        elif isinstance(ob, int): self.ints.append( ob )
        elif isinstance(ob, float): self.floats.append( ob )
        #elif isinstance(ob, long): self.longs.append( ob ) # isinstance(ob, long) is not RPython; how do we check for long?

class RStructPacker( pypy.rlib.rstruct.formatiterator.FormatIterator ):
    # not a drop-in replacement for struct.pack, but as close as RPython can get (easily).
    def __init__(self, fmt, strings=[], ints=[], floats=[], longs=[], unicodes=[], uints=[] ):
        self.strings = strings
        self.ints = ints
        self.floats = floats
        self.longs = longs
        self.unicodes = unicodes
        self.uints = uints
        self.args_index = 0
        self.result = [] # list of characters
        self.interpret( fmt )

    def operate(self, fmtdesc, repetitions):
        if fmtdesc.needcount:
            fmtdesc.pack(self, repetitions)
        else:
            for i in range(repetitions):
                fmtdesc.pack(self)
    operate._annspecialcase_ = 'specialize:arg(1)'
    _operate_is_specialized_ = True

    def align(self, mask):
        pad = (-len(self.result)) & mask
        for i in range(pad):
            self.result.append('\x00')

    def finished(self): pass

    def accept_str_arg(self):
        assert self.strings
        a = self.strings[ self.args_index ]
        self.args_index += 1
        return a

    def accept_int_arg(self):
        assert self.ints
        a = self.ints[ self.args_index ]
        self.args_index += 1
        return a

    def accept_float_arg(self):
        assert self.floats
        a = self.floats[ self.args_index ]
        self.args_index += 1
        return a

    def accept_unicode_arg(self):
        assert self.unicodes
        a = self.unicodes[ self.args_index ]
        self.args_index += 1
        return a

    def accept_uint_arg(self):
        assert self.uints
        a = self.uints[ self.args_index ]
        self.args_index += 1
        return a

Saturday, August 7, 2010

Meta-Programming Part2: rbpy Internals

RNA Introspection
Blender's bpy API is generated by RNA introspection, and this information is still preserved at the Python level: each object either has a bl_rna attribute or a get_rna() function that contains all the C-level introspection information. Knowing the names and types of all function arguments and their returns allows for more efficient remote wrappers that do less data marshaling, because they know what type the server is sending them.

Blender objects are not passed directly to the class/function wrapper generator; instead, they are first converted into an intermediate class during introspection because some special-case hacks must be applied. The intermediate class "genRNA" is then passed to the wrapper generator that packages the information and gets it ready for pickling. Pickling saves the user from having to run introspection every time they run their script, and also is the only workaround for transferring data from Python3 back to Python2 (Blender uses Python3 while RPython is a subset of Python2).

When rbpy is launched again as a subprocess, this time from Python2, it loads the introspection pickle and generates the final remote wrapper classes. The __init__ of all objects uses *args so it can take any number of arguments or types. Normally this is not RPython compatible, because a tuple can not be iterated over; to work around this we use additional meta-programming to unroll the *args in an RPython-safe manner, a technique detailed here.

Below is the function that does the job of RNA introspection from a live blender object and generation of the intermediate class.

def reflect_RNA_instance( ob, classpath=None ):
    class genRNA(object):
        if isinstance(ob, bpy.types.Object): type = ob.type
    genRNA.__name__ = sn = rpython_shadow_name( ob, classpath )
    genRNA._class_name_ = sn
    if classpath: genRNA._bpy_class_name_ = classpath
    else: genRNA._bpy_class_name_ = ob.__class__.__module__+'.'+ob.__class__.__name__

    if ob.__class__.__name__ == 'bpy_ops_submodule': # bpy.ops..
        for opname in dir(ob):
            func = getattr( ob, opname )
            rna = func.get_rna()
            args = []
            for prop in rna.properties:
                if prop.identifier == 'rna_type': continue
                args.append( reflect_RNA_prop( prop ) )
            d = lambda:()
            d._argument_info = args
            d._return_info = None
            setattr( genRNA, opname, d )

    elif ob.__class__.__name__ == 'bpy_prop_collection':
        genRNA.length = 0 # these should not be a special case and all GET funcs should assign, can setattr work with nonconcretes?
        genRNA.GET_length = d = lambda:(); d._return_info = int
        d = lambda:(); d._return_info = (str,)
        setattr( genRNA, 'keys', d )
        d = lambda:(); d._return_info = object
        setattr( genRNA, 'get', d )
        d = lambda:(); d._return_info = object
        setattr( genRNA, 'new', d )
        d = lambda:(); d._return_info = (object,)
        setattr( genRNA, 'values', d )

    elif ob.__class__.__name__ == 'vector':
        genRNA.vec = [.0,.0,.0] # specialcase, use v.vec = v.to_tuple()
        genRNA.angle = d = lambda:(); d._return_info = float
        genRNA.copy = d = lambda:(); d._return_info = object
        genRNA.cross = d = lambda:(); d._return_info = object
        genRNA.difference = d = lambda:(); d._return_info = object
        genRNA.dot = d = lambda:(); d._return_info = float
        genRNA.lerp = d = lambda:(); d._return_info = object
        genRNA.negate = d = lambda:(); d._return_info = object
        genRNA.normalize = d = lambda:(); d._return_info = object
        genRNA.project = d = lambda:(); d._return_info = object
        genRNA.reflect = d = lambda:(); d._return_info = object
        genRNA.to_tuple = d = lambda:(); d._return_info = (float,)
        for p in 'length magnitude x y z w xyz'.split():
            dummy = lambda:(); dummy._return_info = []; dummy._argument_info = float
            setattr( genRNA, 'SET_%s' %p, dummy )
            d = lambda:(); d._return_info = float; d._argument_info = None
            setattr( genRNA, 'GET_%s' %p, d )

    for prop in ob.bl_rna.properties:
        #if not prop.is_readonly:
        attr = getattr(ob, prop.identifier)
        if hasattr( genRNA, prop.identifier ) and prop.identifier != 'type':
            print( 'this should never happen', prop.identifier )
            raise SyntaxError
        dummy = lambda:()
        dummy._argument_info = ai = reflect_RNA_prop( prop )
        dummy._return_info = [] # do not return from a SET
        setattr( genRNA, 'SET_%s' %prop.identifier, dummy )
        d = lambda:()
        d._argument_info = None
        d._return_info = ri = reflect_RNA_prop( prop, attr )
        setattr( genRNA, 'GET_%s' %prop.identifier, d )
        #if prop.identifier == 'location': raise

    for bfunc in ob.bl_rna.functions:
        if hasattr( genRNA, bfunc.identifier ) or bfunc.identifier=='select': # what is this select bug? TODO check
            print( 'this should never happen', bfunc.identifier )
            raise SyntaxError
        args = []
        returns = []
        for prop in bfunc.parameters:
            if prop.use_output: returns.append( reflect_RNA_prop( prop ) )
            else: args.append( reflect_RNA_prop( prop ) )
        d2 = lambda:()
        d2._argument_info = args
        d2._return_info = returns
        setattr( genRNA, bfunc.identifier, d2 )
    return genRNA

Thursday, August 5, 2010

RPython Part2

There are at least two cases where runtime variables can be problematic in RPython with subclasses that have like-named methods which 1. return different types, or 2. take a different number of arguments; in other words, have a different signature. The solution requires not only an if/else block, but also an assert statement that proves the type. Note that proving the type in the container's get function is of no help. If the variable outside is known and concrete at translation time then everything works; if however it is passed in from CPython (as it is here) or read from a pipe or socket, it is no longer concrete and translation fails.

class T(object): pass

class TA( T ):
    def whoami(self): print 'i am instance of TA'
    def incompatible_return( self ): return 100
    def diff_args( self, a,b,c ): print 'TA: ', a, b, c
    def diff_num_args( self, a,b ): print 'TA: ', a, b

class TB( T ):
    def whoami(self): print 'i am instance of TB'
    def incompatible_return(self): return 'string'
    def diff_args( self, x='1', y='y', z='z' ): print 'TB: ', x, y, z
    def diff_num_args( self, a,b,c ): print 'TB: ', a, b, c

class Container(object):
    def __init__(self, *args):
        self.items = []
        for item in list(args): self.items.append( item )
    def get( self, index ):
        a = self.items[index]
        if index==0:
            assert isinstance(a, TA)
            return a, TA
        assert isinstance(a, TB)
        return a, TB

#File "pypy/pypy/translator/c/", line 998, in _python_c_name
# Exception: don't know how to simply render py object: 100
def entrypoint_incompatible_fails( outside ):
    ta = TA()
    tb = TB()
    con = Container( ta, tb )
    a,klass = con.get(outside)
    assert isinstance(a, klass) # too bad this wont work
    b = a.incompatible_return()
    print b

## this works, so SomeObject subclass method calls are fine as long as the number of args is the same and the return is the same type.
def entrypoint_diffargs_works( outside ):
    ta = TA()
    tb = TB()
    con = Container( ta, tb )
    a,klass = con.get(outside)
    a.diff_args( 1,2,3 )

## assert is required ##
def entrypoint_branching_fails( outside ):
    ta = TA()
    tb = TB()
    con = Container( ta, tb )
    a,klass = con.get(outside)
    if a.__class__ is TA:
        a.diff_num_args( 1,2 )
    elif a.__class__ is TB:
        a.diff_num_args( 1,2,3 )

def entrypoint_branching_works( outside ):
    ta = TA()
    tb = TB()
    con = Container( ta, tb )
    a,klass = con.get(outside)
    if a.__class__ is TA:
        assert isinstance( a, TA ) # this is required
        a.diff_num_args( 1,2 )
    elif a.__class__ is TB:
        assert isinstance( a, TB ) # this is required
        a.diff_num_args( 1,2,3 )

PATH2PYPY = 'pypy'
sys.path.append(PATH2PYPY) # assumes you have pypy dist in a subfolder, you may need to rename this pypy-trunk

def tests( outside ):
    pass # call one of the entrypoint_* functions above here

from pypy.translator.interactive import Translation
t = Translation( tests )
t.annotate([int]); t.rtype()
f = t.compile_c()
print( 'end of program' )

Tuesday, August 3, 2010

Meta-Programming in RPython

What is Restricted Python?

It's hard to find a good definition, mainly because there is no formal one. RPython was born as a side effect of the PyPy interpreter needing a way to compile itself. Since then it has become a general-purpose language, but apparently has not been written about much other than a few academic papers [1], [2]. Meta-programming is RPython's most exciting feature; more on that below.

Language Restrictions/Definition:
1. Lists and Dicts must contain a compatible type.
(objects with a common base class are compatible)
2. Function argument types must not be changed after the first call; each call must be consistent with the others in the ordering of types.
(this also applies to function returns)
3. Subclasses must redefine functions that interact with an attribute they have made a new type.*
4. String and char are different; any string of length 1 becomes a char. (some builtin functions accept only a char)
5. Tuples may have incompatible types, but can not be iterated over, yet. (the error messages suggest the iteration limitation is going away)
6. Of the special methods, only __init__ and __del__ are allowed, so no custom attribute access or operator overloading.
7. Globals are considered constants. (use singletons for global changeable state)
8. No runtime definition of functions or classes.
9. Slicing can not use negative indexes, except for [:-1].
10. Some builtin functions require literal arguments, getattr for example:

arg = 'x'
getattr( o, arg ) # fails: arg is a runtime variable, not a literal
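A runnable sketch of the usual workaround (plain Python here, but the same shape is what RPython accepts): branch on the name so that every getattr call site receives a string literal. The class and helper below are made up for illustration.

```python
class O(object):
    def __init__(self):
        self.x = 1
        self.y = 2

def get_field(o, name):
    # each getattr receives a literal, which satisfies the restriction;
    # direct attribute access (o.x) would work just as well here
    if name == 'x':
        return getattr(o, 'x')
    elif name == 'y':
        return getattr(o, 'y')
    raise KeyError(name)

print(get_field(O(), 'y'))  # 2
```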

There are a few other rules and exceptions not listed above, but those are small, and the RPython translator/compiler is going to give you clear errors when you violate them. When getting started with RPython it is easy to make a mistake somewhere and violate rules 1-3; the errors you will get from breaking them are a bit cryptic as well:

a. Exception: don't know how to simply render py object:
b. pypy.rpython.error.TyperError: don't know how to convert from to
c. AttributeError: no field 'll_newlist'
d. pypy.rpython.error.TyperError: reading attribute '__init__': no common base class for [type]

*Rule 3 is unusual and easy to overlook: it means we must be careful with functions defined in our base class that interact with attributes we have redefined in subclasses as a new type. In other words, a function defined in the base class expects the attributes of self to always have those fixed types, but if a subclass changes the type, the function breaks. In unrestricted Python we would normally want to put as many shared functions in the base class as possible to avoid code duplication; rule 3 turns this upside down. Consider the following, where we use wrapper objects for generic types, and note that we have put the unwrap function in each of the subclasses.

class Type1(object): pass
class Type2(object): pass

class Abstract(object):
    def __init__(self): # be careful with init, do not try to define defaults for what will have different types in subclasses!
        #self.x = 'never define what x is in the base class'
        self.y = 'this is safe'
    def failure(self): self.x = 1.0

class O(object):
    def __init__(self, arg='string'): self.a = arg

class X( Abstract ):
    def __init__(self):
        self.x = Type1()
    def unwrap(self): return self.x

class Y( Abstract ):
    def __init__(self):
        self.x = Type2()
    def unwrap(self): return self.x

class Z( Abstract ):
    def __init__(self):
        self.x = 10.0
    def unwrap(self): return self.x

class W( Abstract ):
    def __init__(self):
        self.x = O
    def unwrap(self): return self.x
    def new_instance(self, arg): return self.x(arg)

def test_subclasses_fails():
    c = Abstract()
    c.failure() # this breaks because the type can not be known from the base class
    a = X()
    b = Y()

def test_subclasses_works():
    a = X()
    b = Y()
    c = Z()
    lst = [a,b] # sharing the same base class means they can co-exist in a list
    for item in lst: print item
    print a.x
    print b.x
    print c.x
    o = W().x()
    print o
    print a.unwrap()
    print b.unwrap()
    o2 = W().new_instance( 'hello world' )
    print o2.a
    print 'test shared complete'

Another case that can easily lead to rule breaking is dealing with *args, which is a tuple; since tuples can store incompatibles, we may be tempted to use them to pass variable numbers of incompatible types. But tuples can not be iterated over, and casting args to a list will not work either, because lists must contain only compatible types. Tuple indexing must be done with a literal constant as well, so a for loop will fail:
def func( *args ):
    for i in range(len(args)):
        a = args[i]

The above fails, so the only way to get at items in args is to index each one with a literal, like this. Not pretty!
n = len(args)
if n > 0: args[0]
if n > 1: args[1]
if n > 2: args[2]


If we have to code that way, then it seems like we should give up trying to pass variable numbers of incompatible types. However, unlike many other languages, which are parsed only from source code, RPython uses live objects and introspection; before compiling we have the chance to generate classes, modify functions, create new globals, and more, all from unrestricted Python. Using meta-programming we can easily bend some of the restrictions of RPython; in this case we are going to unroll the loop and inline valid RPython code.

STAR_TYPES = 'bool int float str tuple list Meta ID'.split()

class ID(object):
    def __init__(self, value='undefined'): self.value = value

class Meta(object): pass

class SimplePack( Meta ): # would be useful to turn this into a minipickler
    def init_head(self):
        self.ID = '0'
        self._init_string = ''

    def init_bool(self, v ): self._init_string += '%s%s' %(v, ARGUMENT_SEP)
    def init_int(self, v ): self._init_string += '%s%s' %(v, ARGUMENT_SEP)
    def init_float(self, v ): self._init_string += '%s%s' %(v, ARGUMENT_SEP)
    def init_str(self, v ): self._init_string += '"%s"%s' %(v, ARGUMENT_SEP)
    def init_tuple(self, v ): self._init_string += '%s%s' %(v, ARGUMENT_SEP)
    def init_list(self, v ): self._init_string += '%s%s' %(v, ARGUMENT_SEP)
    def init_Meta(self, v ): self._init_string += '@%s%s' %(v.ID, ARGUMENT_SEP); print 'got meta'
    def init_ID(self, v ): self.ID = v.value; print 'got id'

The SimplePack subclass above is going to handle each possible type that may be passed in *args from our generated function. gen_star_func and gen_switch_block create the unrolled function for us, which can be very large depending on what maxitems is set to, as we will see below.

def gen_switch_block( prefix, indent, index ):
    indent += ' '
    r = ''
    for type in STAR_TYPES:
        if not r:
            r += '%sif isinstance( args[%s], %s ): self.%s_%s(args[%s])\n' %(indent,index,type, prefix,type,index)
        else:
            r += '%selif isinstance( args[%s], %s ): self.%s_%s(args[%s])\n' %(indent,index,type, prefix,type,index)
    r += '%selse: print "unknown type", args[%s]\n' %(indent,index)
    return r

def gen_star_func( name, handler_prefix='init', head=None, tail=None, maxitems=16 ):
    ident = ' '
    if head: body = ' self.%s\n' %head
    else: body = ''
    body += ' n = len(args)\n'

    for i in range( maxitems ):
        body += '%sif n > %s:\n%s' %(ident, i, gen_switch_block(handler_prefix, ident, i) )
        ident += ' '
    if tail: body += '\n ' + tail

    e = 'def star_func(self, *args):\n' + body
    print e
    exec( e )
    star_func.func_name = name
    return star_func

The output of gen_star_func simply checks for each possible type and forwards it to a handler. In terms of speed there should not be a big expense, since each switch block is only executed if the length of *args is greater; we lose some memory, but it is reasonable as long as maxitems is not too high. As you can see below, the output of gen_star_func greatly helps keep our code maintainable, since we avoid copying and pasting into every function where we want to use *args with variable types.

def star_func(self, *args):
    n = len(args)
    if n > 0:
        if isinstance( args[0], bool ): self.init_bool(args[0])
        elif isinstance( args[0], int ): self.init_int(args[0])
        elif isinstance( args[0], float ): self.init_float(args[0])
        elif isinstance( args[0], str ): self.init_str(args[0])
        elif isinstance( args[0], tuple ): self.init_tuple(args[0])
        elif isinstance( args[0], list ): self.init_list(args[0])
        elif isinstance( args[0], Meta ): self.init_Meta(args[0])
        elif isinstance( args[0], ID ): self.init_ID(args[0])
        else: print "unknown type", args[0]
        if n > 1:
            if isinstance( args[1], bool ): self.init_bool(args[1])
            elif isinstance( args[1], int ): self.init_int(args[1])
            elif isinstance( args[1], float ): self.init_float(args[1])
            elif isinstance( args[1], str ): self.init_str(args[1])
            elif isinstance( args[1], tuple ): self.init_tuple(args[1])
            elif isinstance( args[1], list ): self.init_list(args[1])
            elif isinstance( args[1], Meta ): self.init_Meta(args[1])
            elif isinstance( args[1], ID ): self.init_ID(args[1])
            else: print "unknown type", args[1]
            if n > 2:
                ...(to maxitems)

def gen_class( name, base=Meta, items={} ):
    cls = types.ClassType( str(name), (base,), items )
    globals().update( {name:cls} )
    return cls

def gen_metas( names ):
    generated = []
    init = gen_star_func( '__init__', handler_prefix='init', head='init_head()' )
    for name in names.split():
        items = {'__init__':init}
        cls = gen_class( name, base=SimplePack, items=items )
        generated.append( cls )
    return generated

GEN_METAS = gen_metas( 'Meta1 Meta2 Meta3' )

def test_meta():
    m1 = Meta1()
    myid = ID('myid')
    print myid
    m2 = Meta2('a', 'b', 1.0, 'string', m1, myid )
    print m2._init_string
    m3 = Meta3( m1, m2 )
    print m3._init_string

The test above, test_meta, takes the variable argument types and packs them into a string that could be sent over a pipe or socket for communication with another process.
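The unrolling trick itself can be sketched in a few lines of self-contained modern Python (this miniature only handles two types and uses made-up names, but it is the same idea: build valid source text, exec it, and attach the result as a method):

```python
# generate an unrolled *args dispatcher as source text, then exec it
STAR_TYPES = ['int', 'str']

def gen_star_func(maxitems=3):
    lines = ['def star_func(self, *args):', '    n = len(args)']
    indent = '    '
    for i in range(maxitems):
        # each "if n > i:" block is nested inside the previous one
        lines.append('%sif n > %d:' % (indent, i))
        indent += '    '
        for j, t in enumerate(STAR_TYPES):
            kw = 'if' if j == 0 else 'elif'
            lines.append('%s%s isinstance(args[%d], %s): self.seen.append(%r)'
                         % (indent, kw, i, t, t))
    ns = {}
    exec('\n'.join(lines), ns)
    return ns['star_func']

class Pack(object):
    def __init__(self):
        self.seen = []
    handle = gen_star_func()  # generated function becomes a method

p = Pack()
p.handle(1, 'a', 2)
print(p.seen)  # ['int', 'str', 'int']
```

The generated source never iterates over the tuple; every args index is a literal, which is what keeps the full-size version RPython-valid.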

To run these tests you need to download the PyPy source code and set PATH2PYPY to where pypy is.
Below the tests function is the PyPy entry point; to run the different tests above, uncomment test_*.
You can download the source code for this whole demo here.

PATH2PYPY = 'pypy'
sys.path.append(PATH2PYPY) # assumes you have pypy dist in a subfolder, you may need to rename this pypy-trunk

def tests( outside ):
    pass # uncomment/call one of the test_* functions above here

from pypy.translator.interactive import Translation
t = Translation( tests )
t.annotate([str]); t.rtype()
f = t.compile_c()
f('from the outside')

print( 'end of program' )