Cubic root solving

I really need to get back to this, but I mocked up this bit of code and finally got it to work.
It is about animation curve evaluation: due to the parametric nature of the curves it is surprisingly involved to get the value of a curve at an arbitrary time, because it requires finding the parameter T for a value X.

This demo shows how the parameter (left to right) relates to the time of the animcurve (top to bottom): the red curve shows that a linearly increasing parameter results in non-linear time samples.
It then computes backwards from the found value to the parameter again, showing we can successfully reconstruct the linear input given the time output.

I intend to clean up this code and make a full 2D example, rendering a 2D cubic spline segment as normal and then overlaying an evaluation based on the X coordinate, but wanted to dump the result nonetheless. Knowing how bad I am at getting back to things…
Using Qt purely for demonstration; the code itself is pure Python…

import time
from math import sin, cos, acos, sqrt, copysign

from PyQt4.QtCore import *
from PyQt4.QtGui import *


def cubicArgs(x0, x1, x2, x3):
    a = x3 + (x1 - x2) * 3.0 - x0
    b = 3.0 * (x2 - 2.0 * x1 + x0)
    c = 3.0 * (x1 - x0)
    d = x0
    return a, b, c, d


def cubicEvaluate(x0, x1, x2, x3, p):
    # convert points to a cubic function & evaluate at p
    a, b, c, d = cubicArgs(x0, x1, x2, x3)
    return a * p * p * p + b * p * p + c * p + d


class CurveDebug(QWidget):
    def __init__(self):
        super(CurveDebug, self).__init__()
        self.t = QTimer()
        self.t.timeout.connect(self.update)
        self.t.start(16)
        self.ot = time.time()

    def paintEvent(self, event):
        painter = QPainter(self)

        life = time.time() - self.ot

        padding = 100
        w = self.width() - padding * 2
        h = self.height() - padding * 2
        painter.translate(padding, padding)

        # zig zag 2D bezier
        x0, y0 = 0, 0
        x1, y1 = (sin(life) * 0.5 + 0.5) * w, 0
        x2, y2 = 0, h
        x3, y3 = w, h

        # draw hull
        # painter.setPen(QColor(100, 220, 220))
        # painter.drawLine(x0, y0, x1, y1)
        # painter.drawLine(x1, y1, x2, y2)
        # painter.drawLine(x2, y2, x3, y3)

        for i in xrange(w):
            p = i / float(w - 1)

            # draw curve
            # painter.setPen(QColor(220, 100, 220))
            # x = cubicEvaluate(x0, x1, x2, x3, p)
            # y = cubicEvaluate(y0, y1, y2, y3, p)
            # painter.drawPoint(x, y)

            # draw X as function of P
            painter.setPen(QColor(220, 100, 100))
            x = cubicEvaluate(x0, x1, x2, x3, p)
            painter.drawPoint(i, x)

            # now let's evaluate the curve at x and see if we can get the original p back
            # make cubic with offset
            a, b, c, d = cubicArgs(x0 - x, x1 - x, x2 - x, x3 - x)

            # find roots
            # http://www.1728.org/cubic2.htm
            f = ((3.0 * c / a) - ((b * b) / (a * a))) / 3.0
            g = (((2.0 * b * b * b) / (a * a * a)) - ((9.0 * b * c) / (a * a)) + ((27.0 * d) / a)) / 27.0
            _h = ((g * g) / 4.0) + ((f * f * f) / 27.0)
            root0, root1, root2 = None, None, None
            if _h <= 0.0:
                # we have 3 real roots
                if f == 0 and g == 0:
                    # all roots are real & equal
                    _i = d / a
                    root0 = -copysign(pow(abs(_i), 0.3333), _i)
                else:
                    _i = sqrt((g * g / 4.0) - _h)
                    j = pow(_i, 0.3333333)
                    k = acos(-(g / (2.0 * _i)))
                    m = cos(k / 3.0)
                    n = sqrt(3.0) * sin(k / 3.0)
                    _p = b / (3.0 * a)
                    root0 = 2.0 * j * m - _p
                    root1 = -j * (m + n) - _p
                    root2 = -j * (m - n) - _p
            else:
                # we have only 1 real root
                R = -(g / 2.0) + sqrt(_h)
                S = copysign(pow(abs(R), 0.3333333), R)
                T = -(g / 2.0) - sqrt(_h)
                U = copysign(pow(abs(T), 0.3333333), T)
                root0 = (S + U) - (b / (3.0 * a))

            painter.setPen(QColor(100, 100, 220))
            painter.drawPoint(i, root0 * h)
            if root1:
                painter.drawPoint(i, root1 * h)
                painter.drawPoint(i, root2 * h)


app = QApplication([])
cvd = CurveDebug()
cvd.show()
cvd.resize(300, 300)
app.exec_()
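As a small aside, this is roughly how the pieces combine into a y-at-x lookup for a single 2D curve segment. It reuses cubicArgs and cubicEvaluate from above; solveCubic is just the root-finding block from paintEvent pulled into its own function, and the root picking assumes the X hull is monotonic so exactly one root lies on the segment.

from math import acos, cos, sin, sqrt, copysign

def solveCubic(a, b, c, d):
    # real roots of a*p^3 + b*p^2 + c*p + d, same method as in paintEvent above
    f = ((3.0 * c / a) - ((b * b) / (a * a))) / 3.0
    g = (((2.0 * b * b * b) / (a * a * a)) - ((9.0 * b * c) / (a * a)) + ((27.0 * d) / a)) / 27.0
    h = ((g * g) / 4.0) + ((f * f * f) / 27.0)
    if h > 0.0:
        # one real root
        R = -(g / 2.0) + sqrt(h)
        T = -(g / 2.0) - sqrt(h)
        return [copysign(abs(R) ** (1.0 / 3.0), R) + copysign(abs(T) ** (1.0 / 3.0), T) - b / (3.0 * a)]
    if f == 0.0 and g == 0.0:
        # three equal real roots
        return [-copysign(abs(d / a) ** (1.0 / 3.0), d / a)]
    # three real roots
    i = sqrt((g * g / 4.0) - h)
    j = i ** (1.0 / 3.0)
    k = acos(-(g / (2.0 * i)))
    m = cos(k / 3.0)
    n = sqrt(3.0) * sin(k / 3.0)
    p = b / (3.0 * a)
    return [2.0 * j * m - p, -j * (m + n) - p, -j * (m - n) - p]

def evaluateYatX(xs, ys, x):
    # shift the X control values so a root of the cubic is the parameter where the curve crosses x
    roots = solveCubic(*cubicArgs(xs[0] - x, xs[1] - x, xs[2] - x, xs[3] - x))
    # pick the root that lies on the segment (0..1)
    p = next(r for r in roots if -1e-6 <= r <= 1.0 + 1e-6)
    return cubicEvaluate(ys[0], ys[1], ys[2], ys[3], p)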
Lattice -> Joints

It’s not perfect, but here’s a small script that samples a lattice and tries to set joint weights based on the influence of each lattice point.

Given a set of lattice vertices and a model influenced by these vertices it will create joints at every lattice point, bind a skin and set the weights.

Usage: just edit the variables at the top & run the script. It’s slapped together really quickly.

It moves every lattice point one by one & stores the amount of movement that occurred per vertex, which is basically the weight of that point for that vertex.

Issues: small weights completely vanish. You could try dividing the sampled movement by the amount of movement (the 1000 units used in gather()) to get a 0-1 weight, then apply an inverse s-curve or a pow / sqrt to that value and use it as the weight instead; a sketch of this follows the script below.

Requirements: to set all weights really fast I use a custom “skinWeightsHandler” command. You can write your own ‘set all weights for all joints and then normalize’ routine, or get the plugin by installing Perry Leijten’s skinning tools, for which I originally made this plugin.

from maya import cmds

model = r'polySurface1'
influences = (r'ffd1Lattice.pt[0][0][0]',
              r'ffd1Lattice.pt[0][0][1]',
              r'ffd1Lattice.pt[0][1][0]',
              r'ffd1Lattice.pt[0][1][1]',
              r'ffd1Lattice.pt[1][0][0]',
              r'ffd1Lattice.pt[1][0][1]',
              r'ffd1Lattice.pt[1][1][0]',
              r'ffd1Lattice.pt[1][1][1]')

def sample(model):
    # only the Y components; gather() pushes each influence along Y
    return cmds.xform(model + '.vtx[*]', q=True, ws=True, t=True)[1::3]

def difference(list, list2):
    stack = [0] * len(list)
    for i in range(len(list)):
        stack[i] = abs(list2[i] - list[i])
    return stack

def gather(model, influences):
    original = sample(model)
    weights = {}
    for influence in influences:
        cmds.undoInfo(ock=True)
        cmds.xform(influence, ws=True, r=True, t=[0, 1000, 0])
        weights[influence] = difference(sample(model), original)
        cmds.undoInfo(cck=True)
        cmds.undo()
    return weights

weights = gather(model, influences)
# generate joints
joints = []
for influence in influences:
    pos = cmds.xform(influence, q=True, ws=True, t=True)
    cmds.select(cl=True)
    joints.append(cmds.joint())
    cmds.xform(joints[-1], ws=True, t=pos)
# concatenate weights in the right way
vertexCount = len(weights.values()[0])
influenceCount = len(influences)
vertexWeights = [0] * (vertexCount * influenceCount)
for i in xrange(vertexCount):
    tw = 0
    for j, influence in enumerate(influences):
        vertexWeights[i * influenceCount + j] = weights[influence][i]
        tw += weights[influence][i]
    if not tw:
        # weight is 0
        continue
    for j in xrange(influenceCount):
        vertexWeights[i * influenceCount + j] /= tw
# expand to shape
if not cmds.ls(model, type='mesh'):
    model = cmds.listRelatives(model, c=True, type='mesh')[0]
# bind skin
cmds.select(model, joints)
skinCluster = cmds.skinCluster()
# set weights
cmds.SkinWeights([model, skinCluster],  nwt=vertexWeights)
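As a rough sketch of the remapping idea from the notes above (the 1000.0 matches the translation used in gather(), and the square root is just one possible boost curve), something like this could run right before the per-vertex normalization loop:

MOVE_AMOUNT = 1000.0
for influence in influences:
    # normalize the sampled movement to 0-1, then boost small weights with a square root
    weights[influence] = [(w / MOVE_AMOUNT) ** 0.5 for w in weights[influence]]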
Maya discovery of the day

If you’re looking for all objects with a specific attribute, it is nice to know that ls and its wildcards also work on attributes! It does not even care whether you supply the long or the short name. To get all objects with a translateX attribute you can simply use:

cmds.ls('*.tx')

Wildcards do not combine with some other flags however, so you cannot do this:

cmds.ls('*.myMetaData', l=True, type='mesh', sl=True)

because the returned type is not a mesh but an attribute; you can of course do this instead (notice the o=True to return object names, not attributes):

cmds.ls(cmds.ls('*.myMetaData', o=True), l=True, type='mesh', sl=True)

Just wanted to share that bit of information! And while we’re at it: Python’s ‘or’ returns its first truthy operand, which you can use inside arbitrary expressions. So if you wish to find all transforms that contain a mesh (or get the transforms of selected meshes at the same time), you’ll often find yourself doing this:

selected_transforms = cmds.ls(type='transform', sl=True, l=True)
selected_meshes = cmds.ls(type='mesh', sl=True, l=True)
if selected_transforms is not None:
    meshes = cmds.listRelatives(selected_transforms, c=True, type='mesh', f=True)
    if meshes is not None:
        if selected_meshes is not None:
            selected_meshes += meshes
        else:
            selected_meshes = meshes
selected_mesh_transforms = []
if selected_meshes is not None:
    selected_mesh_transforms = cmds.listRelatives(selected_meshes, p=True)

Just because ls and listRelatives return None instead of an empty list, this code gets super complicated. With ‘or’ we can simply do this:

meshes = (cmds.ls(type='mesh', sl=True, l=True) or []) + (cmds.listRelatives(cmds.ls(type='transform', sl=True, l=True), c=True, type='mesh', f=True) or [])
selected_mesh_transforms = cmds.listRelatives(meshes, p=True, f=True) or []

Admittedly a bit less readable, but my advice is to wrap it in a utility function or name your variables appropriately!
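For example, a tiny wrapper along those lines (ls_list is my own name for it, not a Maya command):

def ls_list(*args, **kwargs):
    # like cmds.ls, but always returns a list instead of None
    return cmds.ls(*args, **kwargs) or []

selected_meshes = ls_list(type='mesh', sl=True, l=True)
selected_transforms = ls_list(type='transform', sl=True, l=True)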

Simple Maya mesh save/load

I recently wanted to capture some frames of an animation into a single mesh and really the easiest way to ditch any dependencies & materials was to export some OBJs, import them and then combine them! This is rather slow, especially reading gigantic models, and I did not need a lot of the data stored in an OBJ.

So here I have a small utility that stores a model’s position & triangulation and nothing else in a binary format closely resembling the Maya API, allowing for easy reading, writing and even combining during I/O.
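For reference, this is the file layout the functions below write and read, as implied by the struct calls (everything little endian):

uint32                   vertex count
float64 * 3 per vertex   x, y, z position (world space)
uint32                   polygon count
uint32 per polygon       vertex count of that polygon
uint32                   polygon-vertex index count
uint32 per index         polygon-vertex indices (offset per mesh when combining)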

Use write() with a mesh (full) name and use read() with a filepath to serialize
and deserialize maya meshes:

import struct
from maya.OpenMaya import MSelectionList, MDagPath, MFnMesh, MGlobal, MPointArray, MIntArray, MSpace, MPoint


def _named_mobject(path):
    li = MSelectionList()
    MGlobal.getSelectionListByName(path, li)
    p = MDagPath()
    li.getDagPath(0, p)
    return p


def writeCombined(meshes, file_path):
    # start streaming into the file
    with open(file_path, 'wb') as fh:
        # cache function sets
        fns = []
        for mesh in meshes:
            fns.append(MFnMesh(_named_mobject(mesh)))

        # get resulting mesh data sizes
        vertex_count = 0
        poly_count = 0
        index_count = 0
        meshPolygonCounts = []
        meshPolygonConnects = []
        for fn in fns:
            vertex_count += fn.numVertices()
            meshPolygonCounts.append(MIntArray())
            meshPolygonConnects.append(MIntArray())
            # we need to get these now in order to keep track of the index_count,
            # we cache them to avoid copying these arrays three times during this function.
            fn.getVertices(meshPolygonCounts[-1], meshPolygonConnects[-1])
            poly_count += meshPolygonCounts[-1].length()
            index_count += meshPolygonConnects[-1].length()

        # write num-vertices as uint32
        fh.write(struct.pack('<L', vertex_count))

        for fn in fns:
            vertices = MPointArray()
            fn.getPoints(vertices, MSpace.kWorld)

            # write this mesh's vertex positions as triplets of three float64s
            for i in xrange(fn.numVertices()):
                fh.write(struct.pack('<d', vertices[i].x))
                fh.write(struct.pack('<d', vertices[i].y))
                fh.write(struct.pack('<d', vertices[i].z))

        # write num-polygonCounts as uint32
        fh.write(struct.pack('<L', poly_count))

        for i, fn in enumerate(fns):
            # write each polygonCounts as uint32
            for j in xrange(meshPolygonCounts[i].length()):
                fh.write(struct.pack('<L', meshPolygonCounts[i][j]))

        # write num-polygonConnects as uint32
        fh.write(struct.pack('<L', index_count))

        # keep track of how many vertices there are to offset the polygon-vertex indices
        offset = 0
        for i, fn in enumerate(fns):
            # write each polygonConnects as uint32
            for j in xrange(meshPolygonConnects[i].length()):
                fh.write(struct.pack('<L', meshPolygonConnects[i][j] + offset))
            offset += fn.numVertices()


def write(mesh, file_path):
    writeCombined([mesh], file_path)


def readCombined(file_paths):
    numVertices = 0
    numPolygons = 0
    vertices = MPointArray()
    polygonCounts = MIntArray()
    polygonConnects = MIntArray()

    for file_path in file_paths:
        with open(file_path, 'rb') as fh:
            # read all vertices
            vertexCount = struct.unpack('<L', fh.read(4))[0]
            for i in xrange(vertexCount):
                vertices.append(MPoint(*struct.unpack('<3d', fh.read(24))))

            # read all polygon counts
            n = struct.unpack('<L', fh.read(4))[0]
            numPolygons += n
            polygonCounts += struct.unpack('<%sL'%n, fh.read(n * 4))

            # read all polygon-vertex indices
            n = struct.unpack('<L', fh.read(4))[0]
            offset = polygonConnects.length()
            polygonConnects += struct.unpack('<%sL'%n, fh.read(n * 4))

            # offset the indices we just added to match the merged mesh vertex IDs
            for i in xrange(n):
                polygonConnects[offset + i] += numVertices

            numVertices += vertexCount

    new_object = MFnMesh()
    new_object.create(numVertices, numPolygons, vertices, polygonCounts, polygonConnects)
    return new_object.fullPathName()


def read(file_path):
    with open(file_path, 'rb') as fh:
        numVertices = struct.unpack('<L', fh.read(4))[0]
        vertices = MPointArray()
        for i in xrange(numVertices):
            vertices.append(MPoint(*struct.unpack('<3d', fh.read(24))))
        numPolygons = struct.unpack('<L', fh.read(4))[0]
        polygonCounts = MIntArray()
        polygonCounts += struct.unpack('<%sL'%numPolygons, fh.read(numPolygons * 4))
        n = struct.unpack('<L', fh.read(4))[0]
        polygonConnects = MIntArray()
        polygonConnects += struct.unpack('<%sL'%n, fh.read(n * 4))

    new_object = MFnMesh()
    new_object.create(numVertices, numPolygons, vertices, polygonCounts, polygonConnects)
    return new_object.fullPathName()

I basically used a snippet like this to snapshot my animation:

import os
from maya import cmds

tempfiles = []
for f in (0,4,8,12):
    cmds.currentTime(f)
    tempfiles.append('C:/%s.mfnmesh'%f)
    writeCombined(cmds.ls(type='mesh', l=True), tempfiles[-1])
newmesh = readCombined(tempfiles)
for p in tempfiles:
    os.unlink(p)

Important notice: I have seen some random crashes in the writeCombined function when using a large amount of memory (high polycount per frame), which may be solvable when ported to C++ and receiving proper error data.

Parameter to nurbs surface node

A simple deformer that reprojects a source mesh (considered as UVW coordinates)
onto a (series of) nurbs surfaces.

Inspired by “It’s a UVN Face Rig”

It takes an array of nurbs surfaces which must be at least length 1,
a polygonal mesh where the point positions are considered parameters on the nurbs surface; Z being an offset in the normal direction (hence UVN),
and an optional int array where there can be one entry per input vertex, stating which nurbs surface this vertex should project onto.

The default surface for every vertex is 0, so for a single nurbs surface projection no array is needed and only overrides have to be specified.

This includes full source + a project compiled using VS2015 for Maya2015 x64.
Download zip

Python test code:

from maya import cmds

PLUGIN = r'UVNDeformer.mll'

cmds.loadPlugin(PLUGIN)
node = cmds.createNode('UVNNurbsToPoly')
nurbs = cmds.sphere()[0]
uvn = cmds.polyPlane()[0]
cmds.select(uvn + '.vtx[*]')
cmds.rotate(90, 0, 0, r=True)
cmds.move(0.5001, 0.5001, r=True)
result = cmds.createNode('mesh')
cmds.connectAttr(nurbs + '.worldSpace[0]', node + '.ins[0]')
cmds.connectAttr(uvn + '.outMesh', node + '.iuvnm')
cmds.connectAttr(node + '.outMesh', result + '.inMesh')

cmds.select(uvn + '.vtx[*]')
SIMD Matrix math for Python

Long story short: scroll down for a downloadable DLL and python file that do matrix math using optimized SIMD functions.

Recently I was messing around with some 3D in PyOpenGL and found my most notable slowdowns occurring due to matrix math (multiplications being most common).

So I decided to try and implement some fast matrix functions and call those from python, using C98 limitations and ctypes as explained here by my friend Jan Pijpers.
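For those unfamiliar with the pattern, here is a minimal sketch of what the ctypes side looks like. The DLL name matches the download below and storeMat44/deleteMat44 are mentioned further down, but the exact signatures here are assumptions for illustration, not the actual math3d.py API:

import ctypes

lib = ctypes.CDLL('Math3Dx64.dll')
# the library hands back opaque pointers to aligned matrices
lib.identity.restype = ctypes.c_void_p
lib.storeMat44.argtypes = [ctypes.c_void_p, ctypes.POINTER(ctypes.c_float)]
lib.deleteMat44.argtypes = [ctypes.c_void_p]

mat = lib.identity()
buffer16 = (ctypes.c_float * 16)()
lib.storeMat44(mat, buffer16)  # copy back to 16 floats for python / PyOpenGL
lib.deleteMat44(mat)           # the caller owns the heap allocation (see the notes at the end)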

I won’t go into detail about the math, you can download the project files at the end; any sources used are referenced in there.

I do have some profile results to compare! Doing 100,000 calls for each action listed, time displayed in seconds.

Pure python implementation.

identity: 0.0331956952566
rotateY: 0.0617851720355
transform: 1.70942981948
inverse: 15.095287772
multiply: 0.492130726156
vector: 0.160486968636
perspective: 0.107690428216
transpose: 0.452984656091

Note that the pure python inverse is matrix-size agnostic (and not normalized!), so there is no unrolled, size-specific code path; it is not representative of a proper python matrix4x4 inverse.

Using VC++ 14.0 MSBUILD, compiling release with -O2 and running without the debugger.

identity: 0.0333827514946
rotateY: 0.0857555184901
transform: 0.251571437936
inverse: 0.0439880125093
multiply: 0.0420022367291
vector: 0.288415226444
perspective: 0.156626988673
transpose: 0.0889596428649
perspective no SIMD: 0.160488955074

Using LLVM 14.0 from Visual Studio (not sure which linker is used there), compiling release with -O2 and running without the debugger (-O3 doesn't change the results).

identity: 0.0323709924443
rotateY: 0.0845113462024
transform: 0.23958858222
inverse: 0.0395744785104
multiply: 0.0437013033019
vector: 0.286256299491
perspective: 0.150614703216
transpose: 0.0877707597662
perspective no SIMD: 0.156242612934

Interestingly not all operations are faster using C due to type conversions. For a simple axis aligned rotation all we need is a sin, a cos and a list. The sin/cos of python are not going to be any slower than those in C, so all we did was complicate the program.

But in a practical example, represented by the transform function (which is a separate rotateX, rotateY, rotateZ and translation matrix call, then all four of them multiplied together), we see a very worthwhile performance gain.

The math executes using SIMD instructions, so all data is converted to 16-byte memory aligned “__m128” structures (from “xmmintrin.h”). We need the C identity and rotate constructors to get the proper type of data; when we actually need this data back we must call storeMat44() to get an actual c_float[16] for python usage.

From my current usage, 1 in 3 matrices requires a conversion back to python floats in order to get passed to PyOpenGL, so here is another multiply performance test with every third multiplication stored back into a float[16]…

python multiply: 0.492130726156
MVC raw: 0.0436761417549
MVC multiply convert every third: 0.06491612928
MVC convert all: 0.0925153667527

So while our raw implementation is about 11 times faster, the fully converting implementation is only 5 times faster. 7.5 times for our real world example. That’s more than 30% lost again… still much better than pure python though!

Download the visual studio project with x86 and x64 binaries here! Tested on Win10 x64 with Python 2.7.10 x64.
Math3Dx64.Dll and math3d.py are the end user files.

One important thing I wish to look into is passing data by copy instead: currently all functions allocate a new matrix on the heap, and the user has to delete these pointers by hand from python using the deleteMat44() helper function. I do not know enough about DLLs or python’s memory allocation to know whether I can copy data from the stack instead, and if so whether that would be any faster.

I do know that __vectorcall is not compatible with __declspec(dllexport), which kind of makes sense… but more direct data passing could be nice.

Viewing Python profiling results with QCacheGrind

This utility outputs cProfile data as a “callgrind” cache file.

Requires pyprof2calltree:
pip install pyprof2calltree

The resulting files can be viewed using QCacheGrind for Windows:
http://sourceforge.net/projects/qcachegrindwin/

Example usage:

runctx(pythonCodeStr, globals(), locals(), executable=QCACHEGRIND)

The full code:
import os
import cProfile
import tempfile
import pyprof2calltree
import pstats
import subprocess


QCACHEGRIND = r'YOUR CACHEGRIND EXECUTABLE PATH'


def runctx(cmdstr, globals={}, locals={}, outpath=None, executable=None):
    tmp = tempfile.mktemp()
    if outpath is not None:
        path = os.path.splitext(outpath)[0] + '.callgrind'
        dirpath = os.path.dirname(path)
        if not os.path.exists(dirpath):
            os.makedirs(dirpath)

        cProfile.runctx(cmdstr, globals, locals, filename=tmp)
        pyprof2calltree.convert(pstats.Stats(tmp), path)

        if executable is not None:
            subprocess.Popen([executable, path])
        os.unlink(tmp)
        return path

    cProfile.runctx(cmdstr, globals, locals, filename=tmp)
    pyprof2calltree.convert(pstats.Stats(tmp), tmp)
    if executable is not None:
        subprocess.Popen([executable, tmp])
    return tmp
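For example, writing the converted profile next to a chosen path and opening it straight in QCacheGrind (myFunction is just a placeholder for whatever you want to profile):

runctx('myFunction()', globals(), locals(),
       outpath=r'C:/temp/profiles/myFunction',  # becomes C:/temp/profiles/myFunction.callgrind
       executable=QCACHEGRIND)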
Maya API Wrapping with CRTP

So I came across this: https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern

The amazing python ‘classmethod’.

A way to inherit static members and define static interfaces.

In python there exists the decorator “classmethod”, which is like a staticmethod that receives the type on which it is called, including inherited types.
Additionally it can be overridden and the base class function can be called separately like any ordinary member function.

Now imagine this construction:

class BaseClass(object):
	def __init__(self, name):
		self.__name = name
		print name

	@classmethod
	def getName(cls):
		raise NotImplementedError()

	@classmethod
	def creator(cls):
		return cls(cls.getName())

This base class can be instantiated through the creator method, and that includes instantiating any subclasses because the cls argument is the owning type.
This then calls upon another classmethod, which is not implemented, effectively demanding that subclasses implement this method.
What this creates is the ability to define static interfaces, as well as share code between those interfaces while the implementation is still kept abstract.

class SubClass1(BaseClass):
	@classmethod
	def getName(cls):
		return 'SubClass1'

class SubClass2(BaseClass):
	@classmethod
	def getName(cls):
		return 'SubClass2'

instance1 = SubClass1.creator() # prints SubClass1
instance2 = SubClass2.creator() # prints SubClass2

So you see the class inherited the creator method, the creator method was aware of the sub-type and called the right getName method.

The C++ ‘Curiously recurring template pattern’ (or CRTP).

This is a terribly confusing trick that allows us to mimic the above python idiom.

So let’s take a similar scenario!

template <class T> class TemplateBase 
{
protected:
	const char* name;
	TemplateBase(const char* inName)
	{
		name = inName;
	}

public:
	static T* sCreator()
	{
		return new T(T::sName()); 
	}
};

This is functionally similar to the python example. It creates a given type and won’t compile
unless the specified type has a static sName() member returning a const char*.

Sidetracking here, that would allow us to do this:

class SubClass1
{
	const char* name;
public:
	SubClass1(const char* inName) { name = inName; }
	static const char* sName() { return "SubClass1"; }
};
SubClass1* instance = TemplateBase<SubClass1>::sCreator();

But (no longer sidetracking) here comes the recurring part:

class SubClass2 : public TemplateBase<SubClass2>
{
protected:
	using TemplateBase::TemplateBase;
public:
	static const char* sName() 
	{
		return "SubClass2"; 
	}
};
SubClass2* instance = SubClass2::sCreator();

This generates a template implementation for its own subclass, which curiously doesn’t cause
any problems, even though a base class is accessing a subclass’s (static) functions.

This also hides the (inherited) constructor so really only the sCreator function can be used
for instantiation of our class. And because the subclass doesn’t define sCreator we can very intuitively
call it on our subclass itself.

When giving arguments to a template, and then using it, the C++ compiler generates new
code for us specific to the type given to the template. So multiple subclasses currently do not
share a common base class.

Maya API utility using CRTP

The Maya API is often implemented in a way that demands things of the programmer, without raising
errors when the programmer does something wrong. With modern C++ we can enforce many more rules and
with the complexity of software we definitely should cut the contributor some slack!

Here is a little setup for creating an MPxNode wrapper that allows the compiler to communicate what is needed.

template <class T> class MPxNodeCRTP : public MPxNode
{
public:
	static void* sCreator() { return new T(); }
	static MStatus sInitializeWrapper() { MStatus status = T::sInitialize(); CHECK_MSTATUS_AND_RETURN_IT(status); return status; }
};
// utility macro:
#define INHERIT_MPXNODE(CLASS) class CLASS : public MPxNodeCRTP<CLASS>

I’ve been taking this further to handle registration of input and output attributes,
so attributeAffects gets handled by the base class, as well as MFn::kUnknownParameter exceptions in compute().

Maya quaternion & matrix operation order

Here are some pointers I had to learn the hard way, and don’t ever want to forget.

MQuaternion(MVector a, MVector b)

constructs the rotation to go from B to A!
So if you have an arbitrary aim vector and wish to go from world space to that aim vector use something like

MQuaternion(aimVector, MVector::xAxis)

The documentation is very ambiguous about this. Or rather, it makes you think the opposite!

If you wish to combine matrices in maya, think of how children and parents relate in the 3D scene to determine the order of multiplication. Children go first, e.g.

(leafMatrix * parentMatrix) * rootMatrix

Another way to think about it is adding rotations. So if you have a rotation and you wish to add some rotation to it, you generally parent an empty group to it and rotate that, so you again get this relationship of

additionalRotation * existingRotation
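A minimal sketch of that last relationship with the Maya Python API 2.0 (the euler values are arbitrary):

from maya.api import OpenMaya as om2

existingRotation = om2.MEulerRotation(0.0, 1.2, 0.0).asMatrix()
additionalRotation = om2.MEulerRotation(0.3, 0.0, 0.0).asMatrix()
# the extra rotation goes on the left, like a child on top of its parent
combined = additionalRotation * existingRotation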

A little note: not sure if adding quaternion rotations works in the same way; should check!

More conventions to come hopefully!

Simple C++ Snippets

These are some simple Win32 C++ snippets I often find myself coming back to when I want to mess around with something. Instead of spending a long time setting up a good environment with strong libraries available I quite often want “just a window” to start doing stuff with. That in itself is much easier than you may often find described online.

So be warned, ugly code ahead!

Part 1, windows and GL context
A C++ program to display a static window; you can’t do anything with it though, it does not process events.

#include <windows.h>
INT WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    HWND hWnd = CreateWindow("edit", NULL, WS_VISIBLE | WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, hInstance, NULL);
    while(true);
    return 0;
}

We can add event processing, but this is more involved. So let’s split that up. First we need to define a custom window “class” or type that describes mostly which callback this kind of window should use.

HWND CustomWindow(const char* name, HINSTANCE hInstance, WNDPROC callback)
{
    WNDCLASSEX WndClsEx = { 0 };
    WndClsEx.cbSize = sizeof(WNDCLASSEX);
    WndClsEx.style = CS_HREDRAW | CS_VREDRAW;
    WndClsEx.lpfnWndProc = callback;
    WndClsEx.lpszClassName = name;
    WndClsEx.hInstance = hInstance;
    RegisterClassEx(&WndClsEx);

    return CreateWindow(name, name, WS_VISIBLE | WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, hInstance, NULL);
}

We initialize to 0 for safety. The window class can also get default icons, cursors, colors etc. but those can also be changed with Win32 functions later on, so I tend not to bother with them here.

When having many types of windows I tend to use OOP, with only one custom window type, and an std::map to go from HWND to the wrapped instance. Then I just call something like WindowBaseClass::instances.find(hWnd)->Update();

The above function expects a callback argument; this is a function pointer adhering to the Win32 callback signature. A very basic one allows the user to use ALT+F4 and the close button to end the program completely. Note that this is not something you wish to do in a multi-window application, in which case you probably should check the number of windows before quitting.

LRESULT CALLBACK CustomWindowProc(HWND hWnd, UINT Msg, WPARAM wParam, LPARAM lParam)
{
    switch(Msg)
    {
    case WM_DESTROY:
        PostQuitMessage(WM_QUIT);
        break;
    default:
        return DefWindowProc(hWnd, Msg, wParam, lParam);
    }
    return 0;
}

Now let’s do a proper event loop which will dispatch messages to the custom window proc and exit as required by the quit message.

INT WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    HWND hWnd = CustomWindow("UI", hInstance, CustomWindowProc);
    MSG msg;
    do
    {
        if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            if(msg.message == WM_QUIT)
            {
                return msg.wParam;
            }
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    } while(true);
    return msg.wParam;
}

The thing I do the most is extend a window into an openGL context to start rendering in openGL, ignoring any actual Win32 stuff and just testing some graphics thing. This function enriches a created window with an openGL context and returns its device context handle.

HDC GLWindow(HWND hWnd)
{
    /// Creates & "makes current" (activates) an OpenGL target inside the given window
    HDC hDC = GetDC(hWnd);
    static PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR), 1, PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0 };
    SetPixelFormat(hDC, ChoosePixelFormat(hDC, &pfd), &pfd);
    wglMakeCurrent(hDC, wglCreateContext(hDC));
    return hDC;
}

The simple example we started with can become this instead:

#include <windows.h>
INT WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    HWND hWnd = CreateWindow("edit", NULL, WS_VISIBLE | WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, hInstance, NULL);
    HDC hDC = GLWindow(hWnd);
    glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
    do
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glColor3f(1.0f, 0.9f, 0.8f);
        glRecti(-1, -1, 1, 1);
        SwapBuffers(hDC);
    } while(true);
    return 0;
}

OpenGL has a default viewport with the (-1, -1) coordinate at the bottom left and (1, 1) at the top right. A useful thing is to shift this to have (0, 0) at the bottom left and the window (width, height) at the top right.

void GLPixelSpace(HWND hWnd)
{
    /// Make viewport coordinates match pixel coordinates; requires a "Current" GL context (wglMakeCurrent, as initialized for us by GLWindow).
    glTranslatef(-1.0f, -1.0f, 0.0f);
    // Get the window's draw-able area
    RECT area;
    GetClientRect(hWnd, &area);
    // Compute scale so 1 pixel matches 1 unit.
    glScalef(2.0f / (area.right - area.left), 2.0f / (area.bottom - area.top), 1.0f);
}

Call this once before entering the drawing loop to have it as default, or use

glPushMatrix();
glLoadIdentity();
GLPixelSpace(hWnd);
// Pixel-space drawing code here
glPopMatrix();

One other useful alternative is to have the (0, 0) coordinate at the TOP left instead:

void GLPixelSpace_FlipY(HWND hWnd)
{
    /// Make viewport coordinates match pixel coordinates; requires a "Current" GL context (wglMakeCurrent, as initialized for us by GLWindow).
    glTranslatef(-1.0f, 1.0f, 0.0f);
    // Get the window's draw-able area
    RECT area;
    GetClientRect(hWnd, &area);
    // Compute scale so 1 pixel matches 1 unit.
    glScalef(2.0f / (area.right - area.left), -2.0f / (area.bottom - area.top), -1.0f);
}

Part 2, openGL fonts using FTGL & FreeType2
A related thing is font rendering. There are tons of options, I’m going to reference the openGL font survey about that.

I just want to supply a TTF file I already have with my application and use that inside of it. I want a flexible system that is fast, can batch, does not flood my memory too greedily when rendering various sizes and has good kerning with font-metrics capabilities.

GLX which seems linux only.
GLC which is Adobe Type 1 fonts only, seems fairly old, has issues with rotating fonts & aliasing.
GLUT’s default font renderer, which is not friendly to introducing new fonts.
GLTT which seems the first decent flexible library.
FTGL which seems GLTT 2.0 using the more modern FreeType 2.0 library (instead of 1.0).
WGL, which I tried: it flickered and in general it is hard to customize appearance / do anti-aliasing.
GLF is last and seems just as fine as GLTT, albeit also outdated, with its own font file format which has no tools or documentation.

Texture mapping fonts is also not an option for on-the-fly text display, as we are limited to predefined texture atlases for specific glyphs at specific sizes, which is a lot of manual labour. Though I would probably recommend it for a game: combined with BMFont it’s pretty powerful and fast to render large amounts of static (even 3D) text in a single draw call, with minimal memory usage if the atlas is a distance field as well, as described here.

So with that out of the way, imagine a rant about the poor distribution of binaries for both FreeType and FTGL. I was happy to discover both these projects (on sourceforge) had a project setup for a ton of build environments on various platforms with various toolsets. It was very easy to copy the vc2008 project, open it in vs2012, auto-upgrade and change the output paths to match a new vs2012 target directory.

I have attached compiled lib and dll binaries from both FTGL and FreeType compiled on windows 7 using visual studio 2012 update 4.

ftgl-2.1.3-rc5__with__freetype-2.6__binaries

Here is a header for intelligently loading windows, openGL and FTGL in the right order, including the required libraries as we go. All you need to do in your project settings is set up the additional include and additional library directories if you’re placing these files elsewhere.

// settings.h
#pragma once


#define VC_EXTRALEAN
#define WIN32_LEAN_AND_MEAN
#include <windows.h>


#pragma comment(lib, "opengl32.lib")
#include <gl/gl.h>


#define FTGL_LIBRARY_STATIC
#ifdef NDEBUG
#pragma comment(lib, "freetype26")
#ifdef FTGL_LIBRARY_STATIC
#pragma comment(lib, "ftgl_static")
#else
#pragma comment(lib, "ftgl")
#endif
#else
#pragma comment(lib, "freetype26d")
#ifdef FTGL_LIBRARY_STATIC
#pragma comment(lib, "ftgl_static_D")
#else
#pragma comment(lib, "ftgl_D")
#endif
#endif
#include <FTGL/ftgl.h>

Here is a full code sample that has a simple or a custom window (#define SIMPLE) which uses the above file to render a rect and a piece of text in a win32 window using openGL.

/*
References

http://www.functionx.com/win32/Lesson01c.htm
For window with custom class setup

http://sizecoding.blogspot.nl/2007/10/tiny-opengl-windowing-code.html
For basic wglContext setup

https://msdn.microsoft.com/en-us/library/windows/desktop/ms644943(v=vs.85).aspx
For buffer swaps between messages

http://ftgl.sourceforge.net/docs/html/ftgl-tutorial.html
For basic font creation

http://stackoverflow.com/questions/28151464/how-to-change-color-in-rgb-format-in-ftgl-opengl
For requiring FTGLUseTextureFont to have glColor work as expected.

http://stackoverflow.com/questions/28313786/undefined-symbol-in-static-library-but-exists-when-in-same-vs-solution
For knowing to define FTGL_LIBRARY_STATIC

https://www.opengl.org/archives/resources/features/fontsurvey/
For the font options which showed FTGL as my preferred option.
*/


#include "settings.h"


// #define SIMPLE


HWND Window(HINSTANCE hInstance)
{
    /// Creates an arbitrary default window
    return CreateWindow("edit", NULL, WS_VISIBLE | WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, hInstance, NULL);
}


LRESULT CALLBACK CustomWindowProc(HWND hWnd, UINT Msg, WPARAM wParam, LPARAM lParam)
{
    switch(Msg)
    {
    case WM_DESTROY:
        PostQuitMessage(WM_QUIT);
        break;
    default:
        return DefWindowProc(hWnd, Msg, wParam, lParam);
    }
    return 0;
}


HWND CustomWindow(const char* name, HINSTANCE hInstance, WNDPROC callback)
{
    WNDCLASSEX WndClsEx = { 0 };
    WndClsEx.cbSize = sizeof(WNDCLASSEX);
    WndClsEx.style = CS_HREDRAW | CS_VREDRAW;
    WndClsEx.lpfnWndProc = callback;
    // WndClsEx.hIcon = LoadIcon(NULL, IDI_APPLICATION);
    // WndClsEx.hCursor = LoadCursor(NULL, IDC_ARROW);
    // WndClsEx.hbrBackground = (HBRUSH)GetStockObject(WHITE_BRUSH);
    WndClsEx.lpszClassName = name;
    WndClsEx.hInstance = hInstance;
    // WndClsEx.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
    RegisterClassEx(&WndClsEx);

    return CreateWindow(name, name, WS_VISIBLE | WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, hInstance, NULL);
}


HDC GLWindow(HWND hWnd)
{
    /// Creates & "makes current" (activates) an OpenGL target inside the given window
    HDC hDC = GetDC(hWnd);
    static PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR), 1, PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0 };
    SetPixelFormat(hDC, ChoosePixelFormat(hDC, &pfd), &pfd);
    wglMakeCurrent(hDC, wglCreateContext(hDC));
    return hDC;
}


void GLPixelSpace(HWND hWnd)
{
    /// Make viewport coordinates match pixel coordinates; requires a "Current" GL context (wglMakeCurrent, as initialized for us by GLWindow).
    glTranslatef(-1.0f, -1.0f, 0.0f);
    // Get the window's draw-able area
    RECT area;
    GetClientRect(hWnd, &area);
    // Compute scale so 1 pixel matches 1 unit.
    glScalef(2.0f / (area.right - area.left), 2.0f / (area.bottom - area.top), 1.0f);
}


void GLPixelSpace_FlipY(HWND hWnd)
{
    /// Make viewport coordinates match pixel coordinates; requires a "Current" GL context (wglMakeCurrent, as initialized for us by GLWindow).
    glTranslatef(-1.0f, 1.0f, 0.0f);
    // Get the window's draw-able area
    RECT area;
    GetClientRect(hWnd, &area);
    // Compute scale so 1 pixel matches 1 unit.
    glScalef(2.0f / (area.right - area.left), -2.0f / (area.bottom - area.top), -1.0f);
}


void Draw(FTGLTextureFont& font)
{
    // Draw background
    glClear(GL_COLOR_BUFFER_BIT);

    // Draw foreground
    // Set foreground color
    glColor3f(1.0f, 0.9f, 0.8f);
    glRecti(0, 0, 200, 100);
    // Set foreground color
    glColor3f(0.2f, 0.3f, 0.4f);
    font.Render("Hello World!");
}


int ExecSimple(HDC hDC)
{
    /// Simple render loop
    FTGLTextureFont font("C:/Windows/Fonts/Roboto-Light.ttf");
    font.FaceSize(32);
    do
    {
        Draw(font);
        SwapBuffers(hDC);
    } while(true);
    return 0;
}


int Exec(HDC hDC)
{
    /// Render loop with windows messages
    FTGLTextureFont font("C:/Windows/Fonts/Roboto-Light.ttf");
    font.FaceSize(32);
    MSG msg;
    do
    {
        if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            if(msg.message == WM_QUIT)
            {
                return msg.wParam;
            }
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        Draw(font);
        SwapBuffers(hDC);
    } while(true);
    return msg.wParam;
}


INT WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
#ifdef SIMPLE
    HWND hWnd = Window(hInstance);
    HDC hDC = GLWindow(hWnd);
    GLPixelSpace(hWnd);
    glClearColor(0.1f, 0.2f, 0.3f, 1.0f); // Set background color
    return ExecSimple(hDC);
#else
    HWND hWnd = CustomWindow("UI", hInstance, CustomWindowProc);
    HDC hDC = GLWindow(hWnd);
    GLPixelSpace(hWnd);
    glClearColor(0.1f, 0.2f, 0.3f, 1.0f); // Set background color
    return Exec(hDC);
#endif
}
Python Range Collection

On several occasions in the past year I needed to describe a set of (time) ranges, and find the gaps in between them.
I used it for finding pauses during animations, to trigger different events, fill up the animation or simply hide non-animated props.
Instead of doing something slow & memory heavy (like storing and comparing every individual frame), I use these classes:

class Range(object):
    '''
    Describes a range of integer values with an interval of +1,
    describing a set similar to python's range(int start, int end).

    Start is inclusive, end is exclusive, like with for loops.
    '''
    def __init__(self, start, end):
        self.start = min(int(start), int(end))
        self.end = max(int(start), int(end))
        if self.start == self.end:
            raise ValueError('Range() can not express a range of size 0; did you mean TimeRange()?')
    def intersects(self, other):
        return other.start <= self.end and other.end >= self.start
    def combine(self, other):
        self.start = min(self.start, other.start)
        self.end = max(self.end, other.end)
    def __repr__(self):
        return 'range[%s,%s)'%(self.start, self.end)
    def __iter__(self):
        for i in xrange(self.start, self.end):
            yield i


class TimeRange(object):
    '''
    A Range() with inclusive end-value; allows for start == end.
    See Range() and RangeCollection() for more information.
    '''
    def __init__(self, start, end):
        self.start = min(int(start), int(end))
        self.end = max(int(start), int(end))
    def intersects(self, other):
        return other.start <= self.end + 1 and other.end + 1 >= self.start
    def combine(self, other):
        self.start = min(self.start, other.start)
        self.end = max(self.end, other.end)
    def __repr__(self):
        return 'range[%s,%s]'%(self.start, self.end)
    def __iter__(self):
        for i in xrange(self.start, self.end + 1):
            yield i


class RangeCollection(object):
    '''
    A list of Range() or TimeRange() objects that is consolidated so not a single instance
    overlaps another one. Allows for consolidated range iteration using segments() and
    remaining gap iteration using gaps().
    '''
    def __init__(self):
        self.segments = []

    def addSegment(self, inRange):
        state = None
        for i in xrange(len(self.segments)):
            segment = self.segments[i]
            if segment.intersects(inRange):
                if state is not None:
                    # If we found two consecutive intersections we close the gap.
                    state.combine(segment)
                    self.segments.pop(i)
                    return
                # If we found the first intersection we check the next node as well.
                state = segment
                continue
            if state is not None:
                # no second consecutive intersection; stop scanning and extend below
                break
        if state is not None:
            # If we only found the first intersection we extend the node.
            state.combine(inRange)
            return
        # if we found no intersections we append the new data.
        self.segments.append(inRange)

    def gaps(self, inStart=None, inEnd=None, wantsInclusiveRange=False):
        self.segments.sort(key=lambda x:  x.start)
        offset = 0
        if inStart is None:
            start = self.segments[0].start
        else:
            start = inStart
            while self.segments[offset].start < inStart:
                offset += 1
        end = None
        for i in xrange(offset, len(self.segments)):
            end = self.segments[i].start
            if end - start == 0:
                start = self.segments[i].end + isinstance(self.segments[i], TimeRange)
                continue
            if wantsInclusiveRange:
                yield TimeRange(start, end-1)
            else:
                yield Range(start, end)
            start = self.segments[i].end + isinstance(self.segments[i], TimeRange)
        if inEnd is not None:
            # the trailing gap runs from the end of the last segment (start) up to inEnd
            if wantsInclusiveRange:
                yield TimeRange(start, inEnd)
            else:
                yield Range(start, inEnd)

    def iterGapFrames(self, inStart=None, inEnd=None, wantsInclusiveRange=False):
        for gap in self.gaps(inStart, inEnd, wantsInclusiveRange):
            for i in gap:
                yield i

    def iterRangeFrames(self, inStart=None, inEnd=None):
        self.segments.sort(key=lambda x:  x.start)
        for segment in self.segments:
            for i in segment:
                if inStart is not None and i < inStart:
                    continue
                if inEnd is not None and i > inEnd:
                    continue
                yield i


if __name__ == '__main__':
    timeline = RangeCollection()
    testData = [(2, 5), (4, 8), (2, 3), (44, 60), (10, 43), (80, 90), (100, 110), (200, 210), (220, 230), (210, 220), (300, 310), (320, 330), (311, 319)]
    for timeRange in testData:
        timeline.addSegment(TimeRange(*timeRange))
    print timeline.segments
    print list(timeline.gaps(inStart=20, inEnd=400, wantsInclusiveRange=True))
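For a quicker sanity check, here is a minimal case worked through by hand:

rc = RangeCollection()
rc.addSegment(TimeRange(0, 5))
rc.addSegment(TimeRange(10, 15))
print list(rc.gaps())  # prints [range[6,10)]: the exclusive gap between the two inclusive time ranges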
Python profiler output in QT GUI

I wanted to sort my profiler results (using cProfile) but found using the pstats.Stats objects rather complicated.

Profiling is easy:

import cProfile
cProfile.run('''
MY CODE AS A STRING
''')

The profiler prints its report to the console, so to catch it in a string instead we can temporarily replace python’s console output.

import cProfile
import sys
import cStringIO

backup = sys.stdout
sys.stdout = cStringIO.StringIO()

cProfile.run('''
MY CODE AS A STRING
''')

profileLog = sys.stdout.getvalue()
sys.stdout.close()
sys.stdout = backup

The original sys.stdout is also stored as sys.__stdout__,
but maybe at the point you are doing this the host application already has its own
stdout in use, so let’s just backup and restore explicitly so we’re certainly not breaking stuff.

Now the output is a huge ascii table of stats. By converting that to a QTableWidget we can
easily sort and analyse this data. So first let’s set up the table…

from PyQt4.QtCore import *
from PyQt4.QtGui import *

widget = QTableWidget()
widget.setColumnCount(6)
widget.setHorizontalHeaderLabels(['ncalls', 'tottime', 'percall', 'cumtime', 'percall', 'filename:lineno(function)'])

I manually copied the header names from the profile log, you may make them more sensible at your leisure… The widget needs to have its size set up before usage, so we can estimate the number of rows beforehand instead of resizing it in every iteration:

logLines = profileLog.split('\n')
widget.setRowCount(len(logLines))

Now this is a bit ugly, we essentially iterate all the lines and put their respective values into the widget. We’re splitting by whitespace with a regex.

enabled = False
y = 0
for i in range(len(logLines)):
    ln = logLines[i].strip()
    # skip empty lines
    if not ln:
        continue
    # start real iteration only after the header information
    if not enabled:
        if ln.lower() == r'ncalls  tottime  percall  cumtime  percall filename:lineno(function)'.lower():
            enabled = True
        continue
    segments = re.split('\s+', ln)
    c = len(segments)
    if c > 6:
        c = 6
        segments[5] = ' '.join(segments[5:])
    for x in range(c):
        item = QTableWidgetItem(segments[x])
        widget.setItem(y, x, item)
    y += 1

We manually track the row index because header lines, empty lines and otherwise ignored lines don’t produce a row.
Last we strip off the unused rows (remember we assumed the line count as the row count), enable sorting and show our widget.

widget.setRowCount(y)
widget.setSortingEnabled(True)
widget.show()

For convenience I wanted to make this a function that I could import and use instead of cProfile.run() at any given time. So this is my full code:

import re
import sys
import cProfile
import cStringIO
from PyQt4.QtCore import *
from PyQt4.QtGui import *


def profileToTable(code, globals=None, locals=None):
    backup = sys.stdout
    sys.stdout = cStringIO.StringIO()
    
    if globals is None and locals is None:
        cProfile.run(code)
    else:
        # honour the globals/locals arguments when they are given
        cProfile.runctx(code, globals or {}, locals or {})
    
    profileLog = sys.stdout.getvalue()
    sys.stdout.close()
    sys.stdout = backup
    
    widget = QTableWidget()
    widget.show()
    widget.setColumnCount(6)
    widget.setHorizontalHeaderLabels(['ncalls', 'tottime', 'percall', 'cumtime', 'percall', 'filename:lineno(function)'])
    
    logLines = profileLog.split('\n')
    widget.setRowCount(len(logLines))
    
    enabled = False
    y = 0
    for i in range(len(logLines)):
        ln = logLines[i].strip()
        # skip empty lines
        if not ln:
            continue
        # start real iteration only after the header information
        if not enabled:
            if ln.lower() == r'ncalls  tottime  percall  cumtime  percall filename:lineno(function)'.lower():
                enabled = True
            continue
        segments = re.split('\s+', ln)
        c = len(segments)
        if c > 6:
            c = 6
            segments[5] = ' '.join(segments[5:])
        for x in range(c):
            item = QTableWidgetItem(segments[x])
            widget.setItem(y, x, item)
        y += 1
    return widget

We must cache the returned widget in memory, because otherwise python’s garbage collection will delete it and then Qt will close it.

widget = profileToTable('re.compile("foo|bar")')

After that you may wish to add a search bar so you can look for specific functions that you wish to check for potential improvements or suspicious times. At least I did… simple Qt stuff! QTableWidget has a search by (partial) string utility as well as hide and show row functions, so a simple set of loops allows us to select and filter the table.

import functools

def filterTable(tableWidget):
    main = QWidget()
    layout = QVBoxLayout()
    main.setLayout(layout)
    
    search = QLineEdit()
    layout.addWidget(search)
    
    layout.addWidget(tableWidget)
    
    def filterTable(widget, text):
        # there seem to be many duplicate entries when we go from a string to an empty string
        rows = []
        if text:
            showItems = widget.findItems(text, Qt.MatchContains)
            for i in showItems:
                rows.append(i.row())
            rows.sort()
        allrows = range(widget.rowCount())
        for i in range(len(rows)-1, -1, -1):
            widget.showRow(rows[i])
            allrows.pop(rows[i])
        for i in allrows:
            widget.hideRow(i)
        
    search.textChanged.connect(functools.partial(filterTable, tableWidget))
    
    main.show()
    return main

This function takes the result of the profile function as its table widget, so it simply builds on top of what’s already there. Again the returned widget must be cached. You may also make a utility function like so:

# regular usage example
widget = profileToTable('re.compile("foo|bar")')
wrapper = filterTable(widget)

def profileToFilterTable(code, globals=None, locals=None):
    return filterTable(profileToTable(code, globals, locals))

# with utility
wrapper2 = profileToFilterTable('re.compile("foo|bar")')
Computing 3D polygon volume

I wanted to compute the volume of a mesh in Maya. It was surprisingly simple and elegant to do as well! Using the Divergence Theorem (which is unreadable to me when written mathematically) the only constraints are: the mesh must be closed (no holes, borders or tears; Maya’s fill-hole can help) and the mesh must be triangulated (using the Maya API you can already query triangles, so no need to manually triangulate in this case).

Now imagine computing the volume of a prism: all you need is the area of the base triangle times the height. To compute the base area I use Heron’s formula as described here.

from math import sqrt

def distance(a, b):
    return sqrt((b[0]-a[0])*(b[0]-a[0])+
      (b[1]-a[1])*(b[1]-a[1]))

def getTriangleArea(pt0, pt1, pt2):
    a = distance(pt1, pt0)
    b = distance(pt2, pt0)
    c = distance(pt2, pt1)
    s = (a+b+c) * 0.5
    return sqrt(s * (s-a) * (s-b) * (s-c))

Now notice how this only computes the triangle area in the XY plane. This works simply because the 2D projection of the triangle area is all we need. The height is then defined by the triangle’s center Z.

def getTriangleHeight(pt0, pt1, pt2):
    return (pt0[2] + pt1[2] + pt2[2]) * 0.33333333

Consider any triangle, extrude it down to the floor, and see that this works for any prism defined along the Z axis this way.

A rotated triangle’s area in the XY plane is smaller than the actual area, but by using the face-center the volume will remain accurate.

[image: prisms]

Now these prisms have the same volume. The trick is to consider every triangle as such a prism and call getTriangleVolume on each triangle. The last problem is negative space; for this we look at the facing of the triangle (the sign of the 2D cross product below). I rely on Maya’s winding and normals, so the volume comes out negative if all normals are inverted, but you can compute them yourself all the same.

def getTriangleVolume(pt0, pt1, pt2):
    area = getTriangleArea(pt0, pt1, pt2) * getTriangleHeight(pt0, pt1, pt2)
    # this is an optimized 2D cross product
    sign = (pt1[0]-pt0[0]) * (pt2[1]-pt0[1]) - (pt1[1]-pt0[1]) * (pt2[0]-pt0[0])
    if not sign:
        return 0
    if sign < 0:
        return -area
    return area

[image: prisms2]

The selected wireframe shows the prism defined by the bottom triangle; because the normal’s Z points downwards it becomes negative volume. So adding the initial prism volume and this prism volume gives the accurate volume of this cut-off prism. Now consider this:

[image: prisms3]

To avoid confusion I placed the object above the grid; but below the grid a negative normal * a negative height will still add volumes appropriately.

So that's it.

from math import sqrt
from maya.OpenMaya import MItMeshPolygon, MDagPath, MSelectionList, MPointArray, MIntArray


def distance(a, b):  
    return sqrt((b[0]-a[0])*(b[0]-a[0]) +   
      (b[1]-a[1])*(b[1]-a[1]))  
      
def getTriangleArea(pt0, pt1, pt2):  
    a = distance(pt1, pt0)  
    b = distance(pt2, pt0)  
    c = distance(pt2, pt1)  
    s = (a+b+c) * 0.5  
    return sqrt(s * (s-a) * (s-b) * (s-c))  
    
def getTriangleHeight(pt0, pt1, pt2):  
    return (pt0[2] + pt1[2] + pt2[2]) * 0.33333333  

def getTriangleVolume(pt0, pt1, pt2):  
    area = getTriangleArea(pt0, pt1, pt2) * getTriangleHeight(pt0, pt1, pt2)  
    # this is an optimized 2D cross product  
    sign = (pt1[0]-pt0[0]) * (pt2[1]-pt0[1]) - (pt1[1]-pt0[1]) * (pt2[0]-pt0[0])  
    if not sign:  
        return 0  
    if sign < 0:  
        return -area  
    return area

def getPolygonVolume(shapePathName):
    volume = 0
    li = MSelectionList()
    li.add(shapePathName)
    path = MDagPath()
    li.getDagPath(0, path)
    iter = MItMeshPolygon(path)
    while not iter.isDone():
        points = MPointArray()
        iter.getTriangles(points, MIntArray())
        for i in range(0, points.length(), 3):
            volume += getTriangleVolume(points[i], points[i+1], points[i+2])
        iter.next()
    return volume
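For example, on a fresh 2x2x2 cube this should print a value very close to 8.0 (faces parallel to the Z axis contribute nothing because their XY projections are degenerate):

from maya import cmds

cube = cmds.polyCube(w=2, h=2, d=2)[0]
shape = cmds.listRelatives(cube, s=True, f=True)[0]
print getPolygonVolume(shape)  # ~8.0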
Maya plugin factory

These are 2 utilities I mostly want to use in other projects myself and now they’re easier to find.

Utility to check the variable ‘stat’ and report & return if something’s wrong. I always have one MStatus running through a function and it’s always named ‘stat’… The reported error includes the file and line number as well!

utils.hpp

#ifndef UTILS_HPP


#define THROWSTAT if(stat != MS::kSuccess){ stat.perror(MString(__FILE__) + " line " + __LINE__); return stat; }
#define THROWSTATMSG(MSG) if(stat != MS::kSuccess){ stat.perror(MString(__FILE__) + " line " + __LINE__ + ": " + MSG); return stat; }


#define UTILS_HPP
#endif

This is the plugin factory. It registers the plugin with “Unknown” as the author; you may wish to change that… There are three interesting bits here.

First is the inclusion of MFnPlugin. Normally you can’t include this file twice, because things get defined that cause multiply defined objects and linking errors. With these defines the header is included safely and all we get is access to the MFnPlugin class, which is all we need: great and safe!

Next there’s the initializePlugin function. Here you can use the macros to register nodes and commands.
> REGISTERNODE registers a node by its class name. Depending on whether you want Maya node and command names to start with a capital letter or not, you may have to drop the common C++ convention of starting class names with a capital.

And the third interesting bit is that all the rest is automated: no uninitialize, no struggling with IDs, all that stuff. The only bit you might want to change is the base value of __id. It is the first ID and every new node increments it by 1. I don’t know where to look up which IDs are actually guaranteed to be free, so this one is just kind of random.

main.cpp

#include <vector>

#undef NT_PLUGIN
#define MNoVersionString
#include <maya/MFnPlugin.h>
#undef MNoVersionString
#define NT_PLUGIN


#include "utils.hpp"


int __id = 0x00208600;
std::vector<int> ids;
int id()
{
	ids.push_back(__id);
	return __id++;
}

std::vector<MString> cmds;


#define REGISTERNODE(NODE) stat = plugin.registerNode(#NODE, id(), NODE::sCreator, NODE::sInitialize); THROWSTATMSG("RegisterNode failed, is the TypeID already in use?")
#define REGISTERNODETYPE(NODE, TYPE) stat = plugin.registerNode(#NODE, id(), NODE::sCreator, NODE::sInitialize, TYPE); THROWSTAT
#define REGISTERCOMMAND(COMMAND) cmds.push_back(#COMMAND); stat = plugin.registerCommand(#COMMAND, COMMAND::sCreator, COMMAND::sNewSyntax); THROWSTAT


MStatus initializePlugin(MObject& pluginObj)
{
	MFnPlugin plugin(pluginObj, "Unknown", "1.0", "any");
	MStatus stat;
	
	REGISTERNODE(MyNode);
	REGISTERNODETYPE(MyShape, MPxNode::kLocatorNode);
	REGISTERCOMMAND(MyFunction);

	return stat;
}


MStatus uninitializePlugin(MObject& pluginObj)
{
	MFnPlugin plugin(pluginObj);
	MStatus stat;
	for(size_t i = ids.size(); i-- > 0;)
	{
		stat = plugin.deregisterNode(ids[i]); THROWSTAT
	}
	for(size_t i = cmds.size(); i-- > 0;)
	{
		stat = plugin.deregisterCommand(cmds[i]); THROWSTAT
	}
	return stat;
}

So yes, this may make your plugin-registering life easier. I was messing about with automating other things as well, such as getting and setting plugs, but I feel that got a bit wonky in the end. May be continued…
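
Not part of the factory itself, but as a hedged sketch of how the result gets used from Maya once compiled (the .mll name below is hypothetical; the node and command names simply follow the class names fed to the macros above):

from maya import cmds

cmds.loadPlugin('MyPlugin.mll', qt=True)  # hypothetical plugin file name
node = cmds.createNode('MyNode')          # as registered by REGISTERNODE(MyNode)
cmds.MyFunction()                         # as registered by REGISTERCOMMAND(MyFunction)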

Natural IK Chain

So RiggingDojo.com shared this video series from Yutaca Sawai:

I decided to test it and quickly made a script to generate a chain of n segments.
Essentially the left chain is the important one (bold in the video) and the rest is just a construct to propagate a single rotation into a full-fledged motion.

Open Maya, run this Python script, see for yourself how one rotation and a bunch of parented joints & ikhandles can generate complex motion!

from maya import cmds

def joint(x,y,z):
    jt = cmds.joint()
    cmds.xform(jt, t=[x,y,z], ws=True)
    return cmds.ls(jt, l=True)[0]
    
def ikHandle(start, end):
    sl = cmds.ls(sl=True)
    cmds.select(start, end)
    ikh = cmds.ikHandle()[0]
    cmds.select(sl)
    return ikh

def constructBase(cycles = 10):
    cmds.select(cl=True)
    rotator = joint(1,0,0)
    
    #demonstrative animation
    cmds.currentTime(0)
    cmds.setKeyframe('%s.rz'%rotator)
    cmds.currentTime(60)
    cmds.setAttr('%s.rz'%rotator, 360)
    cmds.setKeyframe('%s.rz'%rotator)
    cmds.playbackOptions(min=0, max=60)
    
    root = joint(0,1,0)
    chain2root = joint(-2,-1,0)
    cmds.select(root)
    joint(-2,-1,0)
    anchor = joint(0,-3,0)
    cmds.group(ikHandle(root, anchor)) #group to make the ik handle fixed in place
    
    #chain 1
    cmds.select(anchor)
    ikGroups1 = []
    parents1 = []
    for i in range(cycles):
        ikGroups1.append([joint(2,-1 - i * 8,0)])
        joint(2,-5 - i * 8,0)
        ikGroups1[-1].append(joint(-2,-5 - i * 8,0))
        parents1.append(joint(-2,-9 - i * 8,0))

    #chain 2
    cmds.select(chain2root)
    ikGroups2 = []
    parents2 = []
    for i in range(cycles):
        parents2.append(joint(-2,-5 - i * 8,0))
        ikGroups2.append([joint(2,-5 - i * 8,0)])
        joint(2,-9 - i * 8,0)
        ikGroups2[-1].append(joint(-2,-9 - i * 8,0))
    for i in range(len(ikGroups2)):
        cmds.parent(ikHandle(*ikGroups2[i]), parents1[i])
        
    for i in range(len(ikGroups1)):
        cmds.parent(ikHandle(*ikGroups1[i]), parents2[i])


constructBase()
Classes & Javascript relations

Javascript uses objects for everything; these objects are based on prototypes, their definitions, which are themselves objects.

You can create an object and extend its prototype,
then instantiate this object to get an instance of that prototype. Just like any class definition & instance, you can then have the instance operate independently of the prototype.

classes
So the first thing is the class definition; in javascript this is a function. You give the function your class name, and any member variables & default values can be set inside it.

function BaseClass()
{
    this.a = 'b';
}

Now to instantiate the class, much like C# syntax, you do the following:

var myObject = new BaseClass();

‘myObject’ will now have an ‘a’ property, with a value of ‘b’.

functions
To add functions we must extend the prototype:

function BaseClass()
{
    this.a = 'b'
}
BaseClass.prototype.log = function()
{
    console.log(this.a);
}

And similarly, to have this function print our ‘a’ value to the console (a global variable most browsers provide for debugging), we can simply use this:

var myObject = new BaseClass();
myObject.log();

static properties
Private static properties can be local variables in between the prototype functions; public static properties can be added to the class object instead of its prototype. Contrary to other languages these properties cannot be accessed through ‘this’ at all, and the private statics are really a hack: by placing variables in a temporary scope they stay inaccessible even when the prototype is dynamically altered later on.

public statics

BaseClass.staticLogSomething = function()
{
    console.log('something');
}

private statics
These are often wrapped in a surrounding function like so (notice the return!):

var BaseClass = function()
{
    var privateStatic = 0;
    function BaseClass()
    {
        this.a = 'b'
    }
    BaseClass.prototype.log = function()
    {
        console.log(this.a);
    }
    BaseClass.staticLogSomething = function()
    {
        console.log(privateStatic);
        privateStatic += 1;
    }
    return BaseClass;
}();

subclassing
Subclassing actually means creating a new class and then building its prototype from the base class prototype, so we share its functions and even know its constructor.

Then inside the constructor function we can use the base class constructor to inherit all initialized member variables. The hacky statics as described above won’t transfer because they are members of the base class definition object, which is a level above the prototype (which is the bit we inherit).

function SubClass()
{
    BaseClass.call(this);
}
SubClass.prototype = Object.create(BaseClass.prototype);

That’s all there is to it. Now we can extend this function by adding properties, overriding properties, etcetera. This second subclass overrides the ‘a’ and ‘log’ properties and adds a ‘b’ property which is also logged.

function SubClass2()
{
    BaseClass.call(this);
    this.a = 'c';
    this.b = 'd';
}

SubClass2.prototype = Object.create(BaseClass.prototype);

SubClass2.prototype.log = function()
{
    console.log(this.a);
    console.log(this.b);
}

Now this is some test code, putting these three classes together you can clearly see the functionality:

var a = new SubClass();
a.log();
var c = new SubClass();
c.log(); // to prove the sub class has the base class data & functions
var d = new SubClass2();
d.log(); // to prove the sub class can override this data
console.log(d.a); // to prove things are accessible and 'c' == 'this' inside its functions
d.a = 'f';
console.log(d.a); // to prove we can alter values
var e = new SubClass2();
console.log(e.a); // to prove that that does not affect the prototype

// now let's see what static's do when inherited
var iBase = new BaseClass();
var iSub = new SubClass();
BaseClass.staticLogSomething();
SubClass.staticLogSomething(); // this will trigger an error because staticLogSomething must be accessed through its object, in this case the BaseClass definition object

calling base functions
One last thing to add: when you wish to call a base class function inside your subclass, all you need to do is ‘call’ its function via the prototype and pass in a reference to this (and any other arguments after that).

So essentially SubClass2.log could have been this:

SubClass2.prototype.log = function()
{
    BaseClass.prototype.log.call(this);
    console.log(this.b);
}
Advanced locator

This is another take on the locator. It supports multiple shapes and can have a unique color instead of only Maya’s built-in colors.

locator

It does not:
> Actually draw curves (I just called it that because I usually use degree-1 curves as controls, it just uses GL_LINES).
> Support separate colors per shape. It is in the end one shape node.

It does:
> Save you from the hassle of parenting curve shapes manually and having other scripts break because you suddenly have too many (shape) children.
> Support any color!

Scripts to convert selected curves to a CurveLocator (it samples smooth curves to have enough points so the look is the same):

Plugin:
> Compiled against Maya 2014
MLL file

Source:
> Solution is Visual Studio 2013
Source

The code that made the preview image:

from maya import cmds
cmds.loadPlugin("CurveLocator.mll", qt=True)

#david star
l = cmds.createNode("CurveLocator", n='CurveLocatorShape')
cmds.setAttr('%s.shapes[0].closed'%l, True)
cmds.setAttr('%s.shapes[0].point[0]'%l, 1.8, -1, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[1]'%l, 0.6, -1, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[2]'%l, 0, -2, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[3]'%l, -0.6, -1, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[4]'%l, -1.8, -1, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[5]'%l, -1.2, 0, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[6]'%l, -1.8, 1, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[7]'%l, -0.6, 1, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[8]'%l, 0, 2, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[9]'%l, 0.6, 1, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[10]'%l, 1.8, 1, 0, type='double3')
cmds.setAttr('%s.shapes[0].point[11]'%l, 1.2, 0, 0, type='double3')
cmds.setAttr('%s.color'%l, 0.8, 0.5, 0.1, type='float3')

#joint like shape
from math import sin, cos, pi
l = cmds.createNode("CurveLocator", n='CurveLocatorShape')
cmds.setAttr('%s.shapes[0].closed'%l, True)
cmds.setAttr('%s.shapes[1].closed'%l, True)
cmds.setAttr('%s.shapes[2].closed'%l, True)
for i in range(36):
    cmds.setAttr('%s.shapes[0].point[%s]'%(l, i), cos(i / 18.0 * pi), -sin(i / 18.0 * pi), 0, type='double3')
    cmds.setAttr('%s.shapes[1].point[%s]'%(l, i), cos(i / 18.0 * pi), 0, -sin(i / 18.0 * pi), type='double3')
    cmds.setAttr('%s.shapes[2].point[%s]'%(l, i), 0, cos(i / 18.0 * pi), -sin(i / 18.0 * pi), type='double3')
cmds.setAttr('%s.color'%l, 0.0, 0.87, 0.3, type='float3')

#circle
l = cmds.createNode("CurveLocator", n='CurveLocatorShape')
for i in range(36):
    cmds.setAttr('%s.shapes[0].point[%s]'%(l, i), cos(i / 18.0 * pi), -sin(i / 18.0 * pi), 0, type='double3')
cmds.setAttr('%s.color'%l, 1.0, 0.1, 0.8, type='float3')

#jagged circle
l = cmds.createNode("CurveLocator", n='CurveLocatorShape')
for i in range(36):
    cmds.setAttr('%s.shapes[0].point[%s]'%(l, i), cos(i), -sin(i), 0, type='double3')
cmds.setAttr('%s.color'%l, 1.0, 0.1, 0.8, type='float3')
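
The conversion scripts linked above are not reproduced here, but purely as an illustration of the idea (the helper name and the 36-sample count are assumptions, not the shipped code): sample each selected NURBS curve at a number of parameters and write the samples into the shapes[] array of a fresh CurveLocator, marking non-open curves as closed.

from maya import cmds

def curvesToCurveLocator(samples=36):
    # hypothetical sketch, not the downloadable script
    curves = cmds.ls(sl=True, dag=True, type='nurbsCurve', l=True) or []
    if not curves:
        return None
    loc = cmds.createNode('CurveLocator', n='CurveLocatorShape')
    for shapeIndex, curve in enumerate(curves):
        start = cmds.getAttr('%s.minValue' % curve)
        end = cmds.getAttr('%s.maxValue' % curve)
        # closed/periodic curves become closed shapes
        if cmds.getAttr('%s.form' % curve) > 0:
            cmds.setAttr('%s.shapes[%s].closed' % (loc, shapeIndex), True)
        for i in range(samples):
            param = start + (end - start) * i / float(samples - 1)
            p = cmds.pointOnCurve(curve, pr=param, p=True)
            cmds.setAttr('%s.shapes[%s].point[%s]' % (loc, shapeIndex, i),
                         p[0], p[1], p[2], type='double3')
    return loc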
Animation curve interpolation

Many applications use angle-based tangents instead of proper weighted tangents with bezier interpolation. Maya can support weighted tangents, and you’ll notice that although they require more performance to evaluate, they are much, much more flexible.

Maya without weighted tangents & other software such as Unity3, apparently facefx and I think also Motion Builder, use radians to define incoming and outgoing tangents. These are interpolated using Hermite curves, which – if you Google them – are explained in a very confusing mathematical way.

So here’s a python function taking two key objects with these attributes: ‘time’ <float>, time in an arbitrary unit (I use seconds); ‘value’ <float>, the value at this time; ‘inAngleRad’ <float>, radians of the incoming tangent; ‘outAngleRad’ <float>, radians of the outgoing tangent.

def __interpolateCubicHermiteSpline(self, key0, key1, worldTime):
    # http://en.wikipedia.org/wiki/Cubic_Hermite_spline #
    duration = key1.time - key0.time
    parameter = (worldTime - key0.time) / duration
    
    p0 = key0.value
    m0 = (p0 + sin(key0.outAngleRad))
    p1 = key1.value
    m1 = (p1 + sin(key1.inAngleRad))
    # reusable time powers
    tt = parameter * parameter
    ttt = parameter * tt
    ttt2 = ttt * 2
    tt3 = tt * 3
    # Hermite basis functions
    h00t = ttt2 - tt3 + 1
    h10t = ttt - tt*2 + parameter
    h01t = -ttt2 + tt3
    h11t = ttt - tt
    
    return h00t * p0 + h10t * duration * m0 + h01t * p1 + h11t * duration * m1
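
Purely as an illustration of how the key objects can look (the Key container and the sample values below are assumptions, not part of the original), pasted in the same module as the function above; self is never used, so passing None works:

from math import sin
from collections import namedtuple

# hypothetical key container exposing the attributes the function reads
Key = namedtuple('Key', ['time', 'value', 'inAngleRad', 'outAngleRad'])

key0 = Key(time=0.0, value=0.0, inAngleRad=0.0, outAngleRad=0.5)
key1 = Key(time=2.0, value=1.0, inAngleRad=-0.25, outAngleRad=0.0)

# sample halfway between the two keys
print(__interpolateCubicHermiteSpline(None, key0, key1, 1.0))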
Envelope skinning

Picked up the old idea of envelope skinning today.. and it totally works! Screenshot is without modifying the initial skin bind. Currently working on making it non-destructive (being able to move envelopes after binding and still keep weight paint changes).

Image2

I did a prototype for this using MEL in school years back; then Autodesk announced the exact same feature, so I dropped finishing it. Later, when I tried it, it didn’t work, and it never changed either. Now I had some time: it’s a lot cleaner, made in Python, and hopefully it will offer some additional flexibility to the skinning process.

Maya UI (ELF) wrapper

This UI wrapper was originally created to avoid PyQt installation (and instability) and I recently had the chance to do some bug-fixes and port it to Maya 2010.

What it does is use simple maya UI elements (from the cmds module) but wrap them in a more user-friendly and editable way. There is not much to see, but it is made to be used! Using form layouts in an automated way saves the headache of making things work and align neatly, and comes with a nice perk: you can have row layouts with a dynamic number of columns (because they are actually form layouts).

Altogether this makes interface code shorter and more logical by wrapping the static native Maya UI system (to be fair: this system is old and pretty decent, but the amount of exposed API is extremely limited, so for a third party, like me and probably you, it is very hard to use).

Click to get a zip:

PythonUI

Run the install to add the extracted folder to the PYTHONPATH, then go into ElfUI/icons/ to find another useful BAT example: drag a PNG onto it to get an XPM out of it. You do need to open it and edit it to point to the right path though (it assumes x64 Maya 2010 in the default location, as you will notice once you open it).

Last but not least, have a little example script that inherits a window and adds some elements into it:

import ElfUI


class UI( ElfUI.Window ):
    def __init__(self):
        super(UI, self).__init__('Easy interface.')
        self.size = [200,300]
        
        self.collapsable = ElfUI.FrameLayout(self, self.layout, 'File list')
        ElfUI.Label(self, self.collapsable, 'Label 1')
        ElfUI.Label(self, self.collapsable, 'Label 2')
        
        self.header = ElfUI.RowLayout(self, self.layout)

        btnA = ElfUI.Button(self, self.header, 'A', None, [16, 32], 'Prints the letter a!')
        btnA.AppendClicked(self._PrintA)
        self.header.AddChild(btnA)
        
        btnB = ElfUI.Button(self, self.header, 'B', None, [32, 16], 'Prints the letter b!')
        btnB.AppendClicked(self._PrintB)
        self.header.AddChild(btnB)
        
    def _PrintA(self):
        print('a')
        
    def _PrintB(self):
        print('b')


UI().show()
Python path from batch

Occasionally you may wish to package your tools for people outside your general pipeline. I wrote this dirty batch script that registers the folder from which it is run on the PYTHONPATH. This way a user can extract and place a folder where desired, run this magical “setup.bat” and perhaps run some inline code (like import Pfx_MyPackage.Setup;) from maya that initializes the remainder of the tool (by generating shelves and other necessities).

We can cheat some more if we wish to copy files from the Setup (icons!), because the setup’s __file__ variable points to the current script file. Knowing that the user extracted an archive as-is, ‘%s/icons/’%os.path.dirname(__file__) may well give us the path to all shelf icons (to be moved to maya’s own icon folder).

Enjoy just the batch file for now…

@setlocal enableextensions enabledelayedexpansion
@echo off

IF "%PYTHONPATH%" == "" GOTO CREATE


:APPEND
set NEWPATH=%CD%;%PYTHONPATH%
set NEWPATH=!NEWPATH:*%CD%;=!
setx PYTHONPATH "%NEWPATH%"
GOTO FINALIZE


:CREATE
setx PYTHONPATH "%CD%"


:FINALIZE
echo %PYTHONPATH%
pause
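
For the Maya-side Setup module hinted at above, a minimal sketch (the package layout, the icons folder and the install function are all assumptions): locate the icons next to __file__ and copy them into Maya’s user icon folder.

# hypothetical Pfx_MyPackage/Setup.py
import os
import shutil
from maya import cmds


def install():
    # icons shipped next to this file, copied to the user prefs icon folder
    iconSource = os.path.join(os.path.dirname(__file__), 'icons')
    iconTarget = cmds.internalVar(userBitmapsDir=True)
    for name in os.listdir(iconSource):
        shutil.copy(os.path.join(iconSource, name), iconTarget)


install()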
Maya Scene Assembly Wrapper

My ex-classmate Freek Hoekstra was asking about scripting with scene assembly nodes, as it appeared to be lacking documentation and generally didn’t work.

So I felt up to the challenge and with some trial and error was able to create an assemblyDefinition with working representations (the trick is to set all attributes or it will disable the entry).

The difficult part was the assemblyReference. It appeared to import the files rather than referencing them. As I finally found in the assemblyReference.cpp source code this is in fact what should happen.

The assemblyReference imports the file you want to reference, looks up the assemblyDefinition node it just imported, then copies its attributes and deletes any new nodes it found. Problem is: it can’t find the assemblyDefinition and doesn’t clean up after itself. So that bit I did manually in python, by essentially tracking the difference in ALL scene nodes before and after referencing the file. If there are any new nodes the referencing went wrong and I attempt to do it manually. At the very least my wrapper DOES clean up and prints some more errors if no assemblyDefinition was found.

This code has no interface to go with it yet; I first want to add some more features before doing so (such as screenshots!). What the wrapper code does support, however, is single-line exporting of multiple selected groups, each becoming another representation of one object (imagine a file containing all LODs of an asset). It also supports exporting to different types (maya scene, alembic, gpucache).

Then there’s a single line to save an assetDefinition as a separate file (containing just the one assetDefinition node) which is then ready for referencing; creating an assetReference from a file path is another one-liner.

Please look at and try out the examples at the bottom: you could create a sphere and a cube, select them both and run all the code at once. This should leave you with a folder & 3 files exported next to the current scene, as well as an assetDefinition and assetReference node.

To be completely frank there’s also one thing seriously lacking: changing the definition file of the created assemblyReference node from the attribute editor does not work, as it results in errors identical to the ones this wrapper fixes. AssetReferences created by the maya ‘create->scene assembly->assembly reference’ button don’t suffer this problem, but I don’t know the code that lies behind it.

#
# Resources used:
# cmds.listAttr('dagAsset1.representations', multi=True)
# C:\Program Files\Autodesk\Maya2014\devkit\plug-ins\sceneAssembly
# C:\Program Files\Autodesk\Maya2014\Python\Lib\site-packages\maya\app\sceneAssembly
# http://docs.autodesk.com/MAYAUL/2013/ENU/Maya-API-Documentation/index.html?url=cpp_ref/hierarchy.html,topicNumber=cpp_ref_hierarchy_html
#

                
import os
import os.path
from maya import cmds
from maya.OpenMaya import *
from maya.OpenMayaMPx import *


cmds.loadPlugin('AbcExport.mll', qt=True)
cmds.loadPlugin('AbcImport.mll', qt=True)
cmds.loadPlugin('sceneAssembly.mll', qt=True)


class Enum():
    '''Bare bones Enum implementation for python 2'''
    def __init__(self, *args):
        
        self.reverse_mapping = {}
        self.__dict = {}
        
        for i in range(len(args)):
            self.reverse_mapping[i] = args[i]
            self.__dict[args[i]] = i
    
    def keys(self):
        return self.__dict.keys()
    
    def __getattr__(self, sAttribname):
        try:
            return self.__dict[sAttribname]
        except:
            raise AttributeError



#Export types, determine what function to use (abcExport, gpuCache, file)
SAExportType = Enum( 'Alembic', 'GpuCache', 'Scene' )

#Reference types, defined by the plugin
SAAssetType = Enum( *cmds.adskRepresentation(q=True, lrt=True) )


class SABase(object):
    '''
    Scene assembly base class, shared functionality
    between reference and definition node wrappers
    '''
    _nodename = None
    _node = None
    
    @property
    def nodename(self):
        return self._nodename
        
    @nodename.setter
    def nodename(self, sNewNodePath):
        if cmds.objExists(sNewNodePath):
            sFullPath = cmds.ls(sNewNodePath, l=True, type=self._wrappedType)
            if not sFullPath:
                cmds.error('Attempting to swap scene assembly node %s with %s, but new node is not of type %s, ignored'%(self._nodename, sNewNodePath, self._wrappedType))
                return
            self._nodename = sFullPath[0]
            li = MSelectionList()
            MGlobal.getSelectionListByName(self._nodename, li)
            obj = MObject()
            li.getDependNode(0, obj)
            if obj.isNull():
                cmds.error('Attempting to swap scene assembly node %s with %s, but MObject could not be found, ignored'%(self._nodename, sNewNodePath))
                return
            self._node = MFnAssembly( obj )
        else:
            cmds.error('Attempting to swap scene assembly node %s with non existing node %s, ignored'%(self._nodename, sNewNodePath))
            return

    @property
    def activeRepresentationName(self):
        return self._node.getActive()
    
    @activeRepresentationName.setter
    def activeRepresentationName(self, sNewName):
        bValidName = False
        
        #validate name
        iaValidIndices = cmds.getAttr('%s.representations'%self.nodename, multiIndices=True)
        if not iaValidIndices:
            iaValidIndices = []
        for iValidIndex in iaValidIndices:
            if sNewName == cmds.getAttr('%s.representations[%s].repName'%(self.nodename, iValidIndex)):
                bValidName = True
        
        if not bValidName:
            cmds.error('Attempting to activate representation %s on assembly node, but %s has no representation with that name, ignored.'%(sNewName, self._nodename))
            return
        
        #set name
        self._node.activate(sNewName)
    
    
    def __init__(self):
        '''
        ABSTRACT CLASS, do not initialize
        '''
        cmds.error('Initializing SABase, but this is an abstract class. You probably intend to use SAReference or SADefinition.')
        return


class SAReference(SABase):
    '''
    Scene assembly helper class to represent a referenced
    asset in code
    
    NOTE: It should be possible to bind this class to an existing
    node when working from existing scenes / data, so when
    extending this class, implement this functionality!
    '''
    
    
    #set the node type for this class, important for error handling
    _wrappedType = 'assemblyReference'
    
    
    def __init__(self, sNodeFullPath=None):
        '''
        @param sNodeFullPath: string, full path name of the
        existing assemblyReference node to bind this object
        instance to.
        '''
        
        #if no argument is given, initialize a blank class
        if sNodeFullPath == None:
            self.nodename = cmds.createNode('assemblyReference')
            return
        
        #else wrap the node given
        if not type(sNodeFullPath) in (unicode, str):
            cmds.error('Trying to initialize SAAsset from %s but argument is not a string'%sNodeFullPath)
            return
        sPath = cmds.ls(sNodeFullPath, type='assemblyReference', l=True)
        if not sPath:
            cmds.error('Trying to initialize SAAsset from %s but argument is not a valid assemblyReference node'%sNodeFullPath)
            return
        self.nodename = sPath[0] 
    
    
    @classmethod
    def CreateFromFile(cls, sFilePath):
        '''
        Given a filepath this creates a reference nodes and connects the path
        It is not capable of reading information of the file beforehand, so just
        like maya's builtin create assembly reference menu it gives errors upon
        importing a file without a reference node and does not obey the LOD saved
        inside the referenced file.
        
        @TODO:
        This function does not work! It just appears to do a regular import...
        '''
        if not os.path.exists(sFilePath):
            cmds.error('Attempting to create scene assembly reference to %s, but file does not exist, ignored.'%sFilePath)
            return
        outInstance = SAReference()
        
        #rename the node
        sFileName = sFilePath.replace('\\','/').rsplit('/',1)[-1].rsplit('.',1)[0]
        outInstance.nodename = cmds.rename(outInstance.nodename, sFileName)
        
        #set the file path
        #POSTLOAD fails and leaves us with a bunch of nodes so
        #let's search for the assemblyDefinition ourselves and
        #keep things clean eh!
        allNodes = cmds.ls(l=True)
        
        #this should work and newNodes should be empty, but it does not work and leaves a mess
        cmds.setAttr('%s.definition'%outInstance.nodename, sFilePath, type='string')
        
        #get file changes
        newNodes = list( set(cmds.ls(l=True))-set(allNodes) )
        if not newNodes:
            cmds.warning('SAReference.CreateFromFile: Reference definition either worked or file was empty. Returning outInstance assuming it is valid.')
            return outInstance
        
        #get assembly definition
        saValidNodes = cmds.ls(newNodes, type='assemblyDefinition', l=True)
        if not saValidNodes or len(saValidNodes) != 1:
            #too many or too few definitions, clean the file
            cmds.delete(newNodes)
            cmds.delete(outInstance.nodename)
            cmds.error('Attempting to set assembly reference file to %s but 0 or more than 1 definition nodes were found. File could not be referenced, assemblyReference node removed.'%sFilePath)
            return
        
        iaValidIndices = cmds.getAttr('%s.representations'%saValidNodes[0], multiIndices=True)
        if iaValidIndices:
            #copy all representations
            for i in iaValidIndices:
                #get
                sRepName = cmds.getAttr('%s.representations[%s].repName'%(saValidNodes[0], i))
                sRepLabel = cmds.getAttr('%s.representations[%s].repLabel'%(saValidNodes[0], i))
                sRepType = cmds.getAttr('%s.representations[%s].repType'%(saValidNodes[0], i))
                sRepData = cmds.getAttr('%s.representations[%s].repData'%(saValidNodes[0], i))
                #set
                cmds.setAttr('%s.representations[%s].repName'%(outInstance.nodename, i), sRepName, type='string')
                cmds.setAttr('%s.representations[%s].repLabel'%(outInstance.nodename, i), sRepLabel, type='string')
                cmds.setAttr('%s.representations[%s].repType'%(outInstance.nodename, i), sRepType, type='string')
                cmds.setAttr('%s.representations[%s].repData'%(outInstance.nodename, i), sRepData, type='string')
                
            #apply last representation as default
            if len(iaValidIndices) != 0:
                iFurthest = iaValidIndices[len(iaValidIndices)-1]
                sRepName = cmds.getAttr('%s.representations[%s].repName'%(saValidNodes[0], iFurthest))
                outInstance.activeRepresentationName = sRepName

        cmds.delete(newNodes)
        return outInstance


class SAAsset(SABase):
    '''
    Scene assembly helper class to create and represent an
    asset in code
    
    NOTE: It should be possible to bind this class to an existing
    node when working from existing scenes / data, so when
    extending this class, implement this functionality!
    '''
    
    
    #set the node type for this class, important for error handling
    _wrappedType = 'assemblyDefinition'
    
    
    def __init__(self, sNodeFullPath=None):
        '''
        @param sNodeFullPath: string, full path name of the
        existing assemblyDefinition node to bind this object
        instance to.
        '''
        
        #if no argument is given, initialize a blank class
        if sNodeFullPath == None:
            self.nodename = cmds.createNode('assemblyDefinition')
            return
        
        #else wrap the node given
        if not type(sNodeFullPath) in (unicode, str):
            cmds.error('Trying to initialize SAAsset from %s but argument is not a string'%sNodeFullPath)
            return
        sPath = cmds.ls(sNodeFullPath, type='assemblyDefinition', l=True)
        if not sPath:
            cmds.error('Trying to initialize SAAsset from %s but argument is not a valid assemblyDefinition node'%sNodeFullPath)
            return
        self.nodename = sPath[0]
    
    def SaveAsAssembly(self):
        '''
        Exports this asset to a file using currentSceneName_alembic_assembly
        
        @returns: string, the new file path

        @TODO:
        support suffixing (don't assume alembic),
        support multiple assets exported from one file (so not based on scene name)
        '''
        sCurrentFile = cmds.file(q=True, sn=True)
        if not sCurrentFile:
            cmds.error('Scene needs to be saved first, subfolder and LOD files will be created next to it')
            return
        sSceneType = cmds.file(q=True, type=True)[0]
        sCurrentDirectory, sCurrentFileName = sCurrentFile.replace('\\','/').rsplit('/', 1)
        sCurrentFileName, sCurrentExtension = sCurrentFileName.rsplit('.',1)
        
        cmds.select(self.nodename)
        sAssemblyFilePath = '%s/%s_alembic_assembly.%s'%(sCurrentDirectory, sCurrentFileName, sCurrentExtension)
        cmds.file(sAssemblyFilePath, force=True, type=sSceneType, pr=True, es=True);
        
        return sAssemblyFilePath
    
    
    @classmethod
    def CreateFromGroups(cls, saLodGroups, iExportType):
        '''
        @param saLodGroups: string array, full path names of
        each group starting from most detailed to least detailed
        
        @param exportType: SAExportType, defines the export function to use
        
        This function exports each group to a separete file and
        creates a sceneassembly node pointing to each file as
        next lod level
        '''
        #get selection
        sSelection = cmds.ls(sl=True, l=True)
        
        #grab info from current scene
        sCurrentFile = cmds.file(q=True, sn=True)
        if not sCurrentFile:
            cmds.error('Scene needs to be saved first, subfolder and LOD files will be created next to it')
            return
        sSceneType = cmds.file(q=True, type=True)[0]
        sCurrentDirectory, sCurrentFileName = sCurrentFile.replace('\\','/').rsplit('/', 1)
        sCurrentFileName = sCurrentFileName.rsplit('.',1)[0]
        
        #generate directory to store lods ins
        sLodDir = '%s/%s_LODs'%(sCurrentDirectory, sCurrentFileName)
        if not os.path.exists(sLodDir):
            os.makedirs(sLodDir)
        
        #create dag asset to put lods into
        outInstance = SAAsset()

        if True: #try:
            #export lods
            for i in range(len(saLodGroups)):
                #get file name
                sOutFileName = '%s_lod%s'%(sCurrentFileName, i)
                #get file full path
                sLodFilePath = '%s/%s'%(sLodDir, sOutFileName)
                
                if iExportType == SAExportType.GpuCache:
                    iCurrentFrame = cmds.currentTime(q=True)
                    sLodFilePath = '%s.abc'%sLodFilePath
                    sDir, sName = sLodFilePath.replace('\\','/').rsplit('/',1)
                    sGpuCacheFile = cmds.gpuCache(saLodGroups[i], startTime=iCurrentFrame, endTime=iCurrentFrame, directory=sDir, fileName=sName)
                    
                    #append extension to show the file type in the representation name
                    sOutFileName = '%s.abc'%sOutFileName
                    
                    #set node attributes
                    cmds.setAttr('%s.representations[%s].repType'%(outInstance.nodename, i), 'Cache', type='string')
                    
                elif iExportType == SAExportType.Alembic:
                    iCurrentFrame = cmds.currentTime(q=True)
                    sLodFilePath = '%s.abc'%sLodFilePath
                    cmds.AbcExport(j='-frameRange %s %s -root %s -file %s'%(iCurrentFrame, iCurrentFrame, saLodGroups[i], sLodFilePath))
                    
                    #append extension to show the file type in the representation name
                    sOutFileName = '%s.abc'%sOutFileName
                    
                    #set node attributes
                    cmds.setAttr('%s.representations[%s].repType'%(outInstance.nodename, i), 'Cache', type='string')
                    
                else:
                    cmds.select(saLodGroups[i])
                    cmds.file(sLodFilePath, force=True, type=sSceneType, pr=True, es=True);
                    
                    #append extension
                    sOutFileName = '%s.%s'%(sOutFileName, sCurrentFile.rsplit('.',1)[-1])
                    sLodFilePath = '%s.%s'%(sLodFilePath, sCurrentFile.rsplit('.',1)[-1])
                    
                    #set node attributes
                    cmds.setAttr('%s.representations[%s].repType'%(outInstance.nodename, i), 'Scene', type='string')
                    
                #set node attributes
                cmds.setAttr('%s.representations[%s].repName'%(outInstance.nodename, i), sOutFileName, type='string')
                cmds.setAttr('%s.representations[%s].repLabel'%(outInstance.nodename, i), sOutFileName, type='string')
                cmds.setAttr('%s.representations[%s].repData'%(outInstance.nodename, i), sLodFilePath, type='string')
                
                #default to furthest lod
                if i == len(saLodGroups)-1:
                    outInstance._node.activate(sOutFileName)
        else: #except:
            cmds.delete(outInstance.nodename)
            cmds.select(sSelection)
            cmds.error('LOD exporting and linking failed, no scene assembly definition created.')
            return
        
        #restore selection, make redo easier & avoid confusion
        if sSelection:
            cmds.select(sSelection)
        
        return outInstance

'''
#Usage examples

#Create an assemblyDefinition and for each selected transform: export and add as representation
dagAsset1 = SAAsset.CreateFromGroups( cmds.ls(sl=True, l=True, type='transform'), SAExportType.GpuCache)

#Wrap an existing assemblyDefinition
dagAsset1 = SAAsset( 'dagAsset1' )

#Set the currently visible definition (default is last)
dagAsset1.activeRepresentationName = 'crystal_pylon_lod0.abc'

#Export the wrapped assemblyDefinition to a separate file for referencing
sExportedPath = dagAsset1.SaveAsAssembly()

#Reference a filePath, assuming it contains exactly one assemblyDefinition node (other nodes are discarded)
reference1 = SAReference.CreateFromFile( sExportedPath )

#Set the currently visible definition in the reference (default is last)
reference1.activeRepresentationName = 'crystal_pylon_lod2.abc'
'''
Parallax mapping by marching


I had this idea thanks to the existence of raymarching. Currently figuring out self shadowing. The last half hour was spent messing with defines and such to get it to work on PS2.0, which it does now, although I had to strip out the _Color and _SpecularColor in that version (although the properties are still defined, the multiplications simply aren’t done).

Also seeing if maybe I need to make a PS3.0 version with loops instead of this hideous stack of if-checks. I just didn’t want to struggle with compiling so this was easier right now.

Shader "Custom/ParallaxMarching" 
{
	Properties 
	{
		_MainTex ("Base (RGBA)", 2D) = "white" {}
		_Color ("Color (RGBA)", Color) = (1,1,1,1)
		_NormalMap ("Tangent normals", 2D) = "bump" {}
		_HeightMap ("Height map (R)", 2D) = "white" {}
		_Intensity ("Intensity", Float) = 0.001
		
		_SpecularPower ("Specular power", Float) = 100
		_SpecularFresnel ("Specular fresnel falloff", Float) = 4
		_SpecularTex ("Specular texture (RGB)", 2D) = "white" {}
		_SpecularColor ("Specular color (RGB)", Color) = (1,1,1,1)
	}
	
	CGINCLUDE
	//only 4 steps in shader program 2.0
	//10 steps is max & prettiest, 2 steps is min
	#define PARALLAX_STEPS 4
	#define INTENSITYSCALE 5/PARALLAX_STEPS
	//#define SPECULAR_FRESNEL
	#define OPTIMIZE_PS20
	
	uniform sampler2D _MainTex;
	uniform half4 _Color;
	uniform sampler2D _NormalMap;
	uniform sampler2D _HeightMap;
	uniform half _Intensity;
	uniform half _SpecularPower;
	uniform half _SpecularFresnel;
	uniform sampler2D _SpecularTex;
	uniform half4 _SpecularColor;
	
	#include "BaseFunctions.cginc"
	
	half4 frag_parallax(v2f i) : COLOR
	{
		//get some normalized vectors
		half3 worldBiTangent = cross(i.worldTangent, i.worldNormal);
		half3 cameraDirection = normalize(i.worldPosition - _WorldSpaceCameraPos);
		
		//determine what the tangent space step is from this view angle
		half2 uvstep = half2( dot( cameraDirection, i.worldTangent ),
		  dot( cameraDirection, worldBiTangent ) ) * _Intensity;
		uvstep *= INTENSITYSCALE;
		
		//iteratively sample until a point is hit
		half2 uv = i.uv;
		
		#if PARALLAX_STEPS > 1
		half mapDepth0 = 1-tex2D(_HeightMap, uv).r;
		half mapDepth1 = 1-tex2D(_HeightMap, uv + uvstep).r;
		#endif
		#if PARALLAX_STEPS > 2
		half mapDepth2 = 1-tex2D(_HeightMap, uv + uvstep*2).r;
		#endif
		#if PARALLAX_STEPS > 3
		half mapDepth3 = 1-tex2D(_HeightMap, uv + uvstep*3).r;
		#endif
		#if PARALLAX_STEPS > 4
		half mapDepth4 = 1-tex2D(_HeightMap, uv + uvstep*4).r; 
		#endif
		#if PARALLAX_STEPS > 5
		half mapDepth5 = 1-tex2D(_HeightMap, uv + uvstep*5).r;
		#endif
		#if PARALLAX_STEPS > 6
		half mapDepth6 = 1-tex2D(_HeightMap, uv + uvstep*6).r;
		#endif
		#if PARALLAX_STEPS > 7
		half mapDepth7 = 1-tex2D(_HeightMap, uv + uvstep*7).r;
		#endif
		#if PARALLAX_STEPS > 8
		half mapDepth8 = 1-tex2D(_HeightMap, uv + uvstep*8).r;
		#endif
		#if PARALLAX_STEPS > 9
		half mapDepth9 = 1-tex2D(_HeightMap, uv + uvstep*9).r;
		#endif
		
		#if defined(STEPS_10)
		half depthStep = 0.1;
		#else
		half depthStep = 0.2;
		#endif
		
		#if PARALLAX_STEPS > 1
		if( mapDepth0 > 0 && mapDepth1 > depthStep )
		{
			uv = uv + uvstep;
			#if PARALLAX_STEPS > 2
			if( mapDepth2 > depthStep*2 )
			{
				uv = uv + uvstep*2;
				#if PARALLAX_STEPS > 3
				if( mapDepth3 > depthStep*3 )
				{
					uv = uv + uvstep*3; 
					#if PARALLAX_STEPS > 4
					if( mapDepth4 > depthStep*4 )
					{
						uv = uv + uvstep*4;
						#if PARALLAX_STEPS > 5
						if( mapDepth5 > depthStep*5 )
						{
							uv = uv + uvstep*5;
							#if PARALLAX_STEPS > 6
							if( mapDepth6 > depthStep*6 )
							{
								uv = uv + uvstep*6;
								#if PARALLAX_STEPS > 7
								if( mapDepth7 > depthStep*7 )
								{
									uv = uv + uvstep*7;
									#if PARALLAX_STEPS > 8
									if( mapDepth8 > depthStep*8 )
									{
										uv = uv + uvstep*8;
										
										#if PARALLAX_STEPS > 9
										if( mapDepth9 > depthStep*9 )
										{
											uv = uv + uvstep*9;
										}
										#endif
									}
									#endif
								}
								#endif
							}
							#endif
						}
						#endif
					} 
					#endif
				}
				#endif
			}
			#endif
		}
		#endif
		
		//apply normal mapping
		half3 N = half3(tex2D(_NormalMap, i.uv).ra*2.0-1.0, 1.0);
		N = normalize( mul( N, float3x3(i.worldTangent, worldBiTangent, i.worldNormal) ) );
		
		//implement some lighting
		half3 L = _WorldSpaceLightPos0.xyz;
		half atten = 1.0;
		#ifndef OPTIMIZE_PS20
		if( _WorldSpaceLightPos0.w == 1 )
		{
			L -= i.worldPosition;
		#endif
			#ifdef OPTIMIZE_PS20
		 	//multiplying the worldPosition by 0 costs less instructions
			L -= i.worldPosition * _WorldSpaceLightPos0.w;
			#endif
			//it does mean that for directional lights these calculations are all useless and slow down the shader
			half invLightDistance = 1.0 / length(L);
			L *= invLightDistance;
			atten *= invLightDistance;
		#ifndef OPTIMIZE_PS20
		}
		#endif
		half NdotL = max(0, dot(L, N));
		
		half3 R = reflect(cameraDirection, N);
		half RdotL = max(0, dot(R, L))*atten;
		RdotL = pow(RdotL, _SpecularPower);
		
		#ifdef SPECULAR_FRESNEL
		half FR = dot(cameraDirection, N);
		if( _SpecularFresnel < 0 )
			RdotL *= pow(1-FR, _SpecularFresnel);
		else
			RdotL *= pow(FR, _SpecularFresnel);
		#endif
		
		half4 outColor = NdotL * _LightColor0 * tex2D(_MainTex, uv)
#ifndef OPTIMIZE_PS20
		* _Color.xyz
#endif
;
		outColor.xyz += RdotL * _LightColor0.xyz * tex2D(_SpecularTex, uv).xyz
#ifndef OPTIMIZE_PS20
		* _SpecularColor.xyz
#endif
;
		return outColor;
	}
	ENDCG
	
	SubShader 
	{
		Tags { "RenderType"="Opaque" }
		
		Pass
		{ 
			Tags{ "LightMode" = "ForwardBase" }
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag_parallax
			#pragma target 2.0
			#pragma only_renderers d3d9 
			#pragma fragmentoption ARB_precision_hint_fastest 
			//required for lights to update
			#pragma multi_compile_fwdbase_fullshadows
			ENDCG
		}
	} 
	FallBack "Diffuse"
}
Applying decals

Creating a tool for applying decals is fairly useful…

I consider the case in which a decal is a single planar polygon with a texture (be it transparent, cutout or neither)

An artist would want to take the polygon (with texture) and parent it to the camera; then the artist can move around until a nice spot for the decal is found. In the meantime the decal can be panned and rotated (only around the camera’s forward axis) so it always remains in the same plane as the viewport.

Once the decal looks nice from the point of view of the camera, it needs to be applied in 3D and separated from the camera.

This initially is a straightforward action: for each vertex we need to shoot a ray from the camera position, through the vertex, onto our world or target mesh. The intersection point minus a little offset (to avoid coplanar faces which cause flickering) is where the vertex should be set.

Problems arise however when applying the decal to a curved surface: it will be floating in front of or penetrating through the surface…

This can be solved in another way. It is fastest if we know which mesh we wish to apply our decal to; otherwise we’d need to combine the world mesh for this, and that gets heavy to compute really fast.

Transform the mesh to world space, then transform it to camera space (or the decal’s object space).
Now divide x,y by z to put the mesh onto the view plane, from this point on, our case is 2D.
For each edge in the mesh: check intersection with each edge in the decal polygon.
If they intersect, insert a vertex into the polygon (or schedule the insertion until all checks are performed); in the simple case of a single non-triangulated polygon this is easily accomplished by inserting the point at the right place in the list.
When done inserting vertices, we can merge vertices within a small tolerance, in case we grazed the tip of a triangle and split redundantly often at some point.

Here we have an original vertex and three splits very close to each other; merging these will make the result cleaner and give less ugly normals.

Next things to do:
Raycast all the vertices in world space onto the mesh in world space as described before
Triangulate
Calculate normals (cross product 2 edges of each triangle: (p2-p0) X (p1-p0))
Unparent from the camera (if you wrote a tool that did that for the artist)
Done!

Note that it is possible to flatten the decal polygon in camera space as well, so that it does not need to be parented to the camera and a user is free to modify it in any way.
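
To make the per-vertex projection step above concrete, here is a small, purely illustrative sketch in plain Python (no particular package assumed, and the inside-triangle test is omitted; a real tool would use the DCC’s own raycast): shoot a ray from the camera through a decal vertex, intersect it with a target triangle’s plane, and pull the result back a little along the ray to avoid coplanar flickering.

from math import sqrt

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def projectVertex(cameraPos, vertex, triangle, offset=0.001):
    # ray from the camera through the decal vertex
    rayDir = sub(vertex, cameraPos)
    rayDir = scale(rayDir, 1.0 / sqrt(dot(rayDir, rayDir)))
    # plane of the target triangle; normal as in the steps above: (p2-p0) X (p1-p0)
    p0, p1, p2 = triangle
    normal = cross(sub(p2, p0), sub(p1, p0))
    denom = dot(normal, rayDir)
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the triangle's plane
    t = dot(normal, sub(p0, cameraPos)) / denom
    # intersection point, pulled back slightly towards the camera
    return add(cameraPos, scale(rayDir, t - offset))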

Delaunay Triangulation

dt

I was trying to cap polygons with holes and all my gaps were coplanar. As opposed to my initial solution (finding edge rings and capping those, leaving me with no clue how to cut out holes) I decided to look up how to go about this. This was really a learning exercise and it proved more complex than I initially anticipated.

So the idea, and some illustrations showing that I wasn’t the first to run into the problem of capping edge rings, come from here:
Jose Esfer

But I had the most difficulty finding a good reference for this. There are a lot of scientific papers that go very, very far beyond my understanding of mathematical jargon, and there are a number of source code examples in various languages, but the few I opened were lengthy, so I knew the only way to understand this was a decent explanation, finally to be discovered here:
Computing constrained Delaunay triangulations

This explanation is pretty clear, although it took me over a day to implement; further errors are yet to be discovered…

The only thing to watch for is the explanation of the actual triangulation algorithm: where the first ‘New LR-edge’ is inserted, some edge gets deleted in one image yet is not deleted in the next (and in my implementation hence never deleted). So following what the text actually says got me there.

UPDATE: Added debug output -> define UDT_Debug in the conditional compilation symbols of the project settings (also for the editor project) and get some control on where to stop triangulating. It displays potential new connection candidates as circles (green and blue) as well as the last edge (red) and the last deleted edge(s) (green).

Also randomization now generates input when you check it and then unchecks itself.

I’m not sure if it’s useful to post the source I ended up with, because it’ll be just another example out there, but since it’s for Unity it may be easier to get it running. Do select the GameObject you attach it to though because it relies on OnDrawSceneGUI which only gets called for selected objects.

It displays the current vertex numbers as well, which may give some confusion because the vertices get dynamically rearranged so this does not reflect your actual input.

Grab the package here!

Python singleton, demonstrated using Qt
from PyQt4 import QtGui


class ESingleton( object ):
    @classmethod
    def Stub(cls):
        # (re)create the single instance and store it on the class
        cls.inst = cls()
        return cls.inst
    
    # until Stub is called, inst is the bound Stub method itself
    inst = Stub
    
    def __call__(self):
        # once inst is an instance, calling inst() simply returns that instance
        return self

class EMainWindow( QtGui.QDockWidget, ESingleton ):
    def __init__(self):
        QtGui.QDockWidget.__init__(self)
        main = QtGui.QWidget()
        self.setWidget( main )
        self.setFloating( True )
        self.show()

print EMainWindow.inst
print EMainWindow.inst()
print EMainWindow.inst

A simple refresh classmethod can then recreate the instance on demand:

@classmethod
def refresh(cls):
    cls.Stub()

Although you can make it as complex as you want, for example when the inst already is an instance of cls we can take the geometry to the refreshed instance.

@classmethod
def refresh(cls):
    if isinstance(cls.inst, cls):
        g = cls.inst.geometry()
        cls.Stub().setGeometry(g)
    else:
        cls.Stub()
    return cls.inst

Note that I implemented this method as an override in the main window, to keep the singleton generic and not Qt specific!

Here’s the fun:

print EMainWindow.inst
EMainWindow.refresh()
print EMainWindow.inst
EMainWindow.inst()
print EMainWindow.inst
EMainWindow.refresh()
print EMainWindow.inst
<bound method ObjectType.Stub of <class '__main__.EMainWindow'>>
<__main__.EMainWindow object at 0x000000001143DAC8>
<__main__.EMainWindow object at 0x000000001143DAC8>
<__main__.EMainWindow object at 0x000000001143D908>

As you can see the inst() call sticks to the method and the window can in fact be initialized with refresh without ever calling inst first.

Derivative normal maps

Very interesting indeed:
http://www.rorydriscoll.com/2012/01/11/derivative-maps/

I started typing in FX Composer from the default Phong_Reflect material and extended it with some tweaks and this cool normal mapping technique.

The current implementation has the height-map path commented out and the actual derivative-map path commented out as well, so normal maps work. Of course it’s no use doing this with regular normal maps, hence you’d have to convert to derivative maps and change the comments before this is of any use. That’s a bit I’m still figuring out however.

The height maps don’t seem as successful as the normal / derivative maps implementation (and their Y is reversed) so it’s best left untouched. Scroll down to the pixel shader or paste this code in FX Composer (the reason I posted the full code is that you may review it in a more proper environment without having to tiresomely figure out how to implement some snippet first).

#define FLIP_TEXTURE_Y

float Script : STANDARDSGLOBAL <
    string UIWidget = "none";
    string ScriptClass = "object";
    string ScriptOrder = "standard";
    string ScriptOutput = "color";
    string Script = "Technique=Phong?Main:Main10;";
> = 0.8;

//// UN-TWEAKABLES - AUTOMATICALLY-TRACKED TRANSFORMS ////////////////

float4x4 WorldITXf : WorldInverseTranspose < string UIWidget="None"; >;
float4x4 WvpXf : WorldViewProjection < string UIWidget="None"; >;
float4x4 WorldXf : World < string UIWidget="None"; >;
float4x4 ViewIXf : ViewInverse < string UIWidget="None"; >;

//// TWEAKABLE PARAMETERS ////////////////////

/// Point Light 0 ////////////
float3 Light0Pos : Position <
    string Object = "PointLight0";
    string UIName =  "Light 0 Position";
    string Space = "World";
> = {-0.5f,2.0f,1.25f};
float3 Light0Color : Specular <
    string UIName =  "Light 0";
    string Object = "Pointlight0";
    string UIWidget = "Color";
> = {1.0f,1.0f,1.0f};

/// Point Light 1 ////////////
float3 Light1Pos : Position <
    string Object = "PointLight1";
    string UIName =  "Light 1 Position";
    string Space = "World";
> = {-0.5f,2.0f,1.25f};
float3 Light1Color : Specular <
    string UIName =  "Light 1";
    string Object = "Pointlight1";
    string UIWidget = "Color";
> = {1.0f,1.0f,1.0f};

// Ambient Light
float3 AmbiColor : Ambient <
    string UIName =  "Ambient Light";
    string UIWidget = "Color";
> = {0.07f,0.07f,0.07f};

float Ks <
    string UIWidget = "slider";
    float UIMin = 0.0;
    float UIMax = 1.0;
    float UIStep = 0.05;
    string UIName =  "Specular";
> = 0.4;

float SpecExpon : SpecularPower <
    string UIWidget = "slider";
    float UIMin = 1.0;
    float UIMax = 128.0;
    float UIStep = 1.0;
    string UIName =  "Specular Power";
> = 55.0;


float Bump <
    string UIWidget = "slider";
    float UIMin = 0.00;
    float UIMax = 2;
    float UIStep = 0.001;
    string UIName =  "Bumpiness";
> = 1.0; 

float Kr <
    string UIWidget = "slider";
    float UIMin = 0.0;
    float UIMax = 1.0;
    float UIStep = 0.01;
    string UIName =  "Reflection Strength";
> = 0.5;

float ReflBias <
    string UIWidget = "slider";
    float UIMin = 0.0;
    float UIMax = 10.0;
    float UIStep = 0.01;
    string UIName =  "Reflection supression";
> = 1.0;

//////// COLOR & TEXTURE /////////////////////

texture ColorTexture : DIFFUSE <
    string ResourceName = "";
    string UIName =  "Diffuse Texture";
    string ResourceType = "2D";
>;

sampler2D ColorSampler = sampler_state {
    Texture = <ColorTexture>;
    FILTER = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
}; 

texture SpecularTexture  <
    string ResourceName = "";
    string UIName =  "Specular-Map Texture";
    string ResourceType = "2D";
>;

sampler2D SpecularSampler = sampler_state {
    Texture = <SpecularTexture>;
    FILTER = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
}; 

texture HeightTexture  <
    string ResourceName = "";
    string UIName =  "Normal-Map Texture";
    string ResourceType = "2D";
>;

sampler2D HeightSampler = sampler_state {
    Texture = <HeightTexture>;
    FILTER = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
}; 

texture EnvTexture : ENVIRONMENT <
    string ResourceName = "";
    string UIName =  "Environment";
    string ResourceType = "Cube";
>;

samplerCUBE EnvSampler = sampler_state {
    Texture = <EnvTexture>;
    FILTER = MIN_MAG_MIP_LINEAR;
    AddressU = CLamp;
    AddressV = CLamp;
    AddressW = CLamp;
};

// shared shadow mapping supported in Cg version

//////// CONNECTOR DATA STRUCTURES ///////////

/* data from application vertex buffer */
struct appdata {
    float3 Position	: POSITION;
    float4 UV		: TEXCOORD0;
    float4 Normal	: NORMAL;
};

/* data passed from vertex shader to pixel shader */
struct vertexOutput {
    float4 HPosition	: POSITION;
    float2 UV		: TEXCOORD0;
    // The following values are passed in "World" coordinates since
    //   it tends to be the most flexible and easy for handling
    //   reflections, sky lighting, and other "global" effects.
    float3 WorldNormal	: TEXCOORD1;
	float3 WorldPosition : TEXCOORD2;
    float3 WorldView	: TEXCOORD3;
};
 
///////// VERTEX SHADING /////////////////////

/*********** Generic Vertex Shader ******/

vertexOutput std_VS(appdata IN) {
    vertexOutput OUT = (vertexOutput)0;
    OUT.WorldNormal = mul(IN.Normal,WorldITXf).xyz;
	
    float4 Po = float4(IN.Position.xyz,1);
    float3 Pw = mul(Po,WorldXf).xyz;
#ifdef FLIP_TEXTURE_Y
    OUT.UV = float2(IN.UV.x,(1.0-IN.UV.y));
#else /* !FLIP_TEXTURE_Y */
    OUT.UV = IN.UV.xy;
#endif /* !FLIP_TEXTURE_Y */
    OUT.WorldView = normalize(ViewIXf[3].xyz - Pw);
    OUT.HPosition = mul(Po,WvpXf);
	OUT.WorldPosition = Pw;
    return OUT;
}

///////// DERIVATIVE NORMAL MAPPING //////////////////////
float3 surface_gradient(float3 Nn, float3 dpdx, float3 dpdy, float dhdx, float dhdy )
{
	float3 r1 = cross( dpdy, Nn );
	float3 r2 = cross( Nn, dpdx );
	return (r1*dhdx + r2*dhdy)/dot(dpdx, r1);
}

float3 modify_normal( float3 Nn, float3 dpdx, float3 dpdy, float dhdx, float dhdy )
{
	return normalize(Nn - surface_gradient(Nn, dpdx, dpdy, dhdx, dhdy));
}

/*
//from a height map
float3 surface_normal(float3 position, float3 normal, float height)
{
	float3 dpdx = ddx(position);
	float3 dpdy = ddy(position);
	float dhdx = ddx(height);
	float dhdy = ddy(height);
	
	return modify_normal( normal, dpdx, dpdy, dhdx, dhdy );
}
*/

//from a derivative map
float ApplyChainRule(float dhdu, float dhdv, float dud_, float dvd_)
{
    return dhdu * dud_ + dhdv * dvd_;
}

float3 surface_normal(float3 position, float3 normal, float2 gradient, float2 uv)
{
    float3 dpdx = ddx(position);
    float3 dpdy = ddy(position);
 
    float dhdx = ApplyChainRule(gradient.x, gradient.y, ddx(uv.x), ddx(uv.y));
    float dhdy = ApplyChainRule(gradient.x, gradient.y, ddy(uv.x), ddy(uv.y));
 
    return modify_normal(normal, dpdx, dpdy, dhdx, dhdy);
}

///////// PIXEL SHADING //////////////////////
struct data
{
	float3 nWorldNormal;
	float3 vColor;
	float3 vSpecularColor;
	float3 vReflectedColor;
};

data parse_inputs( vertexOutput IN )
{
	data P;
	
	// Sample textures
	P.vColor = tex2D(ColorSampler, IN.UV).rgb;
	P.vSpecularColor = tex2D(SpecularSampler, IN.UV).rgb;
	
	
	// Height map
	// note that the height map's Bump value is more sensitive and should
	// be scaled by about 0.05 compared to when using a normal map
	//P.nWorldNormal = surface_normal(IN.WorldPosition, IN.WorldNormal, height);
	//float height = tex2D(HeightSampler, IN.UV).r*Bump;
	
	// Normal map
	// note that the tangent normal to gradient conversion can be done in the
	// source texture beforehand with ease, use derivative map after doing so
	float3 tangentNormal = normalize( tex2D(HeightSampler, IN.UV).rgb*2-1 );
	float2 gradient = float2(-tangentNormal.x, tangentNormal.y) / tangentNormal.z * Bump;
	P.nWorldNormal = surface_normal(IN.WorldPosition, IN.WorldNormal, gradient, IN.UV);
	
	//Derivative map
	//float3 gradient = tex2D(HeightSampler, IN.UV).rg*2-1;
	//P.nWorldNormal = surface_normal(IN.WorldPosition, IN.WorldNormal, gradient, IN.UV);
	
	
	// Reflection vector
	float3 R = -reflect(IN.WorldView, P.nWorldNormal);
	
	// Sample reflection map
	P.vReflectedColor = texCUBE(EnvSampler, R).rgb;
	
	return P;
}

float3 apply_light_diffuse( float3 nWorldNormal, float3 nWorldView, float3 vColor,
	float3 nLightVec, float3 vLightColor )
{
    float3 diffuse = max( 0.0, dot(nLightVec, nWorldNormal) ) * vLightColor;
    float3 result = vColor*(diffuse + AmbiColor);
	return result;
}

float3 apply_point_light_diffuse( data P, float3 nWorldPosition, float3 nWorldView,
	float3 vLightPos, float3 vLightColor )
{
    float3 nLightVec = normalize(vLightPos - nWorldPosition);
    return apply_light_diffuse(P.nWorldNormal, nWorldView, P.vColor, nLightVec, vLightColor);
}

float3 apply_light( data P, float3 nWorldView,
	float3 nLightVec, float3 vLightColor )
{
    float3 Hn = normalize(nWorldView + nLightVec);
    float4 litV = lit(dot(nLightVec, P.nWorldNormal),dot(Hn, P.nWorldNormal),SpecExpon);
    float3 diffuse = litV.y * vLightColor;
    float3 specularReflection = litV.y * litV.z * Ks * vLightColor;
	
	//// APPLY ////
	// Diffuse
    float3 result = P.vColor*(diffuse + AmbiColor);
	
    // Specular reflection
	result += specularReflection * P.vSpecularColor;
	
	return result;
}

float3 apply_point_light( data P, float3 nWorldPosition, float3 nWorldView,
	float3 vLightPos, float3 vLightColor )
{
    float3 nLightVec = normalize(vLightPos - nWorldPosition);
	return apply_light( P, nWorldView, nLightVec, vLightColor );
}

float4 std_PS(vertexOutput IN) : COLOR
{
	data P = parse_inputs( IN );
	
	/// Hard coded light types & count ///
	float3 outPixel = apply_point_light( P, IN.WorldPosition, IN.WorldView, Light0Pos, Light0Color );
	outPixel += apply_point_light_diffuse( P, IN.WorldPosition, IN.WorldView, Light1Pos, Light1Color );
	
	// Environment cubemap reflection
	float fInvBrightness = max( 0.0, 1-(outPixel.x+outPixel.y+outPixel.z)*0.333 );
	if( ReflBias < 1 )
	{
		fInvBrightness = fInvBrightness * ReflBias + (1-ReflBias);
	}
	else
	{
		fInvBrightness = pow( fInvBrightness, ReflBias );
	}
    float3 reflColor = P.vSpecularColor * Kr * P.vReflectedColor;
    outPixel += reflColor * fInvBrightness;
	
	return float4(outPixel, 1);
}
///// TECHNIQUES /////////////////////////////
RasterizerState DisableCulling
{
    CullMode = NONE;
};

DepthStencilState DepthEnabling
{
	DepthEnable = TRUE;
};

BlendState DisableBlend
{
	BlendEnable[0] = FALSE;
};

technique10 Main10
{
    pass p0
	{
        SetVertexShader( CompileShader( vs_4_0, std_VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, std_PS() ) );
                
        SetRasterizerState(DisableCulling);       
		SetDepthStencilState(DepthEnabling, 0);
		SetBlendState(DisableBlend, float4( 0.0f, 0.0f, 0.0f, 0.0f ), 0xFFFFFFFF);
    }
}

technique Main
{
    pass p0
	{
        VertexShader = compile vs_3_0 std_VS();
		ZEnable = true;
		ZWriteEnable = true;
		ZFunc = LessEqual;
		AlphaBlendEnable = false;
		CullMode = CW;
        PixelShader = compile ps_2_a std_PS();
    }
}

/////////////////////////////////////// eof //
And now for something completely different

We made a demo, in Unity! Enjoy if you have the time :)
http://pouet.net/prod.php?which=61220

Maya scatter node

Just enjoying exploring the API…

I made a node that scatters a bunch of points, and a script that puts locators on those positions. It isn't useful on its own yet, but the output attribute can be pushed to another node to, say, instance a mesh at every point or serve as a source to spawn something from, or whatever you like.

cmds.file(new=True, f=True)
cmds.unloadPlugin('MayaAPI.mll')

cmds.loadPlugin('MayaAPI.mll')
scatterNode = cmds.createNode('scatterPointsOnMesh')
inMesh = cmds.polyCylinder()[0]
cmds.connectAttr( '%s.outMesh'%inMesh, '%s.inMesh'%scatterNode )
cmds.dgeval( '%s.outPoints'%scatterNode )

for point in cmds.getAttr( '%s.outPoints'%scatterNode ):
    cmds.spaceLocator(p=point[0:3])

One thing I learned is that compiling for 32-bit really won't work if you only have 64-bit Maya. I was using VS2008, which didn't let me compile with a 64-bit profile, so after installing VS2012 I had more success.

Additionally, if you create numeric attributes, their data type in the compute method is going to be MFnData::kNumeric and not a more detailed MFnNumericType.

scatter

What it basically does is use stdlib's rand to get random UV coordinates (rand returns an int, so use (float)rand() / (float)RAND_MAX to get usable 0-1 values). It then iterates over the mesh polygon IDs and uses MFnMesh::getPointAtUV to see whether the UV coordinate lies in that polygon and, if so, where the 3D point is.

An obvious flaw is meshes with UVs outside 0-1 space, or with no UVs at all, so make sure to use automatic mapping or auto layout, and to set the UV set, before implementing this somewhere useful.
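For reference, here is the same idea sketched in Python with the OpenMaya API 2.0. This is my approximation of the node's logic, not the plug-in source; in particular I am assuming MFnMesh.getPointAtUV can be called with just a polygon id and the u, v values, and that it raises when the UV does not lie in the queried polygon.

import random
import maya.api.OpenMaya as om

def scatterPointsOnMesh(meshPath, count=100):
    #rough Python approximation of the node described above: pick random UVs,
    #then ask each polygon whether that UV lies inside it and where that is in 3D
    sel = om.MSelectionList()
    sel.add(meshPath)
    dag = sel.getDagPath(0)
    dag.extendToShape() #accept either a transform or a shape path
    fnMesh = om.MFnMesh(dag)
    points = []
    for _ in range(count):
        u, v = random.random(), random.random() #same role as (float)rand()/RAND_MAX
        for polyId in range(fnMesh.numPolygons):
            try:
                #assumed to raise when the UV is not inside this polygon
                points.append(fnMesh.getPointAtUV(polyId, u, v))
                break
            except RuntimeError:
                continue
    return points

#example usage on a (hypothetical) default cylinder:
#for point in scatterPointsOnMesh('pCylinder1', 25): cmds.spaceLocator(p=(point.x, point.y, point.z))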

It’s something really basic, but brief nodes are the most reusable ones. Download solution here…

Detecting wire color in Maya III

This is the final post regarding color detection, described more broadly in these posts:
Wirecolor Part 1
Wirecolor Part 2
Detecting affection

What I wish to leave you with is a documented piece of code which you can use; in my codebase this is Mutils.color.py.

'''
Created on Feb 18, 2013
@author: Trevor van Hoof
@package: Mutils
'''


from maya import cmds
import maya.utils


def affectsAll( attr, type ):
    '''
    Maps the affects net within a node.
    Instead of just using cmds.affects
    this iterates over the resulting attributes
    again until all attributes indirectly
    affecting the given attr are found.
    
    @param attr: str, name of the attribute
    to map affecting attributes for.
    
    @param type: str, type of the node we're
    looking at; this node type must of course
    have the given attr.
    
    @returns: list of attribute names
    that affect the given attr.
    '''
    #these lists can in theory be precalculated constants
    attrs = cmds.affects(attr.rsplit('.',1)[-1], t=type)
    if not attrs:
        return []
    i = 0
    while i < len(attrs):
        tmp = cmds.affects(attrs[i], t=type) 
        if tmp:
            attrs.extend(tmp)
        attrs = list(set(attrs))
        i += 1
    return attrs


def affectedNet( inAttr, inNode ):
    '''
    This iteratively maps the affected network of the given
    attribute on the given node. The type of the node is 
    important and the attribute must exist on the node.
    
    It works by finding which internal inputs affect the given
    attribute, then it lists all connections to these attributes.
    
    For all node.attrs again the affected network is mapped
    until we have the entire node graph plus names of all attributes
    affecting the given inNode.inAttr through a connection in the DG.
    
    @returns: tuple of 2 lists, first containing the node names, second
    containing lists with attribute names (inputs and outputs) on that
    node that affect inNode.inAttr or connections thereto.
    
    The two lists have matching indices so the list of attribute names
    in returnValue[1][i] maps to the node name in returnValue[0][i]
    
    @param inAttr: str, name of the attribute to find affected net for
    @param inNode: str, path of the node to list inputs from 
    '''
    nodes = [inNode]
    attributes = [[inAttr]]
    
    #iterate until affection found or entire network traversed
    i = 0
    while i < len(nodes):
        #find internal affection net
        attributes[i].extend( affectsAll(attributes[i][0], cmds.nodeType(nodes[i])) )
        
        #find nodes that are connected to plugs in the affected net
        inputs = cmds.listConnections(nodes[i], s=True, d=False, c=True, p=True)
        if inputs:
            for j in range(0,len(inputs),2):
                #attribute name in affectednet
                if inputs[j].rsplit('.',1)[-1] in attributes[i]:
                    #get node attribute pair
                    nodeattr = inputs[j+1].split('.',1)
                    nodeattr[0] = cmds.ls(nodeattr[0], l=True)[0]
                    if nodeattr[0] not in nodes:
                        #append new nodes
                        nodes.append(nodeattr[0])
                        attributes.append([nodeattr[1]])
                    else:
                        #append new plugs on known nodes
                        attributes[ nodes.index(nodeattr[0]) ].append( nodeattr[1] )
        
        #if no incoming node was selected, continue iterating
        i += 1
    return nodes, attributes


def isAffected(inPathStr):
    '''
    This function grabs the affected network of the given node's matrix
    or output shape and checks whether any attributes of this network are driven
    by a selected node, or child of a selected node.
    
    It currently only supports geometry shapes.
    
    @param inPathStr: str, the node to check for
    
    @returns: bool, True if the given node is affected by one of the selected nodes
    '''
    #assume node is a transform by default
    attrib = 'matrix'
    
    #get the output attribute if node is a shape
    if cmds.ls(inPathStr, type='shape'):
        #detect the attribute name to get the affectedNet for
        nodetype = cmds.nodeType( inPathStr )
        if nodetype == 'mesh':
            attrib = 'outMesh'
        elif nodetype == 'subdiv':
            attrib = 'outSubdiv'
        elif nodetype in ('nurbsCurve','nurbsSurface'):
            attrib = 'local'
        else:
            raise ValueError('Nodetype %s of node %s not supported in isAffected'%(nodetype, inPathStr))
    elif not cmds.ls(inPathStr, type='dagNode'):
        raise ValueError('Given node path %s is not a Dag node in isAffected'%inPathStr)

    
    for node in affectedNet(attrib, inPathStr)[0]:
        if isParentSelected(node):
            return True
    return False


def isAffectedRecursively(inPathStr):
    '''
    Maps the affected net and checks if
    nodes in it are selected, if not,
    repeats the process for parents of the
    given node
    
    @param inPathStr: str, the node to check for
    
    @returns: bool, True if node or parent node
    is affected by a selected object
    '''
    obj = cmds.ls(inPathStr, l=True)
    if not obj:
        return False
    obj = obj[0]
    while obj and len(obj) > 1:
        if isAffected(obj):
            return True
        obj = obj.rsplit('|',1)[0]
    return False
    

def displayColorType(inObj):
    '''
    Returns the display color type, used by the
    cmds.displayColor() function, of the given node
    @todo: finish parsing imaginable node types
    
    @param inObj: node to get display color type name for
    
    @returns str: node type
    '''
    objtype = cmds.nodeType(inObj)
    if objtype == 'nurbsSurface':
        trims = cmds.listConnections(inObj, s=True, d=False, type='planarTrimSurface')
        if trims:
            objtype = 'trimmedSurface'
        else:
            objtype = 'surface'
    if objtype == 'nurbsCurve':
        projectCurves = cmds.listConnections(inObj, s=True, d=False, type='projectCurve')
        if projectCurves:
            objtype = 'curveOnSurface'
        else:
            objtype = 'curve'
    if objtype == 'mesh':
        objtype = 'polymesh'
    if objtype == 'joint' and cmds.listRelatives(inObj, ad=True, type='effector'):
        objtype = 'segment'
    if objtype == 'cluster':
        objtype = 'locator'
    if objtype == 'distanceDimShape':
        objtype = 'dimension'
    return objtype
    
    
def isParentSelected(inObj, ignoreSelf=False):
    '''
    @param inObj: str, node to check parents for
    
    @param ignoreSelf: when set to True only the parents
    are checked for selection and not the input node
    
    @returns: str or None, the given node or the first of
    its parents that is selected, or None when nothing is
    '''
    selection = cmds.ls(sl=True, l=True)
    if not selection: #no selection, no result
        return
    if not ignoreSelf:
        if inObj in selection:
            return inObj
    targets = cmds.listRelatives(inObj, ap=True, f=True)
    if not targets:
        return
    for target in targets:
        if target in selection:
            return target
    return


def overrideAttr(inObj, inAttr):
    '''
    Gets the value of the given override attribute,
    searches in parents if overrides on the given object
    are not enabled, returns None if no overrides found
     
    @param inObj: str, node to start looking from
    
    @param inAttr: str, attribute to find override value for
    
    @returns: value of the (parents) attribute or None
    '''
    target = inObj
    while target:
        if cmds.getAttr('%s.overrideEnabled'%target):
            return cmds.getAttr('%s.%s'%(target, inAttr))
        parents = cmds.listRelatives(target, p=True, f=True)
        target = parents[0] if parents else None
    return None

    
def drawColor(inObj):
    '''
    Gets the color of the object's type. If the given object
    is not a shape node it returns the color of the first
    valid shape node directly below the given node
    
    @param inObj: string representing the path to the node
    to search from. If multiple nodes with the given name/path
    exist only the first will be used
        
    @returns: float[3], list of 0 to 1 RGB values
    '''
    #using executeInMainThreadWithResult to resolve 'bool is not a bool' errors
    #that should only occur when threading but still occur randomly all the time
    shapes = maya.utils.executeInMainThreadWithResult( 'cmds.listRelatives(\'%s\', ad=True, type=\'shape\', f=True)'%inObj )
    if not shapes:
        if cmds.nodeType(inObj) != 'transform':
            shape = inObj
        else: #transform node without shapes has no color
            return None
    else:
        shape = shapes[0]

    nodetype = displayColorType( shape )
    selected = isParentSelected( shape )
    displaytype = overrideAttr(shape, 'overrideDisplayType')
    
    if selected:
        #templated
        if displaytype == 1:
            return cmds.colorIndex( cmds.displayColor('activeTemplate', q=True, active=True), q=True )
        #lead
        if selected == cmds.ls(os=True, l=True)[-1]:
            return cmds.colorIndex( cmds.displayColor('lead', q=True, active=True), q=True )
            
        #active
        return cmds.colorIndex( cmds.displayColor(nodetype, q=True, active=True), q=True )
        
    #affected
    if cmds.displayPref( q=True, displayAffected=True ) and isAffectedRecursively( shape ):
        #if obj is affected by something that is selected
        return cmds.colorIndex( cmds.displayColor('activeAffected', q=True, active=True), q=True )
    
    #referenced
    if displaytype == 2:
        return cmds.colorIndex( cmds.displayColor('referenceLayer', q=True), q=True )
        
    #templated
    if displaytype == 1:
        return cmds.displayRGBColor('template', q=True)
    
    #override color
    overridecolor = overrideAttr(shape, 'overrideColor')
    if overridecolor: #not None and not 0
        return cmds.colorIndex( overridecolor, q=True )

    #dormant
    return cmds.colorIndex( cmds.displayColor(nodetype, q=True, dormant=True), q=True )

And a little demonstration, which requires PyQt4. It draws the current wire color of the current target. Push the button to set the selected object as the new target ( cmds.ls(sl=True)[0] ). The color updates when you change selection, so it may be a tad confusing: you have to keep selecting things to see it change. That mostly works, but not when adding constraints, changing layer settings or changing the target; in those cases just deselect and undo, or use the up and down arrow keys or something.

from maya import cmds
from Mutils import color
reload(color)
from PyQt4 import QtCore, QtGui
import sip
from maya import OpenMayaUI
import maya.utils

mainwindow = sip.wrapinstance( long(OpenMayaUI.MQtUtil.mainWindow()), QtGui.QMainWindow )

class coloredRect( QtGui.QDockWidget ):
    def __init__(self):
        QtGui.QDockWidget.__init__(self, mainwindow)
        
        btn = QtGui.QPushButton("Show selected",self)
        btn.clicked.connect(self.storeobj)
        self.storeobj(None)
        
        self._job = cmds.scriptJob(e=['SelectionChanged', self.updatebrush])
        self._brush = QtCore.Qt.NoBrush
        
        self.updatebrush()
        
        self.setFloating(True)
        self.show()
        
    def storeobj(self, e):
        self._obj = maya.utils.executeInMainThreadWithResult( 'cmds.ls(sl=True, l=True)' )
        print self._obj
        
    def paintEvent(self, e):
        if self._brush != QtCore.Qt.NoBrush:
            r = self.geometry()
            r.setTop(0)
            r.setLeft(0)
            painter = QtGui.QPainter(self)
            painter.setBrush(self._brush)
            painter.drawRect(r)

    def updatebrush(self):
        if self._obj:
            c = color.drawColor(self._obj[0])
            print c
            c = QtGui.QColor( c[0]*255, c[1]*255, c[2]*255, 255 )
            self._brush = QtGui.QBrush( c )
        else:
            self._brush = QtCore.Qt.NoBrush
        self.repaint()

    def __del__(self):
        cmds.scriptJob(k=self._job, force=True)

try:
    if not cmds.objExists( c ):
        raise
except:
    c = cmds.polyCube()[0]
    s = cmds.polySphere()[0]
    cmds.xform(s,t=[0,0,2])
    cn = cmds.orientConstraint(c, s)
finally:
    cmds.scriptJob(ka=True)
    w = coloredRect()

    cmds.select(c)

color.drawColor(w._obj[0])
w.update()
w.repaint()
Is it purple?

Or.. is it affected by a selected object?

I asked this question on Creative Crash and this reply was definitely helpful in finding out how to deal with this; but don't use affectsNet, it creates tons of nodes which contain info you could also come up with or print out yourself. The golden tip was simply to look at the matrix and shape attributes of geometry and transform nodes.

So to do this we use the cmds.affects function, which tells us which inputs of a node influence (affect) the given output (it can also do the reverse with the by flag, which I don't use here).
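For example, you can poke at it like this (the exact lists depend on your Maya version):

from maya import cmds

#which attributes on a transform affect its 'matrix' output?
print cmds.affects('matrix', t='transform')
#and the reverse direction, using the by flag mentioned above
print cmds.affects('translate', t='transform', by=True)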

So whether a transform node is affected seems reasonably simple: the shape turns purple when one of its parents' matrices is affected by a selected node.

>We use affects to find out what inputs affect the matrix attribute on a transform node
>Then we list the incoming connection of every parent
>We filter the incoming connections to only those that connect to the attributes returned by affects
>We check if one of these nodes is selected

But then it’s not that simple. The matrix attribute is affected by several attributes, these attributes are affected by more attributes, hence we need to iterate over all affecting attributes to see what affects them, until we had all the attributes and have a clear map of what nodes – directly or indirectly – affect the matrix attribute.

When none of the nodes are selected we're not there yet: what if a parent of an incoming connection is selected? That is solved with the isParentSelected function posted here.

Now what if the node (and its parents) isn't selected; maybe the output attributes that drive the inputs that affect the matrix attribute on the node we wish to know more about are affected by inputs which are connected as well. On second thought, look at this image instead of attempting to grasp that sentence.

affects

So we wish to know whether the rightmost node is affected; we map the matrix attribute and see that the red attributes affect each other. The other node connected to it isn't selected, but we must map the incoming connections' affection to see that the blue attributes also affect each other. Because there are more affecting attributes we need to map the inputs on the input nodes' affected attributes as well, giving us the leftmost node, which IS selected, and therefore the rightmost node IS affected!

So the first trick is to get the true affects result by iterating over the initial result until no more attributes affect the affects set.

def affectsAll( attr, type ):
    #these lists can in theory be precalculated constants
    attrs = cmds.affects(attr, t=type)
    if not attrs:
        return []
    i = 0
    while i < len(attrs):
        tmp = cmds.affects(attrs[i], t=type)
        if tmp:
            attrs.extend(tmp)
        attrs = list(set(attrs))
        i += 1
    return attrs

The next step is to find the full network of affected attributes by listing inputs, filtering by affected attributes, and iterating again as if we wanted to know whether that input node was affected. This can be done by going over all known nodes, starting with the given node, then appending all valid input nodes to the target list and repeating the iteration:

def affectedNet( inAttr, inNode ):
    nodes = [inNode]
    attributes = [[inAttr]]
    
    #iterate until affection found or entire network traversed
    i = 0
    while i < len(nodes):
        #find internal affection net
        attributes[i].extend( affectsAll(attributes[i][0], cmds.nodeType(nodes[i])) )
        
        #find nodes that are connected to plugs in the affected net
        inputs = cmds.listConnections(nodes[i], s=True, d=False, c=True, p=True)
        if inputs:
            for j in range(0,len(inputs),2):
                #attribute name in affectednet
                if inputs[j].rsplit('.',1)[-1] in attributes[i]:
                    #get node attribute pair
                    nodeattr = inputs[j+1].split('.',1)
                    nodeattr[0] = cmds.ls(nodeattr[0], l=True)[0]
                    if nodeattr[0] not in nodes:
                        #append new nodes
                        nodes.append(nodeattr[0])
                        attributes.append([nodeattr[1]])
                    else:
                        #append new plugs on known nodes
                        attributes[ nodes.index(nodeattr[0]) ].append( nodeattr[1] )
        
        #if no incoming node was selected, continue iterating
        i += 1
    return nodes, attributes

The next step is to provide input for these functions. If we wish to check whether a shape node is affected this is most cumbersome, as every shape node's output geometry attribute has a different name. So we assume the node to be a transform node, with 'matrix' as the attribute that determines the affected color. Then we check whether the object is a shape and change the attribute name accordingly, before finally grabbing the affectedNet for that node/attribute combination and checking whether any node in it, or one of that node's parents, is selected.

def isAffected(inPathStr):
    #assume node is a transform by default
    attrib = 'matrix'
    
    #get the output attribute if node is a shape
    if cmds.ls(inPathStr, type='shape'):
        #detect the attribute name to get the affectedNet for
        nodetype = cmds.nodeType( inPathStr )
        if nodetype == 'mesh':
            attrib = 'outMesh'
        elif nodetype == 'subdiv':
            attrib = 'outSubdiv'
        elif nodetype in ('nurbsCurve','nurbsSurface'):
            attrib = 'local'
        else:
            raise ValueError('Nodetype %s of node %s not supported in isAffected'%(nodetype, inPathStr))
    elif not cmds.ls(inPathStr, type='dagNode'):
        raise ValueError('Given node path %s is not a Dag node in isAffected'%inPathStr)

    
    for node in affectedNet(attrib, inPathStr)[0]:
        if isParentSelected(node):
            return True
    return False

Then the very last thing we need to do is check not only the given node, but all its parent nodes as well; if the shape itself isn't affected, perhaps a parent is, and the shape still needs to appear affected.

def isAffectedRecursively(inPathStr):
    obj = cmds.ls(inPathStr, l=True)
    if not obj:
        return False
    obj = obj[0]
    while obj and len(obj) > 1:
        if isAffected(obj):
            return True
        obj = obj.rsplit('|',1)[0]
    return False

By merging affectedNet into the isAffected function I managed to get a 15% speed increase; the function is reasonably slow, but the merged version simply bails as soon as a node it finds is selected. What may be better still is to cache the affected networks once we need them (put them in a dict keyed on the full DAG path string) and then use that. Here is the merged code in case you disagree:

def isAffected( inNode ):
    nodes = [inNode]
    attributes = [['matrix']]
    
    
    #get the output attribute if node is a shape
    if cmds.ls(inNode, type='shape'):
        #detect the attribute name to get the affectedNet for
        nodetype = cmds.nodeType( inNode )
        if nodetype == 'mesh':
            attributes[0][0] = 'outMesh'
        elif nodetype == 'subdiv':
            attributes[0][0] = 'outSubdiv'
        elif nodetype in ('nurbsCurve','nurbsSurface'):
            attributes[0][0] = 'local'
        else:
            raise ValueError('Nodetype %s of node %s not supported in isAffected'%(nodetype, inNode))
    elif not cmds.ls(inNode, type='dagNode'):
        raise ValueError('Given node path %s is not a Dag node in isAffected'%inNode)


    #iterate until affection found or entire network traversed
    i = 0
    while i < len(nodes):
        #find internal affection net
        attributes[i].extend( affectsAll(attributes[i][0], cmds.nodeType(nodes[i])) )
        
        #find nodes that are connected to plugs in the affected net
        inputs = cmds.listConnections(nodes[i], s=True, d=False, c=True, p=True)
        if inputs:
            for j in range(0,len(inputs),2):
                #attribute name in affectednet
                if inputs[j].rsplit('.',1)[-1] in attributes[i]:
                    #get node attribute pair
                    nodeattr = inputs[j+1].split('.',1)
                    nodeattr[0] = cmds.ls(nodeattr[0], l=True)[0]
                    if nodeattr[0] not in nodes:
                        #bail as soon as node is affected
                        if isParentSelected(nodeattr[0]):
                            return True
                        #append new nodes
                        nodes.append(nodeattr[0])
                        attributes.append([nodeattr[1]])
                    else:
                        #append new plugs on known nodes
                        attributes[ nodes.index(nodeattr[0]) ].append( nodeattr[1] )
    
        #if no incoming node was selected, continue iterating
        i += 1
    return False

Remove affectedNet, replace the isAffected function with the above, and run this testing code to see the printed time drop as well as to see the various cases working (note I didn't use isAffectedRecursively here):

#test code
c = cmds.polyCube()[0]
s = cmds.polySphere()[0]
cmds.xform(s,t=[0,0,2])
cn = cmds.orientConstraint(c, s)
import time
t = time.time()
print 'direct input selection'
cmds.select(cn)
print isAffected( s )
print 
print 'direct input selection that does not drive an affecting attribute'
n = cmds.group(em=True)
cmds.connectAttr('%s.visibility'%n, '%s.visibility'%s)
print isAffected( s )
print 
print 'secondary input selection'
cmds.select(c)
print isAffected( s )
print 
print 'input parent selection'
cmds.group(c)
print isAffected( s )
print 
print 'irrelevant selection'
cmds.select(cmds.listRelatives(c,c=True,f=True))
print isAffected( s )
print 
print 'no selection'
cmds.select(cl=True)
print isAffected( s )
print 
print 'beware: the object affects itself because it contains the constraint'
cmds.select(s)
print isAffected( s )
print 
print 'as you can see it\'s shape does not'
cmds.select(cmds.listRelatives(s,c=True,f=True,type='shape'))
print isAffected( s )
print 
print 'and with the constraint deleted neither does the object any longer'
cmds.delete(cn)
cmds.select(s)
print isAffected( s )
print 
print time.time()-t
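Going back to the caching idea mentioned above, here is a minimal sketch of what that could look like; it is not part of the module, just an illustration using the non-merged affectedNet and a dict keyed on the full DAG path string:

#hypothetical cache so repeated queries don't re-traverse the dependency graph;
#it has to be cleared manually whenever connections in the scene change
_affectedNetCache = {}

def cachedAffectedNet(inAttr, inNode):
    key = cmds.ls(inNode, l=True)[0] #full dag path string as the key
    if key not in _affectedNetCache:
        _affectedNetCache[key] = affectedNet(inAttr, key)
    return _affectedNetCache[key]

def clearAffectedNetCache():
    _affectedNetCache.clear()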
Detecting wire color in Maya II

Continuing from here I am going to look at override attributes. An object's display color is affected by overrides on itself, or on its parents. For this I wrote a function that checks whether overrides are enabled; if not, I check the parent, its parent, and so on. When an object has overrides enabled, I wish to get the color and, later, the displayType (template/reference). After having written the code I decided to create a simpler function that gets an override attribute by name, instead of having multiple functions doing the same thing.

def overrideAttr(inObj, inAttr):
    target = inObj
    while target:
        if cmds.getAttr('%s.overrideEnabled'%target):
            return cmds.getAttr('%s.%s'%(target, inAttr))
        parents = cmds.listRelatives(target, p=True, f=True)
        target = parents[0] if parents else None
    return None

The neat thing about this is that if the overrideDisplayType is set back to normal while the parent is templated, it will return 0 and display the object as normal, which it should, automatically. Then to apply this I only need to insert this code right before the final line in drawColor:

    #override color
    overridecolor = overrideAttr(shape, 'overrideColor')
    if overridecolor: #not None and not 0
        return cmds.colorIndex( overridecolor, q=True )

But now we can easily expand this to templating and referencing as well by getting the overrideDisplayType. If the object is selected we need to return the activeTemplate color, otherwise the template or reference color will suffice. Now here's a confusing bit: the displayColor name is referenceLayer, so that it won't be confused with file referencing, and for the template we use displayRGBColor, because in the preferences this is not a simple palette index but a free RGB selection not limited to the palette of the other colors. This goes for a select list of colors, which you can read with:

for i in cmds.displayRGBColor(list=True): print i
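The index-based colors can be listed the same way, as displayColor has a list flag too:

for i in cmds.displayColor(list=True): print i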

Now for ease of use I added a display layer, added the testing cube to it, and printed the color in every state: selected templated (orange), selected referenced (which is just the lead green), templated (gray), referenced (black), colorized layer and normal (in my case blue), and it all works, as you may see by trying!

So here’s the drawColor function in full again:

def drawColor(inObj):
    shapes = maya.utils.executeInMainThreadWithResult( 'cmds.listRelatives(\'%s\', ad=True, type=\'shape\', f=True)'%inObj )
    if not shapes:
        if cmds.nodeType(inObj) != 'transform':
            shape = inObj
        else: #transform node without shapes has no color
            return None
    else:
        shape = shapes[0]

    nodetype = displayColorType( shape )
    selected = isParentSelected( shape )
    displaytype = overrideAttr(shape, 'overrideDisplayType')
    
    if selected:
        #templated
        if displaytype == 1:
            return cmds.colorIndex( cmds.displayColor('activeTemplate', q=True, active=True), q=True )
        #lead
        if selected == cmds.ls(os=True, l=True)[-1]:
            return cmds.colorIndex( cmds.displayColor('lead', q=True, active=True), q=True )
            
        #active
        return cmds.colorIndex( cmds.displayColor(nodetype, q=True, active=True), q=True )
        
    #referenced
    if displaytype == 2:
        return cmds.colorIndex( cmds.displayColor('referenceLayer', q=True), q=True )
        
    #templated
    if displaytype == 1:
        return cmds.displayRGBColor('template', q=True)
    
    #override color
    overridecolor = overrideAttr(shape, 'overrideColor')
    if overridecolor: #not None and not 0
        return cmds.colorIndex( overridecolor, q=True )

    #dormant
    return cmds.colorIndex( cmds.displayColor(nodetype, q=True, dormant=True), q=True )

Now the last thing to do is find out if an object is affected by another, selected, object. I will implement this by inserting the following above the lines for referenced objects, directly after the block for selected objects:

    #affected
    if cmds.displayPref( q=True, displayAffected=True ) and isAffectedRecursively( shape ):
        #if obj is affected by something that is selected
        return cmds.colorIndex( cmds.displayColor('activeAffected', q=True, active=True), q=True )

The displayPref call is a Maya command and is necessary so we don't return this color if the user disabled affected highlighting in their preferences. The isAffectedRecursively function is a long answer to a simple question, 'is it purple?', which I have described in detail (with code) here.

Detecting wire color in Maya

When creating a shape node with Maya’s API in the draw event you simply get the state of the object. Sadly, this can never be retrieved anywhere else (unless we’d override all Maya nodes to have them store the value somewhere). After a long search I found no way of replicating what Maya does before drawing a node, so I had to come up with a different method.

When determining the color of an object's wireframe there are all kinds of influences. Is it:
>selected
>a lead selection
>templated
>referenced
>in a layer which is templated or referenced
>does it have an override color set
>does it have a layer with a color
>does it have a parent with an override color
and most hated of all:
>is it purple? (affected by a selected object)

Now luckily layers drive the overrideEnabled, overrideColor and overrideDisplayType attributes, so we don't really have to worry about those.

An important part is determining the order of importance in which these colors apply. Essentially templating is the most important:

>objects turn orange when selected and templated simultaneously
>green when selected as last (lead)
>white when selected
>purple when influenced by other selected objects (affected)
>gray when templated
>black when referenced
>overrideColor when enableOverrides is True
>blue otherwise

All these properties are inherited from parents as well. So when a referenced object has a drawing override it is still black; when an object is affected by another selected object but is also selected itself, it will still be green (or white); when an object's parent is templated, the object itself appears templated, etcetera.

Do realize this only applies to shapes, as they are the only objects actually being drawn!

The next problem, once we know all this information, is determining which colour links to that state. There are the displayColor and displayRGBColor commands for that and, lucky for us, they have a list feature. So by printing each entry and then reading for quite a while we find out the names of the entries (which mostly, but not always, match those in Window -> Settings/Preferences -> Color Settings).

Some colors can be set freely, such as the template color. Other colors can only be set to certain palette indices: displayColor returns a number and we have to use the colorIndex command to get to the actual color. We could hardcode the colors, but then the result would not match the display if the user changes their settings.
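So the two-step lookup for an indexed color looks something like this ('polymesh' is one of the display color names used further down; the printed floats follow whatever the user configured):

#dormant wireframe colour of polygon meshes: palette index first, then the RGB value
index = cmds.displayColor('polymesh', q=True, dormant=True)
print cmds.colorIndex(index, q=True)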

So let’s start with the most basic scenario, a given node’s child shape’s dormant color. Here we run immediately into the next issue, every shape type can have it’s own deselected and selected color and most of the names do not match the nodeType name. For example a measure node is of type distanceDimShape and it’s color needs to be retrieved as ‘dimension’. So here’s a partial list of name conversion:

def displayColorType(inObj):
    objtype = cmds.nodeType(inObj)
    if objtype == 'nurbsSurface':
        trims = cmds.listConnections(inObj, s=True, d=False, type='planarTrimSurface')
        if trims:
            objtype = 'trimmedSurface'
        else:
            objtype = 'surface'
    if objtype == 'nurbsCurve':
        projectCurves = cmds.listConnections(inObj, s=True, d=False, type='projectCurve')
        if projectCurves:
            objtype = 'curveOnSurface'
        else:
            objtype = 'curve'
    if objtype == 'mesh':
        objtype = 'polymesh'
    if objtype == 'joint' and cmds.listRelatives(inObj, ad=True, type='effector'):
        objtype = 'segment'
    if objtype == 'cluster':
        objtype = 'locator'
    if objtype == 'distanceDimShape':
        objtype = 'dimension'
    return objtype

This function is not limited to shapes, but it will result in errors when you attempt to use

cmds.displayColor(displayColorType(inObj), q=True)

on a transform node.

So I assume this is all I need, but the list may get longer if it turns out certain objects try to get their color by the wrong name. Now let's check whether the object is selected or not and return either the dormant or the active color related to its type:

def drawColor(inObj):
    shapes = cmds.listRelatives(inObj, ad=True, type='shape', f=True)
    if not shapes:
        if cmds.nodeType(inObj) != 'transform':
            shape = inObj
        else: #transform node without shapes has no color
            return None
    else:
        shape = shapes[0]

    nodetype = displayColorType( shape )
    if shape in cmds.ls(sl=True,l=True):
        return cmds.colorIndex( cmds.displayColor(nodetype, q=True, active=True), q=True )
    return cmds.colorIndex( cmds.displayColor(nodetype, q=True, dormant=True), q=True )

print( drawColor( cmds.polyCube()[0] ) )

Now this will immediately print the wrong color, as the parent is selected and not the shape.

#result: [1.0, 1.0, 1.0]

So let’s solve that bit with the following function:

def isParentSelected(inObj):
    selection = cmds.ls(sl=True, l=True)
    target = cmds.ls(inObj, l=True)[0] #ensure full path
    while target:
        if target in selection:
            return True
        target = cmds.listRelatives(target, p=True, f=True)[0]
    return False

Now in drawColor on line 11 instead of using

if shape in cmds.ls(sl=True, l=True)

I will use

if isParentSelected(shape):

Now it returns the selected color, which is white.

#result: [1.0, 1.0, 1.0]

But our object is the lead selection, so it should be green. With some modifications this isn't too difficult: let isParentSelected return the selected node (or None) instead of True. I also fixed the part where I forgot to check whether we had a selection at all; ls and listRelatives return None instead of an empty list when there is no result, so 'target in selection' breaks if there is no selection.

def isParentSelected(inObj, ignoreSelf=False):
    selection = cmds.ls(sl=True, l=True)
    if not selection: #no selection, no result
        return
    if not ignoreSelf:
        if inObj in selection:
            return inObj
    targets = cmds.listRelatives(inObj, ap=True, f=True)
    if not targets:
        return
    for target in targets:
        if target in selection:
            return target
    return

Then the last bit of drawColor becomes this:

    nodetype = displayColorType( shape )
    selected = isParentSelected( shape )
    if selected:
        if selected == cmds.ls(os=True, l=True)[-1]:
            return cmds.colorIndex( cmds.displayColor('lead', q=True, active=True), q=True )
        return cmds.colorIndex( cmds.displayColor(nodetype, q=True, active=True), q=True )
    return cmds.colorIndex( cmds.displayColor(nodetype, q=True, dormant=True), q=True )

The ls function with the os flag returns the ordered selection, ensuring that the last entry is indeed the lead object, in which case we use the active lead displayColor instead of the active nodetype displayColor. The lead color can also be customised, but not per object type.

#result: [0.2630000114440918, 1.0, 0.63899999856948853]

Now this post is getting rather long, so more on this later, where I’ll have a look into overrides.

Knife II

After losing some work due to HDD problems I ran into a lot of issues with the previously posted Knife SOP.
So I tried again using the normal Knife and got the same issues, as it does not work iteratively, and then I bugfixed (or largely rewrote) the previous knife I made, hopefully more functional this time. It retains face order, it now calculates point attributes for new points, and it transfers custom primitive attributes (the split prims duplicate the attribute values). Consider this a snippet dump…

node = hou.pwd()
geo = node.geometry()


#parse parameters
target = node.evalParm("target")
origin = hou.Vector3( node.evalParm("originx"), node.evalParm("originy"), node.evalParm("originz") )
distance = node.evalParm("dist")
direction = hou.Vector3( node.evalParm("dirx"), node.evalParm("diry"), node.evalParm("dirz") ).normalized()
#distance really just moves the origin
origin += direction*distance


def rayPlaneIntersect(rayorigin, in_raydirection, planeorigin, in_planenormal):  
    '''
    @returns: Vector3, intersectionPoint-rayOrigin
    '''
    raydirection = in_raydirection.normalized()  
    planenormal = in_planenormal.normalized()  
    distanceToPlane = (rayorigin-planeorigin).dot(planenormal)  
    triangleHeight = raydirection.dot(-planenormal)  
    if not distanceToPlane: #ray origin lies in the plane
        return rayorigin-planeorigin  
    if not triangleHeight: #ray is parallel to plane
        return None
    return raydirection * distanceToPlane * (1.0/triangleHeight)


def getPolygonsWithEdge(geom, edgeids):
    '''
    @param geom: hou.Geometry, geometry to search
    @param edgeids: tuple of 2 ints, point numbers
    describing the edge to find shared faces for
    
    @returns: list of hou.Prim, all primitives sharing this edge
    
    if the points are not connected by an edge
    (adjacent in the vertex list of any primitive)
    the result is an empty list
    '''
    out = []
    for poly in geom.prims():  
        verts = poly.vertices()
        for i in range(poly.numVertices()):
            if verts[i].point().number() in edgeids and\
               verts[(i+1)%poly.numVertices()].point().number() in edgeids:
                out.append(poly)
    return out


#stub primitive, I use this for the cut faces so I can actually add the polygons in the end to not disturb primitive order / numbers
class notPrim():
    def __init__(self):
        self.points = []

    def addVertex( self, inPoint ):
        self.points.append( inPoint )

    def addToGeo( self, inGeo ):
        poly = inGeo.createPolygon()
        for point in self.points:
            poly.addVertex(point)
        return poly


### Cut target ###  
verts = geo.iterPrims()[target].vertices()  
nverts = len(verts)  
cutEdges = []
adjFaces = []
#foreach edge
for i in range(nverts):  
    pt0 = verts[i].point()
    pt1 = verts[(i+1)%nverts].point()
    edgedirection = pt1.position()-pt0.position()
    #find intersection on edge
    intersectpt = rayPlaneIntersect(pt0.position(), edgedirection, origin, direction)  
    if not intersectpt: #edge is parallel to cutting plane  
        continue
    #check if intersection is on the edge (line-segment)
    param = intersectpt.dot(edgedirection.normalized())
    if param > 0 and param < edgedirection.length():
        #normalize the parameter to 0-1 along the edge for interpolating attributes
        t = param / edgedirection.length()
        #store the cut
        pt = geo.createPoint()

        #propagate point attribs by linear interpolation
        for attrib in geo.pointAttribs():
            val0 = pt0.attribValue(attrib)
            val1 = pt1.attribValue(attrib)
            if type(val0) in (int, float):
                pt.setAttribValue( attrib, val0*(1-t)+val1*t )
            if type(val0) == tuple:
                val = []
                for k in range(len(val0)):
                    if type(val0[k]) in (int, float):
                        val.append( val0[k]*(1-t)+val1[k]*t )
                pt.setAttribValue( attrib, val )

        pt.setPosition( intersectpt+pt0.position() )  
        cutEdges.append( [pt0, pt1, pt] )

        #store the face(s) influenced by this cut
        adjFaces.extend( getPolygonsWithEdge( geo, (pt0.number(), pt1.number()) ) )


### Rebuild geometry ###
delete = []
polys = []
#rebuild all prims
for i in range(len(geo.iterPrims())):
    prim = geo.iterPrims()[i]
    delete.append(prim) #remove all old prims

    poly = geo.createPolygon() #create new prim

    #duplicate prim attribs
    for attrib in geo.primAttribs():
        val = prim.attribValue(attrib)
        poly.setAttribValue( attrib, val )

    #build cut faces
    if i == target:
        ### Create cut face ###
        #iterate over edges to build new polygons  
        cuts = 0
        wrap = False
        polys.append( poly ) #the first split primitive keeps the original primitive nr
        for j in range(prim.numVertices()):
            cut = None
            vtx = prim.vertices()[j]
            #all cuts added, finish the first polygon
            if wrap:
                polys[0].addVertex(vtx.point())  
                continue

            #find edge points
            nxtvtx = prim.vertices()[(j+1)%prim.numVertices()]
            polys[-1].addVertex(vtx.point())

            for edge in cutEdges:
                if vtx.point() in edge and nxtvtx.point() in edge:
                    cut=edge

            #if edge is a cut edge
            if cut:
                polys[-1].addVertex(cut[2])  
                cuts += 1
                if cuts == len(cutEdges): #wrap to first polygon at last cut  
                    wrap = True  
                    polys[0].addVertex(cut[2])
                    continue
                polys.append(notPrim()) #add stub primitive and start building vertex list for it
                polys[-1].addVertex(cut[2])
    else: #or just build the primitive again
        for j in range(prim.numVertices()):
            vtx = prim.vertices()[j]
            poly.addVertex(vtx.point())
            if prim in adjFaces: #if influenced by cut, then cut the right edge
                for edge in cutEdges:
                    #when at the start of the split edge, known because the next vertex is its end point
                    if vtx.point() in edge:
                        nxtvtx = prim.vertices()[(j+1)%prim.numVertices()]
                        if nxtvtx.point() in edge: 
                            #add the intersection point
                            poly.addVertex(edge[2])
                            break


### append stub split faces at the very end###
for i in range(1,len(polys),1):
    poly = polys[i].addToGeo( geo )
    src = geo.iterPrims()[target]
    #duplicate prim attribs
    for attrib in geo.primAttribs():
        val = src.attribValue(attrib)
        poly.setAttribValue( attrib, val )

#remove all old prims
geo.deletePrims(delete, True)
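For what it's worth, here is a quick sanity check of the rayPlaneIntersect helper above, run from a Python shell with the function pasted in; the numbers are just a hand-picked case:

import hou

#a ray from the origin along +X against a plane at x = 2 facing -X;
#the helper returns intersection minus ray origin, so roughly (2, 0, 0)
rayOrigin = hou.Vector3(0, 0, 0)
rayDirection = hou.Vector3(1, 0, 0)
planeOrigin = hou.Vector3(2, 0, 0)
planeNormal = hou.Vector3(-1, 0, 0)
print rayPlaneIntersect(rayOrigin, rayDirection, planeOrigin, planeNormal)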
PyQt Binding signals dynamically

So I got around to solving this thing when I wanted to create a QFrame that docks itself to the top right, ignoring all layout (so it can actually be on top of things). Useful to me as a sort of icon bar on top of a tab widget, given that the tab widget would never have so many tabs that the tabs go behind the icons.

To dock something it would need to know its parent widget and connect to the parent's resize event to update its own geometry. There is no resize signal however, so the resizeEvent needs to be overridden; but simply replacing it isn't an option, because the original resizeEvent handles all kinds of stuff that we need.

So we can choose the cheap way out and inherit QWidget, override the resizeEvent and create a QFrame that is outside the layout and always forced in the top right, but let’s disregard that for a moment as this gets more interesting.

We can’t create signals on runtime, so we need a custom signal class that works exactly like pyqtBoundSignal in usage except it doesn’t crash Qt on creation.

Note: the pyqtBoundSignal class can't be created manually, and the pyqtSignal class is just a placeholder that can't be used on its own as it contains no actual signal functionality.

We also can’t extend functions in a decent way in Python, but this hack proved quite useful.

'''
Created on Feb 15, 2013
@author: Trevor van Hoof
@package Qtutils
'''


class UnboundSignal():
    def __init__(self):
        self._functions = []
    
    def emit(self):
        for function in self._functions:
            function()
    
    def connect(self, inBoundFunction):
        self._functions.append( inBoundFunction )
    
    def disconnect(self, inBoundFunction):
        try:
            self._functions.remove( inBoundFunction )
        except ValueError:
            print('Warning: function %s not removed from signal %s'%(inBoundFunction,self))

So here’s the UnboundSignal class I use, it just implements all the signal functionality I use (new style) and then I can instantiate it, it is only not aware of what parent it has or the self class, but as a bonus it could be driven by multiple classes or instances at the same time.

Example: when you wish to have one object fill the gap between two others, you either need the middle object to link to the resizeEvent of both, or you just give the other two objects a shared signal.
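As a quick illustration of that last point, here is a tiny Qt-free sketch (the emitter class and its names are made up for this example) where two objects share one UnboundSignal:

#toy emitters sharing the UnboundSignal defined above; pretendResize stands in
#for the extended resizeEvent of a real widget
class Emitter(object):
    def __init__(self, sharedSignal):
        self.resized = sharedSignal

    def pretendResize(self):
        self.resized.emit()

def onResized():
    print('something resized')

shared = UnboundSignal()
shared.connect(onResized)

a = Emitter(shared)
b = Emitter(shared)
a.pretendResize() #prints once
b.pretendResize() #prints again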

Then for our test class we need to initialize it with a parent, always.

from PyQt4 import QtGui
from Qtutils.LaunchAsStandalone import QtStandalone
from Qtutils.unboundsignal import UnboundSignal

class Tst(QtGui.QFrame):
    def __init__(self, inParent):
        QtGui.QFrame.__init__(self, inParent)

Then, as the parent is known, we can give that parent a resized property, set it to a new signal and connect to that signal.

        self.parent().resized = UnboundSignal()
        self.parent().resized.connect( self.doPrint )

Lastly we need to override the resizeEvent and show the widget:

        self.parent().resizeEvent = self.extendResizeEvent( self.parent().resizeEvent )
        
        self.show()

Now for that extend method:

    '''
    Awesome method extension from
    http://stackoverflow.com/a/2789542
    '''
    def extendResizeEvent(self, fn):
        def extendedResizeEvent(*args, **kwargs):
            fn(*args, **kwargs)
            fn.__self__.resized.emit()
            #we could do this instead of using the signal:
            #self.updatePosition()
            #but the signal could be created out of
            #this class and be globally accessible
        return extendedResizeEvent

It could even stack infinitely, and as long as the extensions do not depend on new arguments it is reasonably maintainable code. Lastly, let's launch the app:

def main():
    w = QtGui.QWidget()
    Tst(w)
    w.show()
    return w
    
QtStandalone(main)

The QtStandalone class can be found in this post.

To finish this example we could allow initializing without a parent and override the setParent method to disconnect from the current signal and create a new signal on the new parent; or we could always make this class the owner of the signal instead of the parent that emits it (also reverting the extended function), but that may lead to more trouble when attaching multiple objects to the same parent. We should also check whether the parent already has a resized signal, in which case the initialization is not necessary.

PyQt multiple inheritance

Qt (and PyQt) does not support multiple inheritance. However, when two classes each inherit from a Qt class, it is possible to inherit from both of them if their Qt base classes are the same or lie in the same line of inheritance, though there are a couple of limitations.

1. The first inherited widget must have the deepest base-class.

2. Only additional signals defined in the first inherited widget are used.

3. Name clashes are resolved by calling the first class’s methods

1. The first inherited widget must have the deepest base-class.

So given this situation:

from PyQt4 import QtGui
class Widget(QtGui.QWidget):
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)

class Frame(QtGui.QFrame):
    def __init__(self, parent=None):
        QtGui.QFrame.__init__(self, parent)

This is allowed:

class Child(Frame, Widget):
    def __init__(self, parent=None):
        Frame.__init__(self, parent)
        Widget.__init__(self, parent)

But this is not:

class Child(Widget, Frame):
    def __init__(self, parent=None):
        super(Child, self).__init__(parent)

Because QFrame is not a base class of, or identical to, QWidget.

Notice that the order of the base constructors does not matter, just the order in the class definition.
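If you want to see the order Python actually ends up with for the allowed definition, printing the MRO makes rule 3 below less surprising:

#for the allowed Child(Frame, Widget) defined above; Frame (and QFrame)
#come before Widget, which is why Frame wins any name clash later on
print(Child.__mro__)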

2. Only additional signals defined in the first inherited widget are used.
Now extending the situation into this:

from PyQt4 import QtGui
class Widget(QtGui.QWidget):
    widgetSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)

class Frame(QtGui.QFrame):
    frameSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        QtGui.QFrame.__init__(self, parent)

class Child(Frame, Widget):
    def __init__(self, parent=None):
        super(Child, self).__init__(parent)
        self.frameSignal.connect(self.printtest)
        self.widgetSignal.connect(self.printtest)

    def printtest(self):
        print("test")

This will raise an error at the widgetSignal connection, stating that it is not possible to connect between a Widget signal and a unislot().

This is because to Qt we are a Frame, not a Widget. Even if both base classes had the same Qt base class (so say we make FrameA and FrameB which inherit from QFrame) it still raises that same error.

We can however add signals in our own context, so it is possible to copy those signals and have the parent class end up emitting the overrides instead of the dysfunctional signals.

from PyQt4 import QtGui
class Widget(QtGui.QWidget):
    widgetSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)

    def emitWidgetSignal(self):
        self.widgetSignal.emit()

class Frame(QtGui.QFrame):
    frameSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        QtGui.QFrame.__init__(self, parent)

    def emitFrameSignal(self):
        self.frameSignal.emit()

class Child(Frame, Widget):
    widgetSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        super(Child, self).__init__(parent)

        self.frameSignal.connect(self.printtest)
        self.widgetSignal.connect(self.printtest)

        self.emitFrameSignal()
        self.emitWidgetSignal()

    def printtest(self):
        print("test")

Now whenever Widget uses self.widgetSignal.emit(), to the Child it will refer to the Child.widgetSignal override, which we can use again.

3. Name clashes are resolved by calling the first class’s methods

As you may see in the previous example I explicitly gave emitFrameSignal and emitWidgetSignal different names; this is because the first base class, Frame, overrides the second base class, Widget.

So imagine this:

from PyQt4 import QtGui
class Widget(QtGui.QWidget):
    widgetSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)

    def emitWidgetSignal(self):
        self.emitSignal()

    def emitSignal(self):
        self.widgetSignal.emit()

class Frame(QtGui.QFrame):
    frameSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        QtGui.QFrame.__init__(self, parent)

    def emitFrameSignal(self):
        self.emitSignal()

    def emitSignal(self):
        self.frameSignal.emit()

class Child(Frame, Widget):
    widgetSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        super(Child, self).__init__(parent)

        self.frameSignal.connect(self.printframe)
        self.widgetSignal.connect(self.printwidget)

        self.emitFrameSignal()
        self.emitWidgetSignal()

    def printframe(self):
        print("frame")

    def printwidget(self):
        print("widget")

This prints frame twice: even though we go through the Frame or Widget class separately to call emitSignal, the call goes through 'self', which in this case is Child, and that always resolves to Frame.emitSignal.

So to resolve the issue we could adapt the base classes either to have different function names (as shown before), or to refer to the right class when calling the method, as opposed to self. The latter is especially useful when the base class must emit signals from inherited methods which we can't rename (such as mouse events).

from PyQt4 import QtGui
class Widget(QtGui.QWidget):
    widgetSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)

    def emitWidgetSignal(self):
        Widget.emitSignal(self)

    def emitSignal(self):
        self.widgetSignal.emit()

class Frame(QtGui.QFrame):
    frameSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        QtGui.QFrame.__init__(self, parent)

    def emitFrameSignal(self):
        Frame.emitSignal(self)

    def emitSignal(self):
        self.frameSignal.emit()

Be aware that this does not resolve name clashes in signal names. In fact, if both base classes' signals were named 'signal', we could only refer to self.signal, which resolves to the Frame's signal as it is the first base class.

Also the child widget’s emitSignal functions would refer to self.signal, which can not be resolved by using the class name because Frame.signal and Widget.signal refer to a pyqtSignal. When an instance is created the class signals are converted to bound signals attached to the instance by Qt. Only bound signals can be connected to and emitted (as well as all other functionality). This is also the reason signals require definition at class level, so they can be resolved and bound on init.

Now here’s a working standalone demonstration:

import sys
from PyQt4 import QtGui, QtCore

class QtStandalone:
    def __init__(self, mainfunction):
        app = QtGui.QApplication(sys.argv)
        alive = mainfunction()
        app.exec_()
        
class Widget(QtGui.QWidget):
    widgetSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)

    def emitWidgetSignal(self):
        Widget.emitSignal(self)
        
    def emitSignal(self):
        self.widgetSignal.emit()

class Frame(QtGui.QFrame):
    frameSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        QtGui.QFrame.__init__(self, parent)

    def emitFrameSignal(self):
        Frame.emitSignal(self)
        
    def emitSignal(self):
        self.frameSignal.emit()

class Child(Frame, Widget):
    widgetSignal = QtCore.pyqtSignal()
    def __init__(self, parent=None):
        super(Child, self).__init__(parent)
        
        self.frameSignal.connect(self.printframe)
        self.widgetSignal.connect(self.printwidget)
        
        self.emitFrameSignal()
        self.emitWidgetSignal()

    def printframe(self):
        print("frame")
        
    def printwidget(self):
        print("widget")

#from Qtutils.LaunchAsStandalone import QtStandalone  #use this import instead of the inline class above when the Qtutils package is available

def main():
    w = Child()
    w.show()
    return w
    
QtStandalone(main)
Mouse tracking on a widget

I am creating a generic testing widget that I may inherit from later on to track the mouse in a specific widget. The advantage of this is that the mouse position is relative to the widget (useful for painting in the GraphicsView for example) and that I am certain of which widget the mouse is on when the events are triggered.

Setting up a mouse widget is fairly simple: we inherit from QWidget and override any mouse related events. What I wish to know about the mouse is where it is now, where it was last pressed and whether it is still pressed.

from PyQt4 import QtGui, QtCore
#Vec is the small 2D vector class posted earlier,
#ButtonState is the enum defined at the end of this post


class MouseWidget(QtGui.QWidget):
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)
        self.position = Vec(0,0)
        self.leftState = ButtonState.up
        self.dragStart = Vec(0,0)

Also we need to add signals so that other code can attach events to the mouse callbacks – even an inheriting class may choose to connect to its own signals, instead of overriding mousePressEvent again, for cleaner code.

These signals need to be class variables: a pyqtSignal object on its own is unbound and cannot be connected or emitted, which comes down to the difference between the Python and C++ implementations of signals. Qt only binds signals that are declared on the class, and it does so per instance when the instance is created, so creating a signal inside a function (whether it's __init__ or another) never results in a usable signal.

    onMousePressed = QtCore.pyqtSignal()
    onMouseReleased = QtCore.pyqtSignal()
    onMouseMoved = QtCore.pyqtSignal()
    onMouseLeave = QtCore.pyqtSignal()
    onMouseEnter = QtCore.pyqtSignal()

The last thing to do in the init function is to enable mouse tracking; this makes sure the mouseMove event is also triggered when no mouse button is pressed (which normally isn't the case).

        self.setMouseTracking(True)

On press I update the last clicked position and set the left button state to pressed.

    def mousePressEvent(self,e):
        self.leftState = ButtonState.press
        v = self.mouseEventPosition(e)
        self.dragStart = v
        self.position = v
        self.onMousePressed.emit()
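The mouseEventPosition call above is a small helper on the same class that is not spelled out in this post; a minimal sketch, assuming Vec simply takes an x and y, would be:

    def mouseEventPosition(self, e):
        #QMouseEvent positions are already relative to the widget receiving the event
        return Vec(e.pos().x(), e.pos().y())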

On move I update the position, and transition the state from release to up and from press to down.

    def mouseMoveEvent(self, e):
        if self.leftState%2:
            self.leftState -= 1 
        self.position = self.mouseEventPosition(e)
        self.onMouseMoved.emit()

On release I set the left button state to released and update the position.

    def mouseReleaseEvent(self,e):
        self.leftState = ButtonState.release
        self.position = self.mouseEventPosition(e)
        self.onMouseReleased.emit()

But then there are the special cases of leave and enter. When I am dragging an object around and leave the widget, I can release the mouse outside of it and won't get notified about that, so instead I treat leaving as undoing: I snap the mouse position back to the last clicked position.

    def leaveEvent(self, e):
        if self.leftState > 1:
            #still down or pressed: cancel the drag by snapping back to the drag start
            self.position = self.dragStart
        self.onMouseLeave.emit()

Also on the enterEvent the mouse is reset to be up because we have no measurement of what happened while the mouse was off the widget.

    def enterEvent(self, e):
        if self.leftState > 1:
            #assume the left mouse button was released while off the widget
            self.leftState = ButtonState.up
        self.onMouseEnter.emit()

Then this is my ButtonState, which is basically a python enum. Please do read the link in the comments for more information:

'''
Enum pattern from
http://stackoverflow.com/a/1695250/1971060
'''
def enum(*sequential, **named):
    enums = dict(zip(sequential, range(len(sequential))), **named)
    reverse = dict((value, key) for key, value in enums.iteritems())
    enums['index'] = reverse
    return type('Enum', (), enums)

ButtonState = enum('up', 'release', 'down', 'press') #lowercase so the names match the widget code above
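To give an idea of how these signals are meant to be used, here is a small usage sketch that reuses the QtStandalone helper from the standalone examples on this blog; the lambdas are only there for illustration:

import sys

def main():
    w = MouseWidget()
    #connect to the widget's signals instead of overriding its events again
    w.onMousePressed.connect(lambda: sys.stdout.write('mouse pressed\n'))
    w.onMouseMoved.connect(lambda: sys.stdout.write('mouse moved\n'))
    w.show()
    return w

QtStandalone(main)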
Houdini clouds tool demo

Explanation on how to start making clouds here.

cloud_tool_1
The start of the tool lets you set up a number of copies, which are placed along the X axis, and a min and max size. The size determines the endpoint and midpoint scale of the copied sphere and the radius in which they scatter, which results in a nicely curvy cloud. The Show result button is for previewing the settings more quickly; this button appears in several spots of the tool so you can look at different stages of the cloud, isolating certain calculations and increasing editing speed.

cloud_tool_2
Next there are the stacking tabs. This primary stack scatters spheres onto the previously determined shape (which is first converted to metaballs to avoid invisible inside spheres). The radius varies randomly between the given values, and the amount per area increases the number of spheres scattered in this specific stack. The merge node on the right of the network is there solely so the preview also shows the input of the scatter.

cloud_tool_3
Next all stacks are applied, with ever decreasing radii and ever increasing amounts the cloud takes its final shape. Then the post-scale comes into play. The foreach loop at the end of the network scales every sphere individually from its own center point to the post-sphere scale.
Note that previewing nodes are painted out of the network to avoid confusion.

Because all scattered spheres are put with their center on the surface of the input geometry, they will never float loose and creases in the cloud volume are not very deep. By scaling afterwards, less densely scattered areas will have bigger gaps and this will create a more puffy cloud with more pieces hanging loose around the edges.

cloud_tool_4
Converted to a volume with a billow smoke shader results in this final picture.

Houdini Clouds

I was working on creating a day-night cycle using Unity and figured it would be best to generate volumetric cloud data rather than attempting to approximate cloud shading from a sky texture. Hence I googled and searched the Houdini site to find:

A cloud example, leading to this customer story.
Reading through that and finally finding this howto gave me some steps to follow, which quickly led to a renderable set of clouds.

Then I attempted to duplicate the Rio system (from the customer story), which I already had vaguely in mind. Seeing their pictures gave me the great tip to not just scatter spheres on spheres, but to offset the secondary and tertiary spheres in Y, so that the clouds actually stack upwards for a much more cloud-like look. This probably isn't how clouds look up close, but it is how they appear to me from the ground (slightly stylized), which is all I require.

The final idea is to generate game-ready cubemaps with object space normal maps of clouds (with volumetric transparency) as well as a depth map, so that when the sun is directly behind a cloud and the normal map has no influence, the cloud still looks right (1 - depth = translucency). I imagine handling this in a shader; that is my current concept for a good looking day-night cycle.

First step is to produce a decent looking cloud:
(Houdini Mantra render on the left, Composite on a blue background on the right)
cloud

I created a sphere to scatter metaballs on, merged it with a metaball the same size as the sphere and converted that to polygons to base my volume on. Not cloudy at all, but at least a woolly base look. Then I followed the steps of
http://www.sidefx.com/docs/houdini9.5/howto/clouds
in Create clouds using Volumes. But I’ll go more into detail below.
cloud_step1

The IsoOffset needs its output type set to SDF Volume and requires Invert Sign to be enabled; this is a tickbox in the Construction tab! I started playing with the offset to see something, but once I got to the volume mix this appeared unnecessary, so just leave it at zero.

After copying the formula given at the houdini howto I did increase the Uniform sampling to 30.
cloud_step2

Now for the rendering I just added a top-down distant light (from the Lights and Cameras shelf) and created a default camera to frame my cloud, paying attention to the volume box, not the volume. DO use ray trace shadows; ignore the remark about depth maps, as for me they decreased both quality and speed with default rendering.

Drag on the Billowy Smoke shader, then in the out context I add a default mantra node, and in the render view I set the render node / camera and hit Render.
cloud_step3

Now for the cloud look all we have to do is adjust the geometry; for the ambient color we can just adjust the billowy smoke shader's shadow density to something like 1.3. The smoke density can be adjusted to have more or fewer chunks separated at the edges. When increasing samples the smoke density should also increase so the extra samples actually count and are visible; it can also create a tighter, more cartoony look, but you can come back to this at any time.

Looking at the Rio image we can see a distinction between green (base spheres) and red (small secondary spheres). So I decided to scatter 4 or 5 points on a unit sphere and copy smaller spheres onto those, editing the first sphere to have radii (1, 0.6, 1.2) for a slightly flatter result.

Now instead of scattering points onto the sphere, I first create a combined mesh by copying metaballs using the spheres as template points (meaning that the metaballs come in the same volume as the input, to achieve this the metaball weight must be set to a high value; I use 100).

Then I can scatter onto that so that there are no spheres on the inside. The metaballs need to be converted to polygons, setting the level of detail U, V to 1, 1. Then again scattering 4 points per area and copying smaller spheres on top, merged with the initial spheres, already gives me a better cloud.
cloud_step4

But it is still much too generic and blobby. After repeating the above process with even smaller spheres didn't work, I decided to randomize my secondary sphere radius by copy stamping the template points. Comparing the previous step (left) with this step (right) you can see the cloud has larger creases now.
Image5

This randomness is what I wish for, so the next step is to add a third iteration of spheres. First I convert the previous step to a subnet: everything between the first copy node and the merge at the end becomes a subnet, and then I append a copy of that subnet after the first one for the tertiary spheres. Inside the copy the sphere radius is scaled down (previously I randomized tpt*0.2+0.2 and now I randomize tpt*0.1+0.1). The problem however is that the metaball copy in the second subnet no longer matches the sphere input, because the spheres have a varying radius. The trick is to add an attribute, pointradius, at the point level of the spheres. I do this outside of the subnet already, at the first sphere, so that I can keep both subnets identical; the metaball in the subnet then uses point(input,tpt,pointradius,0) as its radius and the copy node output matches the sphere input again.
cloud_step6

Then with the two subnets converted to a volume again, setting the second sphere radius to a smaller number and the scatter per area amount to a higher number creates this result (I did edit the shader to have a volume density of 30 and a shadow density of 1.3 now):
cloud_step7

Increasing the isooffset samples to 100, changing the volume mix formula to clamp($V*4, 0, 1) – just editing the multiplier to something much lower – and setting the billowy smoke's smoke density to 30 gets me roughly what I want.
cloud_step8

As a final note, it is also possible to set the isooffset sampling to non-square. For some clouds I got strange stretching in the noise, which can be solved by manually calculating or entering a more suitable number of samples, and/or oversampling the largest axis compared to the other axes.
cloud_step9
On the left the previous result with diagonal lines disturbing the look, on the right the non-square sample setup with this problem solved.

UI file loading

I followed this explanation by Nathan Horne and built an inheritable class for loading .ui files and compiling them at runtime quite a while ago.

Also I found this neat way of writing singletons.

So here’s a class that takes a UI file, compiles it and shows its contents.

'''
@Author: Trevor van Hoof

UIC compiler at runtime
Inherit uicWindow and give it a ui file to open
Reloading the import of uicr will also recompile
the UI file, showing new changes.
'''


import os.path
from PyQt4 import uic, QtGui


'''
Generic PyQt window class which can be inherited from for quick window creation
'''
class UicWindow(object):
    '''
    Uses the uic compiler at runtime, so any QtDesigner file gets updated immediately
    The created QtWindow object is named after the file given (without extension)
    When creating multiple instances of the same .ui file it may be wise to manually rename
    by using the .window.setWindowTitle() function 
        
    @param in_parent
    The window this widget is parented to
    -> widgets get embedded in main
    -> dockwidgets can dock to main
    -> mainWindows get closed when main gets closed
        
    @param in_uifile
    The QtDesigner ui file to load, best is to use an absolute path to avoid problems with import and inheritance
        
    @param in_customtitle
    QtDesigner permits windows and widgets to be named, but it is also possible to set or change
    the name using script, this is supported so multiple copies of the same input can be differently named
    '''
    def __init__(self,in_parent,in_uifile,in_customtitle=None):
        window_class = uic.loadUiType(in_uifile)
        '''
        The uic returns both a form class with other functionality
        and a QWidget with the designer file objects
            
        Both functionalities are required and are therefore packed together
        through inheritance in this embedded class which serves no other
        purpose than combining data
        '''
        class QtWindow(window_class[0],window_class[1]):
            def __init__(self):
                #do nothing here; the base class is initialized manually below with the right parent
                pass

        self.window = QtWindow()
        super(QtWindow, self.window).__init__(in_parent)
        self.window.setupUi(self.window)
        self.window.setObjectName(os.path.splitext(os.path.basename(in_uifile))[0])
        if in_customtitle is not None:
            self.window.setWindowTitle(in_customtitle)
        self.window.show()

    def snapToCenter(self):
        if self.window.parent() != None:
            core = self.window.parent().geometry().center()
        else:
            core = QtGui.QDesktopWidget().screen().geometry().center()
        geo = self.window.geometry()
        self.window.setGeometry( core.x()-geo.width()*0.5,
                                 core.y()-geo.height()*0.5,
                                 geo.width(),
                                 geo.height() )
    
    def resizeAndCenter(self, in_size):
        self.window.resize(in_size)
        self.snapToCenter()
    
    def __del__(self):
        try: self.window.close()
        except: pass

And here’s a usage example; note that the ui file must exist.

import Qtutils.uicr
from Qtutils.LaunchAsStandalone import *
from PyQt4 import QtCore, QtGui

class MainWindow(Qtutils.uicr.UicWindow):
    def __init__(self):
        #act like a singleton, any future function call will return this instance
        globals()[self.__class__.__name__] = self
        
        #get a file next to this file
        self.filepath = __file__.replace('\\','/').rsplit('/',1)[0]
        filename = ('%s/main.ui'%self.filepath)
        
        #and load it as a UI file, parent defaults to None
        Qtutils.uicr.UicWindow.__init__(self, None, filename)

    '''
    Makes sure the singleton instance is callable
    '''    
    def __call__(self):
        return self


#main function to launch as standalone app for unit-tests
def main():
    w = MainWindow()
    w.resizeAndCenter( QtCore.QSize(180,220) )
    return w
    
QtStandalone(main)

By removing the first line in __init__:

globals()[self.__class__.__name__] = self

The class is no longer a singleton; this may be desirable while frequently updating the ui file, as with the singleton in place the ui is not recompiled until the stored instance is replaced by reinitializing the class.
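With that line in place, repeated calls in the module where MainWindow is defined simply hand back the same instance; a sketch of the effect:

w1 = MainWindow()  #builds the window and stores the instance under the class name
w2 = MainWindow()  #'MainWindow' now refers to that instance, so __call__ just returns it
print w1 is w2     #True, no second window is created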

Custom GraphicsView setup

Referring to previous prototypes, this class was set up reasonably fast. The Polygon class however was built from scratch, as my prototype for that did its own triangulation (slowly, I might add) and I figured out that although QPolygon does not accept a list of points, it can quite easily be populated with a concave shape with good results.

from PyQt4 import QtCore, QtGui
#import classes from the same package in the same namespace
from MouseData import *
from Qtutils.LaunchAsStandalone import *
from PaintItem import *
from Polygon import *

'''
GraphicsView is a custom implementation of
the QGraphicsView that does not depend on
scenes and does not support zooming and
panning in such an awkwardly enforced way.

In the event of camera usage, the
mouse coordinates will also be in a
converted state allowing functionality
programming in a natural way.
'''
class GraphicsView(QtGui.QFrame):
    def __init__(self, parent=None):
        #inherit from QFrame for the paint function
        #and the ability to be used in a window
        QtGui.QFrame.__init__(self, parent)

        #track what items are attached to this scene
        self._paintitems = []
    
    '''
    Helper function for adding drawable items
    to the graphicsview, filters out items that do
    not support drawing
    
    @param in_item: item to add on the stage
    any class with a paint(QPainter) function
    '''
    def addItem(self, in_item):
        if isinstance(in_item, PaintItem):
            self._paintitems.append(in_item)
    
    '''
    Inherited paint event
    '''
    def paintEvent(self, e):
        #let parent handle drawing the main object
        QtGui.QFrame.paintEvent(self, e)
        #create a painter for objects to draw with
        painter = QtGui.QPainter(self)
        #draw each paintable item
        for item in self._paintitems:
            item.paint(painter)

Then to prototype this class, I use the QtStandalone class and the following main function at the bottom of my GraphicsView file:

#main function to launch as standalone app for unit-tests
def main():
    w = GraphicsView()
    w.resize(QtCore.QSize(110,110))
    p = Polygon([Vec(0,0),
                 Vec(100,0),
                 Vec(100,100),
                 Vec(66,100),
                 Vec(66,50),
                 Vec(33,50),
                 Vec(33,100),
                 Vec(0,100)])
    w.addItem(p)
    w.show()
    return w
    
QtStandalone(main)

This won't work of course, because the Polygon and PaintItem classes are not known yet. I created PaintItem to be able to do some type checking as well as to implement generalized behaviour later on, as I want my drawn objects to be able to transform (scale, rotate, reposition) and possibly even support zooming, but for now it is pretty much empty:

'''
base class for drawable items
'''
class PaintItem:
    def __init__(self):
        pass

So here's the Polygon class. It stores its vertices in a list and rebuilds the internal QPolygon only when necessary to gain performance (I don't know how slow this program will become at the end of the day, so I'm taking some minor precautions).

It is made to be a finalized class, so variables are kept private, with setter functions handling clean assignment of, say, brush and pen colors as well as points. The addPoint method is to be used extensively while drawing, and we may need an optimizePoints function to clean up redundant points after freehand drawing, but more on that later.

from PaintItem import *
from PyQt4 import QtCore, QtGui

 
class Polygon(PaintItem):
    '''
    @param in_points: list of Vec2, the points to add
    as polygon vertices. Default is an empty list.
    '''
    def __init__(self, in_points=None):
        #polygon vertices, a list of Vec2
        #(no mutable default argument, so instances never share a list)
        self._points = list(in_points) if in_points is not None else []
        #required for self._polygon to exist
        self._buildPolygon()
        
        self._pen = QtCore.Qt.NoPen
        self._brush = QtGui.QBrush( QtGui.QColor(64,64,244,128) )
    
    '''
    Set the outline color to draw with
    @param c: QColor, color to use
    if c == None there will be no outline
    '''    
    def setOutlineQColor(self, c):
        if c == None: #remove pen
            self._pen = QtCore.Qt.NoPen
        elif self._pen == QtCore.Qt.NoPen: #rebuild pen
            self._pen = QtGui.QPen(c)
        else: #change color of existing pen
            self._pen.setColor(c)
    
    '''
    set the fill color to draw with
    @param c: QColor, color to use
    if c == None there will be no fill
    '''
    def setQColor(self, c):
        if c == None: #remove brush
            self._brush = QtCore.Qt.NoBrush
        elif self._brush == QtCore.Qt.NoBrush: #rebuild brush
            self._brush = QtGui.QBrush(c)
        else: #change color of existing brush
            self._brush.setColor(c)
    
    '''
    Helper function to generate an int-based
    QColor to set as outline color
    @param r: int, 0-255 based red value
    @param g: int, 0-255 based green value
    @param b: int, 0-255 based blue value
    '''
    def setOutlineColor(self, r, g, b, a=255):
        self.setOutlineQColor( QtGui.QColor(r,g,b,a) )
    
    '''
    Sets the fill color, see setOutlineColor
    '''
    def setColor(self, r, g, b, a):
        self.setQColor( QtGui.QColor(r,g,b,a) )
    
    '''
    Assigns a new Polygon to self._polygon
    containing the current self._points data
    Call after changing self._points
    '''
    def _buildPolygon(self):
        npt = len(self._points)
        #create an empty polygon, putPoints below fills and sizes it
        self._polygon = QtGui.QPolygon()
        #initialize argument list starting at point index 0
        #WARNING: documentation says this function requires index, nPoints
        #but that is not the case, the nPoints is handled automatically
        ptlist = [0]
        #add all points' x,y to the arglist
        for i in range(npt):
            ptlist.extend([self._points[i][0], self._points[i][1]])
        #apply the arglist to the yet empty self._polygon
        #(apply(f, args) is Python 2 shorthand for f(*args))
        apply(self._polygon.putPoints, ptlist)
    
    '''
    @param in_point: Vec2, point to add to the polygon
    '''
    def addPoint(self, in_point):
        self._points.append(in_point)
        #update geometry as the polygon has been modified
        self._buildPolygon()
        
    '''
    @param in_points: list of Vec2, points to add to the polygon
    '''
    def addPoints(self, in_points):
        self._points.extend(in_points)
        #update geometry as the polygon has been modified
        self._buildPolygon()
    
    '''
    Paints the polygon using self._polygon,
    whether it is out of date or not, and by
    applying self._brush and self._pen to it
    '''
    def paint(self, painter):
        #TODO: add color and pen settings
        painter.setBrush( self._brush )
        painter.setPen( self._pen )
        
        #draw the polygon
        painter.drawPolygon(self._polygon)

Now assuming all files are named after their contained class the GraphicsView should be executable resulting in the following window:
tmp

PyQt standalone

Useful for testing from within Eclipse.

import sys
from PyQt4 import QtCore, QtGui

def main():
    #construct the application before any other objects
    app = QtGui.QApplication(sys.argv)
    #setup the default state of the application upon launch
    w = QtGui.QFrame() #treating this frame as mainwindow
    #layout for the mainwindow
    l = QtGui.QVBoxLayout()
    w.setLayout(l)
    #add default widgets
    #l.addWidget()
    #display the mainwindow on startup
    w.show()
    #launch the app
    app.exec_()
    return app;
#this did not need to be wrapped in a main
#function but it is cleaner to do so
main()

Or, an even better example: import this into any window module to test it at the bottom; simply change what w contains.

import sys
from PyQt4 import QtGui

class QtStandalone:
    def __init__(self, mainfunction):
        app = QtGui.QApplication(sys.argv)
        alive = mainfunction()
        app.exec_()
        
'''
#usage example
def main():
    w = QtGui.QFrame()
    w.show()
    return w
    
QtStandalone(main)
'''
Graduation backlog

The Graduation category describes a day-to-day (or every other day or whenever I find I have something worth posting) logbook of my efforts to graduate.

The project that has been determined focuses on a rig selection tool (think Autodesk MotionBuilder), but with an extensive editor that aims to create a custom selection window for every rig, preferably taking as little time as possible. For this I will dive deeply into Qt and create a functional editor, and then hook it up with Maya to import defaults as well.

An additional part is an improved Maya timeline, that displays more information and is easier to use than the current Timeline. This is however another project entirely and I’ll get to posting about it once I start working on that, although completing both is my graduation assignment.

After having a bit more than a month of break between my specialization and graduation, and doing a lot of other stuff in between, I now look back upon the code and curse myself for slacking off with the commenting in the later bits.

I learned some Unity stuff and realized that commenting every 2 lines is a lot more helpful both when learning something (and reading it over again) as well as when reviewing something from a while ago as you can simply ignore the code and just read the comments.

With this in mind I created a new package and started writing a custom GraphicsView class and all its siblings. I also kept in mind that classes should fend for themselves as much as possible.

Although the mouseMoveEvent is in the GraphicsView class, it is clearer to just pass it to the MouseData and let the mouse data figure out its own state, leaving the GraphicsView to handle more important things in those events.

But first I wish to be able to test outside of Maya, so let’s setup a main application.

Ray – plane intersection

Just for fun I decided to explain this a bit more elaborately.

So first we define a plane and a ray, both infinite mathematical entities. A plane is defined by an offset and a normal, a ray by an offset and a direction; both are a pair of vector3s.

If we subtract the plane origin from the ray origin, we get a point on the ray in the plane’s space, as a vector.

If we dot this vector with the plane normal, we get the (signed) distance from the ray origin to the plane.

Were we to multiply the plane normal by the negative distance and add that to the ray origin, we would get the nearest point on the plane to the ray origin.

If we consider the ray origin, the intersection point we’re looking for and this point, we also get a triangle, for which we know two directions (ray direction and plane normal) and one length (the distance from ray origin to plane).

Because triangles are scalable, we can dot the ray direction with the negative plane normal to get the ratio between the known side (the distance from the ray origin to the plane) and the diagonal of the previously described triangle. In other words, multiplying the distance from the ray origin to the intersection point by this ratio would give the distance from the ray origin to the nearest point on the plane.

When we divide the known distance by this ratio instead, we get the length along the ray from its origin to the plane. Multiplying the ray direction by this length gives the intersection point as a vector from the ray origin, so we add the ray origin again to get the point in world space.
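A quick worked example with made-up numbers: a plane through the origin with normal (0, 1, 0), a ray starting at (0, 5, 0) with unit direction (0.6, -0.8, 0).

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

rayorigin, raydir = (0.0, 5.0, 0.0), (0.6, -0.8, 0.0)
planeorigin, n = (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)

distance = dot([rayorigin[i] - planeorigin[i] for i in range(3)], n)  #5.0, distance from ray origin to plane
ratio = dot(raydir, [-c for c in n])                                  #0.8, known side divided by the diagonal
t = distance / ratio                                                  #6.25, length along the ray
hit = [rayorigin[i] + raydir[i] * t for i in range(3)]                #[3.75, 0.0, 0.0], which lies on the plane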

This code sample was written for a Houdini node, so it’s not functional with the Vec class I once posted, though the logic works and some minor changes should make it work with any vector class.

def rayPlaneIntersect(rayorigin, in_raydirection, planeorigin, in_planenormal):
    '''
    @returns: Vector3, intersectionPoint-rayOrigin
    '''
    raydirection = in_raydirection.normalized()
    planenormal = in_planenormal.normalized()
    distanceToPlane = (rayorigin-planeorigin).dot(planenormal)
    triangleHeight = raydirection.dot(-planenormal)
    if not distanceToPlane:
        #ray origin lies on the plane, return this as a recognizable value
        return rayorigin-planeorigin
    if not triangleHeight:
        return None #ray is parallel to plane
    return raydirection * distanceToPlane * (1.0/triangleHeight)
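To sanity check the function outside Houdini, here is a throwaway Vec3 stand-in (just the bits of the hou.Vector3 interface this function touches) and a ray pointing straight down at the ground plane:

class Vec3(object):
    #minimal stand-in for hou.Vector3, only what rayPlaneIntersect needs
    def __init__(self, x, y, z):
        self.x, self.y, self.z = float(x), float(y), float(z)
    def __sub__(self, o):
        return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def __neg__(self):
        return Vec3(-self.x, -self.y, -self.z)
    def __mul__(self, s):
        return Vec3(self.x * s, self.y * s, self.z * s)
    def dot(self, o):
        return self.x * o.x + self.y * o.y + self.z * o.z
    def normalized(self):
        l = self.dot(self) ** 0.5
        return Vec3(self.x / l, self.y / l, self.z / l)

#ray 5 units above the XZ plane, pointing straight down
offset = rayPlaneIntersect(Vec3(0, 5, 0), Vec3(0, -1, 0), Vec3(0, 0, 0), Vec3(0, 1, 0))
print offset.x, offset.y, offset.z  #0.0 -5.0 0.0, add the ray origin to get the world space hit (the origin)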
Knife in Python

I’m rather inexperienced with Python in houdini, so I didn’t bother with external files and a clean workflow for this one.

The knife tool bothered me because when cutting a single face it would separate the face, not cutting shared edges on other faces. Hence I wrote this to find faces sharing the cut edge and rebuild those faces too.

A bit of novice info first
Use File -> New Operator Type, set Style to Python and Network Type to Geometry, save it into an OTL and create the node. Then right click the node and at the bottom select Type properties. There add the following parameters under the Parameters tab:
int, Face nr (id)
float3, Origin (origin)
float, Distance (dist)
float3, Direction (dir)
Then go to the Code tab to start writing code.

Not knowing how to make a parameter that accepts a group or a pattern, it can currently only cut one face, determined by the 'Face nr' parameter.

Writing the code
First I’ll need to know the parameter contents, so the basic setup of the node looks like this:

node = hou.pwd()
geo = node.geometry()
### Parse parameters ###
target = node.evalParm("id")
origin = hou.Vector3( node.evalParm("originx"), node.evalParm("originy"), node.evalParm("originz") )
dist = node.evalParm("dist")
dir = hou.Vector3( node.evalParm("dirx"), node.evalParm("diry"), node.evalParm("dirz") )

Then, matching the original Knife operator, I added a distance parameter. Normally this is redundant, because the distance simply moves the cutting plane's origin along the direction vector by that amount; it is safely done by normalizing first, like this:

dir = dir.normalized()
origin += dir*dist

Ray – plane intersection

The next thing is to cut the target face where it intersects with the given plane. For this I raycast every edge of the face against the plane; any intersection that still lies on the edge becomes an in-between point, splitting the edge in two. So first the raycasting part:

def rayPlaneIntersect(rayorigin, in_raydirection, planeorigin, in_planenormal):
    '''
    @returns: Vector3, intersectionPoint-rayOrigin
    '''
    raydirection = in_raydirection.normalized()
    planenormal = in_planenormal.normalized()
    distanceToPlane = (rayorigin-planeorigin).dot(planenormal)
    triangleHeight = raydirection.dot(-planenormal)
    if not distanceToPlane:
        return rayorigin-planeorigin
    if not triangleHeight:
        return None #ray is parallel to plane
    return raydirection * distanceToPlane * (1.0/triangleHeight)

It essentially dots the ray origin (relative to the plane origin) with the plane normal to find the distance to the plane. The nearest point on the plane and the ray origin then form one side with a known length, and trigonometry determines the third point: we know its direction (the ray direction) and only need to determine the length. More on this here.

Cutting the face
To cut the face I extract the primitive with the given id (stored in target), then iterate over its vertices to define the edges and check for every edge whether it should be split; if so I append the new point to both the geometry and the edge entry for later use.

By dotting the intersection point vector with the normalized ray (edge) direction I get the parameter along the edge, which should stay between 0 and the edge length for the intersection to lie on the edge.

### Cut target ###
verts = geo.iterPrims()[target].vertices()
nverts = len(verts)
edges = []
for i in range(nverts):
    edges.append( [verts[i].point(), verts[(i+1)%nverts].point()] )
    edgedirection = edges[-1][1].position()-edges[-1][0].position()
    intersectpt = rayPlaneIntersect(edges[-1][0].position(), edgedirection, origin, dir)
    if intersectpt == edges[-1][0].position()-origin: #edge start point lies on the cutting plane
        continue
    if not intersectpt: #edge is parallel to cutting plane
        continue
    param = intersectpt.dot(edgedirection.normalized())
    if param > 0 and param < edgedirection.length():
        pt = geo.createPoint()
        pt.setPosition( intersectpt+edges[-1][0].position() )
        edges[-1].append( pt )

Next is to create the split geometry, after that we'll have a replica of the knife tool working on just one face. For this I start adding vertices to the first polygon until a cut edge is reached, then I add a new polygon and start adding vertices to that. Beforehand I work back to the last cut so I don't put half of the first polygon (the points between the last cut and point 0) in another polygon.

### Create output polygon(s) ###
polys = [geo.createPolygon()]
#find last cut point for wrapping the first polygon
lastcut = None
for i in range(len(edges)-1,-1,-1):
    if len(edges[i]) > 2:
        lastcut = i
        break;

#iterate over edges to build new polygons
wrap = False
for i in range(len(edges)):
    if wrap:
        polys[0].addVertex(edges[i][0])
        continue
    polys[-1].addVertex(edges[i][0])
    if len(edges[i]) > 2:
        polys[-1].addVertex(edges[i][2])
        if i == lastcut: #wrap to first polygon at last cut
            wrap = True
            polys[0].addVertex(edges[i][2])
            continue
        polys.append(geo.createPolygon())
        polys[-1].addVertex(edges[i][2])

Getting polygons from an edge
Given two point numbers and a geometry object we iterate over all primitives' vertices; if two adjacent vertices (thus an edge) match the given ids, that primitive shares the edge.

Error warning
Also I ran into an interesting issue: when storing len(verts) before the for loop to speed it up (using the variable instead of calling len every iteration), it did not contain a valid number, resulting in i exceeding len(verts).

def getPolygonsWithEdge(geom, edgeids):
    '''
    @param geom: hou.Geometry, geometry to search
    @param edgeids: tuple of 2 ints, point numbers
    describing the edge to find shared faces for

    @returns: list of hou.Prim, all primitives sharing this edge

    if the points are not connected by an edge
    (adjacent in the vertex list of any primitive)
    the result is an empty list
    '''
    out = []
    for poly in geom.prims():
        verts = poly.vertices()
        for i in range(len(verts)):
            if verts[i].point().number() in edgeids and\
               verts[(i+1)%len(verts)].point().number() in edgeids:
                out.append(poly)
    return out

The last bit of code
Now to make this tool an actual improvement I need to find any polygons sharing the cut edges and rebuild them to include the new points as well. I simply go over their points and, whenever I encounter the cut edge, insert the additional point before continuing. Here I also track the old primitives so they can be deleted at the end.

### Find polygons sharing the cut edges ###
deleteprims = [geo.iterPrims()[target]]

for i in range(len(edges)):
    if len(edges[i]) > 2:
        #rebuild polygons that share the cut edge
        edgeids = (edges[i][0].number(), edges[i][1].number())
        sharedprims = getPolygonsWithEdge(geo, edgeids)
        for prim in sharedprims:
            deleteprims.append(prim)
            split = False
            if prim.number() != target: #ignore the polygon we are cutting entirely
                poly = geo.createPolygon()
                for vert in prim.vertices():
                    poly.addVertex(vert.point())
                    if vert.point().number() in edgeids and split == False:
                        split = True
                        poly.addVertex(edges[i][2])

geo.deletePrims(deleteprims,True)

Here's the full code again. Two issues remain: prim numbers change (and groups containing those prims lose them, even for the adjacent faces), and concave faces get overlapping primitives that should really be joined (compare the behaviour of the knife tool).

If you draw a closed polygonal curve with these coordinates and cut it with both the knife and this tool the issue will become clear:
-2,1,0 -1,1,0 -0.5,-0.25,0 0.5,-0.25,0 1,1,0 2,1,0 2,-1,0 0,-2,0 -2,-1,0

'''
@todo: fix concave face errors
@todo: insert prim with right number (important on adjacent faces, optional on new cut faces); also maintain groups!
'''
node = hou.pwd()
geo = node.geometry()


def rayPlaneIntersect(rayorigin, in_raydirection, planeorigin, in_planenormal):
    '''
    @returns: Vector3, intersectionPoint-rayOrigin
    '''
    raydirection = in_raydirection.normalized()
    planenormal = in_planenormal.normalized()
    distanceToPlane = (rayorigin-planeorigin).dot(planenormal)
    triangleHeight = raydirection.dot(-planenormal)
    if not distanceToPlane:
        return rayorigin-planeorigin
    if not triangleHeight:
        return None #ray is parallel to plane
    return raydirection * distanceToPlane * (1.0/triangleHeight)


def getPolygonsWithEdge(geom, edgeids):
    '''
    @param geom: hou.Geometry, geometry to search
    @param edgeids: tuple of 2 ints, point numbers
    describing the edge to find shared faces for

    @returns: list of hou.Prim, all primitives sharing this edge

    if the points are not connected by an edge
    (adjacent in the vertex list of any primitive)
    the result is an empty list
    '''
    out = []
    for poly in geom.prims():
        verts = poly.vertices()
        for i in range(len(verts)):
            if verts[i].point().number() in edgeids and\
               verts[(i+1)%len(verts)].point().number() in edgeids:
                out.append(poly)
    return out


### Parse parameters ###
target = node.evalParm("id")
origin = hou.Vector3( node.evalParm("originx"), node.evalParm("originy"), node.evalParm("originz") )
dist = node.evalParm("dist")
dir = hou.Vector3( node.evalParm("dirx"), node.evalParm("diry"), node.evalParm("dirz") )
dir = dir.normalized()
origin += dir*dist

### Cut target ###
verts = geo.iterPrims()[target].vertices()
nverts = len(verts)
edges = []
for i in range(nverts):
    edges.append( [verts[i].point(), verts[(i+1)%nverts].point()] )
    edgedirection = edges[-1][1].position()-edges[-1][0].position()
    intersectpt = rayPlaneIntersect(edges[-1][0].position(), edgedirection, origin, dir)
    if intersectpt == edges[-1][0].position()-origin: #edge start point lies on the cutting plane
        continue
    if not intersectpt: #edge is parallel to cutting plane
        continue
    param = intersectpt.dot(edgedirection.normalized())
    if param > 0 and param < edgedirection.length():
        pt = geo.createPoint()
        pt.setPosition( intersectpt+edges[-1][0].position() )
        edges[-1].append( pt )

### Create output polygon(s) ###
polys = [geo.createPolygon()]
#find last cut point for wrapping the first polygon
lastcut = None
for i in range(len(edges)-1,-1,-1):
    if len(edges[i]) > 2:
        lastcut = i
        break;

#iterate over edges to build new polygons
wrap = False
for i in range(len(edges)):
    if wrap:
        polys[0].addVertex(edges[i][0])
        continue
    polys[-1].addVertex(edges[i][0])
    if len(edges[i]) > 2:
        polys[-1].addVertex(edges[i][2])
        if i == lastcut: #wrap to first polygon at last cut
            wrap = True
            polys[0].addVertex(edges[i][2])
            continue
        polys.append(geo.createPolygon())
        polys[-1].addVertex(edges[i][2])

### Find polygons sharing the cut edges ###
deleteprims = [geo.iterPrims()[target]]

for i in range(len(edges)):
    if len(edges[i]) > 2:
        #rebuild polygons that share the cut edge
        edgeids = (edges[i][0].number(), edges[i][1].number())
        sharedprims = getPolygonsWithEdge(geo, edgeids)
        for prim in sharedprims:
            deleteprims.append(prim)
            split = False
            if prim.number() != target: #ignore the polygon we are cutting entirely
                poly = geo.createPolygon()
                for vert in prim.vertices():
                    poly.addVertex(vert.point())
                    if vert.point().number() in edgeids and split == False:
                        split = True
                        poly.addVertex(edges[i][2])

geo.deletePrims(deleteprims,True)
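As a side note, if you'd rather build the test shape from the coordinates above with the same HOM calls used in this post (for example in a second Python SOP) than draw the curve by hand, something like this should do:

pts = [(-2, 1, 0), (-1, 1, 0), (-0.5, -0.25, 0), (0.5, -0.25, 0),
       (1, 1, 0), (2, 1, 0), (2, -1, 0), (0, -2, 0), (-2, -1, 0)]

geo = hou.pwd().geometry()
poly = geo.createPolygon()  #closed by default
for x, y, z in pts:
    pt = geo.createPoint()
    pt.setPosition(hou.Vector3(x, y, z))
    poly.addVertex(pt)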
Specialization project

For the past few months I analyzed animation workflow in Maya and designed some tools as a school project.

Full dissertation & video here:
http://www.trevorius.com/specialization

In the coming months I'll be making and elaborating on a few of these.

PyQt spinboxes

This is a follow up post building on the class described here.

Several people brought it to my attention that in Maya most numeric inputs do not support scrolling, because they are not Qt spinboxes. But if they did scroll, a friend of mine also neatly described how they could do more than increment by one: they could increment the digit the mouse is hovering over. This follows how numeric boxes work in The Foundry's Nuke.

PyQt scrolls spinboxes on mouse hover, but it scrolls by a fixed step; all I need to do is determine that step size from the current contents and the mouse position before the value changes.

But first things first: to match Maya, clicking on a number should select all text, and this is easy with the class from the previous post. I simply inherit a line edit that on the first click selects all contents and stores that it has focus; on losing focus it resets that flag so it will select all on the next click again.

class SelectAllLineEdit(QtGui.QLineEdit):
    def __init__(self):
        QtGui.QLineEdit.__init__(self)
        self.setFocusPolicy(QtCore.Qt.StrongFocus)
        self.focus = False

    def focusOutEvent(self,e ):
        self.focus = False
        QtGui.QLineEdit.focusOutEvent(self, e)
                
    def mousePressEvent(self, e):
        if not self.focus:
            self.focus = True
            self.selectAll()
        else:
            QtGui.QLineEdit.mousePressEvent(self, e)

Then I inherit my infinite spinbox and replace the default lineedit with this custom lineedit. Also I enable tab focus and click focus on the widget so that when pressing TAB I can use the focusInEvent of the spinbox to selectAll contents in the event the user does not click but tabs into the widget.

class HiliteAllSpinBox(InfiniteSpinBox):
    def __init__(self, in_parent=None, in_value=0, in_type=float):
        InfiniteSpinBox.__init__(self, in_parent, in_value, in_type)
        self.setLineEdit( SelectAllLineEdit() )
        self.setText( numberToStr(in_value) )
        self.setFocusPolicy(QtCore.Qt.StrongFocus)
    
    def focusInEvent(self, e):
        self.selectAll();

Now the interesting part kicks in. I extend the line edit some more, tracking mouse position with the mouseMoveEvent and simply storing the x so it can be matched against the text later on. Matching the mousex with the x of each character will give us the character the mouse is over, and because there will only be numbers we can determine the increment value from there on.

class MouseTrackingLineEdit(SelectAllLineEdit):
    def __init__(self):
        SelectAllLineEdit.__init__(self)
        self.setMouseTracking(True)
        self.mousex = 0
        
    def mouseMoveEvent(self, e):
        self.mousex = e.pos().x()

So again I inherit from InfiniteSpinBox, attach a custom lineEdit and I set the focus policy and use the focusInEvent just as with the HiliteAllSpinBox to select all on tabbing.

The interesting stuff happens in stepBy however. I request fontMetrics to get a class that can measure the width of a string with the current font settings of the lineEdit. Then I determine the current size of the number, discarding decimals, because if we have 10.0 the default step size would be 10.

Next I split the string up into separate characters, so I can measure the width of the string up to each character; with that I know when the mouse cursor is on a character, as at that point the measured width first exceeds the mouse X. In the loop I keep decreasing the step size, so it is correct as soon as I break out of the loop. Then I set the step size and call the parent stepBy to scroll the right digit.

class NukeSpinBox(InfiniteSpinBox):
    def __init__(self, in_parent=None, in_value=0, in_type=float):
        InfiniteSpinBox.__init__(self, in_parent, in_value, in_type)
        self.setLineEdit( MouseTrackingLineEdit() )
        self.setText( numberToStr(in_value) )
        self.setFocusPolicy(QtCore.Qt.StrongFocus)
    
    def focusInEvent(self, e):
        self.selectAll();
        
    def stepBy(self, in_step):
        ln = self.lineEdit()
        m = ln.fontMetrics()
        
        stepsize = 10**( len(ln.text().split('.')[0])-1 )
        
        #ln.text() is a QString; splitting on '' yields an empty item, every character, then another empty item
        chars = ln.text().split('')
        str = ''
        for i in range(1, len(chars)-1, 1):
            str += chars[i]
            if chars[i] == '.':
                continue
            x = m.width(str)
            if x > ln.mousex:
                break
            stepsize *= 0.1
        
        self.setSingleStep(stepsize)
        InfiniteSpinBox.stepBy(self, in_step)

At last I will leave you with a test application. I never tried to run Qt within Eclipse before but always ran it from within Maya instead, so another thing I finally figured out is that I can test my PyQt code by creating a QApplication and running it in Eclipse PyDev.

def main():
    app = QtGui.QApplication(sys.argv)
    w = QtGui.QFrame()
    l = QtGui.QVBoxLayout()
    w.setLayout(l)
    l.addWidget( QtGui.QLabel('Infinite spinbox') )
    l.addWidget( InfiniteSpinBox() )
    l.addWidget( QtGui.QLabel('Select contents on click spinbox') )
    l.addWidget( HiliteAllSpinBox() )
    l.addWidget( QtGui.QLabel('Step determined by mouse position') )
    l.addWidget( NukeSpinBox() )
    w.show()
    app.exec_()
    return app;
app = main()

Full code of the test application with all classes:

from PyQt4 import QtGui, QtCore
import sys


class InfiniteSpinBox(QtGui.QAbstractSpinBox):
    def __init__(self, in_parent=None, in_value=0, in_type=float):
        QtGui.QAbstractSpinBox.__init__(self, in_parent)
        self.singlestep = 1
        self.type = in_type
        self.value = self.type(in_value)
        self.setText( numberToStr(in_value) )
        self.basevalue = self.value
    
    def keyPressEvent(self, in_event):
        QtGui.QAbstractSpinBox.keyPressEvent(self, in_event)
        self.updateValue()
        
    def keyReleaseEvent(self, in_event):
        QtGui.QAbstractSpinBox.keyReleaseEvent(self, in_event)
        self.updateValue()
        
    def updateValue(self):
        value = strToNumber(self.text(), self.type)
        if value is not None:
            self.value = value
            return
        elif self.text() != '':
            self.lineEdit().setText( numberToStr(self.value) )
            
    def setSingleStep(self, in_step):
        self.singlestep = in_step
        
    def setType(self, in_type):
        self.type = in_type
    
    def setText(self, in_text):
        self.lineEdit().setText( str(in_text) )
        self.updateValue()

    def stepBy(self, in_step):
        self.value += self.singlestep*in_step
        self.setText( numberToStr(self.value, self.type) )
        
    def setValue(self, in_value):
        self.value = self.type(in_value)
        self.setText( numberToStr(self.value, self.type) )

    def stepEnabled(self):
        return QtGui.QAbstractSpinBox.StepUpEnabled | QtGui.QAbstractSpinBox.StepDownEnabled


class SelectAllLineEdit(QtGui.QLineEdit):
    def __init__(self):
        QtGui.QLineEdit.__init__(self)
        self.setFocusPolicy(QtCore.Qt.StrongFocus)
        self.focus = False

    def focusOutEvent(self,e ):
        self.focus = False
        QtGui.QLineEdit.focusOutEvent(self, e)
                
    def mousePressEvent(self, e):
        if not self.focus:
            self.focus = True
            self.selectAll()
        else:
            QtGui.QLineEdit.mousePressEvent(self, e)


class HiliteAllSpinBox(InfiniteSpinBox):
    def __init__(self, in_parent=None, in_value=0, in_type=float):
        InfiniteSpinBox.__init__(self, in_parent, in_value, in_type)
        self.setLineEdit( SelectAllLineEdit() )
        self.setText( numberToStr(in_value) )
        self.setFocusPolicy(QtCore.Qt.StrongFocus)
    
    def focusInEvent(self, e):
        self.selectAll();


class MouseTrackingLineEdit(SelectAllLineEdit):
    def __init__(self):
        SelectAllLineEdit.__init__(self)
        self.setMouseTracking(True)
        self.mousex = 0
        
    def mouseMoveEvent(self, e):
        self.mousex = e.pos().x()
        
    
class NukeSpinBox(InfiniteSpinBox):
    def __init__(self, in_parent=None, in_value=0, in_type=float):
        InfiniteSpinBox.__init__(self, in_parent, in_value, in_type)
        self.setLineEdit( MouseTrackingLineEdit() )
        self.setText( numberToStr(in_value) )
        self.setFocusPolicy(QtCore.Qt.StrongFocus)
    
    def focusInEvent(self, e):
        self.selectAll();
        
    def stepBy(self, in_step):
        ln = self.lineEdit()
        m = ln.fontMetrics()
        
        stepsize = 10**( len(ln.text().split('.')[0])-1 )
        
        #ln.text() is a QString; splitting on '' yields an empty item, every character, then another empty item
        chars = ln.text().split('')
        str = ''
        for i in range(1, len(chars)-1, 1):
            str += chars[i]
            if chars[i] == '.':
                continue
            x = m.width(str)
            if x > ln.mousex:
                break
            stepsize *= 0.1
        
        self.setSingleStep(stepsize)
        InfiniteSpinBox.stepBy(self, in_step)


def numberToStr(in_number, in_type=float):
    out_string = str(in_number)
    out_string = out_string.split('.')
    if in_type in (long, int):
        return out_string[0]
    
    if len(out_string) > 1 and out_string[1]:
        if len(out_string[1]) > 6:
            out_string[1] = out_string[1][0:6]
        return '%s.%s'%(out_string[0],out_string[1])
    
    return '%s.0'%out_string[0]


def strToNumber(in_str, in_type=float):
    segs = str(in_str).split('.')
    if len(segs) in (1,2) and segs[0].isdigit():
        if len(segs) == 1 or not segs[1] or segs[1].isdigit():
            return in_type(float(str(in_str)))  #go via float so decimals are dropped for int/long instead of raising
    return None


def main():
    app = QtGui.QApplication(sys.argv)
    w = QtGui.QFrame()
    l = QtGui.QVBoxLayout()
    w.setLayout(l)
    l.addWidget( QtGui.QLabel('Infinite spinbox') )
    l.addWidget( InfiniteSpinBox() )
    l.addWidget( QtGui.QLabel('Select contents on click spinbox') )
    l.addWidget( HiliteAllSpinBox() )
    l.addWidget( QtGui.QLabel('Step determined by mouse position') )
    l.addWidget( NukeSpinBox() )
    w.show()
    app.exec_()
    return app;
app = main()
PyQt infinite spinbox

In the past I lacked an infinite spinbox, as any default Qt spinbox requires a min and max value (or defaults to one).

So messing around with QAbstractSpinBox I managed to create an InfiniteSpinBox, which I finally made bug free today while extending it with more advanced functionality, such as scrolling the digit the mouse is on and selecting all text when the box receives focus (generally when typing in a spinbox the user wants to type an entirely new value).

I am going to inherit the QAbstractSpinBox, and because we normally have double and integer spinboxes, I will give the inherited class a default type of float, but allow the user to set the type in the constructor or later on. Also I need a settable step size, would like to set a default value and QAbstractSpinBox can accept a default parent as well.

The init calls the inherited constructor with the parent, stores the step size and type as well as the default value, and sets it as the text to display. I am calling upon helper functions numberToStr and strToNumber which I will describe below as well.

from PyQt4 import QtGui, QtCore
import sys


class InfiniteSpinBox(QtGui.QAbstractSpinBox):
    def __init__(self, in_parent=None, in_value=0, in_type=float):
        QtGui.QAbstractSpinBox.__init__(self, in_parent)
        self.singlestep = 1
        self.type = in_type
        self.value = self.type(in_value)
        self.setText( numberToStr(in_value) )
        self.basevalue = self.value

The next thing to do is handle typing.
The key press events are forwarded to the contained QLineEdit automatically, so we simply call the parent's key press event and then parse the text manually using a new function I named updateValue. It converts the typed text to a valid number and sets that number as the text again (in the event the user typed non-numeric characters).

    def keyPressEvent(self, in_event):
        QtGui.QAbstractSpinBox.keyPressEvent(self, in_event)
        self.updateValue()
        
    def keyReleaseEvent(self, in_event):
        QtGui.QAbstractSpinBox.keyReleaseEvent(self, in_event)
        self.updateValue()
        
    def updateValue(self):
        value = strToNumber(self.text(), self.type)
        if value is not None:
            self.value = value
            return
        elif self.text() != '':
            self.lineEdit().setText( numberToStr(self.value) )

Another way of editing the value is by scrolling
The parent class calls stepBy automatically so all we need to do is fill in that function to use the step size and increment the value by the number of steps initiated by scrolling.

The parent class also depends on stepEnabled, which should return StepUpEnabled if the value is not the maximum value and which should return StepDownEnabled if the value is not the minimum value. In the case of the infinite spinbox, obviously it should always return both flags because we are never at the min or max value.

    def stepBy(self, in_step):
        self.value += self.singlestep*in_step
        self.setText( numberToStr(self.value, self.type) )

    def stepEnabled(self):
        return QtGui.QAbstractSpinBox.StepUpEnabled | QtGui.QAbstractSpinBox.StepDownEnabled

Then all that remains is a bunch of setter functions to be consistent with other Qt classes.

    def setSingleStep(self, in_step):
        self.singlestep = in_step
        
    def setType(self, in_type):
        self.type = in_type

    def setValue(self, in_value):
        self.value = self.type(in_value)
        self.setText( numberToStr(self.value, self.type) )

    def setText(self, in_text):
        self.lineEdit().setText( str(in_text) )
        self.updateValue()

And at last the numberToStr and strToNumber functions.

numberToStr is reasonably easy, it will convert a number to a string and remove any decimals in the event the type is int or long. Also it will print pretty numbers by clamping the maximum decimals to six.

strToNumber validates the number to be composed of digits and at most one point and then typecasts it to the given type, ditching decimals in the event of long or int again.

def numberToStr(in_number, in_type=float):
    out_string = str(in_number)
    out_string = out_string.split('.')
    if in_type in (long, int):
        return out_string[0]
    
    if len(out_string) > 1 and out_string[1]:
        if len(out_string[1]) > 6:
            out_string[1] = out_string[1][0:6]
        return '%s.%s'%(out_string[0],out_string[1])
    
    return '%s.0'%out_string[0]


def strToNumber(in_str, in_type=float):
    # accept an optional leading minus, then digits with at most one point
    work = str(in_str)
    if work.startswith('-'):
        work = work[1:]
    segs = work.split('.')
    if len(segs) in (1,2) and segs[0].isdigit():
        if len(segs) == 1 or not segs[1] or segs[1].isdigit():
            # go through float first so int and long ditch the decimals
            return in_type(float(in_str))
    return None
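
To round things off, here is a minimal usage sketch; it assumes the InfiniteSpinBox class and the two helper functions above live in the same module.

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    spin = InfiniteSpinBox(in_value=2.5, in_type=float)
    spin.setSingleStep(0.25)  # scrolling now changes the value in steps of 0.25
    spin.show()
    sys.exit(app.exec_())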
Triangulation of concave shapes (2D)

To draw freehand shapes in PyQt I wanted to sample the user’s mouse drag and compute a triangulated polygon so the shape can be drawn triangle by triangle.

Basic triangulation proved cumbersome with concave shapes, so here’s a more advanced method (complete class at the bottom):

I create a polygon that contains a list of points; each point is connected to the previous one (and point 0 is connected to the last), so the points are ordered as a sequence that forms a closed polygon.

Then for every point I attempt to create a triangle between that point and the next two points. There are two problems when creating that triangle for a concave polygon: first, the new triangle may lie outside the polygon because the angle between the edges is more than 180 degrees; second, the new edge may intersect another edge of the polygon.

To avoid the first issue it is easiest to take the center of the new edge and check whether that point lies inside the polygon; for that the polygon requires an intersectPoint method. I made mine using the winding number algorithm described in A Winding Number and Point-in-Polygon Algorithm by D. G. Alciatore and R. Miranda.

UPDATE: there is an obvious problem with this method; a better way to check whether the new edge lies outside the polygon is to check whether the new edge creates a triangle with a different winding direction than the polygon itself.

First we need the normal of the input polygon, which we get by taking the cross product of each connected edge pair. In 2D only the Z component of the cross product is meaningful (X and Y are always zero), so I am assuming you have a 2D cross product that returns a single float.
sign( (P0-P1) X (P2-P1) )
Doing this for each edge pair, adding the results together and taking the sign of that sum gives us either 1 or -1. This tells us whether the polygon winds clockwise or counterclockwise (which of the two it is does not matter).

Then for the new edge we take the cross product with one of the points adjacent to the starting point. So consider the new edge consisting of P0 and PE, with the adjacent point P1.
(P1-P0) X (PE-P0) should give the same sign as the polygon; if not, the edge lies outside the polygon (if you are getting invalid results, consider taking the other adjacent point).
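
To make that concrete, here is a rough sketch of the test in plain Python; cross2d, polygonWinding and edgeMatchesWinding are made-up helper names for illustration and are not part of the Polygon2D class further down, and the points are assumed to be indexable x, y pairs.

def cross2d(v0, v1):
    # Z component of the 3D cross product of two 2D vectors
    return v0[0]*v1[1] - v0[1]*v1[0]


def sign(v):
    return 1 if v >= 0 else -1


def polygonWinding(points):
    # sum the signs of (P0-P1) x (P2-P1) for every connected edge pair,
    # the sign of that sum is the winding of the whole polygon
    total = 0
    n = len(points)
    for i in range(n):
        p0, p1, p2 = points[i-1], points[i], points[(i+1) % n]
        total += sign(cross2d((p0[0]-p1[0], p0[1]-p1[1]),
                              (p2[0]-p1[0], p2[1]-p1[1])))
    return sign(total)


def edgeMatchesWinding(p0, pE, p1, polygonSign):
    # the new edge (P0, PE) with adjacent point P1 must wind the same
    # way as the polygon, otherwise the edge lies outside of it
    return sign(cross2d((p1[0]-p0[0], p1[1]-p0[1]),
                        (pE[0]-p0[0], pE[1]-p0[1]))) == polygonSign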

Now you can skip the intersectPoint code block below and continue reading.

    def intersectPoint(self, in_pt):
        '''
        winding number algorithm by D. G. Alciatore, R. Miranda
        http://www.engr.colostate.edu/~dga/dga/papers/point_in_polygon.pdf

        @param in_pt: Vec[2]: point to check
        '''
        w = 0
        ptlen = len(self.points)
        for i in range(ptlen):
            pt0, pt1 = self.points[i]-in_pt, self.points[(i+1)%ptlen]-in_pt

            if pt0[1] == pt1[1]: #parallel
                continue

            #up or down
            mod = ( pt0[1] < pt1[1] )*2-1

            #start/end on the axis
            if 0 in (pt0[1],pt1[1]):
                mod *= 0.5
            #line crossing X axis
            if ( pt0[1] <= 0 and pt1[1] >= 0 ) or ( pt0[1] >= 0 and pt1[1] <= 0 ):
                #line on the right of the pivot
                if pt0[0] >= 0 and pt1[0] >= 0:
                    w += mod
                    continue
                #get intersection point X
                xpery = (pt1[0]-pt0[0])/(pt1[1]-pt0[1])
                ix = xpery*-pt0[1]+pt0[0]
                #intersection on the right of the pivot
                if ix >= 0:
                    w += mod
        return w

To solve the self-intersection problem I check the new line against every other line in the polygon, skipping lines that cannot be the problem because they share one of their points with the new line.

This exclusion is given as a parameter because not all line intersection checks are against lines of the same polygon. Excluding these lines is however necessary to avoid false positives.

    def intersectLine(self, in_start, in_end, in_exclude = []):
        '''
        @param in_exclude: list of int: excludes edges whose start
        point index is in the given list, defaults to an empty list
        '''
        ptlen = len(self.points)
        for i in range(ptlen):
            if i in in_exclude:
                continue
            tmp = Vmath.utils.lineLineIntersect2D(self.points[i], \
                                            self.points[(i+1)%ptlen], \
                                            in_start, in_end)
            if tmp[0]:
                return tmp
        return [False]

This method references lineLineIntersect2D from Vmath.utils. The line-to-line intersection formula comes from euclideanspace.com by Martin Baker. This is only a quick Python implementation and it can be optimized; there are also some added checks to avoid division by zero and related errors.

def lineLineIntersect2D(pt0, pt1, pt2, pt3):
    '''
    formula source from
    http://www.euclideanspace.com/maths/geometry/elements/intersection/twod/index.htm
    '''
    a,c = 0,0
    e = (pt1[0]-pt0[0])
    b = (pt1[0]*pt0[1] - pt0[0]*pt1[1])
    if e != 0:
        a = (pt1[1]-pt0[1]) / e #step size of line 1
        b /= e
    f = (pt3[0]-pt2[0])
    d = (pt3[0]*pt2[1] - pt2[0]*pt3[1])
    if f != 0:
        c = (pt3[1]-pt2[1]) / f #step size of line 2
        d /=  f

    g = a-c

    if g == 0: #lines are parallel
        if e == 0:
            out = Vec( pt0[0], (a*d-b*c) )
        else:
            out = Vec( pt2[0], (a*d-b*c) )
    else:
        out = Vec( (d-b) / g, (a*d-b*c) / g )

    if out[0] >= min(pt0[0],pt1[0]) and out[0] <= max(pt0[0],pt1[0]) and \
        out[0] >= min(pt2[0],pt3[0]) and out[0] <= max(pt2[0],pt3[0]) and \
        out[1] >= min(pt0[1],pt1[1]) and out[1] <= max(pt0[1],pt1[1]) and \
        out[1] >= min(pt2[1],pt3[1]) and out[1] <= max(pt2[1],pt3[1]):
            return [True, out]
    return [False, out]
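
For reference, here is a quick sanity check of the return convention (a [bool, point] list); the inputs only need to support indexing, so plain tuples are enough for a test like this, assuming lineLineIntersect2D is in scope.

hit = lineLineIntersect2D((0, 0), (2, 2), (0, 2), (2, 0))
# the diagonals cross, so hit[0] is True and hit[1] is the point (1.0, 1.0)

miss = lineLineIntersect2D((0, 0), (1, 0), (0, 1), (1, 1))
# two parallel horizontal segments, so miss[0] is False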

Below is the final class. The triangulate method is the most interesting of course: it duplicates all the points and, for every three consecutive points, attempts to create a triangle. When that succeeds the intermediate point is removed from the working copy, as it is now cut off from the rest of the polygon by the new edge.

The ptmap variable maps back to the original point indices of self.points (i in range(ptlen) indexes the mutated copy, but the intersection functions work on self.points and therefore require ptmap). The triangle indices that are generated refer to self.points via ptmap, because the internal copy (points) is mutated during the process and discarded at the end.

'''
Created on Nov 20, 2012

@author: TC
'''
import Vmath.utils
from Vmath.vec import Vec
reload(Vmath.utils)

class Polygon2D(object):
    '''
    Polygon contains a set of points
    and their triangulation data
    as well as intersection support
    for points, lines and other polygons
    '''

    def __init__(self, in_points):
        self.points = in_points
        self.triangulate()

    def triangulate(self):
        '''
        self.triangles contains ids of the
        points array for drawing purposes; it
        triangulates concave polygons properly
        '''
        self.triangles = []
        points = self.points[:]
        ptlen = len(points)
        ptmap = range(ptlen)
        prevptlen = ptlen
        i = 0
        while ptlen > 2:
            if ptlen == 3:
                self.triangles.extend(ptmap[0:3])
                break
            pt0, pt1 = points[i], points[(i+2)%ptlen]
            #exclude adjacent edges
            exclude = set([ptmap[i],
                           (ptmap[i]-1)%len(self.points),
                           (ptmap[i]+1)%len(self.points),
                           (ptmap[(i+2)%ptlen])%len(self.points),
                           (ptmap[(i+2)%ptlen]-1)%len(self.points),
                           (ptmap[(i+2)%ptlen]+1)%len(self.points)])
            #if new line does not cross any outer edges
            #and midpoint of line is inside this poly
            if not self.intersectLine(pt0, pt1, exclude)[0] \
                and self.intersectPoint( (pt1-pt0)*0.5+pt0 ) != 0:
                self.triangles.extend([ptmap[i], ptmap[(i+1)%ptlen], ptmap[(i+2)%ptlen]])
                points.pop((i+1)%ptlen)
                ptmap.pop((i+1)%ptlen)
                ptlen-=1
            i+=1
            #wrap around
            if i > ptlen-1:
                #if no change since previous wrap, bail out of infinite loop
                if ptlen == prevptlen:
                    raise ValueError("Polygon could not be triangulated, it contains self intersection")
                i = 0
                prevptlen = ptlen

    def addPoints(self, in_points):
        self.points.extend(in_points)
        self.triangulate()

    def setPoints(self, in_points):
        self.points = in_points[:]
        self.triangulate()

    def intersectLine(self, in_start, in_end, in_exclude = []):
        '''
        @param in_exclude: list of int: excludes edges whose start
        point index is in the given list, defaults to an empty list
        '''
        ptlen = len(self.points)
        for i in range(ptlen):
            if i in in_exclude:
                continue
            tmp = Vmath.utils.lineLineIntersect2D(self.points[i], \
                                            self.points[(i+1)%ptlen], \
                                            in_start, in_end)
            if tmp[0]:
                return tmp
        return [False]

    def intersectPoint(self, in_pt):
        '''
        winding number algorithm by D. G. Alciatore, R. Miranda
        http://www.engr.colostate.edu/~dga/dga/papers/point_in_polygon.pdf

        @param in_pt: Vec[2]: point to check
        '''
        w = 0
        ptlen = len(self.points)
        for i in range(ptlen):
            pt0, pt1 = self.points[i]-in_pt, self.points[(i+1)%ptlen]-in_pt

            if pt0[1] == pt1[1]: #parallel
                continue

            #up or down
            mod = ( pt0[1] < pt1[1] )*2-1

            #start/end on the axis
            if 0 in (pt0[1],pt1[1]):
                mod *= 0.5
            #line crossing X axis
            if ( pt0[1] <= 0 and pt1[1] >= 0 ) or ( pt0[1] >= 0 and pt1[1] <= 0 ):
                #line on the right of the pivot
                if pt0[0] >= 0 and pt1[0] >= 0:
                    w += mod
                    continue
                #get intersection point X
                xpery = (pt1[0]-pt0[0])/(pt1[1]-pt0[1])
                ix = xpery*-pt0[1]+pt0[0]
                #intersection on the right of the pivot
                if ix >= 0:
                    w += mod
        return w

And for any Maya users out there, here’s a quick example that draws a curve for each triangle, which I used to debug it:

from Vmath.vec import Vec
import Vmath.polygon
reload(Vmath.polygon)
from Vmath.polygon import Polygon2D

poly = Polygon2D([
    Vec(1, 0),
    Vec(2.8, 0),
    Vec(3, 2),
    Vec(2, 4),
    Vec(0, 3),
    Vec(0, 2),
    Vec(1, 2.2),
    Vec(2.45, 1),
    Vec(0, 1)])

from maya import cmds

for i in range(0, len(poly.triangles), 3):
    p0 = (poly.points[poly.triangles[i]][0], 0, poly.points[poly.triangles[i]][1])
    p1 = (poly.points[poly.triangles[i+1]][0], 0, poly.points[poly.triangles[i+1]][1])
    p2 = (poly.points[poly.triangles[i+2]][0], 0, poly.points[poly.triangles[i+2]][1])
    cmds.curve(d=1, p=[p0, p1, p2, p0], k=[0, 1, 2, 3])
Unity meshes

Playing around with Unity and trying to generate a Mesh. If it is not known in advance how large the vertex and triangle lists have to be, ArrayLists are a solution, but apparently they cannot simply be copied into a mesh.

Instinctively one would do
mesh.vertices = new Vector3[vtxarray.Count];
vtxarray.CopyTo(mesh.vertices);
but this copies the data into a throwaway array and the Mesh is not displayed: the vertices property returns a copy of the internal array, so writing into that copy never reaches the mesh.
Instead the data has to be copied into a separate Vector3[] variable, and that variable then assigned to mesh.vertices. I tested assigning mesh.vertices to itself after copying, but that didn't work either.

Here’s a working Quad example:

using UnityEngine;
using System.Collections;


public class Quad : MonoBehaviour {
	void Start () {
		//add a mesh filter and mesh renderer to this transform
		gameObject.AddComponent<MeshRenderer>();
		Mesh mesh = gameObject.AddComponent<MeshFilter>().mesh;
		
		ArrayList verts = new ArrayList();
		ArrayList uvs = new ArrayList();
		ArrayList tris = new ArrayList();

		//generate the meshdata
		verts.Add( new Vector3(-1, 0, -1) );
		verts.Add( new Vector3(1, 0, -1) );
		verts.Add( new Vector3(-1, 0, 1) );
		verts.Add( new Vector3(1, 0, 1) );
		uvs.Add( new Vector2(0, 0) );
		uvs.Add( new Vector2(1, 0) );
		uvs.Add( new Vector2(0, 1) );
		uvs.Add( new Vector2(1, 1) );
		tris.Add(0);
		tris.Add(1);
		tris.Add(2);
		tris.Add(2);
		tris.Add(1);
		tris.Add(3);
		
		// copy arraylists to arrays, then apply arrays to mesh
		// it's impossible to use CopyTo(mesh.variable) directly
		// although printing the data gives valid output, mesh
		// is invisible
		Vector3[] vertices = new Vector3[verts.Count];
		verts.CopyTo(vertices);
		mesh.vertices = vertices;
		Vector2[] uv = new Vector2[uvs.Count];
		uvs.CopyTo(uv);
		mesh.uv = uv;
		int[] triangles = new int[tris.Count];
		tris.CopyTo(triangles);
		mesh.triangles = triangles;
		
		mesh.RecalculateNormals();
		mesh.RecalculateBounds();
	}
	

	void Update () {
	
	}
}