Part 2: Creating an OpenGL friendly mesh exporter for Maya

Part 2: Creating an exporter

This is part 2 of a series about getting started with visualizing triangle meshes in Python 2.7 using the PyOpenGL and PyQt4 libraries.

Part 1
Part 2
Part 3

I will assume you know Python; you will not need a lot of Qt or OpenGL experience, though I will also not go into the deeper details of how OpenGL works. For that I refer you to the official documentation and the excellent (C++) tutorials at https://open.gl/. Although they are in C++, they contain a lot of explanation about OpenGL and why to do certain calls in a certain order.

On a final note: I will make generalizations and simplifications when explaining things. If you think something works differently than I say, it probably does; the goal is to convey ideas to beginners, not to explain low-level OpenGL implementations.

2.1 File layout

Now that we can draw a model, it is time to define the data that we need to give OpenGL and decide upon a file format that can contain all this data.

Starting off with the elements, we have a way to draw them (GL_TRIANGLES in our case) and we have a data type (GL_UNSIGNED_INT in our case). Given a data type and the number of elements we can determine the buffer size, so our file can store not just any number of element values, but values of any of the supported types.

Similarly we can look at the vertex layout. We probably want a vertex count and the size of the per-vertex data. This size is a little more complicated because the attribute layout can be very flexible, so it is easier to write the vertex element size explicitly instead of trying to derive it from the layout.

Then we can look at the attribute layout. We can assume all our data is tightly packed, so we can infer the offset (last argument of glVertexAttribPointer). That leaves us with a layout location, a number of values per vertex, and a data type. Of course first we need to write how many attributes we have.

After that all we need to do is fill in the buffer data. So for vertexCount * vertexElementSize bytes we specify binary data for the vertex buffer and for elementCount * elementDataSize we specify binary data for the elements buffer.

Our file format now looks like this:

Version nr (byte): so we can change the format later and not break things.
Vertex count (unsigned int): because the elements array can use at most an unsigned int we can never point to vertices beyond the maximum of this type, so no need to store more bytes.
Vertex element size (byte): size in bytes of a vertex, based on all attribute sizes combined.
Element count (unsigned int)
Element data size (byte): to infer whether indices are unsigned char, unsigned short or unsigned int.
Render type (GLenum): OpenGL defines values such as GL_TRIANGLES as the GLenum type, which in turn is just an unsigned int.
Number of attributes (byte)
[For each attribute]
Attribute location (byte)
Attribute dimensions (byte): is it a single value, vec2, 3 or 4?
Attribute type (GLenum): are the values float or int? More types are listed in the OpenGL documentation for glVertexAttribPointer.
[End for]
Vertex buffer: vertexCount * vertexElementSize bytes
Elements buffer: elementCount * elementDataSize bytes
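As a quick sanity check of this layout, here is a minimal sketch in plain Python (no Maya or OpenGL required; the helper names are mine and little-endian byte order is assumed) that packs and unpacks the fixed-size part of the header:

```python
import struct

# header layout: version, vertexCount, vertexElementSize,
# elementCount, elementDataSize, renderType, numAttributes
HEADER_FMT = '<BIBIBIB'  # 1 + 4 + 1 + 4 + 1 + 4 + 1 = 16 bytes

def pack_header(version, vertexCount, vertexElementSize,
                elementCount, elementDataSize, renderType, numAttributes):
    return struct.pack(HEADER_FMT, version, vertexCount, vertexElementSize,
                       elementCount, elementDataSize, renderType, numAttributes)

def unpack_header(data):
    return struct.unpack(HEADER_FMT, data[:struct.calcsize(HEADER_FMT)])

# example: 36 vertices of 12 bytes each, drawn as GL_TRIANGLES (0x0004)
header = pack_header(0, 36, 12, 36, 4, 0x0004, 1)
print(unpack_header(header))  # (0, 36, 12, 36, 4, 4, 1)
```

The per-attribute records and the two raw buffers would simply follow these 16 bytes in the file.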

That brings us to the next step: gathering this information in Maya.

2.2 Maya mesh exporter

To export I'll use the Maya API (OpenMaya). It provides a way to quickly iterate over a mesh's data without allocating too much memory, using MItMeshPolygon. This iterator walks over all the faces and allows us to extract the individual triangles and face-vertices.

There are a few steps to do. First let’s make a script to generate a test scene:

import struct
import ctypes
from maya import cmds
from maya.OpenMaya import *
cmds.file(new=True, force=True)
cmds.polyCube()
meshShapeName = 'pCubeShape1'
outputFilePath = 'C:/Test.bgm'
space = MSpace.kWorld  # export in world space
FILE_VERSION = 0

Now with these variables in mind we have to convert the shape name to actual Maya API objects that we can read data from.

# get an MDagPath from the given mesh path
p = MDagPath()
l = MSelectionList()
MGlobal.getSelectionListByName(meshShapeName, l)
l.getDagPath(0, p)

# get the iterator
poly = MItMeshPolygon(p)

This sets us up to actually start saving data. Because OpenGL requires us to provide all the data of a vertex at a single vertex index, we have to remap some of Maya's data. In Maya a vertex (actually a face-vertex in Maya terms) is a list of indices pointing to e.g. which position to use, which normal to use, etc., all with separate indices. In OpenGL all these indices must match. The way I'll go about this is to simply take the triangulation and generate 3 unique vertices for each triangle. This means the vertex count can be determined by counting the triangles in the mesh. Maya meshes don't expose functionality to query this directly, so instead I'll iterate over all the faces and count the triangles in them.
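The duplication idea can be illustrated without Maya. Given an indexed triangle list (shared positions plus an index buffer), we expand it into a flat vertex stream whose element indices are then simply 0..n-1; the data below is made up for illustration:

```python
def expand_triangles(positions, indices):
    # duplicate shared vertices so each triangle owns 3 unique vertices
    vertices = [positions[i] for i in indices]
    # the new element buffer is just a running index
    elements = list(range(len(indices)))
    return vertices, elements

# a quad as two triangles sharing two corners
positions = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
indices = [0, 1, 2, 0, 2, 3]
vertices, elements = expand_triangles(positions, indices)
print(len(vertices))  # 6: the two shared corners are now duplicated
print(elements)       # [0, 1, 2, 3, 4, 5]
```

This wastes some memory compared to proper vertex sharing, but keeps the exporter simple.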

# open the file as binary
with open(outputFilePath, 'wb') as fh:
    # fixing the vertex data size to just X, Y, Z floats for the vertex position
    vertexElementSize = 3 * ctypes.sizeof(ctypes.c_float)
  
    # using unsigned integers as elements
    indexElementSize = ctypes.sizeof(ctypes.c_uint)
  
    # gather the number of vertices
    vertexCount = 0
    while not poly.isDone():
        vertices = MPointArray()
        vertexList = MIntArray()
        poly.getTriangles(vertices, vertexList, space)
        vertexCount += vertexList.length()
        poly.next()
    poly.reset()
    # start writing
    fh.write(struct.pack('B', FILE_VERSION))
    fh.write(struct.pack('I', vertexCount))
    fh.write(struct.pack('B', vertexElementSize))
    # currently I'm duplicating all vertices per triangle, so total indices matches total vertices
    fh.write(struct.pack('I', vertexCount))
    fh.write(struct.pack('B', indexElementSize))
    fh.write(struct.pack('I', GL_TRIANGLES))  # render type

As you can see we had to make some assumptions about the vertex data size and we had to gather our final vertex count up front, but this is a good setup. The next step is to write the attribute layout. I've made the assumption here to write only X, Y, Z position floats at location 0. We can expand the exporter later with more features, as our file format supports variable attribute layouts. We can write our position attribute next:

# attribute layout
# 1 attribute
fh.write(struct.pack('B', 1))
# at location 0
fh.write(struct.pack('B', 0))
# of 3 floats
fh.write(struct.pack('B', 3))
fh.write(struct.pack('I', GL_FLOAT))

Note that I am using the constant GL_FLOAT here; if you do not wish to install PyOpenGL for your Maya, you can simply include this at the top of the file instead:

import ctypes
GL_TRIANGLES = 0x0004
GL_UNSIGNED_INT = 0x1405
GL_FLOAT = 0x1406

After that comes streaming the vertex buffer. For this I use the same iterator I used to count the vertices. The code is pretty much the same, only now I write the positions instead of counting the vertex list.

# iter all faces
while not poly.isDone():
    # get triangulation of this face
    vertices = MPointArray()
    vertexList = MIntArray()
    poly.getTriangles(vertices, vertexList, space)
  
    # write the positions
    for i in xrange(vertexList.length()):
        fh.write(struct.pack('3f', vertices[i][0], vertices[i][1], vertices[i][2]))
  
    poly.next()

Last is the element buffer.

# write the elements buffer
for i in xrange(vertexCount):
   fh.write(struct.pack('I', i))

2.3 All the data

The next step, naturally, is to export more than just the position. Here is a more elaborate way to extract all the attributes. First we need to get some global data from the mesh. This goes right after where we create the MItMeshPolygon iterator.

fn = MFnMesh(p)
tangents = MFloatVectorArray()
fn.getTangents(tangents, space)
colorSetNames = []
fn.getColorSetNames(colorSetNames)
uvSetNames = []
fn.getUVSetNames(uvSetNames)

Next we have to change our vertexElementSize code to the following:

# compute the vertex data size, write 4 floats for the position for more convenient transformation in shaders
# position, tangent, normal, color sets, uv sets
vertexElementSize = (4 + 3 + 3 + 4 * len(colorSetNames) + 2 * len(uvSetNames)) * ctypes.sizeof(ctypes.c_float)
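As a sanity check of that arithmetic (a float is 4 bytes), the per-vertex size for a few set counts can be computed in plain Python; the helper name is mine:

```python
import ctypes

def vertex_element_size(numColorSets, numUVSets):
    # vec4 position + vec3 tangent + vec3 normal
    # + one vec4 per color set + one vec2 per UV set, all floats
    floats = 4 + 3 + 3 + 4 * numColorSets + 2 * numUVSets
    return floats * ctypes.sizeof(ctypes.c_float)

print(vertex_element_size(0, 0))  # 40 bytes: position, tangent, normal only
print(vertex_element_size(1, 2))  # 72 bytes with one color set and two UV sets
```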

The attribute layout is significantly changed. I'm also changing the point data from a vec3 to a vec4, filling in the w component as 1.0 to indicate a point instead of a vector. It makes transforming vertices in shaders one step simpler.

# attribute layout

# Since NVidia is the only driver to implement a default attribute layout I am following this as much as possible
# on other drivers using a custom shader is mandatory and modern buffers will never work with the fixed function pipeline.
# http://developer.download.nvidia.com/opengl/glsl/glsl_release_notes.pdf
# https://stackoverflow.com/questions/20573235/what-are-the-attribute-locations-for-fixed-function-pipeline-in-opengl-4-0-cor

# num attributes
fh.write(struct.pack('B', 3 + len(colorSetNames) + len(uvSetNames)))
# vec4 position at location 0
fh.write(struct.pack('B', 0))
fh.write(struct.pack('B', 4))
fh.write(struct.pack('I', GL_FLOAT))
# vec3 tangent at location 1
fh.write(struct.pack('B', 1))
fh.write(struct.pack('B', 3))
fh.write(struct.pack('I', GL_FLOAT))
# vec3 normal at location 2
fh.write(struct.pack('B', 2))
fh.write(struct.pack('B', 3))
fh.write(struct.pack('I', GL_FLOAT))
# vec4 color at locations (3,7) and 16+
used = set()
for i in xrange(len(colorSetNames)):
    idx = 3 + i
    if idx > 7:
        idx = 11 + i
        used.add(idx)
    fh.write(struct.pack('B', idx))
    fh.write(struct.pack('B', 4))
    fh.write(struct.pack('I', GL_FLOAT))
# vec2 uvs at locations 8-15 and 16+, but avoiding overlap with colors
idx = 8
for i in xrange(len(uvSetNames)):
    while idx in used:
        idx += 1
    fh.write(struct.pack('B', idx))
    fh.write(struct.pack('B', 2))
    fh.write(struct.pack('I', GL_FLOAT))
    idx += 1

Most of the MItMeshPolygon iterator functions, like getNormals(), give us a list of the normals for all vertices in this face. The problem is that this data is not triangulated.

To extract the triangulation we used getTriangles(), which gives us a list of vertices used in the face. These vertex numbers are object-wide, so they keep getting bigger the further we get.

That means they’re useless if we want to use them to look up the normal returned by getNormals(), because that array is always very short, containing just the normals for this face.

So we have to do some mapping from the triangulated vertex indices to indices that match the data we've got. Either that or get all the normals from the mesh in one big array, but that is not memory efficient. So at the top of the while loop (just inside) I've added the following dictionary:

# map object indices to local indices - because for some reason we can not query the triangulation as local indices
# but all getters do want us to provide local indices
objectToFaceVertexId = {}
count = poly.polygonVertexCount()
for i in xrange(count):
    objectToFaceVertexId[poly.vertexIndex(i)] = i
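The same remapping can be shown with plain data. Suppose a quad face whose corners reference object-wide vertex IDs; getTriangles() hands back object IDs, while the per-face arrays are indexed locally (the IDs below are made up for illustration):

```python
# object-wide vertex IDs of this face's corners, in face order
faceVertexIds = [10, 11, 15, 14]

# build the object-to-local map, like the loop above
objectToFaceVertexId = {}
for i, objectId in enumerate(faceVertexIds):
    objectToFaceVertexId[objectId] = i

# triangulation as object IDs (what getTriangles would give us)
triangleObjectIds = [10, 11, 15, 10, 15, 14]

# remap to local indices usable with the short per-face arrays
localIds = [objectToFaceVertexId[objectId] for objectId in triangleObjectIds]
print(localIds)  # [0, 1, 2, 0, 2, 3]
```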

That allows us to extract all the data we want for these triangles like so:

# get per-vertex data
normals = MVectorArray()
poly.getNormals(normals, space)
colorSet = []
for i, colorSetName in enumerate(colorSetNames):
    colorSet.append(MColorArray())
    poly.getColors(colorSet[i], colorSetName)
uvSetU = []
uvSetV = []
for i, uvSetName in enumerate(uvSetNames):
    uvSetU.append(MFloatArray())
    uvSetV.append(MFloatArray())
    poly.getUVs(uvSetU[i], uvSetV[i], uvSetName)

This handles fairly small sets of data at a time. Last we have to write the data, replacing the earlier loop that wrote 3 floats per vertex with this longer loop:

# write the data
for i in xrange(vertexList.length()):
    localVertexId = objectToFaceVertexId[vertexList[i]]
    tangentId = poly.tangentIndex(localVertexId)
  
    fh.write(struct.pack('4f', vertices[i][0], vertices[i][1], vertices[i][2], 1.0))
    fh.write(struct.pack('3f', tangents[tangentId][0], tangents[tangentId][1], tangents[tangentId][2]))
    fh.write(struct.pack('3f', normals[localVertexId][0], normals[localVertexId][1], normals[localVertexId][2]))
    for j in xrange(len(colorSetNames)):
        fh.write(struct.pack('4f', colorSet[j][localVertexId][0], colorSet[j][localVertexId][1], colorSet[j][localVertexId][2], colorSet[j][localVertexId][3]))
    for j in xrange(len(uvSetNames)):
        fh.write(struct.pack('2f', uvSetU[j][localVertexId], uvSetV[j][localVertexId]))

And that completes the exporter with full functionality, extracting all the data we want from a Maya mesh. Unless you want blind data and skin clusters, but that's a whole different story!

2.4 Code

Here is the final code as a function, with an additional function to export multiple selected meshes to multiple files, using Qt for the UI. Note that if you wish to use PySide or PyQt5 instead, the QFileDialog.getExistingDirectory and QSettings.value return types are different and require some work.

import os
import struct
from maya import cmds
from maya.OpenMaya import *
import ctypes

GL_TRIANGLES = 0x0004
GL_UNSIGNED_INT = 0x1405
GL_FLOAT = 0x1406
FILE_EXT = '.bm'  # binary mesh
FILE_VERSION = 0
EXPORT_SPACE = MSpace.kWorld  # export meshes in world space for now


def exportMesh(mayaShapeName, outputFilePath, space):
    # get an MDagPath from the given mesh path
    p = MDagPath()
    l = MSelectionList()
    MGlobal.getSelectionListByName(mayaShapeName, l)
    l.getDagPath(0, p)
  
    # get the mesh and iterator
    fn = MFnMesh(p)
    poly = MItMeshPolygon(p)
  
    tangents = MFloatVectorArray()
    fn.getTangents(tangents, space)
    colorSetNames = []
    fn.getColorSetNames(colorSetNames)
    uvSetNames = []
    fn.getUVSetNames(uvSetNames)
  
    # open the file as binary
    with open(outputFilePath, 'wb') as fh:
        # compute the vertex data size, write 4 floats for the position for more convenient transformation in shaders
        # position, tangent, normal, color sets, uv sets
        vertexElementSize = (4 + 3 + 3 + 4 * len(colorSetNames) + 2 * len(uvSetNames)) * ctypes.sizeof(ctypes.c_float)
  
        # using unsigned integers as elements
        indexElementSize = ctypes.sizeof(ctypes.c_uint)
  
        # gather the number of vertices
        vertexCount = 0
        while not poly.isDone():
            vertices = MPointArray()
            vertexList = MIntArray()
            poly.getTriangles(vertices, vertexList, space)
            vertexCount += vertexList.length()
            poly.next()
        poly.reset()
  
        # start writing
        fh.write(struct.pack('B', FILE_VERSION))
        fh.write(struct.pack('I', vertexCount))
        fh.write(struct.pack('B', vertexElementSize))
        # currently I'm duplicating all vertices per triangle, so total indices matches total vertices
        fh.write(struct.pack('I', vertexCount))
        fh.write(struct.pack('B', indexElementSize))
        fh.write(struct.pack('I', GL_TRIANGLES))  # render type
  
        # attribute layout
  
        # Since NVidia is the only driver to implement a default attribute layout I am following this as much as possible
        # on other drivers using a custom shader is mandatory and modern buffers will never work with the fixed function pipeline.
        # http://developer.download.nvidia.com/opengl/glsl/glsl_release_notes.pdf
        # https://stackoverflow.com/questions/20573235/what-are-the-attribute-locations-for-fixed-function-pipeline-in-opengl-4-0-cor
  
        # num attributes
        fh.write(struct.pack('B', 3 + len(colorSetNames) + len(uvSetNames)))
        # vec4 position at location 0
        fh.write(struct.pack('B', 0))
        fh.write(struct.pack('B', 4))
        fh.write(struct.pack('I', GL_FLOAT))
        # vec3 tangent at location 1
        fh.write(struct.pack('B', 1))
        fh.write(struct.pack('B', 3))
        fh.write(struct.pack('I', GL_FLOAT))
        # vec3 normal at location 2
        fh.write(struct.pack('B', 2))
        fh.write(struct.pack('B', 3))
        fh.write(struct.pack('I', GL_FLOAT))
        # vec4 color at locations (3,7) and 16+
        used = set()
        for i in xrange(len(colorSetNames)):
            idx = 3 + i
            if idx > 7:
                idx = 11 + i
                used.add(idx)
            fh.write(struct.pack('B', idx))
            fh.write(struct.pack('B', 4))
            fh.write(struct.pack('I', GL_FLOAT))
        # vec2 uvs at locations 8-15 and 16+, but avoiding overlap with colors
        idx = 8
        for i in xrange(len(uvSetNames)):
            while idx in used:
                idx += 1
            fh.write(struct.pack('B', idx))
            fh.write(struct.pack('B', 2))
            fh.write(struct.pack('I', GL_FLOAT))
            idx += 1
  
        # iter all faces
        while not poly.isDone():
            # map object indices to local indices - because for some reason we can not query the triangulation as local indices
            # but all getters do want us to provide local indices
            objectToFaceVertexId = {}
            count = poly.polygonVertexCount()
            for i in xrange(count):
                objectToFaceVertexId[poly.vertexIndex(i)] = i
  
            # get triangulation of this face
            vertices = MPointArray()
            vertexList = MIntArray()
            poly.getTriangles(vertices, vertexList, space)
  
            # get per-vertex data
            normals = MVectorArray()
            poly.getNormals(normals, space)
            colorSet = []
            for i, colorSetName in enumerate(colorSetNames):
                colorSet.append(MColorArray())
                poly.getColors(colorSet[i], colorSetName)
            uvSetU = []
            uvSetV = []
            for i, uvSetName in enumerate(uvSetNames):
                uvSetU.append(MFloatArray())
                uvSetV.append(MFloatArray())
                poly.getUVs(uvSetU[i], uvSetV[i], uvSetName)
  
            # write the data
            for i in xrange(vertexList.length()):
                localVertexId = objectToFaceVertexId[vertexList[i]]
                tangentId = poly.tangentIndex(localVertexId)
  
                fh.write(struct.pack('4f', vertices[i][0], vertices[i][1], vertices[i][2], 1.0))
                fh.write(struct.pack('3f', tangents[tangentId][0], tangents[tangentId][1], tangents[tangentId][2]))
                fh.write(struct.pack('3f', normals[localVertexId][0], normals[localVertexId][1], normals[localVertexId][2]))
                for j in xrange(len(colorSetNames)):
                    fh.write(struct.pack('4f', colorSet[j][localVertexId][0], colorSet[j][localVertexId][1], colorSet[j][localVertexId][2], colorSet[j][localVertexId][3]))
                for j in xrange(len(uvSetNames)):
                    fh.write(struct.pack('2f', uvSetU[j][localVertexId], uvSetV[j][localVertexId]))
  
            poly.next()
  
        # write the elements buffer
        for i in xrange(vertexCount):
            fh.write(struct.pack('I', i))


def exportSelected():
    selectedMeshShapes = cmds.ls(sl=True, type='mesh', l=True) or []
    selectedMeshShapes += cmds.listRelatives(cmds.ls(sl=True, type='transform', l=True) or [], c=True, type='mesh', f=True) or []
    from PyQt4.QtCore import QSettings
    from PyQt4.QtGui import QFileDialog
    settings = QSettings('GLMeshExport')
    mostRecentDir = str(settings.value('mostRecentDir').toPyObject())
    targetDir = QFileDialog.getExistingDirectory(None, 'Save selected meshes in directory', mostRecentDir)
    if targetDir and os.path.exists(targetDir):
        settings.setValue('mostRecentDir', targetDir)
        for i, shortName in enumerate(cmds.ls(selectedMeshShapes)):
            exportMesh(selectedMeshShapes[i],
                       os.path.join(targetDir, shortName.replace('|', '_') + FILE_EXT),
                       EXPORT_SPACE)

Cubic root solving

Update: Autodesk released their interpolation code for Maya animation curves; weighted tangents on animation curves do exactly this.
Refer to (and use!) https://github.com/Autodesk/animx instead of the code below. I know it's not Python, but it does work where I found bugs in my version below.

I really need to get back to this, but I mocked up this bit of code and finally got it to work.
It is about animation curve evaluation: due to the parametric nature of curves it is very complicated to get the value of a curve at an arbitrary time, and it involves finding the parameter T for a value X.

This demo shows how the parameter (left to right) relates to the time of the animcurve (top to bottom), so the red curve shows linear parameter increasing results in non linear time samples.
It then computes backwards from the found value to the parameter again, showing we can successfully reconstruct the linear input given the time output.

I intend to clean up this code and make a full 2D example, rendering a 2D cubic spline segment as normal and then overlaying an evaluation based on the X coordinate, but wanted to dump the result nonetheless. Knowing how bad I am at getting back to things…
Using QT purely for demonstration, code itself is pure python…

import time
from math import sin, cos, sqrt, acos, copysign
from PyQt4.QtCore import *
from PyQt4.QtGui import *


def cubicArgs(x0, x1, x2, x3):
    a = x3 + (x1 - x2) * 3.0 - x0
    b = 3.0 * (x2 - 2.0 * x1 + x0)
    c = 3.0 * (x1 - x0)
    d = x0
    return a, b, c, d


def cubicEvaluate(x0, x1, x2, x3, p):
    # convert points to a cubic function & evaluate at p
    a, b, c, d = cubicArgs(x0, x1, x2, x3)
    return a * p * p * p + b * p * p + c * p + d


class CurveDebug(QWidget):
    def __init__(self):
        super(CurveDebug, self).__init__()
        self.t = QTimer()
        self.t.timeout.connect(self.update)
        self.t.start(16)
        self.ot = time.time()

    def paintEvent(self, event):
        painter = QPainter(self)

        life = time.time() - self.ot

        padding = 100
        w = self.width() - padding * 2
        h = self.height() - padding * 2
        painter.translate(padding, padding)

        # zig zag 2D bezier
        x0, y0 = 0, 0
        x1, y1 = (sin(life) * 0.5 + 0.5) * w, 0
        x2, y2 = 0, h
        x3, y3 = w, h

        # draw hull
        # painter.setPen(QColor(100, 220, 220))
        # painter.drawLine(x0, y0, x1, y1)
        # painter.drawLine(x1, y1, x2, y2)
        # painter.drawLine(x2, y2, x3, y3)

        for i in xrange(w):
            p = i / float(w - 1)

            # draw curve
            # painter.setPen(QColor(220, 100, 220))
            # x = cubicEvaluate(x0, x1, x2, x3, p)
            # y = cubicEvaluate(y0, y1, y2, y3, p)
            # painter.drawPoint(x, y)

            # draw X as function of P
            painter.setPen(QColor(220, 100, 100))
            x = cubicEvaluate(x0, x1, x2, x3, p)
            painter.drawPoint(i, x)

            # now let's evaluate the curve at x and see if we can get the original p back
            # make cubic with offset
            a, b, c, d = cubicArgs(x0 - x, x1 - x, x2 - x, x3 - x)

            # find roots
            # http://www.1728.org/cubic2.htm
            f = ((3.0 * c / a) - ((b * b) / (a * a))) / 3.0
            g = (((2.0 * b * b * b) / (a * a * a)) - ((9.0 * b * c) / (a * a)) + ((27.0 * d) / a)) / 27.0
            _h = ((g * g) / 4.0) + ((f * f * f) / 27.0)
            root0, root1, root2 = None, None, None
            if _h <= 0.0:
                # we have 3 real roots
                if f == 0 and g == 0:
                    # all roots are real & equal
                    _i = d / a
                    root0 = -copysign(pow(abs(_i), 0.3333), _i)
                else:
                    _i = sqrt((g * g / 4.0) - _h)
                    j = pow(_i, 0.3333333)
                    k = acos(-(g / (2.0 * _i)))
                    m = cos(k / 3.0)
                    n = sqrt(3.0) * sin(k / 3.0)
                    _p = b / (3.0 * a)
                    root0 = 2.0 * j * m - _p
                    root1 = -j * (m + n) - _p
                    root2 = -j * (m - n) - _p
            else:
                # we have only 1 real root
                R = -(g / 2.0) + sqrt(_h)
                S = copysign(pow(abs(R), 0.3333333), R)
                T = -(g / 2.0) - sqrt(_h)
                U = copysign(pow(abs(T), 0.3333333), T)
                root0 = (S + U) - (b / (3.0 * a))

            painter.setPen(QColor(100, 100, 220))
            painter.drawPoint(i, root0 * h)
            if root1 is not None:
                painter.drawPoint(i, root1 * h)
                painter.drawPoint(i, root2 * h)


app = QApplication([])
cvd = CurveDebug()
cvd.show()
cvd.resize(300, 300)
app.exec_()
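The root-finding block in paintEvent can be pulled out into a standalone function that is testable without Qt. This is my own reshuffling of the same Cardano/trigonometric formulas, using a proper signed cube root instead of pow(x, 0.3333333):

```python
from math import sqrt, acos, cos, sin, copysign

def cbrt(x):
    # signed cube root, exact for negative inputs
    return copysign(abs(x) ** (1.0 / 3.0), x)

def solve_cubic(a, b, c, d):
    # real roots of a*x^3 + b*x^2 + c*x + d = 0, a != 0
    f = ((3.0 * c / a) - (b * b) / (a * a)) / 3.0
    g = ((2.0 * b ** 3) / a ** 3 - (9.0 * b * c) / (a * a) + (27.0 * d) / a) / 27.0
    h = (g * g) / 4.0 + (f ** 3) / 27.0
    if h > 0.0:
        # one real root
        R = -(g / 2.0) + sqrt(h)
        T = -(g / 2.0) - sqrt(h)
        return [cbrt(R) + cbrt(T) - b / (3.0 * a)]
    if f == 0.0 and g == 0.0:
        # all three roots real and equal
        return [-cbrt(d / a)] * 3
    # three real roots
    i = sqrt((g * g) / 4.0 - h)
    j = cbrt(i)
    k = acos(-(g / (2.0 * i)))
    m = cos(k / 3.0)
    n = sqrt(3.0) * sin(k / 3.0)
    p = b / (3.0 * a)
    return [2.0 * j * m - p, -j * (m + n) - p, -j * (m - n) - p]

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
print(sorted(round(r, 6) for r in solve_cubic(1, -6, 11, -6)))  # [1.0, 2.0, 3.0]
```

To evaluate a curve at an arbitrary X you would shift the cubic by X (as the demo does with x0 - x etc.) and keep the root that lies in [0, 1].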

Lattice -> Joints

It’s not perfect, but here’s a small script that samples a lattice and tries to set joint weights based on the influence of each lattice point.

Given a set of lattice vertices and a model influenced by these vertices it will create joints at every lattice point, bind a skin and set the weights.

Usage: just edit the variables at the top & run the script. It’s slapped together really quickly.

It moves every lattice point one by one & stores the amount of movement that occurred per vertex, which is basically the weight of this point for that vertex.

Issues: small weights completely vanish. You could try dividing the sampled movement by the total amount of movement to get a 0-1 weight, then apply an inverse s-curve or pow / sqrt to that value and use it as the weight instead.

Requirements: to set all weights really fast I use a custom “skinWeightsHandler” command, you can write your own ‘set all weights for all joints and then normalize’ routine or get the plugin by installing Perry Leijten’s skinning tools for which I originally made this plugin.

from maya import cmds

model = r'polySurface1'
influences = (r'ffd1Lattice.pt[0][0][0]',
              r'ffd1Lattice.pt[0][0][1]',
              r'ffd1Lattice.pt[0][1][0]',
              r'ffd1Lattice.pt[0][1][1]',
              r'ffd1Lattice.pt[1][0][0]',
              r'ffd1Lattice.pt[1][0][1]',
              r'ffd1Lattice.pt[1][1][0]',
              r'ffd1Lattice.pt[1][1][1]')

def sample(model):
    return cmds.xform(model + '.vtx[*]', q=True, ws=True, t=True)[1::3]

def difference(list1, list2):
    stack = [0] * len(list1)
    for i in range(len(list1)):
        stack[i] = abs(list2[i] - list1[i])
    return stack

def gather(model, influences):
    original = sample(model)
    weights = {}
    for influence in influences:
        cmds.undoInfo(ock=True)
        cmds.xform(influence, ws=True, r=True, t=[0, 1000, 0])
        weights[influence] = difference(sample(model), original)
        cmds.undoInfo(cck=True)
        cmds.undo()
    return weights

weights = gather(model, influences)
# generate joints
joints = []
for influence in influences:
    pos = cmds.xform(influence, q=True, ws=True, t=True)
    cmds.select(cl=True)
    joints.append(cmds.joint())
    cmds.xform(joints[-1], ws=True, t=pos)
# concatenate weights in the right way
vertexCount = len(weights.values()[0])
influenceCount = len(influences)
vertexWeights = [0] * (vertexCount * influenceCount)
for i in xrange(vertexCount):
    tw = 0
    for j, influence in enumerate(influences):
        vertexWeights[i * influenceCount + j] = weights[influence][i]
        tw += weights[influence][i]
    if not tw:
        # weight is 0
        continue
    for j in xrange(influenceCount):
        vertexWeights[i * influenceCount + j] /= tw
# expand to shape
if not cmds.ls(model, type='mesh'):
    model = cmds.listRelatives(model, c=True, type='mesh')[0]
# bind skin
cmds.select(model, joints)
skinCluster = cmds.skinCluster()
# set weights
cmds.SkinWeights([model, skinCluster],  nwt=vertexWeights)
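The normalization step in the middle of that script is easy to isolate and test without Maya: per vertex, divide every influence weight by the row total so the weights sum to 1, leaving all-zero vertices untouched. A sketch with my own helper name:

```python
def normalize_weights(vertexWeights, influenceCount):
    # normalize per-vertex influence weights in place so each row sums to 1
    vertexCount = len(vertexWeights) // influenceCount
    for i in range(vertexCount):
        row = vertexWeights[i * influenceCount:(i + 1) * influenceCount]
        tw = sum(row)
        if not tw:
            continue  # vertex not moved by any influence, leave at zero
        for j in range(influenceCount):
            vertexWeights[i * influenceCount + j] /= tw
    return vertexWeights

# three vertices, two influences each
weights = [1.0, 3.0,   0.0, 0.0,   2.0, 2.0]
print(normalize_weights(weights, 2))  # [0.25, 0.75, 0.0, 0.0, 0.5, 0.5]
```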

Maya discovery of the day

if you’re looking for all objects with a specific attribute, it is nice to know that ls and it’s wildcards also work on attributes! It even does not care whether you supply the long or the short name. To get all objects with a translateX attribute you can simply use:

cmds.ls('*.tx')

Wildcards do not work with some other modifiers however, so you can not do this:

cmds.ls('*.myMetaData', l=True, type='mesh', sl=True)

because the returned type is not a mesh, but an attribute; but you can of course do this (notice the o=True to return object names not attributes):

cmds.ls(cmds.ls('*.myMetaData', o=True), l=True, type='mesh', sl=True)

Just wanted to share that bit of information! And while we’re at it, python supports ‘or’ in arbitrary expressions, so if you wish to find all transforms that contain a mesh (or get the transforms of selected meshes at the same time), you’ll often find yourself doing this:

selected_transforms = cmds.ls(type='transform', sl=True, l=True)
selected_meshes = cmds.ls(type='mesh', sl=True, l=True)
if selected_transforms is not None:
    meshes = cmds.listRelatives(selected_transforms, c=True, type='mesh', f=True)
    if meshes is not None:
        if selected_meshes is not None:
            selected_meshes += meshes
        else:
            selected_meshes = meshes
selected_mesh_transforms = []
if selected_meshes is not None:
    selected_mesh_transforms = cmds.listRelatives(selected_meshes, p=True)

Just because ls and listRelatives return None instead of an empty list, this code is super complicated. With 'or' we can simply do this:

meshes = (cmds.ls(type='mesh', sl=True, l=True) or []) + (cmds.listRelatives(cmds.ls(type='transform', sl=True, l=True), c=True, type='mesh', f=True) or [])
selected_mesh_transforms = cmds.listRelatives(meshes, p=True, f=True) or []

Admittedly it is a bit less readable, but my advice is to make a utility function or name your variables appropriately!
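The trick works because `None or []` evaluates to `[]`, so the concatenation never sees a None. A quick plain-Python illustration; the Maya calls are mocked here with stand-in functions of my own:

```python
def ls_mock(**kwargs):
    # stand-in for cmds.ls: Maya returns None when nothing matches
    return None

def list_relatives_mock(nodes, **kwargs):
    # stand-in for cmds.listRelatives with the same None behaviour
    return ['|pCube1|pCubeShape1'] if nodes else None

# no None checks needed anywhere
meshes = (ls_mock(type='mesh', sl=True) or []) + \
         (list_relatives_mock(['|pCube1'], c=True, type='mesh') or [])
print(meshes)  # ['|pCube1|pCubeShape1']
```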

Simple Maya mesh save/load

I recently wanted to capture some frames of an animation into a single mesh, and really the easiest way to ditch any dependencies & materials was to export some OBJs, import them and then combine them! This is rather slow though, especially when reading gigantic models, and I did not need a lot of the data stored in an OBJ.

So here I have a small utility that stores a model’s position & triangulation and nothing else in a binary format closely resembling the Maya API, allowing for easy reading, writing and even combining during I/O.

Use write() with a (full) mesh name and read() with a file path to serialize and deserialize Maya meshes:

import struct
from maya.OpenMaya import MSelectionList, MDagPath, MFnMesh, MGlobal, MPointArray, MIntArray, MSpace, MPoint


def _named_mobject(path):
    li = MSelectionList()
    MGlobal.getSelectionListByName(path, li)
    p = MDagPath()
    li.getDagPath(0, p)
    return p


def writeCombined(meshes, file_path):
    # start streaming into the file
    with open(file_path, 'wb') as fh:
        # cache function sets
        fns = []
        for mesh in meshes:
            fns.append(MFnMesh(_named_mobject(mesh)))

        # get resulting mesh data sizes
        vertex_count = 0
        poly_count = 0
        index_count = 0
        meshPolygonCounts = []
        meshPolygonConnects = []
        for fn in fns:
            vertex_count += fn.numVertices()
            meshPolygonCounts.append(MIntArray())
            meshPolygonConnects.append(MIntArray())
            # we need to get these now in order to keep track of the index_count,
            # we cache them to avoid copying these arrays three times during this function.
            fn.getVertices(meshPolygonCounts[-1], meshPolygonConnects[-1])
            poly_count += meshPolygonCounts[-1].length()
            index_count += meshPolygonConnects[-1].length()

        # write num-vertices as uint32
        fh.write(struct.pack('<L', vertex_count))

        for fn in fns:
            vertices = MPointArray()
            fn.getPoints(vertices, MSpace.kWorld)

            # write each of this mesh's vertex positions as three float64s
            for i in xrange(fn.numVertices()):
                fh.write(struct.pack('<d', vertices[i].x))
                fh.write(struct.pack('<d', vertices[i].y))
                fh.write(struct.pack('<d', vertices[i].z))

        # write num-polygonCounts as uint32
        fh.write(struct.pack('<L', poly_count))

        for i, fn in enumerate(fns):
            # write each polygonCounts as uint32
            for j in xrange(meshPolygonCounts[i].length()):
                fh.write(struct.pack('<L', meshPolygonCounts[i][j]))

        # write num-polygonConnects as uint32
        fh.write(struct.pack('<L', index_count))

        # keep track of how many vertices there are to offset the polygon-vertex indices
        offset = 0
        for i, fn in enumerate(fns):
            # write each polygonConnects as uint32
            for j in xrange(meshPolygonConnects[i].length()):
                fh.write(struct.pack('<L', meshPolygonConnects[i][j] + offset))
            offset += fn.numVertices()


def write(mesh, file_path):
    writeCombined([mesh], file_path)


def readCombined(file_paths):
    numVertices = 0
    numPolygons = 0
    vertices = MPointArray()
    polygonCounts = MIntArray()
    polygonConnects = MIntArray()

    for file_path in file_paths:
        with open(file_path, 'rb') as fh:
            # read all vertices
            vertexCountThisFile = struct.unpack('<L', fh.read(4))[0]
            for i in xrange(vertexCountThisFile):
                vertices.append(MPoint(*struct.unpack('<3d', fh.read(24))))

            # read all polygon counts
            n = struct.unpack('<L', fh.read(4))[0]
            numPolygons += n
            polygonCounts += struct.unpack('<%sL' % n, fh.read(n * 4))

            # read all polygon-vertex indices
            n = struct.unpack('<L', fh.read(4))[0]
            offset = polygonConnects.length()
            polygonConnects += struct.unpack('<%sL' % n, fh.read(n * 4))

            # offset the indices we just added to match the merged mesh vertex IDs
            for i in xrange(n):
                polygonConnects[offset + i] += numVertices

            numVertices += vertexCountThisFile

    new_object = MFnMesh()
    new_object.create(numVertices, numPolygons, vertices, polygonCounts, polygonConnects)
    return new_object.fullPathName()


def read(file_path):
    with open(file_path, 'rb') as fh:
        numVertices = struct.unpack('<L', fh.read(4))[0]
        vertices = MPointArray()
        for i in xrange(numVertices):
            vertices.append(MPoint(*struct.unpack('<3d', fh.read(24))))
        numPolygons = struct.unpack('<L', fh.read(4))[0]
        polygonCounts = MIntArray()
        polygonCounts += struct.unpack('<%sL'%numPolygons, fh.read(numPolygons * 4))
        n = struct.unpack('<L', fh.read(4))[0]
        polygonConnects = MIntArray()
        polygonConnects += struct.unpack('<%sL'%n, fh.read(n * 4))

    new_object = MFnMesh()
    new_object.create(numVertices, numPolygons, vertices, polygonCounts, polygonConnects)
    return new_object.fullPathName()

I basically used a snippet like this to snapshot my animation:

import os
from maya import cmds

tempfiles = []
for f in (0,4,8,12):
    cmds.currentTime(f)
    tempfiles.append('C:/%s.mfnmesh'%f)
    writeCombined(cmds.ls(type='mesh', l=True), tempfiles[-1])
newmesh = readCombined(tempfiles)
for p in tempfiles:
    os.unlink(p)
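The file layout itself can be exercised without Maya. Below is a sketch with my own helper names that writes the same uint32/float64 stream to a byte buffer and parses it back, mirroring the layout used by write() and read() above (little-endian, matching the '<L'/'<d' formats):

```python
import io
import struct

def write_mesh(fh, points, polygonCounts, polygonConnects):
    # vertex count, then x/y/z per point as float64
    fh.write(struct.pack('<L', len(points)))
    for x, y, z in points:
        fh.write(struct.pack('<3d', x, y, z))
    # per-polygon vertex counts, then the flat polygon-vertex index list
    fh.write(struct.pack('<L', len(polygonCounts)))
    fh.write(struct.pack('<%dL' % len(polygonCounts), *polygonCounts))
    fh.write(struct.pack('<L', len(polygonConnects)))
    fh.write(struct.pack('<%dL' % len(polygonConnects), *polygonConnects))

def read_mesh(fh):
    n = struct.unpack('<L', fh.read(4))[0]
    points = [struct.unpack('<3d', fh.read(24)) for _ in range(n)]
    n = struct.unpack('<L', fh.read(4))[0]
    polygonCounts = list(struct.unpack('<%dL' % n, fh.read(n * 4)))
    n = struct.unpack('<L', fh.read(4))[0]
    polygonConnects = list(struct.unpack('<%dL' % n, fh.read(n * 4)))
    return points, polygonCounts, polygonConnects

# round-trip a single quad through an in-memory buffer
buf = io.BytesIO()
write_mesh(buf, [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)], [4], [0, 1, 2, 3])
buf.seek(0)
print(read_mesh(buf)[1:])  # ([4], [0, 1, 2, 3])
```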

Important notice: I have found some random crashes when using a large amount of memory (high polycount per frame) in the writeCombined function (which may be solvable when ported to C++ and receiving proper error data).