A few weeks back I posted about exporting a mesh from Maya and drawing it with Python.
I recently got back to improving this a little. First I improved the Maya exporter's performance by porting it to a C++ plugin.
I won't go over all the details, because the code is similar to the Python version posted before, and to explain it properly I'd have to do a tutorial series on the Maya API in the first place! So here's a little dump of the Visual Studio project instead: plugin.vcxproj!
It is currently very basic and just exports all the data it can find. I'm aware that certain Maya models can crash some of the functions in Maya's MFnMesh (and related) classes: empty UV sets, UV sets with UVs for only some vertices/faces, geometry with holes crashing getTriangles, and so on. It may be good to write a Python layer that does some validation on the mesh, and to add flags to explicitly export (or ignore) certain attributes and UV/color sets.
Next I used the Python mmap (memory map) module to upload the mesh directly from disk to OpenGL without first getting (and therefore boxing) the raw data in Python objects. Previously I was reading the binary into a Python object and then wrapping that in a ctypes object, which allocated and copied huge chunks of memory and constructed tons of Python objects along the way. With mmap I can treat the mapped file memory as a void* and hand it straight to glBufferData.
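The core trick can be sketched in a few lines. This is a standalone illustration (the throwaway file and the four floats are made up for the example, not part of the mesh format): map a file and alias its pages as a ctypes array with from_buffer, so no copy is made.

```python
import ctypes
import mmap
import os
import struct
import tempfile

# Write a throwaway binary file holding four floats.
fd, path = tempfile.mkstemp()
os.write(fd, struct.pack('4f', 1.0, 2.0, 3.0, 4.0))
os.close(fd)

# Map the file. from_buffer creates a ctypes view over the mapped
# pages: no copy is made, and the resulting object can be handed
# straight to a C API such as glBufferData.
fd = os.open(path, os.O_RDWR)
mapped = mmap.mmap(fd, 0)  # length 0 maps the whole file
data = (ctypes.c_float * 4).from_buffer(mapped)
values = list(data)

del data  # release the exported buffer before closing the map
mapped.close()
os.close(fd)
os.remove(path)
```

Note that the file is opened read/write: ctypes' from_buffer requires a writable buffer, and writing into `data` would write through to the file.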
import os
import mmap
import ctypes
import contextlib


@contextlib.contextmanager
def memoryMap(fileDescriptor, sizeInBytes=0, offsetInBytes=0):
    if isinstance(fileDescriptor, basestring):
        fd = os.open(fileDescriptor, os.O_RDWR | os.O_BINARY)
        ownFd = True
    else:
        fd = fileDescriptor
        ownFd = False
    mfd = None
    try:
        mfd = mmap.mmap(fd, sizeInBytes, offset=offsetInBytes)
        yield MappedReader(mfd)
    finally:
        if mfd is not None:
            mfd.close()
        if ownFd:
            os.close(fd)


class MappedReader(object):
    def __init__(self, memoryMap):
        """Wrap a memory map into a stream that can stream through the file and map sections to ctypes."""
        self.__memoryMap = memoryMap
        self.__offset = 0

    def close(self):
        self.__memoryMap.close()

    def size(self):
        return self.__memoryMap.size()

    def seek(self, offset):
        assert offset >= 0 and offset < self.size(), 'Seek %s beyond file bounds [0, %s)' % (offset, self.size())
        self.__offset = offset

    def tell(self):
        return self.__offset

    def read(self, ctype):
        """
        Map a part of the file memory to a ctypes object
        (from_buffer, so ctype points directly to file memory).
        Object type is inferred from the given type.
        File cursor is moved to the next unread byte
        (seek = tell + sizeof(ctype)).
        """
        result = ctype.from_buffer(self.__memoryMap, self.__offset)
        self.__offset += ctypes.sizeof(result)
        return result

    def readValue(self, ctype):
        """Utility to read and directly return the data cast as a python value."""
        return self.read(ctype).value
The memoryMap context can take either a file descriptor (acquired through os.open, which is different from the regular open) or a file path.
It will then open the entire file in binary mode and map it instead of reading it (read/write rather than read-only, because ctypes' from_buffer requires a writable buffer).
Last, it yields a MappedReader object, which is a little wrapper around the mmap object that assists in reading chunks of the file as a given ctype.
This way I can easily read some header data (previously I'd do this by reading n bytes and using struct.unpack) and then read the remainder (or a large chunk) of the file as a ctypes array.
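That header-then-bulk pattern looks roughly like this as a standalone sketch (the layout here — a uint32 count followed by count * 3 floats — is hypothetical, not the actual mesh format):

```python
import ctypes
import mmap
import os
import struct
import tempfile

# Hypothetical layout: uint32 vertex count, then count * 3 floats.
fd, path = tempfile.mkstemp()
os.write(fd, struct.pack('I', 2) + struct.pack('6f', 0, 0, 0, 1, 2, 3))
os.close(fd)

fd = os.open(path, os.O_RDWR)
mapped = mmap.mmap(fd, 0)

# Header field: read it as a value and advance a manual cursor,
# like MappedReader.readValue does.
cursor = 0
count = ctypes.c_uint32.from_buffer(mapped, cursor).value
cursor += ctypes.sizeof(ctypes.c_uint32)

# Bulk data: a ctypes array aliasing the rest of the mapped file,
# like MappedReader.read does -- no copy, no struct.unpack.
positions = (ctypes.c_float * (count * 3)).from_buffer(mapped, cursor)
coords = list(positions)

del positions  # release the exported buffer before closing the map
mapped.close()
os.close(fd)
os.remove(path)
```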
This code is a refactor from what I did in the tutorial mentioned at the top, but using mmap instead! It is mostly identical.
def _loadMesh_v0(stream, vao, bufs):
    vertexCount = stream.readValue(ctypes.c_uint32)
    vertexSize = stream.readValue(ctypes.c_ubyte)
    indexCount = stream.readValue(ctypes.c_uint32)
    indexSize = stream.readValue(ctypes.c_ubyte)
    assert indexSize in indexTypeFromSize, 'Unknown element data type, element size must be one of %s' % indexTypeFromSize.keys()
    indexType = indexTypeFromSize[indexSize]

    drawMode = stream.readValue(ctypes.c_uint32)
    assert drawMode in (GL_LINES, GL_TRIANGLES), 'Unknown draw mode.'  # TODO: list all render types

    # gather layout
    numAttributes = stream.readValue(ctypes.c_ubyte)
    offset = 0
    layouts = [None] * numAttributes
    for i in xrange(numAttributes):
        location = stream.readValue(ctypes.c_ubyte)
        dimensions = stream.readValue(ctypes.c_ubyte)
        assert dimensions in (1, 2, 3, 4)
        dataType = stream.readValue(ctypes.c_uint32)
        assert dataType in attributeElementTypes, 'Invalid GLenum value for attribute element type.'
        layouts[i] = AttributeLayout(location, dimensions, dataType, offset)
        offset += dimensions * sizeOfType[dataType]
    assert offset == vertexSize, 'File says each chunk of vertex data is %s bytes, but attribute layout used up %s bytes' % (vertexSize, offset)

    # apply layout
    for layout in layouts:
        glVertexAttribPointer(layout.location, layout.dimensions, layout.dataType, GL_FALSE, vertexSize, ctypes.c_void_p(layout.offset))  # total offset is now stride
        glEnableVertexAttribArray(layout.location)

    raw = stream.read(ctypes.c_ubyte * (vertexSize * vertexCount))
    glBufferData(GL_ARRAY_BUFFER, vertexSize * vertexCount, raw, GL_STATIC_DRAW)

    raw = stream.read(ctypes.c_ubyte * (indexSize * indexCount))
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexSize * indexCount, raw, GL_STATIC_DRAW)

    if stream.size() - stream.tell() > 0:
        raise RuntimeError('Error reading mesh file, more data in file after we were done reading.')

    return Mesh(vao, bufs, drawMode, indexCount, indexType)


def model(filePath):
    vao = glGenVertexArrays(1)
    glBindVertexArray(vao)
    bufs = glGenBuffers(2)
    glBindBuffer(GL_ARRAY_BUFFER, bufs[0])
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufs[1])
    with memoryMap(filePath) as stream:
        fileVersion = stream.readValue(ctypes.c_ubyte)
        if fileVersion == 0:
            return _loadMesh_v0(stream, vao, bufs)
        raise RuntimeError('Unknown mesh file version %s in %s' % (fileVersion, filePath))