Tuesday 2 May 2023

Side project: Python UI for texture selection in both Maya and Houdini

I am happy to present a few trials of a side project.

Inspired by the Stadium and Crowd tool from Postoffice (Amsterdam), I decided to start developing my own system.

I started with a very basic premise: to be able to select any team, from any football league, from any country and assign that choice to the home and away team. This would then be reflected in the texture of the shirts worn by the crowd.

In this way, I would be able to customise the crowd to any combination. Useful for broadcasters who need to quickly swap teams in virtual stadiums.

This UI was created in Python 2.

Team shirt selector from Daniel Sidi on Vimeo.

Team Selector tool in Houdini from Daniel Sidi on Vimeo.

Tuesday 28 March 2023

How to create a pointcloud from any joint of a crowd of agents in Houdini


Here is a method I have learned to create a pointcloud from any joint of a crowd of agents in Houdini.

For example, an army of soldiers carrying guns. The FX department require the location and orientation of the end of each gun barrel. We can deliver a pointcloud which has that data.

Here are the steps:

First, take your crowd....

// 1. get joint name
string JOINT_NAME = "r_handJA_JNT";
// 2. get joint index
int JOINT_IDX = agentrigfind(0, @ptnum, JOINT_NAME);
// 3. get position of agent
matrix AGENT_XFORM = primintrinsic(0, "packedfulltransform", @ptnum);
// 4. get position of joint within agent
matrix JOINT_XFORM = agentworldtransform(0, @ptnum, JOINT_IDX);
// 5. set offset to end of the gun barrel (enter manually)
vector POS = chv("offset");
// 6. transform by JOINT_XFORM
POS *= JOINT_XFORM;

// 7. transform by AGENT_XFORM
POS *= AGENT_XFORM;

// 8. set initial direction along the gun (+x direction)
vector DIR = set(1, 0, 0);

// 9. transform by rotation component of JOINT_XFORM
DIR *= matrix3(JOINT_XFORM);

// 10. transform by the rotation component of AGENT_XFORM
DIR *= matrix3(AGENT_XFORM);

// 11. make a new point
int newPoint = addpoint(0, POS);

// 12. set DIR on new point
setpointattrib(0, "DIR", newPoint, DIR);

// 13. delete the agent
removepoint(0, @ptnum, 1);

Then export this pointcloud.
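Stripped of the VEX specifics, steps 5 to 10 are just a chain of transforms: the offset position goes through the full joint transform and then the full agent transform, while the direction only goes through the rotation parts. Here is a minimal plain-Python sketch of that chain; the rotation matrices, translations and offset below are made-up illustrative values, not data from a real agent:

```python
def mat_vec(m, v):
    # multiply a 3x3 rotation matrix by a vector
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def vec_add(a, b):
    return [a[i] + b[i] for i in range(3)]

# made-up transforms: joint = identity rotation plus a translation,
# agent = 90-degree rotation about Y plus a translation
joint_rot, joint_t = [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [1.0, 0.0, 0.0]
agent_rot, agent_t = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]], [0.0, 0.0, 5.0]

offset = [0.5, 0.0, 0.0]                             # step 5: offset to barrel tip
pos = vec_add(mat_vec(joint_rot, offset), joint_t)   # step 6: joint -> agent space
pos = vec_add(mat_vec(agent_rot, pos), agent_t)      # step 7: agent -> world space

direction = [1.0, 0.0, 0.0]                  # step 8: +x along the gun
direction = mat_vec(joint_rot, direction)    # step 9: rotation part only
direction = mat_vec(agent_rot, direction)    # step 10: rotation part only
```

Note that the position picks up the translations but the direction never does, which is exactly why the VEX multiplies DIR by matrix3() of each transform.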

Tuesday 21 July 2020

Convert images to Pixar .tex format using handy right-click in Windows

After playing a little with Pixar Renderman, I found it laborious to convert images to the .tex format preferred by the renderer.

Here is a super simple one line script to make it as easy as possible to convert image files.

This is a Windows only hack, using the right-click context menu in Windows Explorer.

Open the 'sendTo' folder. Don't know where that is? Don't worry, you don't have to.
Just press 'windows-R' to get the launcher and type 'shell:sendto'

You should see your sendTo folder. Something like this:

Now, right-click in there and create a new text file. Name the new file 'txmake' or something else if you like.

You have created a .txt file, but we actually need a .bat file. We need to rename the file and give it a new extension.

Here is a quick way to do that: Shift-Right-click on a blank part of the sendTo window. You should see the enhanced version of the right-click menu.

One option you should see is 'Open PowerShell window here'. Clicking that will open a command line window with the current working directory set to the sendTo directory. That's a killer tip by itself!

OK, now we can rename the txt file:
ren txmake.txt txmake.bat

The icon for the txmake file will change to a document with cogs. This is a Windows batch file and it is executable.

Right-click the file and select edit. Enter the following lines and save the file.

@echo off

C:\Progra~1\Pixar\RenderManProServer-23.3\bin\txmake.exe %1 %~n1.tex

%1 represents the file passed to the batch file (here, the file you right-clicked).
%~n1 is the file name with both the path and the extension stripped off.

Make sure the path to your Renderman installation is correctly inserted into this script.
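The batch file's argument handling maps neatly onto Python's os.path functions. Here is a hedged sketch of the same command in Python; the executable name and the example path are placeholders:

```python
import os

def txmake_command(image_path, txmake_exe="txmake.exe"):
    # %1 is the full path; %~n1 is the name with path and extension stripped
    name = os.path.splitext(os.path.basename(image_path))[0]
    return [txmake_exe, image_path, name + ".tex"]
```

For example, `txmake_command("C:/textures/brick.png")` produces the arguments the batch file would run for that file.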


Browse to any image file in Windows explorer.
Right-click the image file > send to > txmake

The tex file will be created in the same folder.

Wednesday 8 January 2020

Wedging simulations in TOPs with burned in parameter values

Here I will show how I set up a simple system for wedging simulations in Houdini and outputting a mosaic mpeg of the results with parameter values displayed in the frame.
There were a few tricky gotchas but I will show you how I got past them.

What is wedging?
Imagine you are creating a Pyro explosion; there are a lot of parameters that can be varied to produce different end results. Creating a separate simulation and render for each value of a parameter, then compositing them side by side, is one way to compare the effects of changing that parameter, but it would be very laborious and time consuming. Then imagine you wanted to vary multiple parameters. The task becomes extremely tedious.
Wedging is a quick way of testing the effect of a parameter on the outcome of a simulation.
This has been implemented in Houdini since the year dot, but now Houdini has TOPs, which makes the setup even more efficient.

  1. Decide what parameters you want to vary
  2. Create Wedge TOP nodes to create those variations
  3. If you are varying multiple parameters, then create multiple Wedge TOPs and chain them together
  4. Cache your simulation using the ROP_Geometry TOP
  5. Render each of those simulations using the ROP_Mantra TOP
  6. Composite text overlay using ROP_Composite_Output TOP
  7. Use Partition_by_Frame TOP to gather all your frames into groups which have the same frame number
  8. The Imagemagick TOP will create a montage of your frames
  9. Use the Wait_for_All TOP before encoding the output video
  10. Output a mpeg of the results using the FFMPEG_Encode_Video TOP

There are a couple of things you need before you can get started: Imagemagick and ffmpeg.
Install these as per their instructions.

OK, Let's get started setting up this workflow.

{I am following Steve Knipping's Volumes II (version 2), so you may recognise some of this}

Here is my simple Pyro setup:

I am sourcing Density, Velocity and Temperature from a deformed sphere - like so:

The Pyro simulation is equally faithful to Mr Knipping's tutorial - like so:

At this point I decided to try the Wedging, so I diverted away from the Knipping and focussed on the TOP side.

I did need to import the Pyro simulation back into SOPs, so here is my setup for that. Just a DOP_Import_Fields and a Null.

Now, we dive into TOPs

Firstly, in the top level, create a TOP network and dive inside.

Once inside TOPs, make a Wedge TOP node

I chose to wedge a couple of parameters: Disturbance Reference Scale (shown above) and Disturbance Strength.
So I created two Wedge TOPs and chained them together.
I have 4 values for Disturbance Scale and 4 values for Disturbance Strength, making 16 variations in total.
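Chained Wedge TOPs behave like a cartesian product of the value lists. A quick Python sketch of how 4 x 4 values become 16 work items; the numeric values below are placeholders, not my actual wedge values:

```python
from itertools import product

ref_scales = [0.1, 0.2, 0.4, 0.8]    # hypothetical Disturbance Reference Scale values
strengths = [1.0, 2.0, 4.0, 8.0]     # hypothetical Disturbance Strength values

# each combination gets its own wedge index, just as chained Wedge TOPs do
wedges = [
    {"wedgeindex": i, "ref_scale": s, "strength": t}
    for i, (s, t) in enumerate(product(ref_scales, strengths))
]
```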

Now to cache these simulations out to disk.

Use the ROP_Geometry_Output node to cache the simulation.
Here is the first important gotcha:
In the ROP Fetch tab, make sure to enable 'All Frames in One Batch' because this is a simulation and each frame must be calculated in order.

This is where you set the frame range of the simulation. When you render, the renderer will take each frame separately and render them (or in chunks).

The SOP path points to the output of the Pyro simulation, which is brought back into SOPs with the DOP_Import_Fields node.

The Output File uses an attribute called @wedgeindex, which is a string, so you have to use back-ticks (i.e. `@wedgeindex`) to let Houdini evaluate its value.
@wedgeindex identifies which wedge you are currently simulating or rendering. It will be the same value for each rendered or simulated geometry file for that one wedge.
So, we will end up with a lot of geometry files, but they will be named using the @wedgeindex tag.
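To make the naming concrete, here is a small sketch of the kind of path the Output File parameter expands to per wedge and per frame; the directory, prefix and extension here are made up for illustration, only the @wedgeindex idea comes from the setup above:

```python
def sim_cache_path(wedgeindex, frame, basedir="$HIP/geo"):
    # mirrors a hypothetical Output File such as
    # $HIP/geo/pyro_`@wedgeindex`.$F4.bgeo.sc
    return "{}/pyro_{}.{:04d}.bgeo.sc".format(basedir, wedgeindex, frame)
```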

Now we have the geometry files, we need to render them.
Make a ROP_Mantra_Render TOP node and connect it to the previous node.

This time you can render frames out of order or in chunks - it doesn't matter. So in the ROP Fetch tab, you can set Frames per Batch to a larger number to reduce the load on the network.
Again, we need to use the `@wedgeindex` attribute in the output file name.

I would recommend checking the output frame size because if you are making a mosaic of full frame renders, the resulting mpeg could become enormous - 16k or larger. I chose to render at 1/3 size.

OK, now we have a lot of rendered frames. It's a good idea to label them with which wedged attributes they are using. There is little point having a lovely mosaic of pyro sims if you don't know which version of the wedged attributes your favourite one refers to. I discovered on the SideFX forum that there is a very clever solution to this problem, which involves a COP network.
Don't worry, I will give you the details here.

Create an Overlay_Text TOP node.
We want to put a Python script in the Overlay tab, but to do this you have to dive into the node.

Once inside the COP network, select the Font node. This is where the Python script goes.
However, just copying the code into the text box will not work. First, convert the text box into a Python expression box by right-clicking in the text box and choosing Expression > Change Language to Python.
Now copy the Python code into the Python text box

import pdg
active_item = pdg.EvaluationContext.workItemDep()

data = active_item.data
attribs = ""
for wedge_attrib in data.stringDataArray("wedgeattribs"):
    val = data.stringDataArray(wedge_attrib)
    if not val:
        val = data.intDataArray(wedge_attrib)
    if not val:
        val = data.floatDataArray(wedge_attrib)
        # floats come back as a one-element list, e.g. [0.123456]
        val = round(float(str(val)[1:-1]), 3)
    attribs += "{} = {}\n".format(wedge_attrib, str(val))
return attribs
The text box will change to a purple colour.
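The odd-looking round(float(str(val)[1:-1]), 3) line works because floatDataArray returns a one-element list, so converting it to a string and slicing off the brackets leaves just the number. A quick sanity check of that trick in plain Python:

```python
val = [0.12345678]                  # what a one-element floatDataArray looks like
stripped = str(val)[1:-1]           # "0.12345678" - brackets sliced off
rounded = round(float(stripped), 3)
```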

Next, we need to sort the rendered frames by frame number - get all the frame 1's together, frame 2's together, etc. This is done by the Partition_by_Frame TOP node. Drop one down.
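Partitioning by frame just means grouping the rendered files on their frame number. A small sketch of the idea, with made-up file names and a simple four-digit frame convention:

```python
import re
from collections import defaultdict

# hypothetical render outputs: two wedges, two frames each
files = [
    "render_w0.0001.exr", "render_w1.0001.exr",
    "render_w0.0002.exr", "render_w1.0002.exr",
]

# group files that share a frame number, as Partition_by_Frame does
partitions = defaultdict(list)
for f in files:
    frame = re.search(r"\.(\d{4})\.exr$", f).group(1)
    partitions[frame].append(f)
```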

Follow it with an Imagemagick TOP node. Set the node to 'Montage'

Gotcha! Have a look at the Output Filename. It references a variable called $PDG_DIR, but this variable does not exist in Houdini by default; you need to create it.
So in Houdini's main menu, go to Edit > Aliases and Variables and add the following entries:

We are nearly there.
Before we can encode the mpg, we must wait for all files to be rendered by Imagemagick, so drop down a Wait_For_All TOP node.

Finally, we can encode the images into a video file. Use the FFMPEG_Encode_Video TOP node.
Again, watch out for the $PDG_TEMP and $PDG_DIR variables.
You might also need to set the FFMPEG path to the folder that contains the FFMPEG executable file. I didn't need to do this, but some people might have to if they already had FFMPEG installed before Houdini was installed.
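Under the hood this step runs an ffmpeg command along these lines. The sketch below only builds the argument list; the file names are placeholders, and the flags are standard ffmpeg options rather than exactly what the TOP emits:

```python
def ffmpeg_encode_cmd(input_pattern, output, fps=24):
    # -framerate sets the input frame rate; libx264 plus yuv420p
    # gives a widely playable H.264 file
    return ["ffmpeg", "-y",
            "-framerate", str(fps),
            "-i", input_pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            output]
```

For example, `ffmpeg_encode_cmd("montage.%04d.jpg", "wedges.mp4")` builds a command that encodes the montage sequence into one video.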

That's all there is to it!
I will attach a setup file, because something is bound to go wrong.

Saturday 4 January 2020

Old nodes for following old tutorials

As SideFX has changed a few Houdini nodes recently, here is a way to access the old nodes so you can follow those tutorials that came out before the changes. This is especially useful for tutorials that came out before Houdini version 17, when large parts of the Dynamics workflow changed.

To get the old, deprecated, nodes back, you need to use the Textport. Yes, the Textport!
You always wondered what that window was for; well, here is one use for it.

In Textport, type

opunhide

You will see a popup list of classes of nodes appear.

Pressing enter now will list every deprecated node - there are a lot of them!

Note: the list in the popup does NOT show the Dop class of nodes. That confused me for a while, but you can still access the old Dop nodes. Here's how:

In Textport, type

opunhide Dop

Now if you press enter, you will get a list of deprecated Dop nodes.

If you want to access the old sourceVolume node, for example, type this in the Textport

opunhide Dop sourcevolume

Now, in the Dynamics context, you should be able to get a sourceVolume node, the old version of the volumeSource node.

Wednesday 22 May 2019

Spare Inputs in Houdini

Here is a technique I have recently picked up from my friend Jay Natrajan.

Houdini allows 'spare inputs' into a SOP node, which lets you reference external data from inside a For Each loop.

As an example, let's say I want to scatter some points around the vertices of a grid. I want a random number of points at a random distance from each vertex.

I can create attributes on each vertex to control number of points, radius from vertex and random seed. I can then move that data to the scatter node using Spare Inputs.

Here I have got three wrangles creating data on each vertex on a grid.

f@size = fit01(rand(@ptnum), ch("size_min"), ch("size_max"));

i@number = int(fit01(rand(@ptnum), chi("min"), chi("max")));

f@seed = rand(@ptnum + ch("seed"));

The data created will be different per vertex, because of the rand(@ptnum) function

Inside the loop, on the sphere node, I reference the 'size' attribute to create a different sized sphere on each point on the original grid. To get that data from the loop node into the sphere node, use the Spare Input:

From the config 'cog' menu on the Sphere node, choose 'Add Spare Input'.
This creates a new slot in the Sphere node. Into this slot, drag in the top For Each node from the loop

You will see a purple connecting line in the network graph from the For Each node into the Sphere node. This line indicates that there is a connection into a spare input.

In the Sphere node, I want to look up the 'size' value for each vertex on the grid and use it to scale the sphere. To do this, I use the 'point' function, with the first parameter being -1, which tells Houdini to look at the first spare input.

point(-1, 1, "size", 0)

Here I am looking at the first spare input, then point number 1 in that input, then looking for the attribute called "size" and choosing the 0th component of that value. It's a scalar value, so it just fetches the value.
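To make the lookup concrete, here is a tiny pure-Python stand-in for that point() expression, with the geometry faked as a list of per-point attribute dictionaries; all the values are invented for illustration:

```python
# tiny stand-in: geometry as a list of per-point attribute dicts
grid_points = [
    {"size": 0.25, "number": 3, "seed": 0.81},
    {"size": 0.40, "number": 7, "seed": 0.17},
]

def point(geo, ptnum, attrib, component):
    # mimic point(): fetch one component of a point attribute
    value = geo[ptnum][attrib]
    return value[component] if isinstance(value, (list, tuple)) else value
```

So `point(grid_points, 1, "size", 0)` pulls the 'size' value from point number 1, just as the expression above pulls it through the spare input.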

Next, I have used a Scatter node to make some particles inside each of the spheres. The number of particles in each sphere is a random number per vertex, generated by the second of the wrangles.
Again, I created a Spare Input to fetch the data from within the For Each loop.

So, I now have a random number of particles created around each vertex of the grid.

However, you can see that the distribution of particles is repeated for each group. I want each scatter to have a random seed. I can do this with another attribute passed into the Spare Input. I don't need to make a new Spare Input, I can use the same one, but just change the point function:

So, it's possible to pull any number of attributes through a single Spare Input.

In this case "seed", "number" and "size" are all passed into the For Each loop via Spare Inputs on the Sphere and Scatter nodes.

Wednesday 1 March 2017

Distort and Undistort with PFTrack and Nuke

Here is a reliable way to produce STmaps from PFTrack for use in Nuke when undistorting and re-distorting a plate.

This is the method outlined by Dan Newlands on his excellent blog Visual Barn. There you can find many tutorials and methods for professional Matchmoving.

Dan Newlands walks you through the whole process very clearly and I cannot really add anything much to what he shows you. I want to show my setup in PFTrack and Nuke for my own reference.

Here I show the Add Distortion node. Use the 'original clip size' option. The three export nodes are for the undistorted plate (for use in your 3D app), the undistortion STmap and the re-distortion STmap.
The undistorted plate and the undistortion STmap will have a different size to the original plate, but the re-distortion STmap will have the same resolution as the original plate.

Here is the Nuke setup. The first STmap node will undistort the plate. The second STmap node will re-distort the plate to the original state.
Any 3D rendered elements can be introduced between these two nodes and they will be re-distorted to match the original plate. If that is your workflow, then use the undistorted plate exported from PFTrack in your 3D package and match your 3D elements against that.
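An STmap is just a per-pixel lookup table: each pixel stores the normalised (u, v) coordinate to sample from the source image. Here is a toy nearest-neighbour version in Python on a 2x2 "plate"; the values are invented, and a real STmap node also filters between pixels rather than snapping to the nearest one:

```python
# a 2x2 "plate" and an STmap of normalised UVs (illustrative values)
plate = [[10, 20],
         [30, 40]]

stmap = [[(0.0, 0.0), (1.0, 0.0)],    # each entry: (u, v) to sample from
         [(1.0, 1.0), (0.0, 1.0)]]

def apply_stmap(img, st):
    h, w = len(img), len(img[0])
    out = []
    for row in st:
        out_row = []
        for u, v in row:
            x = min(int(u * (w - 1) + 0.5), w - 1)   # nearest-neighbour sample
            y = min(int(v * (h - 1) + 0.5), h - 1)
            out_row.append(img[y][x])
        out.append(out_row)
    return out
```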

One thing to note: if you see blocky or tearing artifacts in Nuke it may be that the filtering option in the STmap node is set incorrectly. I have found that 'cubic' filtering seems to work well, although resulting in some softness in the final redistorted image.