Wednesday, 22 May 2019

Spare Inputs in Houdini

Here is a technique I have recently picked up from my friend Jay Natrajan.

Houdini allows 'spare inputs' on a SOP node, which let you reference external data from inside a For Each loop.

For example, let's say I want to scatter some points around the vertices of a grid. I want a random number of points at a random distance from each vertex.



I can create attributes on each vertex to control the number of points, the radius from the vertex, and a random seed. I can then pass that data to the nodes inside the loop using Spare Inputs.


Here I have three wrangles creating data on each vertex of the grid.

f@size = fit01(rand(@ptnum), ch("size_min"), ch("size_max"));

i@number = int(fit01(rand(@ptnum), chi("min"), chi("max")));

f@seed = rand(@ptnum + ch("seed"));

The data will be different for each vertex, because every value is driven by the rand(@ptnum) function.

Inside the loop, on the Sphere node, I reference the 'size' attribute to create a different-sized sphere at each point of the original grid. To get that data from the For Each node into the Sphere node, I use a Spare Input:


From the 'cog' configuration menu on the Sphere node's parameters, choose 'Add Spare Input'.
This creates a new slot on the Sphere node. Into this slot, drag the top For Each node of the loop.
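If you prefer to type rather than drag, the spare input parameter simply holds a relative node path; assuming the default node names of a For-Each block, it would read something like:

../foreach_begin1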


You will see a purple connecting line in the network graph from the For Each node into the Sphere node. This line indicates that there is a connection into a spare input.

In the Sphere node, I want to look up the 'size' value for each vertex on the grid and use it to scale the sphere. To do this, I use the 'point' function with -1 as the first parameter, which tells Houdini to look at the first spare input.

point(-1, 1, "size", 0)

Here I am reading from the first spare input (-1), point number 1 in that input, the attribute called "size", and component 0 of that attribute. Since "size" is a scalar, component 0 is simply its value.
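As a reminder, the general form of the expression function is:

point(surface_node, point_number, attribute, index)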


Next, I have used a Scatter node to make some particles inside each of the spheres. The number of particles in each sphere is a random number per vertex, generated by the second of the wrangles.
Again, I created a Spare Input to fetch the data from within the For Each loop.
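The expression follows the same pattern as the one on the Sphere node, just reading the "number" attribute instead. In a setup like this it would drive the Scatter node's point count (I am assuming the Force Total Count parameter here; the exact parameter name may vary with Houdini version):

point(-1, 1, "number", 0)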





So, I now have a random number of particles created around each vertex of the grid.


However, you can see that the distribution of particles is repeated for each group. I want each scatter to have a different random seed. I can do this with another attribute passed through the Spare Input. I don't need to make a new Spare Input; I can use the same one and just change the point function:
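For example, assuming the seed is wired into the Scatter node's Global Seed parameter, the same spare input can be read with just the attribute name swapped:

point(-1, 1, "seed", 0)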


So, it's possible to pull any number of attributes through a single Spare Input.

In this case "seed", "number" and "size" are all passed into the For Each loop via Spare Inputs on the Sphere and Scatter nodes.

Thursday, 4 May 2017

Random Colour for Houdini Crowd Agents

I am working on a stadium shot using Houdini.

I have found a way of assigning random colours to the crowd agents.



1. Create a shader for your agents, called variation.

2. Create an attribute either on the points from which your agents are generated or on the crowd_sim_import node, which is the node that reads back the simulated agents (a minimal example wrangle is sketched at the end of this post).





3. Create a stylesheet override, so that your agents pick up the variation shader.



4. In the variation shader, create an Inline Code node.



The Inline Code node creates the link between the packed primitive and the shader. The renderer looks up the attribute and passes its value into a new variable inside the shader, and you can then use that variable to alter the look of the shader. In my case I just used a rand node to apply a random colour and piped that straight into the shader output without any lighting.
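For reference, a minimal version of the wrangle from step 2 might look like this. I am assuming the attribute is called variation to match the shader name; use whatever name your stylesheet and shader actually expect:

// Point wrangle, run either on the agent source points or on crowd_sim_import.
// Gives each agent one stable random value, seeded from its point number.
f@variation = rand(@ptnum);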






Wednesday, 1 March 2017

Distort and Undistort with PFTrack and Nuke

Here is a reliable way to produce STmaps from PFTrack for use in Nuke when undistorting and re-distorting a plate.

This is the method outlined by Dan Newlands on his excellent blog Visual Barn. There you can find many tutorials and methods for professional Matchmoving.

http://www.visual-barn.com/updated-lens-distortion-workflow/

Dan Newlands walks you through the whole process very clearly and I cannot really add much to what he shows you. I just want to record my setup in PFTrack and Nuke for my own reference.


Here I show the Add Distortion node. Use the 'original clip size' option. The three export nodes are for the undistorted plate (for use in your 3D app), the undistortion STmap and the re-distortion STmap.
The undistorted plate and the undistortion STmap will have a different size to the original plate, but the re-distortion STmap will have the same resolution as the original plate.


Here is the Nuke setup. The first STmap node will undistort the plate. The second STmap node will re-distort the plate to the original state.
Any 3D rendered elements can be introduced between these two nodes and they will be re-distorted to match the original plate. If that is your workflow, then use the undistorted plate exported from PFTrack in your 3D package and match your 3D elements against that.

One thing to note: if you see blocky or tearing artifacts in Nuke, it may be that the filtering option in the STmap node is set incorrectly. I have found that 'cubic' filtering works well, although it results in some softness in the final re-distorted image.

Thursday, 8 December 2016

2017 Show Reels

I am delighted to post my new show reels for 2017.

Here is my Houdini and Maya FX reel, which shows many disciplines including Houdini crowds, Maya nCloth, nParticles, nHair, and fluids. Rendering is done either in Mental Ray or Arnold, and camera tracking is done with either 3DEqualizer or PFTrack.






Here is my Matchmoving reel, which shows some of the most difficult shots I have tackled. All work shown in these shots was done using PFTrack. I am also familiar with 3DEqualizer and Nuke's built-in tracker.


Thursday, 8 September 2016

Align Particle Instances to a Camera

Here I will demonstrate a method of aligning a particle to face a location (e.g. a camera), which mimics the behaviour of particle sprites.

This is quite easy to do. First get the position of each particle, then get the position of the location you want the particles to point to. From this you can calculate the aim direction thus:

aimPP = targetPosition - position



To get this done in Maya, follow these steps:

1.    On your particle object, create a new per-particle vector attribute called aimPP.

2.    Create an expression on the particle:

//
// CREATION EXPRESSION
//
float $targetX = targetLocation.translateX;
float $targetY = targetLocation.translateY;
float $targetZ = targetLocation.translateZ;


vector $targetPosition = <<$targetX, $targetY, $targetZ>>;

aimPP = $targetPosition - position;

spriteTwistPP = rand(-0.25, 0.25);



//
// RUNTIME BEFORE DYNAMICS EXPRESSION
//
float $targetX = targetLocation.translateX;
float $targetY = targetLocation.translateY;
float $targetZ = targetLocation.translateZ;
vector $targetPosition = <<$targetX, $targetY, $targetZ>>;

aimPP = $targetPosition - position;

aimUpAxisPP = << 0, sin(frame*spriteTwistPP), cos(frame*spriteTwistPP) >>;


Now, I was expecting

vector $targetPosition = targetLocation.translate;

to work, but it does not (I suspect this is because .translate is a compound attribute, which Maya's expression language will not read directly; you have to go through the individual .translateX/Y/Z children). So I have taken each of the components and constructed the vector from those. Not very elegant, but it does the job. If anyone knows for certain why the vector cannot be assigned, please do let me know!

3.    On the particle shape, set the Aim Direction in the Instancer (Geometry Replacement) section to the aimPP attribute.


If you want the particle instances to rotate as well as face the camera, use the spriteTwistPP attribute and create a new per-particle vector attribute called aimUpAxisPP.

Create a random value for spriteTwistPP in the creation expression, then, in the runtime before dynamics expression, add the line:

aimUpAxisPP = << 0, sin(frame*spriteTwistPP), cos(frame*spriteTwistPP) >>;



Now, for the Aim Up Axis in the Instancer options, choose aimUpAxisPP.

Now you can instance any geometry to the particle object and the instances will behave like sprite particles. You can render them with Arnold, too!



Tuesday, 6 September 2016

Maya Sprites with Arnold

I am doing some testing of Maya sprites rendered with Arnold.

Here is a test scene:

SpriteTest_v001

At the moment the setup is not working. I get the same sprite image (number 1) on all the sprites, and the sprites are all oriented the same. I think the problem is that the spriteTwistPP and spriteNumPP attributes are not being passed to the renderer.


After contacting Solid Angle support, it seems that this workflow is not currently supported, but their developers are 'looking at it', which probably means that they will fix it quite quickly.

I will update this post when I have more information.


Wednesday, 2 December 2015

Arnold aiMotionVector with transparency

I will show one method of rendering motion vectors for an object which has transparency.

Here is the scenario:

I have some snowflakes, which are simple polygon meshes instanced onto particles.
I have a circular ramp with noise to give the snowflakes a feathered edge.



What I want to achieve is to render the beauty in one pass and then a motion vector pass. The motion vectors must have the same opacity as in the beauty pass.

Here is the trick: In the ramp which controls the opacity, replace the white colour with an aiMotionVector node.

Set the output to Raw in the aiMotionVector node.



Here is the shader network.


Apply this shader to the snowflakes in a separate render layer. This will be the motion vector pass.


For the motion vector pass, enable Motion Blur in Arnold's render settings.



This will give each snowflake an RGB value.

 

However there is a problem. We do not want the snowflakes to be motion blurred, we just want them to show the motion vectors.

To stop each snowflake being rendered with motion blur, enable the Ignore Motion Blur option in the Override tab of the Arnold render settings.



That will give snowflakes with opacity and motion vector information in RGB.



One more problem remains - the position of the particle at the time when motion blur is calculated is not the same as the position of the snowflake when the beauty pass is rendered. To fix this, select Start on Frame in the Motion Blur options in the Arnold render settings.



Now, into Nuke.

Load the rendered beauty pass and the motion vector pass. Each snowflake should overlap perfectly (if not, check the Start on Frame option).


Combine the two renders using a ShuffleCopy node:
shuffle R -> u
shuffle G -> v



Now use a VectorBlur node to produce the motion blur effect.