Thursday, 8 December 2016

2016 Show Reels

I am delighted to post my new show reels for 2016.

Firstly, here is my Maya FX reel, which shows many disciplines including nCloth, nParticles, nHair, and Maya fluids. Rendering is done either in Mental Ray or Arnold, and camera tracking is done with PFTrack.





Secondly, I have a Match Move reel, which shows some of the most difficult shots I have tackled. All work is done using PFTrack (although I am also familiar with Boujou, 3DEqualizer and Nuke's tracker).



Thursday, 8 September 2016

Align Particle Instances to a Camera

Here I will demonstrate a method of aligning particles to face a location (e.g. a camera), which mimics the behaviour of particle sprites.

This is quite easy to do. First get the position of each particle, then get the position of the location you want the particles to point to. From this you can calculate the aim direction thus:

aimPP = targetPosition - position
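The subtraction really is all there is to it. As a quick plain-Python sketch (no Maya required; the function name is just for illustration):

```python
# Per-particle aim vector: the target position minus the particle position.
def aim_direction(particle_pos, target_pos):
    """Return the vector pointing from a particle towards the target."""
    return tuple(t - p for p, t in zip(particle_pos, target_pos))

# A particle at the origin aiming at a camera at (0, 5, 10):
print(aim_direction((0.0, 0.0, 0.0), (0.0, 5.0, 10.0)))  # (0.0, 5.0, 10.0)
```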



To get this done in Maya, follow these steps:

1.    On your particle object, create a new per-particle vector attribute called aimPP.

2.    Create an expression on the particle:

//
// CREATION EXPRESSION
//
float $targetX = targetLocation.translateX;
float $targetY = targetLocation.translateY;
float $targetZ = targetLocation.translateZ;


vector $targetPosition = <<$targetX, $targetY, $targetZ>>;

aimPP = $targetPosition - position;

spriteTwistPP = rand(-0.25, 0.25);



//
// RUNTIME BEFORE DYNAMICS EXPRESSION
//
float $targetX = targetLocation.translateX;
float $targetY = targetLocation.translateY;
float $targetZ = targetLocation.translateZ;
vector $targetPosition = <<$targetX, $targetY, $targetZ>>;

aimPP = $targetPosition - position;

aimUpAxisPP = << 0, sin(frame*spriteTwistPP), cos(frame*spriteTwistPP) >>;


Now, I was expecting

vector $targetPosition = targetLocation.translate;

to work, but for some reason I cannot think of, it does not, so I have taken each of the components and constructed the vector from those. Not very elegant, but it does the job. If anyone knows why the vector cannot be assigned, please do let me know!

3.    On the particle shape, set the Aim Direction in the Instancer (Geometry Replacement) section to the aimPP attribute.


If you want the particle instances to rotate as well as face the camera, use the spriteTwistPP attribute and then create a new per-particle vector attribute called aimUpAxisPP.

Create a random value for spriteTwistPP in the creation expression, then in the runtime before dynamics expression, add the line

aimUpAxisPP = << 0, sin(frame*spriteTwistPP), cos(frame*spriteTwistPP) >>;



Now, for the Aim Up Axis in the Instancer options, choose aimUpAxisPP.

Now you can instance any geometry to the particle object and the instances will behave like sprite particles. You can render them with Arnold, too!
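To see roughly what the instancer does with aimPP and aimUpAxisPP, here is a plain-Python sketch of building an orthonormal frame from an aim direction and an up axis (illustrative only; this is not Maya's internal code):

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def instance_frame(aim, up):
    """Orthonormal frame derived from an aim direction and an up axis."""
    z = normalize(aim)            # aim axis (points at the target)
    x = normalize(cross(up, z))   # side axis, perpendicular to both
    y = cross(z, x)               # corrected up axis
    return x, y, z

# Aiming down +Z with a +Y up axis gives the identity frame:
print(instance_frame((0.0, 0.0, 1.0), (0.0, 1.0, 0.0)))
```

Rotating the up axis over time (the sin/cos twist in the runtime expression) spins each instance around its own aim axis, which is exactly the sprite-twist behaviour.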



Tuesday, 6 September 2016

Maya Sprites with Arnold

I am doing some testing of Maya Sprites rendered with Arnold.

Here is a test scene:

SpriteTest_v001

At the moment the setup is not working. I get the same sprite image (number 1) on all the sprites, and the sprites are all oriented the same. I think the problem is that the spriteTwistPP and spriteNumPP attributes are not being passed to the renderer.


After contacting Solid Angle support, it seems that this workflow is not currently supported, but their developers are 'looking at it', which probably means that they will fix it quite quickly.

I will update this post when I have more information.


Wednesday, 2 December 2015

Arnold aiMotionVector with transparency

I will show one method of rendering motion vectors for an object which has transparency.

Here is the scenario:

I have some snowflakes which are comprised of simple polygon meshes instanced to some particles.
I have a circular ramp with noise to give the snowflakes a feathered edge.



What I want to achieve is to render the beauty in one pass and then a motion vectors pass. The motion vectors must have the same opacity as in the beauty pass.

Here is the trick: In the ramp which controls the opacity, replace the white colour with an aiMotionVector node.

Set the output to Raw in the aiMotionVector node.



Here is the shader network.


Apply this shader to the snowflakes in a separate render layer. This will be the motion vector pass.


For the motion vector pass, enable Motion Blur in Arnold's render settings.



This will give each snowflake an RGB value.

 

However, there is a problem. We do not want the snowflakes to be motion blurred, we just want them to show the motion vectors.

To stop each snowflake being rendered with motion blur, click the Ignore Motion Blur option in the Override tab of the Arnold render settings.



That will give snowflakes with opacity and motion vector information in RGB.



One more problem remains - the position of the particle at the time when motion blur is calculated is not the same as the position of the snowflake when the beauty pass is rendered. To fix this, select Start on Frame in the Motion Blur options in the Arnold render settings.



Now, into Nuke.

Load the rendered beauty pass and the motion vector pass. Each snowflake should overlap perfectly (if not, check the Start on Frame option).


Combine the two renders by using a ShuffleCopy node:
Shuffle R -> u
Shuffle G -> v



Now use a VectorBlur node to produce the motion blur effect.
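Conceptually, VectorBlur smears each pixel along its (u, v) motion vector. Here is a toy plain-Python sketch of the idea (made-up values, and nothing like Nuke's actual filtering):

```python
# Toy vector blur: average each pixel with samples taken along its
# per-pixel motion vector. 'image' is a 2D list of greyscale values,
# 'vectors' a matching 2D list of (u, v) tuples.
def vector_smear(image, vectors, samples=4):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            u, v = vectors[y][x]
            acc = 0.0
            for s in range(samples):
                t = s / float(samples)
                # Sample along the vector, clamped to the image bounds.
                sx = min(w - 1, max(0, int(round(x + u * t))))
                sy = min(h - 1, max(0, int(round(y + v * t))))
                acc += image[sy][sx]
            out[y][x] = acc / samples
    return out

# A bright pixel with a motion vector gets smeared into its neighbour:
print(vector_smear([[1.0, 0.0]], [[(2, 0), (0, 0)]], samples=2))  # [[0.5, 0.0]]
```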



Thursday, 19 November 2015

High Resolution Cloth Simulations

Here I will describe the method I have been using to create high resolution cloth simulations, based upon low resolution pre-vis cloth. This method is derived from the work of David Knight - thanks David!

1. First, create a medium resolution poly mesh which we will use to create the low res sim.



In my example I have created a mesh with 80 x 160 faces. The proportions of the mesh MUST match the ratio of faces (i.e. 2:1 in my case), because nCloth works better with square faces.

2. Duplicate the mesh. We will use this second mesh to 'pull' the cloth around the scene. Select around 5% of the faces from the leading edge of the Puller mesh. Invert the selection and delete the other faces. We should be left with a narrow strip of faces which exactly overlap the leading edge of the cloth.
 These are the faces we want to keep


3.  Make the original mesh a nCloth.

4. Select the vertices of the nCloth that correspond to the Puller object and then shift select the Puller mesh and create a Point to Surface nConstraint.


5. We want the nConstraint to have low strength, so that the puller gently guides the cloth through the scene. I have used values:
Strength = 0.05
Tangent Strength = 0.05

6. We want to animate the Puller mesh now. You can attach it to a motion path or just use keyframes; it doesn't really matter. Remember that the faster the cloth moves through the scene, the more sub-frame samples you will need to keep the cloth behaving nicely. If the motion is too jerky the cloth will go crazy. Keep the animation as smooth as you can.

7. Add some noise to the cloth. No cloth will behave perfectly in reality, so add some noise to your simulation. One way to do this is to add a texture deformer to the Puller mesh.

8. Select the Puller mesh. Create a Texture Deformer. Set the deformer's Direction to Normal. In the Texture slot, assign a Noise texture.

9. We don't want to have the Texture Deformer to act on the Puller mesh at the start of the simulation, but rather have it gradually ramp up to full strength over, say, 25 frames. To do this, key the Envelope attribute on the Texture Deformer.


10. Set the Texture Deformer's Offset to be half of its strength, but in the opposite direction. This will keep the Puller mesh 'centered'. To do this, apply an expression:

textureDeformer1.offset=textureDeformer1.strength*-0.5
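To see why the -0.5 factor keeps the mesh centred: a Noise texture outputs values of roughly 0 to 1, so the average push along the normal is half the strength, and the negative offset cancels that average. A quick plain-Python sketch (assuming a 0..1 noise range):

```python
# The texture deformer pushes each point along its normal by
# strength * noise + offset, with noise roughly in the 0..1 range.
def displacement(noise_value, strength):
    offset = strength * -0.5          # the expression from the post
    return strength * noise_value + offset

# A mid-grey noise value (0.5) now gives zero net displacement, so the
# Puller mesh ripples around its original position instead of drifting:
print(displacement(0.5, 2.0))   # 0.0
print(displacement(1.0, 2.0))   # 1.0
print(displacement(0.0, 2.0))   # -1.0
```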

11. Set an expression in the noise texture's Time attribute:

noise1.time=time

This will make the noise texture flow over time.

12. Add some wind, gravity or other forces if you like. Now simulate!

13. Now we have a low resolution mesh. We need to make a high resolution version, but with extra details.


14. First thing to do is to apply a Smooth to the low res mesh. Mesh > Smooth, and give it 2 divisions. In my example, this gives me a mesh with ~200,000 faces.

15. Export this mesh as an alembic cache. Pipeline Cache > Export Selection to Alembic. This is quite slow! Save your scene as LowRes.

16. I recommend that you do the next steps in a fresh scene. Not only will this be faster, lighter and easier to organise, but it will also be much easier to go back to the Low Res scene at any time and re-export any changes you need to make. Once re-exported, you can very easily re-import the Alembic cache file in the High Res scene, without any fuss.

17. In a new scene, import the Alembic cache. Duplicate it. Make the duplicate a nCloth object.

18. Constrain the nCloth to the Alembic mesh. Select the cloth, then the Alembic mesh, and then create an Attract to Matching Mesh nConstraint.

19. Again, we want the constraint to 'guide' the cloth rather than drag it too strongly. Here are the settings I use, but, of course, it will depend on the scene scale and what you want the cloth to do.


Notice the Strength Drop Off ramp. This allows the cloth to move freely when it is near to the Alembic guide, but the constraint kicks in as the cloth moves away from the guide.

20. Now simulate this High Resolution cloth. Hopefully you will see that it follows the Alembic guide quite closely, but also has some extra details. I have not changed any nCloth attributes apart from the self-collision width. All the motion comes from the constraints.


Here is one I made earlier:


High Resolution nCloth test from Daniel Sidi on Vimeo.

Monday, 18 May 2015

Blending nCloth caches using Blendshapes


With many thanks to David Knight, nCloth guru, I present his method for blending two nCloth caches on a per vertex basis. You can have one half of a nCloth following one cache and the other half following a different cache.



1. Create two simulations of your cloth. Use a copy of the mesh for each sim. If the meshes do not match exactly (same number of vertices), this method of blending will not work.

In my example I have one wide simulation and one which is narrow.


2. Cache your simulations.

3. Make another copy of the mesh and label it 'blendMesh'.

4. Select the two nCloth meshes and finally shift-select blendMesh. Create a Blend Shape deformer (Create Deformers > Blend Shape)

5. In the Blend Shape attributes, set the weights for each input to 1.0



6. Assign weights per vertex. To do this, open the Paint Blend Weights Tool (in the Edit Deformers menu). Do not manually paint blend weights, because the sum of the blend weights on each vertex must equal 1.0, and painting will not allow fine control. You can edit blend weights per vertex manually in the Component Editor, but it is also possible to use an image to set the weights.

7. I have created some ramps in Photoshop and saved them as TIF files. First I created the blendMap_H ramp, then I inverted the image (Ctrl-I), which subtracts the value of each pixel from 1.0. That inverted image becomes blendMap_H_inverted. This ensures that when the two ramps are added together the result will equal 1.0.
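The inversion trick can be sketched in plain Python (8-bit pixel values assumed):

```python
# Inverting an 8-bit weight map (Photoshop's Ctrl-I) replaces each pixel
# value p with 255 - p, so the two maps' normalised weights always sum
# to exactly 1.0 -- which is what per-vertex blend weights require.
def invert_weight_map(pixels):
    """Return the inverted 8-bit weight map."""
    return [255 - p for p in pixels]

ramp = [0, 64, 128, 192, 255]          # a simple horizontal ramp
inverted = invert_weight_map(ramp)
print(inverted)                        # [255, 191, 127, 63, 0]

# Normalised weights of corresponding pixels sum to 1.0:
print([(a + b) / 255.0 for a, b in zip(ramp, inverted)])  # [1.0, 1.0, 1.0, 1.0, 1.0]
```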



I followed the same procedure to create the vertical ramps. The version of the ramp you need to use will depend on the orientation of your simulations. It's useful to have any combination of ramps saved in a library.

8. Apply the blendMap ramp to the Blend Shape deformer. Choose one of the Targets on the Blend Shape node and then under the Attribute Maps section, press Import and browse to where the blendMap ramps are stored.



Once the blendMap is assigned to the first target, choose the second target and assign the inverted blendMap to it.

That's it. You should now have a mesh in which one end follows one cache and the other end follows a different cache.




Thursday, 7 May 2015

nCloth Matching Mesh Constraint

If you want a high resolution nCloth, it can be very slow to simulate. One method is to generate a low resolution nCloth to produce the large scale movement that you require and then simulate a high resolution nCloth which follows the low resolution mesh on the large scale but will display small scale details of its own.

Here is one way to set up this system.

  1. Create a low resolution nCloth and simulate the large-scale motion. I will call that low resolution nCloth mesh "cloth_L0"
  2. Cache cloth_L0
  3. Smooth cloth_L0 using Mesh > Smooth. Be careful of having too many divisions, as very high subdivision levels will significantly slow down the simulation. I usually choose 1 to start with and then repeat the process if I need more detail.
  4. Export the smoothed cloth_L0 as Alembic using Pipeline Cache > Alembic Cache > Export Selected to Alembic. If you want to preserve UVs, remember to tick the check box in the options box.
  5. Import the Alembic file back into your scene. Rename that imported mesh as "Alembic_Import_L1"
  6. Duplicate Alembic_Import_L1. Rename the duplicate "cloth_L1"
  7. Create an nCloth from cloth_L1
  8. Select cloth_L1 and shift-select Alembic_Import_L1, then create an Attract to Matching Mesh constraint using nConstraint > Attract to Matching Mesh 
  9. In the constraint, choose a Dropoff Distance that makes sense in your scene. You want cloth_L1 to be able to deviate just enough from Alembic_Import_L1 to add some good detail, but not so much that it no longer follows the large scale motion of the original simulation.
  10. In the Strength Dropoff ramp, create a profile that has a value of 0 on the left and 1 on the right. An exponential curve will work well.
  11. Tune the forces acting on cloth_L1 to give a variation over the movement of cloth_L0.
You should now have a high resolution nCloth which follows a low resolution cloth but has extra details. This process can be applied any number of times, depending on the power of your workstation.
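The Strength Dropoff ramp from step 10 can be thought of as a function of normalised distance: 0 while the cloth is close to the guide, rising towards 1 at the Dropoff Distance. A plain-Python sketch of such a profile (the exponent is a made-up value):

```python
# Exponential-style dropoff profile: near the guide the cloth moves
# freely (strength ~ 0); at the Dropoff Distance the constraint is at
# full strength (1.0).
def dropoff_strength(distance, dropoff_distance, exponent=3.0):
    """Constraint strength as a function of distance from the guide mesh."""
    t = min(max(distance / dropoff_distance, 0.0), 1.0)
    return t ** exponent

print(dropoff_strength(0.0, 1.0))   # free to add detail
print(dropoff_strength(0.5, 1.0))   # gentle pull
print(dropoff_strength(1.0, 1.0))   # pulled firmly back to the guide
```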


In my example above I have chosen to use a division level of 2 because the original mesh was so low resolution I knew I would require quite a lot more resolution to get more detail.