tag:blogger.com,1999:blog-8233827650289023352024-02-07T22:35:05.715-08:00Particle Effects: Here I will present some useful tips for Crowd Artists and VFX artists who use particles in their work. It will be mostly Houdini, but some Maya in the older posts.
Please feel free to comment or correct as appropriate.
Unknownnoreply@blogger.comBlogger33125tag:blogger.com,1999:blog-823382765028902335.post-74182783417435717452023-05-02T07:07:00.006-07:002023-05-02T07:07:46.163-07:00Side project: Python UI for texture selection in both Maya and Houdini<p>I am happy to present a few trials of a side project.<br /></p><p>Inspired by <a href="https://www.postoffice.nl/services/crowd-stadium/" target="_blank">Postoffice (Amsterdam) Stadium and Crowd tool</a>, I decided to start development of my own system.</p><p>I started with a very basic premise: to be able to select any team, from any football league, from any country and assign that choice to the home and away team. This would then be reflected in the texture of the shirts worn by the crowd.</p><p>In this way, I would be able to customise the crowd to any combination. Useful for broadcasters who need to quickly swap teams in virtual stadiums.</p><p>This UI was created in Python 2<br /></p><p><br /></p>
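<p>At its core the tool is just a lookup from a (league, team) choice to a shirt texture, assigned separately for the home and away crowd. A minimal, Houdini-agnostic sketch of that idea (all league, team and texture names here are hypothetical placeholders):</p>

```python
# Hypothetical lookup table: (league, team) -> shirt texture path.
TEXTURES = {
    ("Premier League", "Arsenal"): "shirts/premier/arsenal.png",
    ("Premier League", "Chelsea"): "shirts/premier/chelsea.png",
    ("Eredivisie", "Ajax"): "shirts/eredivisie/ajax.png",
}

def assign_shirts(home, away):
    """Map the home and away (league, team) choices to texture paths."""
    return {
        "home_shirt": TEXTURES[home],
        "away_shirt": TEXTURES[away],
    }

print(assign_shirts(("Premier League", "Arsenal"), ("Eredivisie", "Ajax")))
```

<p>In the real tool, the returned paths would drive the texture parameter of the crowd agents' shirt material.</p>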
<iframe allow="autoplay; fullscreen; picture-in-picture" allowfullscreen="" frameborder="0" height="360" src="https://player.vimeo.com/video/449749439?h=6a5627e06f" width="640"></iframe>
<p><a href="https://vimeo.com/449749439">Team shirt selector</a> from <a href="https://vimeo.com/user11558205">Daniel Sidi</a> on <a href="https://vimeo.com">Vimeo</a>.</p>
<iframe allow="autoplay; fullscreen; picture-in-picture" allowfullscreen="" frameborder="0" height="360" src="https://player.vimeo.com/video/450132633?h=4c4630bd38" width="640"></iframe>
<p><a href="https://vimeo.com/450132633">Team Selector tool in Houdini</a> from <a href="https://vimeo.com/user11558205">Daniel Sidi</a> on <a href="https://vimeo.com">Vimeo</a>.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-71364129695799102382023-03-28T08:29:00.005-07:002023-03-28T08:29:38.703-07:00How to create a pointcloud from any joint of crowd of agents in Houdini<p> </p><p>Here is a method I have learned to create a pointcloud from any joint of crowd of agents in Houdini.</p><p>For
example, an army of soldiers, carrying guns. The FX department require
the location and orientation of the end of each gun barrel. We can
deliver a pointcloud which has that data.</p><p>Here are the steps<br /></p><p>First, take your crowd....</p><div style="text-align: left;"><span style="font-family: courier;">// 1. get joint name</span></div><div style="text-align: left;"><span style="font-family: courier;">string JOINT_NAME = "r_handJA_JNT";</span></div><div style="text-align: left;"><span style="font-family: courier;"> </span></div><div style="text-align: left;"><span style="font-family: courier;">// 2. get joint index</span></div><div style="text-align: left;"><span style="font-family: courier;">int JOINT_IDX = agentrigfind(0, @ptnum, JOINT_NAME);</span></div><div style="text-align: left;"><span style="font-family: courier;"> </span></div><div style="text-align: left;"><span style="font-family: courier;">// 3. get position of agent</span></div><div style="text-align: left;"><span style="font-family: courier;">matrix AGENT_XFORM = primintrinsic(0, "packedfulltransform", @ptnum);</span></div><div style="text-align: left;"><span style="font-family: courier;"> </span></div><div style="text-align: left;"><span style="font-family: courier;">// 4. get position of joint within agent</span></div><div style="text-align: left;"><span style="font-family: courier;">matrix JOINT_XFORM = agentworldtransform(0, @ptnum, JOINT_IDX);</span></div><div style="text-align: left;"><span style="font-family: courier;"> </span></div><div style="text-align: left;"><span style="font-family: courier;">// 5. set offset to end of the gun barrel (enter manually)</span></div><div style="text-align: left;"><span style="font-family: courier;">vector POS = chv("offset");</span></div><div style="text-align: left;"><span style="font-family: courier;"> </span></div><div style="text-align: left;"><span style="font-family: courier;">// 6. 
transform by JOINT_XFORM </span></div><div style="text-align: left;"><span style="font-family: courier;">POS *= JOINT_XFORM;</span></div><div style="text-align: left;"><span style="font-family: courier;"><br /></span></div><div style="text-align: left;"><span style="font-family: courier;">// 7. transform by AGENT_XFORM</span></div><div style="text-align: left;"><span style="font-family: courier;">POS *= AGENT_XFORM;</span></div><div style="text-align: left;"><span style="font-family: courier;"><br /></span></div><div style="text-align: left;"><span style="font-family: courier;">// 8. set initial direction along the gun (+x direction)<br /></span></div><div style="text-align: left;"><span style="font-family: courier;">vector DIR = set(1, 0, 0);</span></div><div style="text-align: left;"><span style="font-family: courier;"><br /></span></div><div style="text-align: left;"><span style="font-family: courier;">// 9. transform by rotation component of JOINT_XFORM</span></div><div style="text-align: left;"><span style="font-family: courier;">DIR *= matrix3(JOINT_XFORM);</span></div><div style="text-align: left;"><span style="font-family: courier;"><br /></span></div><div style="text-align: left;"><span style="font-family: courier;">// 10. transform by the rotation component of AGENT_XFORM</span></div><div style="text-align: left;"><span style="font-family: courier;">DIR *= matrix3(AGENT_XFORM);</span></div><div style="text-align: left;"><span style="font-family: courier;"><br /></span></div><div style="text-align: left;"><span style="font-family: courier;">// 11. make a new point</span></div><div style="text-align: left;"><span style="font-family: courier;">int newPoint = addpoint(0, POS);</span></div><div style="text-align: left;"><span style="font-family: courier;"><br /></span></div><div style="text-align: left;"><span style="font-family: courier;">// 12. 
set DIR on new point</span></div><div style="text-align: left;"><span style="font-family: courier;">setpointattrib(0, "DIR", newPoint, DIR);</span></div><div style="text-align: left;"><span style="font-family: courier;"><br /></span></div><div style="text-align: left;"><span style="font-family: courier;">// 13. delete the agent</span></div><div style="text-align: left;"><span style="font-family: courier;">removepoint(0, @ptnum, 1);<br /></span></div><p><br /></p><p>then export this pointcloud.<br /></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-6274701443309464712020-07-21T08:25:00.006-07:002020-07-21T08:46:49.490-07:00Convert images to Pixar .tex format using handy right-click in WindowsAfter playing a little with Pixar Renderman, I found it laborious to convert images to the .tex format preferred by the renderer.<br />
<br />
Here is a super simple one line script to make it as easy as possible to convert image files.<br />
<br />
This is a Windows only hack, using the right-click context menu in Windows Explorer.<br />
<br />
Open the 'sendTo' folder. Don't know where that is? Don't worry, you don't have to.<br />
Just press '<b>windows-R</b>' to get the launcher and type '<b>shell:sendto</b>'<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9L7LnSucwrZIQYnGbhj86uW3HhcqLXriOY9NU7bfx8-2vdteloQ3pcasGBCtWG3m9ChspZx31dW140LV4i7SnmPvHqYQfTASZU_MU1iYvIYKQR9AV-iTUyoi7JEM8B0YrAhGLTHC4jsZF/s1600/Annotation+2020-07-21+160212.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="209" data-original-width="398" height="210" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9L7LnSucwrZIQYnGbhj86uW3HhcqLXriOY9NU7bfx8-2vdteloQ3pcasGBCtWG3m9ChspZx31dW140LV4i7SnmPvHqYQfTASZU_MU1iYvIYKQR9AV-iTUyoi7JEM8B0YrAhGLTHC4jsZF/w400-h210/Annotation+2020-07-21+160212.png" width="400" /></a></div>
<br />
You should see your sendTo folder. Something like this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIPioAlWaBICGDVee1zK2r65tqAFLNyLs4yCSdX6j8RILPaaalA6k6t6zzO_Ae3fD7mN1edLUFO7SsOmRssjzHWBCC3-_xuMsTy8kZNBTp6z3LLqFNhlWd9-q9JWmb063eIVRyvRrHi8m8/s1600/Annotation+2020-07-21+160511.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="474" data-original-width="917" height="323" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIPioAlWaBICGDVee1zK2r65tqAFLNyLs4yCSdX6j8RILPaaalA6k6t6zzO_Ae3fD7mN1edLUFO7SsOmRssjzHWBCC3-_xuMsTy8kZNBTp6z3LLqFNhlWd9-q9JWmb063eIVRyvRrHi8m8/w625-h323/Annotation+2020-07-21+160511.png" width="625" /></a></div>
<br />
Now, right-click in there and create a new text file. Name the new file 'txmake' or something else if you like.<br />
<br />
You have created a .txt file, but we actually need a .bat file. We need to rename the file and give it a new extension.<br />
<br />
Here is a quick way to do that: Shift-Right-click on a blank part of the sendTo window. You should see the enhanced version of the right-click menu.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuljPEvcmK9jDID3PiwDsnXtjc_ByVyrs6vRAgHRYmPmzTqVN2pdpkByckSYhI6-Ewpyr_IRqyP7RZq5BoxJLL__yNcj-ud7ioWaTBRM6le-Rk-s9xF9nGUZZu4BxKIv1uaoPIqsiV4Cp5/s1600/Capture.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="777" data-original-width="783" height="619" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuljPEvcmK9jDID3PiwDsnXtjc_ByVyrs6vRAgHRYmPmzTqVN2pdpkByckSYhI6-Ewpyr_IRqyP7RZq5BoxJLL__yNcj-ud7ioWaTBRM6le-Rk-s9xF9nGUZZu4BxKIv1uaoPIqsiV4Cp5/w625-h619/Capture.PNG" width="625" /></a></div>
<br />
One option you should see is 'Open PowerShell window here'. Clicking that will open a command line window with the current working directory set to the sendTo directory. That's a killer tip by itself!<br />
<br />
OK, now we can rename the txt file:<br />
<blockquote class="tr_bq">
ren txmake.txt txmake.bat</blockquote>
<br />
The icon for the txmake file will change to a document with cogs. This is a Windows batch file and it is executable.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhW26BHqlsMtTTCToQvTf3_lAPtTqGStwzTq5YcEpCQGpDqFB2SHiKzfIqTiBnAt0dTR6oIfuxKcL3YGtXvwXlEP5vVnL7PNk4rJizNt6-L6f0F4qvW5Nnay6h0yPk0UzaTQ7PTQGnaM89z/s1600/Annotation+2020-07-21+161427.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="146" data-original-width="146" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhW26BHqlsMtTTCToQvTf3_lAPtTqGStwzTq5YcEpCQGpDqFB2SHiKzfIqTiBnAt0dTR6oIfuxKcL3YGtXvwXlEP5vVnL7PNk4rJizNt6-L6f0F4qvW5Nnay6h0yPk0UzaTQ7PTQGnaM89z/s1600/Annotation+2020-07-21+161427.png" /></a></div>
<br />
<br />
<br />
<br />
<br />
Right-click the file and select edit. Enter the following lines and save the file.<br />
<br />
<blockquote class="tr_bq">
<br /></blockquote>
<blockquote>
@echo off<br /><br />C:\Progra~1\Pixar\RenderManProServer-23.3\bin\txmake.exe %1 %~n1.tex</blockquote>
<br />
<br />
<br />
%1 expands to the full path of a file dropped onto the batch file.<br />
%~n1 expands to the file name with the path and extension stripped off.<br />
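If you ever need the same name manipulation outside a batch file, the %~n1 substitution is easy to mimic. A minimal Python sketch (the input path is just an example):<br />

```python
from pathlib import PureWindowsPath

def tex_output_name(src):
    """Mimic the batch file's %~n1.tex: drop the path and extension, append .tex."""
    return PureWindowsPath(src).stem + ".tex"

print(tex_output_name(r"C:\textures\brickwall.png"))  # brickwall.tex
```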
<br />
Make sure the path to your Renderman installation is correctly inserted into this script.<br />
<br />
<h4>
Usage:</h4>
Browse to any image file in Windows explorer.<br />
Right-click the image file > send to > txmake<br />
<div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh46s_9CYyFFze-k0vjgLzrP8BcrSazV9I44PYL408xDqKOld_49PITw4FkB1cuQhnLDJvz6kl630Kmg9MsNPcxH4o8ADAQpOZ50FVhRP43V-f7OWPnvMSYAAwxJsL_MSCHqNFTx4D-CT-f/s963/Capture.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="963" data-original-width="751" height="976" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh46s_9CYyFFze-k0vjgLzrP8BcrSazV9I44PYL408xDqKOld_49PITw4FkB1cuQhnLDJvz6kl630Kmg9MsNPcxH4o8ADAQpOZ50FVhRP43V-f7OWPnvMSYAAwxJsL_MSCHqNFTx4D-CT-f/w764-h976/Capture.PNG" width="764" /></a></div><div><br /></div><div><br /></div>
The tex file will be created in the same folder.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-823382765028902335.post-73880917325877029452020-01-08T12:07:00.002-08:002020-01-08T12:20:56.568-08:00Wedging simulations in TOPs with burned in parameter valuesHere I will show how I set up a simple system for wedging simulations in Houdini and outputting a mosaic mpeg of the results with parameter values displayed in the frame.<br />
There were a few tricky gotchas but I will show you how I got past them.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhI_pR58KYz8vs2uz5yRzzuBesst6SQ67hhp6WVnVYYUbZSBBkoEYv0W32qGF-aOQf8HVKIEk1ZP0r8OTnqPxq0hQuCNF3Dv3ul-QdFPa1Kneog-BXqTesxWZxaSMhbtcEAi8nfN8_Ecq48/s1600/Annotation+2020-01-08+142103.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="472" data-original-width="838" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhI_pR58KYz8vs2uz5yRzzuBesst6SQ67hhp6WVnVYYUbZSBBkoEYv0W32qGF-aOQf8HVKIEk1ZP0r8OTnqPxq0hQuCNF3Dv3ul-QdFPa1Kneog-BXqTesxWZxaSMhbtcEAi8nfN8_Ecq48/s400/Annotation+2020-01-08+142103.png" width="400" /></a></div>
<br />
<br />
What is wedging?<br />
Imagine you are creating a Pyro explosion: there are a lot of parameters that can be varied to produce different end results. Creating a separate simulation and render for each value of a parameter, then compositing them side by side, is one way to compare the effects of changing that parameter, but it would be very laborious and time consuming. Then imagine you wanted to vary multiple parameters; the task becomes extremely tedious.<br />
Wedging is a quick way of testing the effect of a parameter on the outcome of a simulation.<br />
This has been implemented in Houdini since the year dot, but now Houdini has TOPs, which makes the setup even more efficient.<br />
<br />
Workflow:<br />
<ol>
<li>Decide what parameters you want to vary</li>
<li>Create Wedge TOP nodes to create those variations</li>
<li>If you are varying multiple parameters, then create multiple Wedge TOPs and chain them together</li>
<li>Cache your simulation using the ROP_Geometry TOP</li>
<li>Render each of those simulations using the ROP_Mantra TOP</li>
<li>Composite text overlay using ROP_Composite_Output TOP</li>
<li>Use Partition_by_Frame TOP to gather all your frames into groups which have the same frame number</li>
<li>The Imagemagick TOP will create a montage of your frames</li>
<li>Use the Wait_for_All TOP before encoding the output video</li>
<li>Output a mpeg of the results using the FFMPEG_Encode_Video TOP</li>
</ol>
<br />
There are a couple of things you need before you can get started: Imagemagick and ffmpeg.<br />
<ul>
<li><a href="https://imagemagick.org/script/download.php" target="_blank">imagemagick.org</a></li>
<li><a href="https://www.ffmpeg.org/download.html" target="_blank">ffmpeg.org</a></li>
</ul>
Install these as per their instructions.<br />
<br />
<br />
<br />
OK, Let's get started setting up this workflow.<br />
<br />
{I am following Steve Knipping's Volumes II (version 2), so you may recognise some of this} <br />
<br />
Here is my simple Pyro setup:<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfkVHCLkZmxM-l1fcfJpCzCZsIU-jUJsUqTnqSxdrZ3w8cWjXKCvrhN-CGopov0Ls0jMJ6iVwHlmqsw42JKYKgWUdTn4jn_UhCENUshhMJyh0coTJHG7FAq8I3d4NwcXnZABNXRSOTUBrY/s1600/Annotation+2020-01-08+140526.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="371" data-original-width="324" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfkVHCLkZmxM-l1fcfJpCzCZsIU-jUJsUqTnqSxdrZ3w8cWjXKCvrhN-CGopov0Ls0jMJ6iVwHlmqsw42JKYKgWUdTn4jn_UhCENUshhMJyh0coTJHG7FAq8I3d4NwcXnZABNXRSOTUBrY/s320/Annotation+2020-01-08+140526.png" width="279" /></a></div>
<br />
I am sourcing Density, Velocity and Temperature from a deformed sphere - like so:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ0Jzls_tOLcWWG1c4XZWdnGtVNwv9MYNrojk8NErSfL0fbuWFnHPgt6hUDk7VirjdICjWKCahotYc7tjskFn3GsObd9CEEf7gNPKk2wWF9YD60XuYOqVVWT5jUp-0uUDGo4-j2Cnu2g5U/s1600/Annotation+2020-01-08+140907.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="801" data-original-width="590" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ0Jzls_tOLcWWG1c4XZWdnGtVNwv9MYNrojk8NErSfL0fbuWFnHPgt6hUDk7VirjdICjWKCahotYc7tjskFn3GsObd9CEEf7gNPKk2wWF9YD60XuYOqVVWT5jUp-0uUDGo4-j2Cnu2g5U/s320/Annotation+2020-01-08+140907.png" width="235" /></a></div>
<br />
The Pyro simulation is equally faithful to Mr Knipping's tutorial - like so:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilZhlaPkRqKBApPb5qZYlu_uqTx7AxEjhGT__yS9Au3sHW0G2BmNKwGIswER9-B-Z_y_p-2Ofrns7U4Oblg14_qIrHc5rTtXLQJNxlN3uBu_V1iQ_QQVDTP-NxAhEU00sqrKXZ5nNGYi2j/s1600/Annotation+2020-01-08+141038.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="632" data-original-width="768" height="263" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilZhlaPkRqKBApPb5qZYlu_uqTx7AxEjhGT__yS9Au3sHW0G2BmNKwGIswER9-B-Z_y_p-2Ofrns7U4Oblg14_qIrHc5rTtXLQJNxlN3uBu_V1iQ_QQVDTP-NxAhEU00sqrKXZ5nNGYi2j/s320/Annotation+2020-01-08+141038.png" width="320" /></a></div>
<br />
At this point I decided to try the Wedging, so I diverted away from the Knipping and focussed on the TOP side.<br />
<br />
I did need to import the Pyro simulation back into SOPs, so here is my setup for that: just a DOP_Import_Fields and a Null.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFQLWa2OcciGM65ezkDJ5gnvoyZBJhcciyOxC5afWojcxbjshpnzak4fATk0h1cX-z1gwm_sSnW7J2v3nFScmO9FPL3f3p5KzGnOCSb4nRSk8hd5iSwO3TP7hLwtGVXv_y1G-P8hUaQoNw/s1600/Annotation+2020-01-08+162218.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="820" data-original-width="643" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFQLWa2OcciGM65ezkDJ5gnvoyZBJhcciyOxC5afWojcxbjshpnzak4fATk0h1cX-z1gwm_sSnW7J2v3nFScmO9FPL3f3p5KzGnOCSb4nRSk8hd5iSwO3TP7hLwtGVXv_y1G-P8hUaQoNw/s400/Annotation+2020-01-08+162218.png" width="312" /></a></div>
<br />
<br />
Now, we dive into TOPs<br />
<br />
Firstly, in the top level, create a TOP network and dive inside.<br />
<br />
Once inside TOPs, make a Wedge TOP node<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhS9Imce5LuZrNjXibiMbij40x_MGZajaMopbX8qiy4EF_sNL5FzynxmRV1H2_J2UuXWr_QlNLnKv-863uqroVft04ogghAcYloWPG0Q0bNDCIPzQbmIBaVNgZDh8TgmVBcAU91uHMcRQrn/s1600/Annotation+2020-01-08+140511.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="638" data-original-width="660" height="309" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhS9Imce5LuZrNjXibiMbij40x_MGZajaMopbX8qiy4EF_sNL5FzynxmRV1H2_J2UuXWr_QlNLnKv-863uqroVft04ogghAcYloWPG0Q0bNDCIPzQbmIBaVNgZDh8TgmVBcAU91uHMcRQrn/s320/Annotation+2020-01-08+140511.png" width="320" /></a></div>
<br />
I chose to wedge a couple of parameters: Disturbance Reference Scale (shown above) and Disturbance Strength.<br />
So I created two Wedge TOPs and chained them together.<br />
I have 4 values for Disturbance Scale and 4 values for Disturbance Strength, making 16 variations in total.<br />
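Chaining Wedge TOPs multiplies the variations: the combination count is just the cartesian product of the wedge values. A quick sketch of that arithmetic (the actual wedge values here are hypothetical):<br />

```python
from itertools import product

# Hypothetical wedge values - 4 for each parameter, as in this setup.
dist_ref_scale = [0.1, 0.2, 0.4, 0.8]
dist_strength = [0.25, 0.5, 0.75, 1.0]

# Each pair corresponds to one wedge (one wedgeindex).
wedges = list(product(dist_ref_scale, dist_strength))
print(len(wedges))  # 16
```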
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjavN-tZaNzSuDTY4I9w7YjBmtIhV0opE2840u3CcVP4Tu5f2Uw_sY-5FYR5Yql4HifjF_0rtP_l-Lth7HAhdid85prYlPgmiZjclOMlA0By8oTPcs1hyphenhyphenNnq_srCa8oBddxiqY8jXVB1kVP/s1600/i.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="852" data-original-width="649" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjavN-tZaNzSuDTY4I9w7YjBmtIhV0opE2840u3CcVP4Tu5f2Uw_sY-5FYR5Yql4HifjF_0rtP_l-Lth7HAhdid85prYlPgmiZjclOMlA0By8oTPcs1hyphenhyphenNnq_srCa8oBddxiqY8jXVB1kVP/s400/i.png" width="303" /></a></div>
<br />
<br />
Now to cache these simulations out to disk.<br />
<br />
Use the ROP_Geometry_Output node to cache the simulation.<br />
Here is the first important gotcha:<br />
In the ROP Fetch tab, make sure to enable 'All Frames in One Batch' because this is a simulation and each frame must be calculated in order.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMknyTVC50ma8ZnGVXDqFhTIBQZQimgRvxEoyEBGMfgdMJ7p4ijjaL0Y9_5KyFyfv06TZRvIPhJpMWSj3OqmL-70noMGOz6BxCEq8Xh_7UikMTJsVeCjGbAXryaqt2yjoCO_scjKjdwMDs/s1600/Annotation+2020-01-08+143144.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="505" data-original-width="1096" height="183" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMknyTVC50ma8ZnGVXDqFhTIBQZQimgRvxEoyEBGMfgdMJ7p4ijjaL0Y9_5KyFyfv06TZRvIPhJpMWSj3OqmL-70noMGOz6BxCEq8Xh_7UikMTJsVeCjGbAXryaqt2yjoCO_scjKjdwMDs/s400/Annotation+2020-01-08+143144.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7Wg0lm4JROc6TufX4uNJnj7odfW_VtfqY3T_5fzcDlNB9ko-HZyr_b09dk1bb7O8pHwH19KrVnVAJJqqedoAQ0iaix4z8B6z2BMmFt6NGHfaM2y3Xej0P4kpVNQ9QZwd17hj-PhrfcY-p/s1600/Annotation+2020-01-08+143241.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="470" data-original-width="641" height="234" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7Wg0lm4JROc6TufX4uNJnj7odfW_VtfqY3T_5fzcDlNB9ko-HZyr_b09dk1bb7O8pHwH19KrVnVAJJqqedoAQ0iaix4z8B6z2BMmFt6NGHfaM2y3Xej0P4kpVNQ9QZwd17hj-PhrfcY-p/s320/Annotation+2020-01-08+143241.png" width="320" /></a></div>
<br />
This is where you set the frame range of the simulation. When you render, the renderer will take each frame separately and render it (or render in chunks).<br />
<br />
The SOP path points to the output of the Pyro simulation, which is brought back into SOPs with the DOP_Import_Fields node.<br />
<br />
The Output File uses an attribute called @wedgeindex. In the string parameter you have to wrap it in back-ticks (i.e. `@wedgeindex`) so that Houdini evaluates its value.<br />
@wedgeindex identifies which wedge you are currently simulating or rendering. It will be the same value for each rendered or simulated geometry file for that one wedge.<br />
So, we will end up with a lot of geometry files, but they will be named using the @wedgeindex tag.<br />
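In other words, each cached file name embeds the wedge index and the frame number. Roughly like this sketch (the path template is hypothetical; in Houdini it would be built from `@wedgeindex` and $F4):<br />

```python
def cache_path(wedgeindex, frame):
    """Hypothetical expansion of an output path like geo/sim_`@wedgeindex`.$F4.bgeo.sc"""
    return "geo/sim_{}.{:04d}.bgeo.sc".format(wedgeindex, frame)

print(cache_path(3, 12))  # geo/sim_3.0012.bgeo.sc
```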
<br />
Now we have the geometry files, we need to render them.<br />
Make a ROP_Mantra_Render TOP node and connect it to the previous node.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvyP2A48j1HEtUvqTc1udYDAr3aEhXfZX6eFuNQiZ2J06u7kAivZGddLx9qnAuIP3_SzE9HbVuLD90OJEGap1kQlSbkLNUaYAKftYQ7tb-9UQ-_jr1Vfezh7bwvWHFdgJs5cWFMb5cwcTV/s1600/Annotation+2020-01-08+163001.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="499" data-original-width="1300" height="152" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvyP2A48j1HEtUvqTc1udYDAr3aEhXfZX6eFuNQiZ2J06u7kAivZGddLx9qnAuIP3_SzE9HbVuLD90OJEGap1kQlSbkLNUaYAKftYQ7tb-9UQ-_jr1Vfezh7bwvWHFdgJs5cWFMb5cwcTV/s400/Annotation+2020-01-08+163001.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxx937_eojXhldFqzS2Tvd5-QnAP5MEoxktQdjsVgNQE2DB2NlPAqdPxvQxjHSYwa1tkrfI9TPILHk5u7VLQWzHwK2y7V-zQmR4S__NEZLIkqu7mqCEblQNFRmTiFg8sFD8XZ6t-TgsKpQ/s1600/Annotation+2020-01-08+163055.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="501" data-original-width="643" height="248" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxx937_eojXhldFqzS2Tvd5-QnAP5MEoxktQdjsVgNQE2DB2NlPAqdPxvQxjHSYwa1tkrfI9TPILHk5u7VLQWzHwK2y7V-zQmR4S__NEZLIkqu7mqCEblQNFRmTiFg8sFD8XZ6t-TgsKpQ/s320/Annotation+2020-01-08+163055.png" width="320" /></a></div>
<br />
This time you can render frames out of order or in chunks - it doesn't matter. So in the ROP Fetch tab, you can set Frames per Batch to a larger number to reduce the load on the network.<br />
Again, we need to use the `@wedgeindex` attribute in the output file name.<br />
<br />
I would recommend checking the output frame size because if you are making a mosaic of full frame renders, the resulting mpeg could become enormous - 16k or larger. I chose to render at 1/3 size.<br />
<br />
OK, now we have a lot of rendered frames. It's a good idea to label them with which wedged attributes they are using. There is little point having a lovely mosaic of pyro sims if you don't know which version of the wedged attributes your favourite one refers to. I discovered on the SideFX forum that there is a <a href="https://www.sidefx.com/forum/topic/68258/?page=1#post-291791" target="_blank">very clever solution</a> to this problem, which involves a COP network.<br />
Don't worry, I will give you the details here.<br />
<br />
Create an Overlay_Text TOP node.<br />
We want to put a Python script in the Overlay tab, but to do this you have to dive into the node.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSY4FMIu339zZHiCkYBD8BU4TTOHXoPBmmaF2zdnIea1Z1ava7gU7i-VRpm9AYLJKrU0OpGSo7lCqCPYQqtu4qGqz0xPaTLzNbuadrLE-6oEqQrQQcEqmGZORkwseOwq6CNLhJGyZnaH6p/s1600/Annotation+2020-01-08+194242.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="905" data-original-width="944" height="306" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSY4FMIu339zZHiCkYBD8BU4TTOHXoPBmmaF2zdnIea1Z1ava7gU7i-VRpm9AYLJKrU0OpGSo7lCqCPYQqtu4qGqz0xPaTLzNbuadrLE-6oEqQrQQcEqmGZORkwseOwq6CNLhJGyZnaH6p/s320/Annotation+2020-01-08+194242.png" width="320" /></a></div>
<br />
Once inside the COP network, select the Font node. This is where the Python script goes.<br />
However, just copying the code into the text box will not work. First, convert the text box into a Python expression box: right-click in the text box and choose Expression > Change Language to Python.<br />
Now copy the Python code into the Python text box.<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">import pdg<br />active_item = pdg.EvaluationContext.workItemDep()<br /><br />data = active_item.data<br />attribs = ""<br />for wedge_attrib in data.stringDataArray("wedgeattribs"): <br /> val = data.stringDataArray(wedge_attrib)<br /> if not val:<br /> val = data.intDataArray(wedge_attrib)<br /> if not val:<br /> val = data.floatDataArray(wedge_attrib)<br /> if not val:<br /> continue<br /> val = round(float(str(val)[1:-1]),3)<br /> attribs += "{} = {}\n".format(wedge_attrib, str(val))<br />return attribs</span></blockquote>
The text box will change to a purple colour.<br />
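Outside of Houdini, the logic of that script - look up each wedged attribute on the work item, find its value, and build one label line per attribute - can be sketched with a plain dictionary standing in for the pdg work item data (attribute names and values here are hypothetical):<br />

```python
# Fake stand-in for the PDG work item's attribute data.
data = {
    "wedgeattribs": ["dist_scale", "dist_strength"],
    "dist_scale": [0.2],
    "dist_strength": [0.75],
}

attribs = ""
for wedge_attrib in data["wedgeattribs"]:
    val = data.get(wedge_attrib)
    if not val:
        continue  # skip attributes with no stored value
    val = round(float(val[0]), 3)
    attribs += "{} = {}\n".format(wedge_attrib, val)

print(attribs)  # one "name = value" line per wedged attribute
```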
<br />
<br />
Next, we need to sort the rendered frames by frame number - get all the frame 1's together, frame 2's together, etc. This is done by the Partition_by_Frame TOP node. Drop one down.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsU98GooP4aG7DyzFwT-aDLK6c31AmfK2qmfPkozbSCRc1AZCsCAYid-RR8AAZ8MXkYuze4d_9vWyn6ixCol3K_kOKRY0LSqJuVtFWIIPubbjFwrE7zq4xFydQYH4yfaCdb5TPQOt2uaXl/s1600/Annotation+2020-01-08+195248.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="867" data-original-width="1102" height="251" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsU98GooP4aG7DyzFwT-aDLK6c31AmfK2qmfPkozbSCRc1AZCsCAYid-RR8AAZ8MXkYuze4d_9vWyn6ixCol3K_kOKRY0LSqJuVtFWIIPubbjFwrE7zq4xFydQYH4yfaCdb5TPQOt2uaXl/s320/Annotation+2020-01-08+195248.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
Follow it with an Imagemagick TOP node. Set the node to 'Montage'.<br />
<br />
Gotcha! Have a look at the Output Filename. It references a variable called $PDG_DIR, but this variable does not exist in Houdini by default; you need to create it.<br />
So in Houdini's main menu, go to Edit > Aliases and Variables and add the following entries:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-IYKo_18Tr134WbKzyoiFUwbKQrKMbYz8BrNC_Yjnw5BKe_5ZAWfTO4QEVMH0yV8ums7a_WJe1TNfu4IbkHKsvvmPiqApCC5q7GOGt3mgMLcDxFiB6iasM9VnFXsMWxAsSlKk4CcagOfa/s1600/Annotation+2020-01-08+195619.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="626" data-original-width="749" height="267" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-IYKo_18Tr134WbKzyoiFUwbKQrKMbYz8BrNC_Yjnw5BKe_5ZAWfTO4QEVMH0yV8ums7a_WJe1TNfu4IbkHKsvvmPiqApCC5q7GOGt3mgMLcDxFiB6iasM9VnFXsMWxAsSlKk4CcagOfa/s320/Annotation+2020-01-08+195619.png" width="320" /></a></div>
<br />
We are nearly there.<br />
Before we can encode the mpg, we must wait for all files to be rendered by Imagemagick, so drop down a Wait_For_All TOP node.<br />
<br />
Finally, we can encode the images into a video file. Use the FFMPEG_Encode_Video TOP node.<br />
Again, watch out for the $PDG_TEMP and $PDG_DIR variables.<br />
You might also need to set the FFMPEG path to the folder that contains the FFMPEG executable file. I didn't need to do this, but some people might have to if they already had FFMPEG installed before Houdini was installed.<br />
<br />
That's all there is to it!<br />
I will attach a <a href="https://drive.google.com/open?id=1AYjtprOGcM_wv4N4FDv5PqMnYj6PmGh-">setup file</a>, because something is bound to go wrong.<br />
<br />
<br />
<br />Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-823382765028902335.post-11656506991193201742020-01-04T06:41:00.000-08:002020-01-04T07:20:20.731-08:00Old nodes for following old tutorialsAs SideFX has changed a few Houdini nodes recently, here is a way to access the old nodes so you can follow those tutorials that came out before the changes. This is especially useful for tutorials that came out before Houdini version 17, when large parts of the Dynamics workflow changed.<br />
<br />
To get the old, deprecated nodes back, you need to use the Textport. Yes, the Textport!<br />
You always wondered what that window was for; well, here is one use for it.<br />
<br />
In Textport, type <br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">opunhide</span> </blockquote>
<br />
You will see a popup list of classes of nodes appear<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihxfpeWtPLahkms1T1C3lNyWZpqWKsP9-6TAQ7eiRGBEG_6jb6EVW9AEU_z1PeG6jbmZZh59CtDjWoo8J_lV3BEEMxDvx24uyK-ZfGRTi0e_FXHJ3gJv1NGc_oN17hyphenhyphenGbxQbXR5NCVG2uv/s1600/Annotation+2020-01-04+143412.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="686" data-original-width="616" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihxfpeWtPLahkms1T1C3lNyWZpqWKsP9-6TAQ7eiRGBEG_6jb6EVW9AEU_z1PeG6jbmZZh59CtDjWoo8J_lV3BEEMxDvx24uyK-ZfGRTi0e_FXHJ3gJv1NGc_oN17hyphenhyphenGbxQbXR5NCVG2uv/s400/Annotation+2020-01-04+143412.png" width="358" /></a></div>
<br />
Pressing Enter now will list every deprecated node, and there are a lot of them!<br />
<br />
Note: the list in the popup does NOT show the Dop class of nodes. That confused me for a while, but you can still access the old Dop nodes. Here's how:<br />
<br />
In Textport, type<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">opunhide Dop</span><b><span style="font-family: "courier new" , "courier" , monospace;"> </span></b></blockquote>
<br />
Now if you press Enter, you will get a list of deprecated Dop nodes.<br />
<br />
If you want to access the old sourceVolume node, for example, type this in the Textport<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">opunhide Dop sourcevolume </span></blockquote>
<br />
Now, in the Dynamics context, you should be able to get a sourceVolume node, the old version of the volumeSource node.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj26MoBoe_s9_5vCROCrQjwQ3vEsrXPX5f4pW0AS4ihtmHrBjIwGv1QLGs8pbofh1pWRvH-UrPQmSSXlW-e56jC_E7iKFmDH8hNn7pandrahviQFjcQlHOJOdmLMxkpAukGBYj7FlBWlxHK/s1600/Annotation+2020-01-04+144253.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="641" data-original-width="664" height="385" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj26MoBoe_s9_5vCROCrQjwQ3vEsrXPX5f4pW0AS4ihtmHrBjIwGv1QLGs8pbofh1pWRvH-UrPQmSSXlW-e56jC_E7iKFmDH8hNn7pandrahviQFjcQlHOJOdmLMxkpAukGBYj7FlBWlxHK/s400/Annotation+2020-01-04+144253.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-823382765028902335.post-87907750634235510062019-05-22T14:31:00.000-07:002020-01-04T15:23:27.862-08:00Spare Inputs in HoudiniHere is a technique I have recently picked up from my friend Jay Natrajan.<br />
<br />
Houdini allows 'spare inputs' on a SOP node, which let you reference external data from inside a For Each loop.<br />
<br />
As an example, let's say I want to scatter some particles around the points of a grid. I want a random number of particles at a random distance from each point.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKvYrdGT9Fa5uR7-20KHR4KC35ijYrMaHkeztaRwGy2BHnBqZo0Ko_k3JnR3WdMaRfANWaXuqDb7bbe7lqJamwuVmFofwzeDjHHTeoJkkeLj1_5Oq3HiL30eghpf_2Wj-WovRq9YugCVal/s1600/Capture1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="609" data-original-width="961" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKvYrdGT9Fa5uR7-20KHR4KC35ijYrMaHkeztaRwGy2BHnBqZo0Ko_k3JnR3WdMaRfANWaXuqDb7bbe7lqJamwuVmFofwzeDjHHTeoJkkeLj1_5Oq3HiL30eghpf_2Wj-WovRq9YugCVal/s400/Capture1.PNG" width="400" /></a></div>
<br />
<br />
I can create attributes on each point to control the number of particles, the radius from the point, and the random seed. I can then move that data into the Scatter node using Spare Inputs.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZ1s_k1jqGQ6NKfJ4TPSfIBZFfXD4xLI2f4rdndrz2nB9i9LDk5YO38TBhJk6ddExH_Bd9CPXQ0F5Vv6NyWIz8cC6tZMEtCll5Z_Fv5SR-8u7hRp1STdVPfhavaSG7ufGQXkikVvfi6ejH/s1600/Capture3.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="830" data-original-width="672" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZ1s_k1jqGQ6NKfJ4TPSfIBZFfXD4xLI2f4rdndrz2nB9i9LDk5YO38TBhJk6ddExH_Bd9CPXQ0F5Vv6NyWIz8cC6tZMEtCll5Z_Fv5SR-8u7hRp1STdVPfhavaSG7ufGQXkikVvfi6ejH/s640/Capture3.PNG" width="518" /></a></div>
<br />
Here I have three wrangles creating data on each point of a grid.<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">f@size = fit01(rand(@ptnum), ch("size_min"), ch("size_max"));</span></blockquote>
<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">i@number = int(fit01(rand(@ptnum), chi("min"), chi("max")));</span></blockquote>
<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">f@seed = rand(@ptnum + ch("seed"));</span></blockquote>
<br />
The data created will be different per point, because of the rand(@ptnum) function.<br />
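The behaviour of those wrangles can be illustrated outside Houdini. This Python sketch uses a deterministic stand-in for rand() (not Houdini's actual implementation) to show how seeding by point number gives each point its own stable values:

```python
import math

def rand(seed):
    """Deterministic pseudo-random value in [0, 1), seeded per point.
    A stand-in for VEX rand(), not the real implementation."""
    x = math.sin(seed * 12.9898) * 43758.5453
    return x - math.floor(x)

def fit01(t, lo, hi):
    """Map t in [0, 1] to the range [lo, hi], like VEX fit01()."""
    return lo + t * (hi - lo)

# Per-point attributes, as in the three wrangles (ranges are examples)
points = []
for ptnum in range(4):
    points.append({
        "size": fit01(rand(ptnum), 0.1, 0.5),
        "number": int(fit01(rand(ptnum), 5, 20)),
        "seed": rand(ptnum + 7),
    })

# Re-running gives identical values: the data is stable per point
assert points[2]["size"] == fit01(rand(2), 0.1, 0.5)
```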
<br />
Inside the loop, on the Sphere node, I reference the 'size' attribute to create a different-sized sphere at each point of the original grid. To get that data from the loop into the Sphere node, use a Spare Input:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNAgXvnK4eQenfb5TLJb9uiq147HvoXAg8Sy0SeqW1UV1UIA7OE43xTEYRdHzbyrYtYSu1OOZva1nCInaRYYKBKvX2CmUtvBS5U95KmA-OIu5FE5xk5-bG22FUQuv2uqtvgE4FEYCVBHsL/s1600/Annotation+2019-11-24+175126.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="445" data-original-width="612" height="232" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNAgXvnK4eQenfb5TLJb9uiq147HvoXAg8Sy0SeqW1UV1UIA7OE43xTEYRdHzbyrYtYSu1OOZva1nCInaRYYKBKvX2CmUtvBS5U95KmA-OIu5FE5xk5-bG22FUQuv2uqtvgE4FEYCVBHsL/s320/Annotation+2019-11-24+175126.png" width="320" /></a></div>
<br />
From the gear ('cog') menu on the Sphere node, choose 'Add Spare Input'.<br />
This creates a new slot on the Sphere node. Into this slot, drag the For-Each Begin node at the top of the loop.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLqglUYWAst1d9ABQsbrLMp_q8fSKIiuUz-yzTF4fn_bdkAqivJnJLbWmYHV6gXUTFRTCXRutAf570MUwz1NmpRPu049O8p9LckqDKUidWPQYwBZiYKVNafqR5iESOff6pas2nNktZrPsa/s1600/Annotation+2019-11-24+175127.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="293" data-original-width="597" height="157" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLqglUYWAst1d9ABQsbrLMp_q8fSKIiuUz-yzTF4fn_bdkAqivJnJLbWmYHV6gXUTFRTCXRutAf570MUwz1NmpRPu049O8p9LckqDKUidWPQYwBZiYKVNafqR5iESOff6pas2nNktZrPsa/s320/Annotation+2019-11-24+175127.png" width="320" /></a></div>
<br />
You will see a purple connecting line in the network graph from the For Each node into the Sphere node. This line indicates that there is a connection into a spare input.<br />
<br />
In the Sphere node, I want to look up the 'size' value for each point on the grid and use it to scale the sphere. To do this, I use the 'point' function, with the first parameter being -1, which tells Houdini to look at the first spare input.<br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">point(-1, 1, "size", 0)</span><br />
<br />
Here -1 refers to the first spare input, the second argument is the point number to read, "size" is the attribute name, and 0 is the component index. It's a scalar attribute, so component 0 fetches the whole value.<br />
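The lookup that point() performs can be mimicked in plain Python. This sketch is purely illustrative; the data and the helper are stand-ins, not Houdini's API:

```python
# A stand-in for the geometry arriving through the spare input:
# one dictionary of attributes per point (illustrative values only).
spare_input = [
    {"size": 0.25, "number": 7, "seed": 0.41},
    {"size": 0.40, "number": 12, "seed": 0.88},
]

def point(ptnum, attrib, component):
    """Mimic the point() lookup against the spare input:
    fetch one component of an attribute on a given point."""
    value = spare_input[ptnum][attrib]
    # Scalars have a single component, index 0, as in the post
    return value if component == 0 else value[component]

# Equivalent of point(-1, 1, "size", 0): point 1's "size", component 0
print(point(1, "size", 0))  # -> 0.4
```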
<br />
<br />
Next, I have used a Scatter node to make some particles inside each of the spheres. The number of particles in each sphere is a random number per point, generated by the second of the wrangles.<br />
Again, I created a Spare Input to fetch the data from within the For Each loop.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5do5nihYVPbytYim_3e8beoq-D_pjAIVSDP8tGkfIOqX48wbaklOQSYuUguoqFl0W3i6dtFwbuglf7o2remlNIU6ut3ZZvMF3SfyUJYIe8PmEuj_4p1jtE98P6NfHtCMJjOkiOdGyF5ZT/s1600/Annotation+2019-11-24+175128.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="389" data-original-width="565" height="220" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5do5nihYVPbytYim_3e8beoq-D_pjAIVSDP8tGkfIOqX48wbaklOQSYuUguoqFl0W3i6dtFwbuglf7o2remlNIU6ut3ZZvMF3SfyUJYIe8PmEuj_4p1jtE98P6NfHtCMJjOkiOdGyF5ZT/s320/Annotation+2019-11-24+175128.png" width="320" /></a></div>
<br />
<br />
<br />
<br />
So, I now have a random number of particles created around each point of the grid.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJmiI39KxA-JDVoIeYqOzOG5Qv5d_6ceaV57Gs4krkbEFkJ8lzCnyKNnEyzwq9asxJSFw1RupfyegjK1eqX7SvSQsJRl0STIYGwELznUp9sevINyL2Yh1UntN3NlLyHhNJLZbgKmrxeF7I/s1600/Annotation+2019-11-24+175129.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="456" data-original-width="585" height="249" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJmiI39KxA-JDVoIeYqOzOG5Qv5d_6ceaV57Gs4krkbEFkJ8lzCnyKNnEyzwq9asxJSFw1RupfyegjK1eqX7SvSQsJRl0STIYGwELznUp9sevINyL2Yh1UntN3NlLyHhNJLZbgKmrxeF7I/s320/Annotation+2019-11-24+175129.png" width="320" /></a></div>
<br />
However, you can see that the distribution of particles is repeated for each group. I want each scatter to have a random seed. I can do this with another attribute passed through the Spare Input. I don't need to make a new Spare Input; I can use the same one and just change the point function:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxRFIlLTWmCMc1wY5GpTaT8ypcZGOXcFMIPi9lexLbebiq9CcXjfBf_FWXtN-v03YOGtyR2wKOT8NCgj696Ibkz00A3CRp7tzHvtekBJu0OWaWkBiGsd6uCkw_GabiidgdxNiCi37CWV5x/s1600/Annotation+2019-11-24+175130.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="298" data-original-width="548" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxRFIlLTWmCMc1wY5GpTaT8ypcZGOXcFMIPi9lexLbebiq9CcXjfBf_FWXtN-v03YOGtyR2wKOT8NCgj696Ibkz00A3CRp7tzHvtekBJu0OWaWkBiGsd6uCkw_GabiidgdxNiCi37CWV5x/s320/Annotation+2019-11-24+175130.png" width="320" /></a></div>
<br />
So, it's possible to pull any number of attributes through a single Spare Input.<br />
<br />
In this case "seed", "number" and "size" are all passed into the For Each loop via Spare Inputs on the Sphere and Scatter nodes.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-62470887721449491112017-03-01T13:59:00.003-08:002017-03-01T13:59:30.828-08:00Distort and Undistort with PFTrack and NukeHere is a reliable way to produce STmaps from PFTrack for use in Nuke when undistorting and re-distorting a plate.<br />
<br />
This is the method outlined by Dan Newlands on his excellent blog <a href="http://www.visual-barn.com/" target="_blank">Visual Barn</a>. There you can find many tutorials and methods for professional Matchmoving.<br />
<br />
<a href="http://www.visual-barn.com/updated-lens-distortion-workflow/">http://www.visual-barn.com/updated-lens-distortion-workflow/</a><br />
<br />
Dan Newlands walks you through the whole process very clearly, and I cannot add much to what he shows you, so here I will simply record my setup in PFTrack and Nuke for my own reference.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhee5zBEY0p5IxNxHlHILSeVE3mfsb1ZavioMyy6Tz_O0tLfSRD613E4K0VbPIOHpSEF8z9rNq4xuNGwX-wxkLFTe5F6eOGVu0zCY_L9AtZUcl3OpzS5oXPb5b8F_YOgQtPKk8aFYWQ3ndz/s1600/undistort_redistort.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="376" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhee5zBEY0p5IxNxHlHILSeVE3mfsb1ZavioMyy6Tz_O0tLfSRD613E4K0VbPIOHpSEF8z9rNq4xuNGwX-wxkLFTe5F6eOGVu0zCY_L9AtZUcl3OpzS5oXPb5b8F_YOgQtPKk8aFYWQ3ndz/s640/undistort_redistort.PNG" width="640" /></a></div>
<br />
Here I show the Add Distortion node. Use the 'original clip size' option. The three export nodes are for the undistorted plate (for use in your 3D app), the undistortion STmap and the re-distortion STmap.<br />
The undistorted plate and the undistortion STmap will have a different size to the original plate, but the re-distortion STmap will have the same resolution as the original plate.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1ev0rKj4HvU88L8Lcl3LMVyrt5VShyphenhyphen1-o-w5ZE8_7cBnAMtjgQq8XhcSREKjY4YPBdwBrHwcFrMQs2-5YyIYqUpYiaEpfxNwlgc0G-lVGDYtOjob327AcgUmoslEw36Nn17_M0SMbkpK5/s1600/undistort_redistort_02.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="374" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1ev0rKj4HvU88L8Lcl3LMVyrt5VShyphenhyphen1-o-w5ZE8_7cBnAMtjgQq8XhcSREKjY4YPBdwBrHwcFrMQs2-5YyIYqUpYiaEpfxNwlgc0G-lVGDYtOjob327AcgUmoslEw36Nn17_M0SMbkpK5/s640/undistort_redistort_02.PNG" width="640" /></a></div>
<br />
Here is the Nuke setup. The first STmap node will undistort the plate. The second STmap node will re-distort the plate to the original state.<br />
Any 3D rendered elements can be introduced between these two nodes and they will be re-distorted to match the original plate. If that is your workflow, then use the undistorted plate exported from PFTrack in your 3D package and match your 3D elements against that.<br />
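The principle behind an STmap is simple: each pixel of the map stores the normalised coordinates at which to sample the source image. Here is a minimal Python sketch of that remap, using nearest-neighbour sampling and ignoring Nuke's filtering and bottom-left coordinate convention:

```python
def apply_stmap(image, stmap):
    """Remap an image through an ST map: the (s, t) entry at each
    output pixel gives normalised coordinates to sample the source.
    Nearest-neighbour only; Nuke's STmap node filters properly."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            s, t = stmap[y][x]
            sx = min(w - 1, max(0, round(s * (w - 1))))
            sy = min(h - 1, max(0, round(t * (h - 1))))
            row.append(image[sy][sx])
        out.append(row)
    return out

# An identity ST map returns the plate unchanged
img = [[1, 2], [3, 4]]
ident = [[(0.0, 0.0), (1.0, 0.0)], [(0.0, 1.0), (1.0, 1.0)]]
assert apply_stmap(img, ident) == img
```

An undistort map followed by a matching redistort map composes back to (approximately) the identity, which is why CG inserted between the two STmap nodes lines up with the original plate.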
<br />
One thing to note: if you see blocky or tearing artifacts in Nuke, it may be that the filtering option in the STmap node is set incorrectly. I have found that 'cubic' filtering works well, albeit with some softness in the final redistorted image.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-29960086697651165952016-12-08T00:04:00.000-08:002017-08-16T06:51:54.636-07:002017 Show Reels<span style="font-family: "arial" , "helvetica" , sans-serif;">I am delighted to post my new show reels for 2017.</span><br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<span style="font-family: "arial" , "helvetica" , sans-serif;">Here is my Houdini and Maya FX reel, which shows many disciplines including Houdini crowds, Maya nCloth, nParticles, nHair, and fluids. Rendering is done in either Mental Ray or Arnold, and camera tracking is done with either 3DEqualizer or PFTrack.</span><br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<br />
<iframe allowfullscreen="" frameborder="0" height="360" mozallowfullscreen="" src="https://player.vimeo.com/video/228077598" webkitallowfullscreen="" width="640"></iframe>
<br />
<div style="text-align: center;">
<span style="font-size: x-small;"><span style="font-family: "arial" , "helvetica" , sans-serif;">
<a href="https://vimeo.com/228077598">Daniel_Sidi_FX_Reel_2017</a> from <a href="https://vimeo.com/user11558205">Daniel Sidi</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
</span></span><br />
<div style="text-align: center;">
</div>
</div>
<br />
<span style="font-family: "arial" , "helvetica" , sans-serif;">Here is my Matchmoving reel, which shows some of the most difficult shots I have tackled. All work shown in these shots was done using PFTrack. I am also familiar with 3DEqualizer and Nuke's built-in tracker.</span><br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<iframe allowfullscreen="" frameborder="0" height="360" mozallowfullscreen="" src="https://player.vimeo.com/video/194728197" webkitallowfullscreen="" width="640"></iframe>
<br />
<div style="text-align: center;">
<span style="font-size: x-small;"><span style="font-family: "arial" , "helvetica" , sans-serif;"><a href="https://vimeo.com/194728197">Daniel Sidi Match Move Reel 2016</a> from <a href="https://vimeo.com/user60008304">Daniel Sidi</a> on <a href="https://vimeo.com/">Vimeo</a>.</span></span></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-43287206612724512492016-09-08T02:19:00.004-07:002020-01-04T09:35:22.534-08:00Align Particle Instances to a CameraHere I will demonstrate a method of aligning particles to face a location (e.g. a camera), which mimics the behaviour of particle sprites.<br />
<br />
This is quite easy to do. First get the position of each particle, then get the position of the location you want the particles to point to. From this you can calculate the aim direction thus:<br />
<b><br /></b>
<b>aimPP = targetPosition - position</b><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh91218qZA_IIduOvKAkZpSkOQ47ZCkPXVCCUj0WNFNAKb-Ad-dIPb1cxwDxQThjwmcRrFLiOV8WLy_UkjWOopLqyXNKPp3MbS-O9OG_f1EuxT2vqsGbffVEj4YQGBdsEqvAqRYk4QyUvkY/s1600/Capture3.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="290" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh91218qZA_IIduOvKAkZpSkOQ47ZCkPXVCCUj0WNFNAKb-Ad-dIPb1cxwDxQThjwmcRrFLiOV8WLy_UkjWOopLqyXNKPp3MbS-O9OG_f1EuxT2vqsGbffVEj4YQGBdsEqvAqRYk4QyUvkY/s400/Capture3.PNG" width="400" /></a></div>
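The same arithmetic in a standalone Python sketch (the positions here are arbitrary example values; Maya's instancer only needs the direction, so normalising the vector is harmless):

```python
def aim_direction(position, target):
    """Vector from a particle to the target (e.g. the camera),
    normalised so only the direction remains."""
    d = [t - p for p, t in zip(position, target)]
    length = sum(c * c for c in d) ** 0.5
    return [c / length for c in d]

# A particle at the origin aiming at a camera on the +Z axis
print(aim_direction([0.0, 0.0, 0.0], [0.0, 0.0, 5.0]))  # -> [0.0, 0.0, 1.0]
```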
<br />
<br />
To get this done in Maya, follow these steps:<br />
<br />
1. On your particle object, create a new per-particle vector attribute called aimPP.<br />
<br />
2. Create an expression on the particle:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">//</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">// CREATION EXPRESSION</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">//</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">float $targetX = targetLocation.translateX;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">float $targetY = targetLocation.translateY;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">float $targetZ = targetLocation.translateZ;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<br />
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<span style="font-family: "courier new" , "courier" , monospace;">vector $targetPosition = <<$targetX, $targetY, $targetZ>>;</span></div>
<div style="-qt-block-indent: 0; -qt-paragraph-type: empty; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<span style="font-family: "courier new" , "courier" , monospace;">aimPP = $targetPosition - position;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">spriteTwistPP = rand(-0.25, 0.25);</span></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<span style="font-family: "courier new" , "courier" , monospace;">//</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">// RUNTIME BEFORE DYNAMICS EXPRESSION</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">//</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">float $targetX = targetLocation.translateX;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">float $targetY = targetLocation.translateY;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">float $targetZ = targetLocation.translateZ;</span><br />
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<span style="font-family: "courier new" , "courier" , monospace;">vector $targetPosition = <<$targetX, $targetY, $targetZ>>;</span></div>
<div style="-qt-block-indent: 0; -qt-paragraph-type: empty; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<span style="font-family: "courier new" , "courier" , monospace;">aimPP = $targetPosition - position;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">aimUpAxisPP = << 0, sin(frame*spriteTwistPP), cos(frame*spriteTwistPP) >>;</span></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<br /></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<br /></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
Now, I was expecting<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">vector $targetPosition = targetLocation.translate;</span><br />
<br />
to work, but it does not, most likely because .translate is a compound attribute that the particle expression evaluator cannot read directly as a vector, so I have taken each of the components and constructed the vector from those. Not very elegant, but it does the job. If anyone knows for certain why the vector cannot be assigned, please do let me know!<br />
<br /></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
3. On the particle shape, set the Aim Direction in the Instancer (Geometry Replacement) section to the aimPP attribute.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt_HDMvupuS4LTDPucphFfkh9KO823n-z3neZ2e4HUqPksFCbInHhWwWawANxqfT7qVR9x_BRaB7oko1nYfLX0LEWlPO1CEwGzqWVqBw0moEdcm6Yc6G5ktDOUOJeaKKpTiwwDCuAO9D4Z/s1600/Capture4.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt_HDMvupuS4LTDPucphFfkh9KO823n-z3neZ2e4HUqPksFCbInHhWwWawANxqfT7qVR9x_BRaB7oko1nYfLX0LEWlPO1CEwGzqWVqBw0moEdcm6Yc6G5ktDOUOJeaKKpTiwwDCuAO9D4Z/s320/Capture4.PNG" width="315" /></a></div>
</div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<br /></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
If you want the particle instances to spin as well as face the camera, use the spriteTwistPP attribute and create a new per-particle vector attribute called aimUpAxisPP.<br />
<br />
Create a random value for spriteTwistPP in the creation expression, then in the runtime before dynamics expression, add the line<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">aimUpAxisPP = << 0, sin(frame*spriteTwistPP), cos(frame*spriteTwistPP) >>;</span><br />
<br />
<br /></div>
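That expression simply spins a unit vector in the YZ plane over time. A quick standalone Python check of the same maths (the frame and twist values are arbitrary):

```python
import math

def aim_up_axis(frame, twist):
    """Up vector spun in the YZ plane, as in the runtime expression:
    << 0, sin(frame * twist), cos(frame * twist) >>"""
    a = frame * twist
    return (0.0, math.sin(a), math.cos(a))

# At frame 0 the up axis is plain +Z, and it stays unit length as it spins
print(aim_up_axis(0, 0.2))  # -> (0.0, 0.0, 1.0)
up = aim_up_axis(37, 0.2)
print(sum(c * c for c in up))  # -> 1.0 (within float precision)
```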
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<br /></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
Now, for the Aim Up Axis in the Instancer options, choose aimUpAxisPP.</div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<br /></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
Now you can instance any geometry to the particle object and the instances will behave like sprite particles. You can render them with Arnold, too!</div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<br /></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<br /></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<br /></div>
Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-823382765028902335.post-68317300136184317192016-09-06T13:31:00.000-07:002016-09-08T01:58:37.843-07:00Maya Sprites with ArnoldI am doing some testing of Maya Sprites rendered with Arnold.<br />
<br />
Here is a test scene:<br />
<br />
<a href="https://drive.google.com/open?id=0BzsvxJESXf1QUGh4UUU1R0cyVk0">SpriteTest_v001</a><br />
<br />
At the moment the setup is not working. I get the same sprite image (number 1) on all the sprites, and the sprites are all oriented the same. I think the problem is that the spriteTwistPP and spriteNumPP attributes are not being passed to the renderer.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrEzrpmGj8pLPy52pmtm5SRmPZHUj0LKzhd4VTKTJO92GZjcDD_I8Q8P9Pqko2Vg17u55yB0_-fCWyA7gFwkel5USl9Iv5u0cgufmB7mYfUujlRQZzTLrpVX_t0c6l7kPtWhdBNwi0WCAB/s1600/Capture.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="392" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrEzrpmGj8pLPy52pmtm5SRmPZHUj0LKzhd4VTKTJO92GZjcDD_I8Q8P9Pqko2Vg17u55yB0_-fCWyA7gFwkel5USl9Iv5u0cgufmB7mYfUujlRQZzTLrpVX_t0c6l7kPtWhdBNwi0WCAB/s640/Capture.PNG" width="640" /></a></div>
<br />
After contacting Solid Angle support, it seems that this workflow is not currently supported, but their developers are 'looking at it', which probably means that they will fix it quite quickly.<br />
<br />
I will update this post when I have more information.<br />
<br />
<br />Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-823382765028902335.post-17293652002869339152015-12-02T08:40:00.002-08:002015-12-02T09:12:18.793-08:00Arnold aiMotionVector with transparencyI will show one method of rendering motion vectors for an object which has transparency.<br />
<br />
Here is the scenario:<br />
<br />
I have some snowflakes, which consist of simple polygon meshes instanced onto some particles.<br />
I have a circular ramp with noise to give the snowflakes a feathered edge.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJyQljkRMN79STAQWjWqkPy8Vi6yWBixMa1N-zY_xhj21bmQP_nDkybj29Wg4yji_N6-K_THG5OKjmocdaND8HC2Bq1uTGCNe6qUfxpZ_x2VD0fs-1PMN21kuKyASfLAPkTNqTcULhaA6H/s1600/Capture_01.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="393" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJyQljkRMN79STAQWjWqkPy8Vi6yWBixMa1N-zY_xhj21bmQP_nDkybj29Wg4yji_N6-K_THG5OKjmocdaND8HC2Bq1uTGCNe6qUfxpZ_x2VD0fs-1PMN21kuKyASfLAPkTNqTcULhaA6H/s640/Capture_01.PNG" width="640" /></a></div>
<br />
<br />
What I want to achieve is to render the beauty in one pass and then a motion vector pass. The motion vectors must have the same opacity as in the beauty pass.<br />
<br />
Here is the trick: In the ramp which controls the opacity, replace the white colour with an aiMotionVector node.<br />
<br />
Set the output to Raw in the aiMotionVector node.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7Rnq8jrktXqe7NrDb1GfE8CBJIrgDrj0lDaFUk7ivcfT-YgTMC7RQM4sef2FJp-vdr86d99dA_XRCEnEkuGHPrAb7AQCAVuLfH0T0MtSmdquPaBmcIu4B59Gkwh8HRhnrrLs0DqgXkYf8/s1600/Capture_09.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7Rnq8jrktXqe7NrDb1GfE8CBJIrgDrj0lDaFUk7ivcfT-YgTMC7RQM4sef2FJp-vdr86d99dA_XRCEnEkuGHPrAb7AQCAVuLfH0T0MtSmdquPaBmcIu4B59Gkwh8HRhnrrLs0DqgXkYf8/s1600/Capture_09.PNG" /></a></div>
<br />
<br />
Here is the shader network.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-yuo4DGIOORGzN3j90m6VUYebaB6t2Q4TzXzEB5q663IeSAlr7t2VHCNzobnGNvUge9EsHO4iBdeTRrd-litHGDmFjCDhko5iwRVxzjC8u8noMUHWHyl50XKdP2tmEpDTop3xOVnomn1y/s1600/Capture_02.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="431" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-yuo4DGIOORGzN3j90m6VUYebaB6t2Q4TzXzEB5q663IeSAlr7t2VHCNzobnGNvUge9EsHO4iBdeTRrd-litHGDmFjCDhko5iwRVxzjC8u8noMUHWHyl50XKdP2tmEpDTop3xOVnomn1y/s640/Capture_02.PNG" width="640" /></a></div>
<br />
Apply this shader to the snowflakes in a separate render layer. This will be the motion vector pass.<br />
<br />
<br />
For the motion vector pass, enable Motion Blur in Arnold's render settings.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSNpTl8dXGBgnS0CcAfAggFyfDva4lWQETcHQLXiffZYGshrrKwUVTMYI41wMRWtcM9SBGAfRn5jeDlqcAmQRLI7tpd_B3xBSaHaVMBhCwSexudVbpoWZ-jZorv3gbV10irnihVqIWC7tf/s1600/Capture_03.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSNpTl8dXGBgnS0CcAfAggFyfDva4lWQETcHQLXiffZYGshrrKwUVTMYI41wMRWtcM9SBGAfRn5jeDlqcAmQRLI7tpd_B3xBSaHaVMBhCwSexudVbpoWZ-jZorv3gbV10irnihVqIWC7tf/s1600/Capture_03.PNG" /></a></div>
<br />
<br />
This will give each snowflake an RGB value.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjybtYheswxikXkNWAopu4TMPfGzje1sGZkOc8JptFyXX1CKm7cN_22zzt0wAOzgKGklvODPGMSj4Z9zpYaxTNl2BoXjqDGmn4BEONU7HJeY9ccGj6IUCqKjd86ZuswDi625rXpoas43jqI/s1600/Capture_05.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="441" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjybtYheswxikXkNWAopu4TMPfGzje1sGZkOc8JptFyXX1CKm7cN_22zzt0wAOzgKGklvODPGMSj4Z9zpYaxTNl2BoXjqDGmn4BEONU7HJeY9ccGj6IUCqKjd86ZuswDi625rXpoas43jqI/s640/Capture_05.PNG" width="640" /></a></div>
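Conceptually, each snowflake's colour in this pass encodes its screen-space motion over the frame. A toy Python sketch of that idea (the channel convention here is my assumption for illustration, not the actual shader network):

```python
# Toy illustration of what the motion vector pass stores per snowflake:
# red = screen-space x displacement over the frame, green = y displacement.
# The exact channel convention is an assumption here, not the shader itself.

def motion_vector(pos_shutter_open, pos_shutter_close):
    dx = pos_shutter_close[0] - pos_shutter_open[0]
    dy = pos_shutter_close[1] - pos_shutter_open[1]
    return (dx, dy)  # written into R and G

# A flake moving 3 px right and 2 px up between shutter open and close:
uv = motion_vector((100.0, 200.0), (103.0, 202.0))
```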
<br />
<br />
However, there is a problem: we do not want the snowflakes to be motion blurred; we just want them to show the motion vectors.<br />
<br />
To stop each snowflake being rendered with motion blur, click the Ignore Motion Blur option in the Override tab of the Arnold render settings.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4_8fb29u8voZEm3P0I1mqDantjMPpvYswGfEGORJLl16xEks4GbZ39pA8eeM6TBvtXn-mtQIn5c_hvXyiM3SqHAdX0N-QUhL_bDOyUw0Zdrdw3HEt74ibqStc48cVb7H_HcA2VvTB0P3z/s1600/Capture_04.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4_8fb29u8voZEm3P0I1mqDantjMPpvYswGfEGORJLl16xEks4GbZ39pA8eeM6TBvtXn-mtQIn5c_hvXyiM3SqHAdX0N-QUhL_bDOyUw0Zdrdw3HEt74ibqStc48cVb7H_HcA2VvTB0P3z/s1600/Capture_04.PNG" /></a></div>
<br />
<br />
That will give snowflakes with opacity and motion vector information in RGB.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKnG1MWbI1KWxE4-YuAfv-rAuTDQvhuWVom2U6W1EynwPoDouKcmtJ9ZliTgfwAZUF1JZFAkJnrAk-IybqLGXfGsblxp6uypZezx5qOLuqnaWEq82uquhywkOaPL7vRaqGrP4nGfeuYX50/s1600/Capture_06.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="441" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKnG1MWbI1KWxE4-YuAfv-rAuTDQvhuWVom2U6W1EynwPoDouKcmtJ9ZliTgfwAZUF1JZFAkJnrAk-IybqLGXfGsblxp6uypZezx5qOLuqnaWEq82uquhywkOaPL7vRaqGrP4nGfeuYX50/s640/Capture_06.PNG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
One more problem remains: the position of each snowflake at the time motion blur is calculated is not the same as its position when the beauty pass is rendered. To fix this, select Start on Frame in the Motion Blur options in the Arnold render settings.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8cWU42dp5EagapgxIE4Zvu33d1NChPOKUZaXvY_ntpVvzENvHK6_hXT6vwjf6QG7jCU0OAnyro7seZ0sx-0w1En7jsA-BEbnzhE6Dk-J1YmxYnPo9lvuDKestON_lCuzXvS8sIKudJzHw/s1600/Capture_07.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8cWU42dp5EagapgxIE4Zvu33d1NChPOKUZaXvY_ntpVvzENvHK6_hXT6vwjf6QG7jCU0OAnyro7seZ0sx-0w1En7jsA-BEbnzhE6Dk-J1YmxYnPo9lvuDKestON_lCuzXvS8sIKudJzHw/s1600/Capture_07.PNG" /></a></div>
<br />
<br />
Now, into Nuke.<br />
<br />
Load the rendered beauty pass and the motion vector pass. Each snowflake should overlap perfectly (if not, check the Start on Frame option).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_0aF9sk2f222xinOa-CAu64qXgERAgmKML9gbcf4Los9ysdwHry5u-xk4kgMP_LEJ9QsxsBQgHCgqrNohIcLbK2rQwlzYvI8D0mBc-cFEh3sQVDB7UZafVDjQmNiejtm9BKDzOvCnBqDs/s1600/Capture_08.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="392" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_0aF9sk2f222xinOa-CAu64qXgERAgmKML9gbcf4Los9ysdwHry5u-xk4kgMP_LEJ9QsxsBQgHCgqrNohIcLbK2rQwlzYvI8D0mBc-cFEh3sQVDB7UZafVDjQmNiejtm9BKDzOvCnBqDs/s640/Capture_08.PNG" width="640" /></a></div>
<br />
Combine the two renders using a ShuffleCopy node:<br />
Shuffle R -> u<br />
Shuffle G -> v<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDeEuEvZ-dQB3gXtJH4TQi-W03hfUMRP_kEbxhhd9Wb5qgwPGrEkpJEcFReGG2XfztFPqN6B_nvgh5Q3-yxEZ_QJbPh-ZduWqkzu86YCEXKEcE6yJ0O1UhcAV5_xt1mFYsksazQ6OwwJVa/s1600/Capture_10.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDeEuEvZ-dQB3gXtJH4TQi-W03hfUMRP_kEbxhhd9Wb5qgwPGrEkpJEcFReGG2XfztFPqN6B_nvgh5Q3-yxEZ_QJbPh-ZduWqkzu86YCEXKEcE6yJ0O1UhcAV5_xt1mFYsksazQ6OwwJVa/s1600/Capture_10.PNG" /></a></div>
<br />
<br />
Now use a VectorBlur node to produce the motion blur effect.<br />
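What the vector blur then does with those u/v channels can be sketched in a few lines: each pixel is smeared along its stored vector. A rough, hypothetical illustration of the idea (not Nuke's actual algorithm):

```python
# Toy sketch of what a vector blur does: each source pixel is smeared
# along its per-pixel motion vector (u, v), and its value is distributed
# over the samples. This is NOT Nuke's implementation, just the idea.

def vector_blur(image, uv, samples=4):
    """image: {(x, y): value}; uv: {(x, y): (u, v)} in pixels."""
    out = {}
    for (x, y), value in image.items():
        u, v = uv.get((x, y), (0.0, 0.0))
        share = value / samples
        for i in range(samples):
            t = i / (samples - 1) if samples > 1 else 0.0
            pixel = (round(x + u * t), round(y + v * t))
            out[pixel] = out.get(pixel, 0.0) + share
    return out

# A single bright pixel with a 3-pixel horizontal vector smears into a
# streak, while the total brightness is conserved.
blurred = vector_blur({(0, 0): 1.0}, {(0, 0): (3.0, 0.0)}, samples=4)
```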
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgt89g9MSEv_8kFnPjsneLm-oPYm-CNmCpB60EXniBaCSKKniubjYYmJ-iI1B0mV0K43OEiBkS0SVHU-eYQGEdMTiVPYVCKg_o34ZEQ68Bn1PUCs5oXMT_O8o3yx5y4XSeeH8kBoFdwiyvl/s1600/Capture_11.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgt89g9MSEv_8kFnPjsneLm-oPYm-CNmCpB60EXniBaCSKKniubjYYmJ-iI1B0mV0K43OEiBkS0SVHU-eYQGEdMTiVPYVCKg_o34ZEQ68Bn1PUCs5oXMT_O8o3yx5y4XSeeH8kBoFdwiyvl/s1600/Capture_11.PNG" /></a></div>
<br />
<br />Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-823382765028902335.post-13873109468539006052015-11-19T08:50:00.002-08:002015-11-19T22:47:12.012-08:00High Resolution Cloth SimulationsHere I will describe the method I have been using to create high resolution cloth simulations, based upon low resolution pre-vis cloth. This method is derived from the work of David Knight - thanks David!<br />
<br />
1. First, create a medium resolution poly mesh which we will use to create the low res sim.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtcEnC75rlt7NKL7cbKoP7qo9EqyLUtwzDgYuva1MpwW9bOJ_OBaCgieH19Dv4pe1O0e455Bn8oYtml0hvvaQXUvk1yrbrIQliYuuznqWqX4tMfmMG9Z4vlDpjinFN0ltxBaDCsV6DyvOO/s1600/Capture_01.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="402" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtcEnC75rlt7NKL7cbKoP7qo9EqyLUtwzDgYuva1MpwW9bOJ_OBaCgieH19Dv4pe1O0e455Bn8oYtml0hvvaQXUvk1yrbrIQliYuuznqWqX4tMfmMG9Z4vlDpjinFN0ltxBaDCsV6DyvOO/s640/Capture_01.PNG" width="640" /></a><br />
<br />
In my example I have created a mesh with 80 x 160 faces. The proportions of the mesh MUST match the ratio of faces (i.e. 2:1 in my case), because nCloth works better with square faces.<br />
<br />
2. Duplicate the mesh. We will use this second mesh to 'pull' the cloth around the scene. Select around 5% of the faces from the leading edge of the Puller mesh. Invert the selection and delete the other faces. We should be left with a narrow strip of faces which exactly overlap the leading edge of the cloth.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaI1xrD-e8HI06LFei5xPAb-SHmJEAOCd92yTKoZsUcTYINVtE1SXxdAY8SgT14cnhqjazRf_yUDzbfjk-PmEWHfi1wztD2ZozkDZMtff6oU663D4GGeuSpsp6NT1n6698TgtS0oFsJNKn/s1600/Capture_02.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="402" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaI1xrD-e8HI06LFei5xPAb-SHmJEAOCd92yTKoZsUcTYINVtE1SXxdAY8SgT14cnhqjazRf_yUDzbfjk-PmEWHfi1wztD2ZozkDZMtff6oU663D4GGeuSpsp6NT1n6698TgtS0oFsJNKn/s640/Capture_02.PNG" width="640" /></a></div>
These are the faces we want to keep<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmHsgTbnfYl47y7IFh-KGO-VrmYQpc-hmKkaEvKj6iJqVUoPpYiiS2m6TL3jZprPB1d-X64NejGcXfuRlr8k5gzLjFg7S6Ikhw25sojukPJRbZX8tmmwiRtfcsQobtZEfNPaBKiQI01klB/s1600/Capture_03.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><br /></a></div>
<br />
3. Make the original mesh a nCloth.<br />
<br />
4. Select the vertices of the nCloth that correspond to the Puller object, then shift-select the Puller mesh and create a Point to Surface nConstraint.<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw9wAt2aYlI4AqImuw59laFRuM0xR9skrJnTpuvsVOHhA3dLc72tTWw_YPohLjZBk48ulVhhXp-Q5rp4PRTlomFu1QMKs14FC_y9SipzzU6X5nWs9FQ_uVPfR-GwEKCy8ppVfD-U9RXVQU/s1600/Capture_04.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="402" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw9wAt2aYlI4AqImuw59laFRuM0xR9skrJnTpuvsVOHhA3dLc72tTWw_YPohLjZBk48ulVhhXp-Q5rp4PRTlomFu1QMKs14FC_y9SipzzU6X5nWs9FQ_uVPfR-GwEKCy8ppVfD-U9RXVQU/s640/Capture_04.PNG" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
5. We want the nConstraint to have low strength, so that the puller gently guides the cloth through the scene. I have used these values:<br />
Strength = 0.05<br />
Tangent Strength = 0.05<br />
<br />
6. Now animate the Puller mesh. You can attach it to a motion path or simply set keyframes. Remember that the faster the cloth moves through the scene, the more sub-frame samples you will need to keep the cloth behaving nicely. If the motion is too jerky the cloth will go crazy, so keep the animation as smooth as you can.<br />
<br />
7. Add some noise to the cloth. No cloth behaves perfectly in reality, so add some irregularity to your simulation. One way to do this is to add a texture deformer to the Puller mesh.<br />
<br />
8. Select the Puller mesh. Create a Texture Deformer. Set the deformer's Direction to Normal. In the Texture slot, assign a Noise texture.<br />
<br />
9. We don't want the Texture Deformer to act on the Puller mesh at the start of the simulation; instead it should gradually ramp up to full strength over, say, 25 frames. To do this, key the Envelope attribute on the Texture Deformer.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk94WaUd5PMFjVV7l_LjvTWOnLL7xcmxF4cpRFif5X-TYeBxZG1R5iPvr0veCMtnGpeuw6ji3JkoSvv89azszJQkJinQjM_Gqrxrq0km85R0aPQeGw3p9CpGO0RqKrXJ1shZ6B434UuTwi/s1600/Capture_05.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="402" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk94WaUd5PMFjVV7l_LjvTWOnLL7xcmxF4cpRFif5X-TYeBxZG1R5iPvr0veCMtnGpeuw6ji3JkoSvv89azszJQkJinQjM_Gqrxrq0km85R0aPQeGw3p9CpGO0RqKrXJ1shZ6B434UuTwi/s640/Capture_05.PNG" width="640" /></a></div>
<br />
10. Set the Texture Deformer's Offset to be half of its Strength, but in the opposite direction. This will keep the Puller mesh 'centered'. To do this, apply an expression:<br />
<span style="background-color: #444444; font-size: large;"><br /></span>
<span style="color: #f6b26b; font-size: small;"><span style="background-color: #444444;"><b><span style="color: #b45f06;"><span style="font-family: "courier new" , "courier" , monospace;">textureDeformer1.offset=textureDeformer1.strength*-0.5</span></span></b></span></span><br />
<span style="font-size: small;"><br /></span>
<span style="font-size: small;"> 11. Set an expression in the noise texture Time attribute:</span><br />
<span style="color: #f6b26b; font-size: small;"><span style="background-color: #666666;"><br /></span></span>
<span style="color: #f6b26b; font-size: small;"><span style="background-color: #444444;"><span style="color: #b45f06;"><b><span style="font-family: "courier new" , "courier" , monospace;">noise1.time=time</span></b></span></span></span><br />
<br />
This will make the noise texture flow over time.<br />
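As a sanity check on why the Offset expression keeps the puller centred: assume (for illustration only) that the deformer displaces each point along its normal by roughly texture value times Strength, plus Offset, with noise values in [0, 1]. Then:

```python
# Why Offset = -0.5 * Strength keeps the Puller mesh centred.
# Assumption (for illustration only): the texture deformer displaces each
# point along its normal by roughly texture_value * strength + offset,
# with noise texture values in the range [0, 1].

def displacement(texture_value, strength, offset):
    return texture_value * strength + offset

strength = 2.0
offset = strength * -0.5  # the expression applied above

lo = displacement(0.0, strength, offset)   # darkest noise pushes inward
mid = displacement(0.5, strength, offset)  # average noise: stays in place
hi = displacement(1.0, strength, offset)   # brightest noise pushes outward
```

With the offset in place the extremes are symmetric about the rest position, so the noise wobbles the mesh instead of inflating it.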
<br />
12. Add some wind, gravity or other forces if you like. Now simulate!<br />
<br />
13. Now we have a low resolution mesh. We need to make a high resolution version, but with extra details.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwbQVdgJ0bVcNgTl0H1IC-tcVDXKkVZpne5xf-hMfV6M7hOYZ3T7PihL9o4sL84-1UI6yiVgHFPbAWLEs2tqdZGO5MWtpT6Jx384459EnvZzuRpOyOY7uVRVaS7gVyFTLi3XTNDejC4dgq/s1600/Capture_06.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="402" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwbQVdgJ0bVcNgTl0H1IC-tcVDXKkVZpne5xf-hMfV6M7hOYZ3T7PihL9o4sL84-1UI6yiVgHFPbAWLEs2tqdZGO5MWtpT6Jx384459EnvZzuRpOyOY7uVRVaS7gVyFTLi3XTNDejC4dgq/s640/Capture_06.PNG" width="640" /></a></div>
<br />
14. The first thing to do is apply a Smooth to the low res mesh: Mesh > Smooth, with 2 divisions. In my example, this gives a mesh with ~200,000 faces.<br />
<br />
15. Export this mesh as an alembic cache. Pipeline Cache > Export Selection to Alembic. This is quite slow! Save your scene as LowRes.<br />
<br />
16. I recommend that you do the next steps in a fresh scene. Not only will this be faster, lighter and easier to organise, but it will also be much easier to go back to the low res scene at any time and re-export any changes you need to make. Once re-exported, you can easily re-import the Alembic cache file into the high res scene, without any fuss.<br />
<br />
17. In a new scene, import the Alembic cache. Duplicate it. Make the duplicate an nCloth object.<br />
<br />
18. Constrain the nCloth to the Alembic mesh. Select the cloth, then the Alembic mesh, and then create an Attract to Matching Mesh nConstraint.<br />
<br />
19. Again, we want the constraint to 'guide' the cloth, rather than drag it too strongly. Here are the settings I use, although, of course, they will depend on the scene scale and what you want the cloth to do.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBz_rW7Srd5oY2W0wMJsfrU4iJfSFHP9Tp8Ea87RVL6B8YixqLccDK3NbMtmYUhUrGuYeYnbiaO69HegOuLxUPRW0wu92Ig2d6gzjHep2m8pyybGt9gBHqjIocfhJXYekyHyc6uiaB0KDO/s1600/Capture_08.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBz_rW7Srd5oY2W0wMJsfrU4iJfSFHP9Tp8Ea87RVL6B8YixqLccDK3NbMtmYUhUrGuYeYnbiaO69HegOuLxUPRW0wu92Ig2d6gzjHep2m8pyybGt9gBHqjIocfhJXYekyHyc6uiaB0KDO/s1600/Capture_08.PNG" /></a></div>
<br />
Notice the Strength Drop Off ramp. This allows the cloth to move freely while it is near the Alembic guide, but the constraint kicks in as the cloth moves away from the guide.<br />
<br />
20. Now simulate this high resolution cloth. Hopefully you will see that it follows the Alembic guide quite closely, but also has some extra detail. I have not changed any nCloth attributes apart from the self-collision width; all the motion is made with the constraints.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmfiJiMJlLn6y0qyzOz11vgbzd1CIgn0KvGiOc1eYHt4NwLAxvej02Fn9zvEctfzkwFsABcLIu4bs1DHfApa2fQ6Y2_YQQdJdZ_lLTn2pEmqO48UFYUp786mIZoLJQThSY8UadKJ-hd1Nc/s1600/Capture_09.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="402" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmfiJiMJlLn6y0qyzOz11vgbzd1CIgn0KvGiOc1eYHt4NwLAxvej02Fn9zvEctfzkwFsABcLIu4bs1DHfApa2fQ6Y2_YQQdJdZ_lLTn2pEmqO48UFYUp786mIZoLJQThSY8UadKJ-hd1Nc/s640/Capture_09.PNG" width="640" /></a></div>
<br />
Here is one I made earlier<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/146289585" webkitallowfullscreen="" width="500"></iframe> <br />
<a href="https://vimeo.com/146289585">High Resolution nCloth test</a> from <a href="https://vimeo.com/user11558205">Daniel Sidi</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-823382765028902335.post-91152542026281060842015-05-18T04:42:00.001-07:002015-05-18T05:05:01.425-07:00Blending nCloth caches using Blendshapes<br />
With many thanks to <a href="http://www.davidknight3d.com/">David Knight</a>, nCloth guru, I present his method for blending two nCloth caches on a per-vertex basis. You can have one half of an nCloth following one cache and the other half following a different cache.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwzbXOBvFlVroONkIjziNLZwa9MXBSgw-JUwe9Y06m5iZ1hoEyvnkzDfZkv_U1wARr5yS8JuV5hzXyuXm1DSaljHoGyOct2KiesP5aS9lsEV1fNhxcSQh7tfL9uHMyqolxVeCcnNUztl0E/s1600/blend_07.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="314" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwzbXOBvFlVroONkIjziNLZwa9MXBSgw-JUwe9Y06m5iZ1hoEyvnkzDfZkv_U1wARr5yS8JuV5hzXyuXm1DSaljHoGyOct2KiesP5aS9lsEV1fNhxcSQh7tfL9uHMyqolxVeCcnNUztl0E/s640/blend_07.PNG" width="640" /></a></div>
<br />
<br />
1. Create two simulations of your cloth, using a copy of the mesh for each. If the meshes do not match exactly (same number of vertices), this method of blending will not work.<br />
<br />
In my example I have one wide simulation and one which is narrow.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibbBYlhCU89v6GnW2izZuU9U5C6fwM9JaWIZstrftqOEokykWQDMArNc57IOOKgxH_4CfRO6GwjgRwZRDyWixfwRsATopaEaeXs34at9mCYql5Jj4MJeEKbiBUAWqpcA7NJXvxE-x4gxrY/s1600/blend_02.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="369" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibbBYlhCU89v6GnW2izZuU9U5C6fwM9JaWIZstrftqOEokykWQDMArNc57IOOKgxH_4CfRO6GwjgRwZRDyWixfwRsATopaEaeXs34at9mCYql5Jj4MJeEKbiBUAWqpcA7NJXvxE-x4gxrY/s640/blend_02.PNG" width="640" /></a></div>
<br />
2. Cache your simulations.<br />
<br />
3. Make another copy of the mesh and label it 'blendMesh'.<br />
<br />
4. Select the two nCloth meshes and finally shift-select blendMesh. Create a Blend Shape deformer (Create Deformers > Blend Shape)<br />
<br />
5. In the Blend Shape attributes, set the weights for each input to 1.0<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjG7FQgZx8koyIji54vXZ68fuH4XGLJ_WlMrAj2emhsdmnno6ttJaGk41sakMROncE_iqALYoNxmgWIpGzJtBR1EAsx9nhYmjuVTzLDFhvHpvULZ85HaMSnM3xAzRMBio3bnpzna8c9dDjD/s1600/blend_04.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjG7FQgZx8koyIji54vXZ68fuH4XGLJ_WlMrAj2emhsdmnno6ttJaGk41sakMROncE_iqALYoNxmgWIpGzJtBR1EAsx9nhYmjuVTzLDFhvHpvULZ85HaMSnM3xAzRMBio3bnpzna8c9dDjD/s1600/blend_04.PNG" /></a></div>
<br />
<br />
6. Assign weights per vertex. You could open the Paint Blend Weights Tool (in the Edit Deformers menu), but do not paint blend weights by hand: the sum of the blend weights on each vertex must equal exactly 1.0, and painting does not allow that level of control. You can edit blend weights per vertex in the Component Editor, but it is also possible to use an image to set the weights.<br />
<br />
7. I created some ramps in Photoshop and saved them as TIF files. First I created the blendMap_H ramp, then I inverted the image (Ctrl-I), which subtracts the value of each pixel from 1.0. That inverted image becomes blendMap_H_inverted. This ensures that when the two ramps are added together, the result equals 1.0.<br />
<br />
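The invert trick can be checked with a few lines of plain Python standing in for the image files (the list names mirror the ramps above):

```python
# Plain-Python stand-in for the Photoshop step: build a horizontal ramp and
# its Ctrl-I inverse (1 - value), so the two blend weights on every vertex
# sum to exactly 1.0. The lists mirror blendMap_H / blendMap_H_inverted.

width = 8
blend_map = [x / (width - 1) for x in range(width)]    # 0.0 .. 1.0 ramp
blend_map_inverted = [1.0 - v for v in blend_map]      # the Ctrl-I invert

# With weights that sum to 1.0, the blended mesh never over- or
# under-shoots either cache.
sums = [a + b for a, b in zip(blend_map, blend_map_inverted)]
```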
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTYnKCUBIfbEIiqOaAAIeNdndYOWHoB23Dm9nSfgAr5OaRNgTEZSNy8P99MJzEDL8oykJqUUIKW_tuXE1Do_jgNDMDSyVl4oYXjFUCO8CMkCD-HgI1l-b8LGvTdKUXacO7h9Fpp9cqD3F-/s1600/blend_06.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTYnKCUBIfbEIiqOaAAIeNdndYOWHoB23Dm9nSfgAr5OaRNgTEZSNy8P99MJzEDL8oykJqUUIKW_tuXE1Do_jgNDMDSyVl4oYXjFUCO8CMkCD-HgI1l-b8LGvTdKUXacO7h9Fpp9cqD3F-/s1600/blend_06.PNG" /></a></div>
<br />
I followed the same procedure to create the vertical ramps. Which version of the ramp you need will depend on the orientation of your simulations. It's useful to have every combination of ramps saved in a library.<br />
<br />
8. Apply the blendMap ramp to the Blend Shape deformer. Choose one of the Targets on the Blend Shape node and then under the Attribute Maps section, press Import and browse to where the blendMap ramps are stored.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9NS1xxlBQcW2e2mwMWYWLbmTDuYJ07okgT-OF4wrC1y7wxe6Ynp31HXh6fowUSQV_VVcQIqNjXcAe3TpKPIKdvNaldz03Z_OayD3I8dFIgsxG6vQ6O-Ea4Pw0rj_MYvZJfe01vmB7ujsw/s1600/blend_05.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9NS1xxlBQcW2e2mwMWYWLbmTDuYJ07okgT-OF4wrC1y7wxe6Ynp31HXh6fowUSQV_VVcQIqNjXcAe3TpKPIKdvNaldz03Z_OayD3I8dFIgsxG6vQ6O-Ea4Pw0rj_MYvZJfe01vmB7ujsw/s1600/blend_05.PNG" /></a></div>
<br />
Once the blendMap is assigned to the first target, choose the second target and assign the inverted blendMap to it.<br />
<br />
That's it. You should now have a mesh in which one end follows one cache and the other end follows a different cache.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKh9QhK1HMcFerjpTBNhyd-s3rWrHFQF9fy-oTsTfzt3stRAYpQQhuBwADHe_Oy0EDiQkQLWeNU3Ck78lfqmkZ0F4kGEuT5OgVEJnrCmMkdO-5GYTjAgM-4f-vSzcprFIrACryvA8TDieX/s1600/blend_03.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="314" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKh9QhK1HMcFerjpTBNhyd-s3rWrHFQF9fy-oTsTfzt3stRAYpQQhuBwADHe_Oy0EDiQkQLWeNU3Ck78lfqmkZ0F4kGEuT5OgVEJnrCmMkdO-5GYTjAgM-4f-vSzcprFIrACryvA8TDieX/s640/blend_03.PNG" width="640" /></a></div>
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-88045003603555318762015-05-07T03:47:00.000-07:002015-05-07T04:46:59.272-07:00nCloth Matching Mesh ConstraintIf you want a high resolution nCloth, it can be very slow to simulate. One method is to generate a low resolution nCloth to produce the large scale movement that you require and then simulate a high resolution nCloth which follows the low resolution mesh on the large scale but will display small scale details of its own.<br />
<br />
Here is one way to set up this system.<br />
<br />
<ol>
<li>Create a low resolution nCloth and simulate the large-scale motion. I will call that low resolution nCloth mesh "cloth_L0"</li>
<li>Cache cloth_L0</li>
<li>Smooth cloth_L0 using Mesh > Smooth. Be careful not to add too many divisions, as very high subdivision levels will slow the simulation significantly. I usually start with 1 and repeat the process if I need more detail.</li>
<li>Export the smoothed cloth_L0 as Alembic using Pipeline Cache > Alembic Cache > Export Selected to Alembic. If you want to preserve UVs, remember to tick the check box in the options box.</li>
<li>Import the Alembic file back into your scene. Rename the imported mesh "Alembic_Import_L1".</li>
<li>Duplicate Alembic_Import_L1. Rename the duplicate "cloth_L1"</li>
<li>Create an nCloth from cloth_L1</li>
<li>Select cloth_L1 and shift-select Alembic_Import_L1, then create an Attract to Matching Mesh constraint using nConstraint > Attract to Matching Mesh </li>
<li>In the constraint, choose a Dropoff Distance that makes sense in your scene. You want cloth_L1 to be able to deviate just enough from Alembic_Import_L1 to add some good detail, but not so much that it no longer follows the large scale motion of the original simulation.</li>
<li>In the Strength Dropoff ramp, create a profile with a value of 0 on the left and 1 on the right. An exponential curve works well.</li>
<li>Tune the forces acting on cloth_L1 to give a variation over the movement of cloth_L0.</li>
</ol>
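The ramp-driven attraction in steps 8-10 can be sketched in a few lines of plain Python. This is a hypothetical simplification of what the constraint does, not Maya's actual solver; attract(), the dropoff distance and the ramp shape are all illustrative:

```python
# Hypothetical 1-D simplification of the Attract to Matching Mesh idea:
# the pull toward the matching guide vertex is scaled by a strength ramp
# evaluated over normalised distance. Not Maya's actual solver.

def attract(pos, guide, dropoff_distance, ramp):
    """Return the pull applied to a cloth vertex toward its guide vertex."""
    dx = guide - pos
    t = min(abs(dx) / dropoff_distance, 1.0)  # 0 at the guide, 1 at dropoff
    return ramp(t) * dx

ramp = lambda t: t ** 2  # 0 on the left, 1 on the right, exponential-ish

near = attract(0.1, 0.0, 1.0, ramp)  # close to the guide: nearly free
far = attract(0.9, 0.0, 1.0, ramp)   # far from the guide: pulled back hard
```

This is why the cloth is free to wrinkle near the guide but snaps back as it drifts toward the dropoff distance.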
You should now have a high resolution nCloth which follows a low resolution cloth but has extra details. This process can be applied any number of times, depending on the power of your workstation. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSJUvrzmYVsLay2U8dfuzGLWcnDMwlut2q4CCvqLxF24biX5VhcBWBiL8C_xEupTljsvqlFlFanJwJpGhEZIjHvAqBZk92A_ly881TuTO24VXmXjPacfGkVPBddbwYP6zvoS2MykKgGkQT/s1600/matchingMesh_01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="386" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSJUvrzmYVsLay2U8dfuzGLWcnDMwlut2q4CCvqLxF24biX5VhcBWBiL8C_xEupTljsvqlFlFanJwJpGhEZIjHvAqBZk92A_ly881TuTO24VXmXjPacfGkVPBddbwYP6zvoS2MykKgGkQT/s640/matchingMesh_01.png" width="640" /></a></div>
<br />
In my example above I have chosen to use a division level of 2 because the original mesh was so low resolution I knew I would require quite a lot more resolution to get more detail.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-56590049214977390032015-05-06T01:19:00.001-07:002015-05-06T01:39:10.641-07:00Velocity Field from Moving GeometryIf you want to create a velocity field from a moving mesh, here is a way to do it:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiL1FiCLsdTbJK30pdW0kj7CubN0ebzO63xsAarrrsTlrnT9fAFrTzZWJHV1IVNbO5KuJXzh2BUIaUXCknUXqNuq5Jgx0yIHAlbmieBmp3Z9d_9sFlxceJGHFuOVcJ75wj2zGkoZguZjAtJ/s1600/velocityField_01.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiL1FiCLsdTbJK30pdW0kj7CubN0ebzO63xsAarrrsTlrnT9fAFrTzZWJHV1IVNbO5KuJXzh2BUIaUXCknUXqNuq5Jgx0yIHAlbmieBmp3Z9d_9sFlxceJGHFuOVcJ75wj2zGkoZguZjAtJ/s1600/velocityField_01.JPG" height="361" width="640" /></a></div>
<br />
<br />
<br />
1. With your geometry selected, emit nParticles.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhye9BYeuK7tbtwto0PyZuZ8BwVueCSkBJ5P4hdTBYfUcErKE5f5jc0dCwbgdTuSMFztUDcS_FfR0aVyZfMESx6Bo2pFpcTOmwiClDhNY_QiwtBRDnC2xNAmD3o8yV-drV1y0dvNJYPyROu/s1600/velocityField_02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhye9BYeuK7tbtwto0PyZuZ8BwVueCSkBJ5P4hdTBYfUcErKE5f5jc0dCwbgdTuSMFztUDcS_FfR0aVyZfMESx6Bo2pFpcTOmwiClDhNY_QiwtBRDnC2xNAmD3o8yV-drV1y0dvNJYPyROu/s1600/velocityField_02.png" height="419" width="640" /></a></div>
<br />
<br />
2. For the emitter, set:<br />
<ul>
<li>Emitter Type to 'surface'</li>
<li>Increase the rate to, say, 50000 (depending on the size of your mesh)</li>
<li>Key the emission rate so that emission stops after a couple of frames. </li>
<li>Emission Speed and Normal Speed to 0</li>
<li>Check the 'Need Parent UV' option</li>
</ul>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyefiJ94a4_wktZSq72g24Nx9RnjpC0WV2LXrWBUUchSDbvVVAiFWb_hq49OgUsbLTYPDQoJHkS1fbOixbxwsYKXuZe5kIawQjDmo_i4WtGIYR2VUnfNcVtjqNO3fhzs4lC8BCj0Y4vu5Y/s1600/velocityField_03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyefiJ94a4_wktZSq72g24Nx9RnjpC0WV2LXrWBUUchSDbvVVAiFWb_hq49OgUsbLTYPDQoJHkS1fbOixbxwsYKXuZe5kIawQjDmo_i4WtGIYR2VUnfNcVtjqNO3fhzs4lC8BCj0Y4vu5Y/s1600/velocityField_03.png" /></a></div>
<br />
4. Add the following per-particle attributes:<br />
<ul>
<li>parentU</li>
<li>parentV</li>
<li>goalU</li>
<li>goalV</li>
</ul>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDuebK17Ixorq2qfQHFyhP1Cn0gbxe6FZPTxFJGfjOfDiUiPEmMRag9H13VRChJ-EBTHwSrxvpvkynHfgM5LrlSQWIb5fPZSdkeHvydeiLkAsGTfkLsO2bOPcujW1b9JSKcNx1Ja8uj7e4/s1600/velocityField_04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDuebK17Ixorq2qfQHFyhP1Cn0gbxe6FZPTxFJGfjOfDiUiPEmMRag9H13VRChJ-EBTHwSrxvpvkynHfgM5LrlSQWIb5fPZSdkeHvydeiLkAsGTfkLsO2bOPcujW1b9JSKcNx1Ja8uj7e4/s1600/velocityField_04.png" height="552" width="640" /></a></div>
<br />
<br />
5. Make a creation expression on the nParticle object:<br />
<br />
goalU=parentU;<br />
goalV=parentV;<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhx3i2HZzpbjCtFZUPOcOG66tjGfpQo8rdqBw2b9FyuWkfBFeeIO5stDVljErs0DtIud7ZRXapKKuZ7PwV8irZvCI1-YyV4t0-NZhlq43UTSPhd6QXxwyNRGTNzKIEBxUi431HvcrhNLgKc/s1600/velocityField_05.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhx3i2HZzpbjCtFZUPOcOG66tjGfpQo8rdqBw2b9FyuWkfBFeeIO5stDVljErs0DtIud7ZRXapKKuZ7PwV8irZvCI1-YyV4t0-NZhlq43UTSPhd6QXxwyNRGTNzKIEBxUi431HvcrhNLgKc/s1600/velocityField_05.png" height="592" width="640" /></a></div>
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
<br /></div>
<br />
6. Assign the geometry mesh as a goal for the nParticles. Set the Goal Smoothness to 0 and Goal Weight to 1.0<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL1HEsWgzNReziGH8BPO1SvX_8hwbWqANb4CuY1g51-VGKjKuaLaBrOYoq9HyF89voEsiDLFThHd1l1_gKsb8iZyS02wrmy1dR6HadUB75FxA5iTjcVw6QwYiAnzRLrjQQW2oMqI98zZU1/s1600/velocityField_06.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL1HEsWgzNReziGH8BPO1SvX_8hwbWqANb4CuY1g51-VGKjKuaLaBrOYoq9HyF89voEsiDLFThHd1l1_gKsb8iZyS02wrmy1dR6HadUB75FxA5iTjcVw6QwYiAnzRLrjQQW2oMqI98zZU1/s1600/velocityField_06.png" height="592" width="640" /></a></div>
<br />
<br />
<br />
Now you should have some particles sticking to the mesh.<br />
<br />
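Why this sticks: with goalU = parentU and goalV = parentV, each particle's goal is evaluated at the UV where it was born, so it rides the deforming surface. A toy stand-in (surface() here is a hypothetical animated mesh, not a Maya API call):

```python
# Why the particles stick: with goalU = parentU and goalV = parentV, each
# particle's goal position is looked up at the UV where it was born, so it
# rides the deforming surface. surface() is a hypothetical stand-in for the
# animated mesh, not a Maya API call.

def surface(u, v, frame):
    # toy animated surface: slides 0.1 units in x per frame
    return (u + 0.1 * frame, v, 0.0)

birth_uv = (0.25, 0.5)  # parentU / parentV captured at emission

p_frame1 = surface(*birth_uv, 1)    # particle position on frame 1
p_frame10 = surface(*birth_uv, 10)  # same surface point, 9 frames later
```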
7. Create a fluid container. You can use auto-resize if you want.<br />
<br />
8. Select the fluid and the nParticles and create a fluid emitter.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiF0a5GrXmzdY3snvp6HMyRaWCebXkRtltJzLV0ogxWnv9a29HKBZtL_XDxFcaqPgQz_sQZgHJluhNRezBlY2769Lw-C0xKkdC1GUIGHN4Dp3ofc7p5_BVCSZUY44820q__fV8zSRswE8tk/s1600/velocityField_07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiF0a5GrXmzdY3snvp6HMyRaWCebXkRtltJzLV0ogxWnv9a29HKBZtL_XDxFcaqPgQz_sQZgHJluhNRezBlY2769Lw-C0xKkdC1GUIGHN4Dp3ofc7p5_BVCSZUY44820q__fV8zSRswE8tk/s1600/velocityField_07.png" height="458" width="640" /></a></div>
<br />
9. Set the emission to zero for Density, Heat and Fuel. Set the emission speed attributes to 'Add' and the Inherit Velocity to a value greater than zero.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgZke5yjJaLlYft6rNOCYLGZoPfdWWdp9ZEUu8PziV83MhlNyNDIJl9il5MXrp_gl6E85V3fXhnfGGrA7zNwfua4MpcCgdZNhUvQ39xA7nJOyON_agFwE7b8kng-YwUyhBz1mfyx1Uxy4E/s1600/velocityField_08.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgZke5yjJaLlYft6rNOCYLGZoPfdWWdp9ZEUu8PziV83MhlNyNDIJl9il5MXrp_gl6E85V3fXhnfGGrA7zNwfua4MpcCgdZNhUvQ39xA7nJOyON_agFwE7b8kng-YwUyhBz1mfyx1Uxy4E/s1600/velocityField_08.png" height="502" width="640" /></a></div>
<br />
<br />
That's it. You should now have the nParticles emitting velocity in the fluid. You can visualise the velocity field with the Velocity Draw option on the Fluid shape node.<br />
<br />
You can use the velocities generated by this method to drive other simulations - nCloth, particles or fluids.<br />
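To make the idea concrete, here is a minimal sketch in plain Python (an illustrative stand-in, not Maya's fluid API) of how a sampled velocity field drives a secondary simulation: sample the grid velocity at a point, then advect the point one frame along it. The sparse-grid layout and function names are my own assumptions.

```python
# Illustrative stand-in for a fluid velocity field: a sparse grid
# mapping integer cell coordinates to velocity vectors.

def sample_velocity(field, pos, cell_size=1.0):
    """Nearest-cell lookup; cells with no entry contribute zero velocity."""
    key = tuple(int(c // cell_size) for c in pos)
    return field.get(key, (0.0, 0.0, 0.0))

def advect(field, pos, dt=1.0 / 24.0):
    """Move a point one frame along the locally sampled velocity."""
    v = sample_velocity(field, pos)
    return tuple(p + vi * dt for p, vi in zip(pos, v))
```

A particle, nCloth vertex, or density sample advected this way inherits the motion of the original mesh, which is what the Inherit Velocity setting feeds into the container.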
<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-86498699721872740552015-04-22T07:15:00.000-07:002015-04-22T07:15:50.696-07:00Softimage button to apply a saved preset to a tool<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<br />
Let's say you have a good preset for the Curve_To_Mesh tool and you want to apply that preset many times to different curves. It is slow to keep loading the preset manually each time you apply the tool.<br />
<br />
Here is a way to apply the tool and then apply the preset in one handy button:<br />
<br />
Firstly create the preset for the tool and save it somewhere. You will need the path to the preset later.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaaJzzMlazdGAMXQWXfWdWVKpR2tCtEE5Tf2S6yCF_XX3yff85bvO7ODEeKNQIQbvzN2rHpFtw4d3Dgmhicdh6XQt835h7X3IKT8UmQjDfXl5Mjs_6eRY8LmUs5DiIAEpqYG0O83VUEWFy/s1600/softPreset_01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaaJzzMlazdGAMXQWXfWdWVKpR2tCtEE5Tf2S6yCF_XX3yff85bvO7ODEeKNQIQbvzN2rHpFtw4d3Dgmhicdh6XQt835h7X3IKT8UmQjDfXl5Mjs_6eRY8LmUs5DiIAEpqYG0O83VUEWFy/s1600/softPreset_01.jpg" height="203" width="320" /></a></div>
<br />
<span id="goog_1949111000"></span><span id="goog_1949111001"></span><br />
<br />
Next, open the script editor and copy the command from a previous usage. There are some command arguments that I am not yet familiar with, so copying from a previous usage guarantees that the syntax is correct.<br />
<br />
<br />
<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">for each curveObj in Selection<br /> ApplyGenOp "CurveListToMesh", , curveObj, siUnspecified, siPersistentOperation, siKeepGenOpInputs<br />next<br /><br /><br />for each polyObj in Selection<br />LoadPreset "C:\Users\3d\Autodesk\Softimage_2012_Subscription_Advantage_Pack\Data\DSPresets\Operators\d1.Preset", (polyObj+".polymsh.CurveListToMesh")<br />next</span><br />
<br />
<br />
<br />
<br />
Now I create a new Shelf<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxbhCFFkRjttfpxawEYCZYeHlB5iHWkrvn7x7iK_BbOeq3kYz0FdBnKhiX-ZULN8Q0uGavS5FWu1OMnWvPXNx2Qv9quJ02z2g4s5kStGAdVMT9usH87kdeW89VIqXewANviQTYnvy5rucj/s1600/softPreset_02.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxbhCFFkRjttfpxawEYCZYeHlB5iHWkrvn7x7iK_BbOeq3kYz0FdBnKhiX-ZULN8Q0uGavS5FWu1OMnWvPXNx2Qv9quJ02z2g4s5kStGAdVMT9usH87kdeW89VIqXewANviQTYnvy5rucj/s1600/softPreset_02.jpg" height="179" width="320" /></a></div>
<br />
<br />
In the new shelf, I create a new Toolbar<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF5C4Vl3Pvl0mINPtf2iE9X8l5eMQkn3PBXHcTgj9W5MAnrfnxC9BQFXi0QIC5Ngwqo5zdSuXBJqgNNxpFwvOopnbEq_BtAhxCgyGehcbZtZhMYFw4b1d_wI7_q45oMKDW_oQht55wFO4K/s1600/softPreset_03.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF5C4Vl3Pvl0mINPtf2iE9X8l5eMQkn3PBXHcTgj9W5MAnrfnxC9BQFXi0QIC5Ngwqo5zdSuXBJqgNNxpFwvOopnbEq_BtAhxCgyGehcbZtZhMYFw4b1d_wI7_q45oMKDW_oQht55wFO4K/s1600/softPreset_03.jpg" height="245" width="320" /></a></div>
<br />
Now I can drag my code from the script editor into the toolbar. That creates a button.<br />
<br />
<br />
<br />
<br />
The first loop reads the selection and runs the tool on each selected curve.<br />
<br />
Softimage will have the newly created poly mesh object already selected, which makes the next part so much easier.<br />
<br />
The second loop gets the name of the selected poly object and applies the preset to the stack. This is where you will need the path to the preset. Also, note the syntax of the last argument.<br />
<br />
Having come from a Maya and MEL background I found this syntax really easy to pick up.<br />
<br />Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-823382765028902335.post-37333064331232582272015-04-20T05:52:00.002-07:002015-04-20T05:54:01.943-07:00Extending a camera track in PFTrackIf you have tracked a shot in PFTrack and the shot then gets extended, here is the workflow that worked for me to extend the track while keeping the old solve.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<ol>
<li>Import the extended clip</li>
<li>Copy your node tree in PFTrack. I created a new Node Page using the P+ button.<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8fuECsatsqGQPDtCaNC5M-cxAlSlX9SGK1WTjq7hiXsDfQPcqqHLdpRGM_WEkumU4vyUfPmkQReK19i1zkKV7K2oLuTFzGylkwE1SotzmSUWueIEfaHKugm8U1ibjQs6YEnWvu7mAFkC-/s1600/pft_01.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8fuECsatsqGQPDtCaNC5M-cxAlSlX9SGK1WTjq7hiXsDfQPcqqHLdpRGM_WEkumU4vyUfPmkQReK19i1zkKV7K2oLuTFzGylkwE1SotzmSUWueIEfaHKugm8U1ibjQs6YEnWvu7mAFkC-/s1600/pft_01.JPG" height="320" width="239" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
</li>
<li>Paste your node tree into the new node page. I do this so that I don't accidentally overwrite the existing solve.</li>
<li>If you have any User Tracks, select and export them.<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBzQjxLuGckJiIvt-srH5tXxZdZlIcPVVT5-yDBqxUDMiH67sus8JQ7cKAjpYlTZzK8mkBowIE1Fba-KPUzA65SXkb1mi1Z6sa0pG8OYU1vYzWg2L3gEYyFZjVyaZhulTeOsg0TDorDmwd/s1600/pft_02.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBzQjxLuGckJiIvt-srH5tXxZdZlIcPVVT5-yDBqxUDMiH67sus8JQ7cKAjpYlTZzK8mkBowIE1Fba-KPUzA65SXkb1mi1Z6sa0pG8OYU1vYzWg2L3gEYyFZjVyaZhulTeOsg0TDorDmwd/s1600/pft_02.JPG" height="110" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
</li>
<li>If you have any Auto Tracks, select and export them as well.</li>
<li>Connect the new clip with the extra frames into the top of your tree.<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwOpoQ8qlt8aDKCkO-LEzvZKqQ-EjI4dCFFN6FV2h3MxSVBMe203TDrwBvplp071YMj5GnuOQZn8PGUCORi7Y-uzGUgBjtJlWuf_wGOKEtWaA5JMXQ0zNovD_1swmUrQ7zHePWRFVKxyyF/s1600/pft_03.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwOpoQ8qlt8aDKCkO-LEzvZKqQ-EjI4dCFFN6FV2h3MxSVBMe203TDrwBvplp071YMj5GnuOQZn8PGUCORi7Y-uzGUgBjtJlWuf_wGOKEtWaA5JMXQ0zNovD_1swmUrQ7zHePWRFVKxyyF/s1600/pft_03.JPG" height="320" width="281" /></a></div>
</li>
<li>When you connect the new clip, the User Tracks and the Auto Tracks will not work anymore. Select all the User Tracks and delete them. Then import the tracks you exported in step 4.<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigLmxBVdbfH9nU-3F_2KjBdIKLKW6_Hd5dq3jM9wMjC2rtUjQITGON9jCsQben7FhKrwryCV48iwmCEOHEmx4Xz624SyNyC1JUdEwJBBJxL34UM1fS036DcCIz22kpY5FgI8AUh4OOZs8F/s1600/pft_04.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigLmxBVdbfH9nU-3F_2KjBdIKLKW6_Hd5dq3jM9wMjC2rtUjQITGON9jCsQben7FhKrwryCV48iwmCEOHEmx4Xz624SyNyC1JUdEwJBBJxL34UM1fS036DcCIz22kpY5FgI8AUh4OOZs8F/s1600/pft_04.JPG" height="247" width="320" /></a></div>
</li>
<li>Do the same with the Auto Tracks.</li>
<li>Your User Tracks will now have keyframes only where they were previously tracked. You now need to track the un-tracked frames for all of those User Tracks. Select them and press the Track button in the direction you need to fill.<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtcp4aWl1EI9iEZW_jjdMKrSJ8sa_ug0JsUQQNdVLrQdmJjSYBRAe197xZdYKpJu2SLHwwBrznRYOSe_w3DOKQG5GBP3r4GVuNY9ugQifG3bQQD0iVhd2ACpG4hl1bVTwio-Nxj6hCiqYB/s1600/pft_05.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtcp4aWl1EI9iEZW_jjdMKrSJ8sa_ug0JsUQQNdVLrQdmJjSYBRAe197xZdYKpJu2SLHwwBrznRYOSe_w3DOKQG5GBP3r4GVuNY9ugQifG3bQQD0iVhd2ACpG4hl1bVTwio-Nxj6hCiqYB/s1600/pft_05.JPG" height="111" width="320" /></a></div>
</li>
<li>The Auto Tracks will also need to be tracked for the missing frames. Simply select them all and press the Auto Track button. Select 'extend' when the dialogue box appears.</li>
<li>You now have all the trackers in 2D, they need to be solved for 3D. Go to the Camera Solver node and press the Solve Trackers button. <div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7kz5QOypVD7UvHDDUjxrtg5tHZwZmRDAkyFMM-Wdd3b4ncmD_1Y-FN-_UY_5PCwxPn4lssELrHlZYxlGGupMXIZWkipZGv5vEHcUTWcL-iHZquN3bmHjnu1oazmWB366SOcP4hHxjuDvH/s1600/pft_06.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7kz5QOypVD7UvHDDUjxrtg5tHZwZmRDAkyFMM-Wdd3b4ncmD_1Y-FN-_UY_5PCwxPn4lssELrHlZYxlGGupMXIZWkipZGv5vEHcUTWcL-iHZquN3bmHjnu1oazmWB366SOcP4hHxjuDvH/s1600/pft_06.JPG" height="171" width="320" /></a></div>
</li>
<li>Now you are ready to extend the camera solve. In the Camera Solver node, press the extend button in the direction you need. The camera solve will extend out to the new frames and you should now have a camera for the whole shot which does not deviate from the old solve.</li>
</ol>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-82297779242485885732014-08-13T03:35:00.003-07:002014-09-02T08:58:49.530-07:00Per-Particle Field AttributesHere is an incredibly handy tip from the <a href="http://www.fxphd.com/" target="_blank"><b>FXPHD</b></a> training course <b>MYA217 Maya Effects</b> by <a href="http://www.sputnikstudio.com/" target="_blank"><b>Pasha Ivanov</b></a>.<br />
<br />
If you have a particle system being affected by a field, you can control the magnitude (or any other parameter) of the field on a per-particle basis. Here's how:<br />
<br />
Let's say <b>starL_nParticle</b> is being driven by <b>approachCurve_volumeAxisField</b><br />
<ol>
<li>Create a new attribute on starL_nParticle</li>
<li>Make the new attribute per-particle (array)</li>
<li>Name the new attribute <b>approachCurve_volumeAxisField_magnitude</b></li>
</ol>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQFtWhrY5qfO9eeXPPPSG2cF8qjum2HvsCpwnVw1gAO3IzlHz9crhKC832g3Yh-3A1Y5JaS3ofvc4zaoz95CjD3ZEz5RCvAFl6eaHgwsibDGVazjNuoJVrvZsPqRA7P0tTUq9OjUR4Xx0s/s1600/fapp_01.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQFtWhrY5qfO9eeXPPPSG2cF8qjum2HvsCpwnVw1gAO3IzlHz9crhKC832g3Yh-3A1Y5JaS3ofvc4zaoz95CjD3ZEz5RCvAFl6eaHgwsibDGVazjNuoJVrvZsPqRA7P0tTUq9OjUR4Xx0s/s1600/fapp_01.JPG" height="320" width="293" /></a></div>
<div>
<b><br /></b></div>
<div>
<b><br /></b></div>
<div>
You now have a per-particle attribute to control the magnitude of the field's effect. You can create a per-particle attribute for any of the field's parameters (e.g. alongAxis), but the crucial thing to remember is the naming of the per-particle attribute: it <b>MUST</b> be in the form of</div>
<div>
<br /></div>
<div style="text-align: center;">
<b><span style="color: orange;">fieldName_perameterName</span></b></div>
<div>
<br /></div>
<div>
Maya will understand that syntax and make the connection for you.</div>
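What that connection amounts to can be sketched in plain Python (illustrative names and data layout, not the Maya API): wherever a particle carries a value in the per-particle array, that value overrides the field's global parameter.

```python
def field_forces(direction, global_magnitude, magnitude_pp):
    """One force vector per particle: the per-particle magnitude wins
    where it is set; particles with no override (None) fall back to the
    field's global magnitude."""
    forces = []
    for override in magnitude_pp:
        mag = global_magnitude if override is None else override
        forces.append(tuple(mag * d for d in direction))
    return forces
```

So a particle with magnitude 0.0 in the array simply ignores the field, while its neighbours still feel the full global strength.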
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-55749072422041731952014-06-17T04:43:00.000-07:002014-06-17T06:10:35.545-07:00Passing rgbPP to Instanced ObjectsThis is a nice one that I have needed quite a lot in the past. Now, thanks to Arnold, it's extremely easy to do.<br />
<br />
Here is the problem:<br />
<br />
I want to vary the shading on objects that are instanced to a particle system.<br />
<br />
In Maya and Mental ray this is not easy to do. In fact I don't know of any way to do it. In Arnold, however, it is very straightforward:<br />
<br />
1. Create your particle and instancer system as you normally would do.<br />
<br />
2. Assign a shader to your instanced objects (not the instancer object)<br />
<br />
3. Create an <b>aiUserDataColor</b> node.<br />
<br />
4. Type <b>rgbPP</b> into the <b>Color Attr Name</b> of the <b>aiUserDataColor</b> node.<br />
<br />
5. Connect <b>aiUserDataColor.outColor --> diffuse</b> in your shader (or whatever channel you need it to go to).<br />
<br />
6. Type <b>rgbPP</b> into the <b>Export Attributes</b> in the Arnold section of the particle object.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0fv7EFuCd3RqgO0p58tynRv373IXaOfalqmKwclGua-MYsKZYucQrilvv498GXiZ7JyaV2gvrentRkL-FZx-yYzpL3OJ6OLHPzgTxVZGVN2n7a3I4i8aarKJm56zbOpJbRD5UiDjbWvcL/s1600/rgbPP_to_instances_01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0fv7EFuCd3RqgO0p58tynRv373IXaOfalqmKwclGua-MYsKZYucQrilvv498GXiZ7JyaV2gvrentRkL-FZx-yYzpL3OJ6OLHPzgTxVZGVN2n7a3I4i8aarKJm56zbOpJbRD5UiDjbWvcL/s1600/rgbPP_to_instances_01.png" height="194" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
That's it. A really easy and long overdue feature.<br />
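Conceptually, the aiUserDataColor node behaves like a per-particle dictionary lookup with a fallback, as in this plain-Python sketch (hypothetical data layout, not the Arnold API):

```python
def user_data_color(particle, attr_name="rgbPP", default=(0.0, 0.0, 0.0)):
    """Return the named exported attribute if the particle carries it,
    otherwise fall back to the node's default colour."""
    return particle.get(attr_name, default)
```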
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDv1hyKBRNRmTzc4x6ktzk4fe-CK0Cr15xlNforfjzvdhDisJxMKBVJYOWPk5ORum8NEllxe-lwuBt2DoGrem4ewCqk9cCdq-KdN_f_aAbMJY8h3jwuL3rMFhHtdEFIrsYk9VaCV0HVy4H/s1600/rgbPP_to_instances_02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDv1hyKBRNRmTzc4x6ktzk4fe-CK0Cr15xlNforfjzvdhDisJxMKBVJYOWPk5ORum8NEllxe-lwuBt2DoGrem4ewCqk9cCdq-KdN_f_aAbMJY8h3jwuL3rMFhHtdEFIrsYk9VaCV0HVy4H/s1600/rgbPP_to_instances_02.png" height="192" width="320" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-823382765028902335.post-5332506331584812282014-06-13T05:54:00.001-07:002014-09-01T00:27:31.006-07:00Passing ageNormalized to Arnold<span style="font-family: Arial, Helvetica, sans-serif;">Passing the ageNormalized attribute from Maya particles to an Arnold shader is extremely useful and extremely under-documented in Solid Angle's user guides. I will show two ways to do it - one is my own recipe, and one is from Pedro Gomez on the MtoA list.</span><br />
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;">Here is the problem:</span><br />
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<br />
<div>
<span style="font-family: Arial, Helvetica, sans-serif;">I would like to pass my particle's <b>ageNormalized</b> to a shader, rather than <b>age</b>.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;">As you may be able to see from the screenshot, passing <b>age</b> <i>sort of</i> works, but not quite. Some of the oldest particles have reached the end of the colour ramp and wrapped around to the beginning of the ramp again.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;">If I try just typing in <b>ageNormalized</b> into the <b>Export Attributes</b>, it does not work at all, Arnold just reads the first value of the ramp and applies that value to every particle.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;">Is there a smart workaround for this? Can I put <b>age/lifespanPP</b> somewhere in the shader? But where? And talking of export attributes, can I put more than one attribue in there (eg age, lifespanPP, velocityPP)? </span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgia4QivDMy_BZFlO1DJLV2yTA_dztlFTfVOxWsKe_Vu_SKDfsMlOcYx8hVLcHZXkVxH8PcpeW5IrPj27sW2st5JNROiP28zERiLxp7asxiO3UIn9Dp7VGdXBn2xrQQCaI0ZJqV0OUY3u1E/s1600/ageNormalized_01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: Arial, Helvetica, sans-serif;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgia4QivDMy_BZFlO1DJLV2yTA_dztlFTfVOxWsKe_Vu_SKDfsMlOcYx8hVLcHZXkVxH8PcpeW5IrPj27sW2st5JNROiP28zERiLxp7asxiO3UIn9Dp7VGdXBn2xrQQCaI0ZJqV0OUY3u1E/s1600/ageNormalized_01.jpg" height="200" width="320" /></span></a></div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;">First is my method: </span><br />
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;">1. Add a new Dynamic Per-Particle Attribute, <b>userScalar1PP</b>, say.</span><br />
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;">2. Adding the runtime expression:</span><br />
<h4>
<span style="font-family: Arial, Helvetica, sans-serif;"><i>nParticleShape1.userScalar1PP=nParticleShape1.age/nParticleShape1.lifespanPP;</i></span></h4>
<span style="font-family: Arial, Helvetica, sans-serif;"><br />3. Put <b>userScalar1PP</b> into the <b>Export Attributes</b><br /><b><br /></b>4. Connect the particle sampler to the shader ramps, but use the <b>UserScalar1PP attribute</b> instead of<b> age</b>.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;">Second is Pedro's way - more correct and elegant:</span><br />
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;">Export both the <b>age</b> and <b>lifespanPP</b> and catch those in two <b>aiUserDataFloat</b> nodes. Then use a <b>Multiply/Divide</b> node and divide the age/lifespanPP. Then pipe that into your shader. This is a much better as it does not require an expensive runtime expression to be cached.</span><br />
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD4qnanQu__f8KI4QlCbr7-OYmtGi3CJ0TWuXawnQhcZIbzZ7qxA_4cu6_cw8OiIkHx3-WnauOjvB2uNsDMD4knsU0y0aPPfMl5j0eIxJ6nBW5O5ufn8aSgLCNLZOqnAcLhM9uI7H_Xwz2/s1600/ageNormalized_attributes_01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: Arial, Helvetica, sans-serif;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD4qnanQu__f8KI4QlCbr7-OYmtGi3CJ0TWuXawnQhcZIbzZ7qxA_4cu6_cw8OiIkHx3-WnauOjvB2uNsDMD4knsU0y0aPPfMl5j0eIxJ6nBW5O5ufn8aSgLCNLZOqnAcLhM9uI7H_Xwz2/s1600/ageNormalized_attributes_01.jpg" height="195" width="320" /></span></a></div>
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">Here is the shader: one ramp for Colour and one for Opacity.</span><br />
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnGXBDAf6HvboK8VH6DQdLVI3SfqMTYX3BsnLyYLT8NYEziSCPu_l8iJo9Nw_xdjxJw8Y0nTcB1JWlthJtkkUR8Ztnv-LJoz1baYfm8IrDoFwf9TZ4N7IPDm3VbaOnXy2G4uUUnrluq4sL/s1600/ageNormalized_shader_01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: Arial, Helvetica, sans-serif;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnGXBDAf6HvboK8VH6DQdLVI3SfqMTYX3BsnLyYLT8NYEziSCPu_l8iJo9Nw_xdjxJw8Y0nTcB1JWlthJtkkUR8Ztnv-LJoz1baYfm8IrDoFwf9TZ4N7IPDm3VbaOnXy2G4uUUnrluq4sL/s1600/ageNormalized_shader_01.jpg" height="199" width="320" /></span></a></div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;">And the answer is yes, you can export any number of attributes, so long as they are seperated by a space in the Export Attribues box.</span><br />
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJtapaVNiVf1gKT3BKkN1-GcoYjIsWnlXwyl6OnV7jn2_6ztvMocvoLDfzlK4-0M0vWbeV75_B7oNHbwu6UXQzNWlKFVauQWtsCulGuFWHL6Hi9H3zKeQJ2Z64rQAMoackXQElNTa93J9I/s1600/age_aiUserData.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJtapaVNiVf1gKT3BKkN1-GcoYjIsWnlXwyl6OnV7jn2_6ztvMocvoLDfzlK4-0M0vWbeV75_B7oNHbwu6UXQzNWlKFVauQWtsCulGuFWHL6Hi9H3zKeQJ2Z64rQAMoackXQElNTa93J9I/s1600/age_aiUserData.JPG" height="229" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicQJU5QIY6p_B3BERQhh3-CvoQjvLMTGfSdASNyZNlSULDbqW2pngoTyNuRHlijONiv6R18_5l-ZxQxI-MSP7-NxnRaapTHM-TOoLS_6ZHE0tXvbLKJR_ubSSv5Rg9l5Ck4BlepVU-Kwyb/s1600/lifespan_aiUserData.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicQJU5QIY6p_B3BERQhh3-CvoQjvLMTGfSdASNyZNlSULDbqW2pngoTyNuRHlijONiv6R18_5l-ZxQxI-MSP7-NxnRaapTHM-TOoLS_6ZHE0tXvbLKJR_ubSSv5Rg9l5Ck4BlepVU-Kwyb/s1600/lifespan_aiUserData.JPG" height="232" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgC4iCzqBQ1wN_Q322WM-5h62Zyyi0fH2BuqW4L-4OehzO03J5GBWSHoNvPYkvOFfEE0L3z6Gk2byDdDw9rgHXP8KzTptT9LvWjeztAlcA9yL2d4sZxcGDs9gpDEj3jDmXXNJGQMktk4H9z/s1600/colourRamp.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgC4iCzqBQ1wN_Q322WM-5h62Zyyi0fH2BuqW4L-4OehzO03J5GBWSHoNvPYkvOFfEE0L3z6Gk2byDdDw9rgHXP8KzTptT9LvWjeztAlcA9yL2d4sZxcGDs9gpDEj3jDmXXNJGQMktk4H9z/s1600/colourRamp.JPG" height="320" width="293" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX0eQSkat0QINXVvb5OXit_qDAzXHYrZpV_kHNDVsPcmCEBIThPSknQaSRRjWvAbB3W7GIWetUMda509d-DF_t79VpmOBBYAHLUNj2csZeAYB0K_bNAQcUyyrXrLrD1HffACDGmOxRmZj_/s1600/opacityRamp.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX0eQSkat0QINXVvb5OXit_qDAzXHYrZpV_kHNDVsPcmCEBIThPSknQaSRRjWvAbB3W7GIWetUMda509d-DF_t79VpmOBBYAHLUNj2csZeAYB0K_bNAQcUyyrXrLrD1HffACDGmOxRmZj_/s1600/opacityRamp.JPG" height="320" width="274" /></a></div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span><span style="font-family: Arial, Helvetica, sans-serif;"><a href="https://drive.google.com/file/d/0BzsvxJESXf1QUFlYRFAwVm5jSlk/edit?usp=sharing">A sample scene is available in Maya 2014 MA format</a></span><br />
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-823382765028902335.post-46374330860881478492014-06-13T04:55:00.001-07:002014-06-13T04:55:12.223-07:00UV particles via SOuPHere is a small SOuP technique for producing a plane of particles with a UV gradient on their RGB.<br />
<br />
First you need a plane, then emit some particles from that plane. Make sure that the particles have the rgbPP attribute available as we will need to put an expression on it.<br />
<br />
Create a SOuP TextureToArray node<br />
Create a SOuP rgbaToColorAndAlpha node<br />
Create two ramp texture nodes - ramp1 is black to red along U, ramp2 is black to green along V.<br />
<br />
Connect the following:<br />
<br />
<blockquote class="tr_bq">
<span style="color: orange;">ramp2.outColor --> ramp1.colorOffset</span></blockquote>
<blockquote class="tr_bq">
<span style="color: orange;">ramp1.outColor --> textureToArray1.inColor</span></blockquote>
<blockquote class="tr_bq">
<span style="color: orange;">polyPlaneShape.worldMesh[0] --> nParticleShape.inputGeometry</span></blockquote>
<blockquote class="tr_bq">
<span style="color: orange;">polyPlaneShape.worldMesh[0] --> textureToArray1.inGeometry</span></blockquote>
<blockquote class="tr_bq">
<span style="color: orange;">textureToArray1.outRgbaPP --> rgbToColorAndAlpha1.inRgbaPP </span></blockquote>
<br />
Also connect the emitting plane's transform node to the particle's transform node as shown in the node graph.<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibAeAVsqtplfyZBuGxUtcqAwDZVlfu5f2LEJXk7jPV3W_M4GDDbdUcmIK3GkOt-Ib6UlRLUevk1_I2wfFSc6vq6yUFSMTAYcV_CaydmIy68lZQKRZn7nYiVbSUu4nAaDvL-6typzfrT-8/s1600/UVgrid_nodes.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibAeAVsqtplfyZBuGxUtcqAwDZVlfu5f2LEJXk7jPV3W_M4GDDbdUcmIK3GkOt-Ib6UlRLUevk1_I2wfFSc6vq6yUFSMTAYcV_CaydmIy68lZQKRZn7nYiVbSUu4nAaDvL-6typzfrT-8/s640/UVgrid_nodes.jpg" height="396" width="640" /></a></div>
<br />
Now set the rgbPP using the creation expression:<br />
<br />
<blockquote class="tr_bq">
<span style="color: orange;">rgbPP=rgbaToColorAndAlpha1.outRgbPP</span></blockquote>
<br />
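The ramp stack above simply encodes each particle's birth UV into its colour - red carries U, green carries V - which this plain-Python sketch reproduces (a stand-in for the textureToArray sampling, not the SOuP API):

```python
def uv_to_rgb(u, v):
    """Red carries U, green carries V - the sum of the two ramps."""
    return (u, v, 0.0)

def plane_rgb_pp(rows, cols):
    """rgbPP values for particles emitted on a regular grid over the plane."""
    return [uv_to_rgb(c / (cols - 1), r / (rows - 1))
            for r in range(rows) for c in range(cols)]
```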
Rewind and step forward one frame so that the particles are emitted. Then set their initial state and disconnect the emitter and the connection between <span style="color: orange;">polyPlaneShape.worldMesh[0] --> nParticleShape.inputGeometry</span><br />
<span style="color: orange;"><br /></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxy4Q9rtdZD-4W7-vXuNclHXSiM_wpw_1fBnLEEWcJCgcoJnIiiDbsXEjy3HeMgxTEODUnwV4dTnlfhzNahPbt7rRl3JAgacGEqu9CuDc2zY6bVnXOZ0q1WhRwoTQeTa15yEeMc27vD-k/s1600/UVgrid_particleShape.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxy4Q9rtdZD-4W7-vXuNclHXSiM_wpw_1fBnLEEWcJCgcoJnIiiDbsXEjy3HeMgxTEODUnwV4dTnlfhzNahPbt7rRl3JAgacGEqu9CuDc2zY6bVnXOZ0q1WhRwoTQeTa15yEeMc27vD-k/s640/UVgrid_particleShape.jpg" height="400" width="640" /></a></div>
<span style="color: orange;"><br /></span>
<span style="color: orange;"><br /></span>
<span style="color: orange;"><br /></span>
The particles will now be dynamic again.<br />
<br />
<br />
In Nuke, plug in your rendered particles into the STMap node as shown in the image below<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-9cVhF_s9p1HUvTwieb7YEZGkEYtIQXRN26lJ8h2M_trD3ZCWI5TjU0hnJRMIO-X_GY61texvDa-6eUhOmF0cy7szsHjLPvADeCjGaR3tP6XbO6BUOLBgiTCCBuGURRyxCVPIvG2MF0E/s1600/UVgrid_STmap.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-9cVhF_s9p1HUvTwieb7YEZGkEYtIQXRN26lJ8h2M_trD3ZCWI5TjU0hnJRMIO-X_GY61texvDa-6eUhOmF0cy7szsHjLPvADeCjGaR3tP6XbO6BUOLBgiTCCBuGURRyxCVPIvG2MF0E/s640/UVgrid_STmap.jpg" height="390" width="640" /></a></div>
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-48662817592359711492014-06-13T04:52:00.002-07:002014-06-13T06:52:32.671-07:00GPU renderersI will be testing some particle renderers in the next few days - Arnold, Fury and Krakatoa.<br />
<br />
<br />
This first test is Fury<br />
<br />
20 million nParticles<br />
motion blur switched ON<br />
4 x multisampling<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLHiLuysfPhhEFKtkQC_EL0u0WUTOHofXvMKPDdeHPgcAB_McV-T4IgCl07v3kUru-qAL0gm_OVI3e5f161qbZCqy_20z4nUsh5bOJHJ2ERRCTMwzRZxTjaJOqE6g6pEPyUzsA15wPP38/s1600/furyTest_01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLHiLuysfPhhEFKtkQC_EL0u0WUTOHofXvMKPDdeHPgcAB_McV-T4IgCl07v3kUru-qAL0gm_OVI3e5f161qbZCqy_20z4nUsh5bOJHJ2ERRCTMwzRZxTjaJOqE6g6pEPyUzsA15wPP38/s320/furyTest_01.jpg" height="211" width="320" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">13.26 seconds (dual Xeon E5 - 32 cores, 48GB RAM, Nvidia Quadro 4000)</span></div>
<br />
I'm showing only the alpha channel because I currently have just the demo version of Fury, and the watermark is distracting in the colour channel.<br />
<br />
<br />
The next test is Krakatoa<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxgNrOUMScOfP1uIccyQy9MZ0B5bO8DTnapg68Amne6Kjz_0LO8CNSOQdSqeAKJ3O4-crxzH7c1NGdRKyksM545vCiXr_LYvnRS5oQ8_rMhHwBc_lSYuBnXusdQwp6jbnZ1u-CgHZvEbM/s1600/krakatoaTest_01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxgNrOUMScOfP1uIccyQy9MZ0B5bO8DTnapg68Amne6Kjz_0LO8CNSOQdSqeAKJ3O4-crxzH7c1NGdRKyksM545vCiXr_LYvnRS5oQ8_rMhHwBc_lSYuBnXusdQwp6jbnZ1u-CgHZvEbM/s320/krakatoaTest_01.jpg" height="216" width="320" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">motion blur OFF, render time 11 seconds </span></div>
<br />
It's slightly trickier to get started with Krakatoa, but I think the results look amazing. Again, this is a lot of particles (14 million).<br />
<br />
<br />
Here is my setup for Arnold, but I cannot seem to get the opacity to work properly.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicpJ_5FjMJT8BMXAtLJV3JhSf47w7rTG6fWb_LvyiAyS34PezqHmBpytPRdSfEtclRjPAS46EsG4TBJg_LGb3k_cJf6s8EC2sIIgA7lIFhygclGUYsObya_Ua_oAju43ga6yrRnixUr9I/s1600/Arnold_02.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicpJ_5FjMJT8BMXAtLJV3JhSf47w7rTG6fWb_LvyiAyS34PezqHmBpytPRdSfEtclRjPAS46EsG4TBJg_LGb3k_cJf6s8EC2sIIgA7lIFhygclGUYsObya_Ua_oAju43ga6yrRnixUr9I/s320/Arnold_02.jpg" height="320" width="254" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsoyaotG970tVk2NYxla4mICGRRCMQJBWuke3tPsYGE8nHdiQ1rBl3dhwB3IV5dbRezZO51IvKKBbcGxxODP-4wB_6Vx_cf8fBiFrrV8PZaJsncebWq5W1Q5tHn67gt3Yde1hBjt9ae1c/s1600/Arnold_03.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsoyaotG970tVk2NYxla4mICGRRCMQJBWuke3tPsYGE8nHdiQ1rBl3dhwB3IV5dbRezZO51IvKKBbcGxxODP-4wB_6Vx_cf8fBiFrrV8PZaJsncebWq5W1Q5tHn67gt3Yde1hBjt9ae1c/s320/Arnold_03.jpg" height="320" width="115" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitXeSlfX2iC5OhVGtJWb9zQGx5or-swfz8TjjIsKaiZ7-6v2KSKY_jDysiZpHa7ucIwNflctoWW_nS0t3UzgD1bH4AT8Wv6U8w32YBHLZ3PtcSVQ1wo5N71e0qrqX9L488GSZvJQ_ulvs/s1600/Arnold_04.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitXeSlfX2iC5OhVGtJWb9zQGx5or-swfz8TjjIsKaiZ7-6v2KSKY_jDysiZpHa7ucIwNflctoWW_nS0t3UzgD1bH4AT8Wv6U8w32YBHLZ3PtcSVQ1wo5N71e0qrqX9L488GSZvJQ_ulvs/s320/Arnold_04.jpg" height="219" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwXAJNdCo4QECloA2Xi6xTxVxv4MQ99IEQkBVQaPS7gnLgQ-qD4rXCMKKI0dG2Co6zfF-OrqQ5WxsE19-1cGfd5OwjyIOkU9DehMEJdhbQM7dehLb3_gj7TNBNRKqaGiwt7V7ID5VoNxo/s1600/Arnold_05.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwXAJNdCo4QECloA2Xi6xTxVxv4MQ99IEQkBVQaPS7gnLgQ-qD4rXCMKKI0dG2Co6zfF-OrqQ5WxsE19-1cGfd5OwjyIOkU9DehMEJdhbQM7dehLb3_gj7TNBNRKqaGiwt7V7ID5VoNxo/s320/Arnold_05.jpg" height="226" width="320" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
More details as soon as I can get some help making this work.<br />
<br />
I have finally got this working. Please see my later post, "passing ageNormalized to Arnold".<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-20197497682168600362012-10-26T00:34:00.003-07:002012-10-26T00:54:00.765-07:00nCloth instancesI have been working on a confetti shot and wanted to use nCloth. I decided to use a particle emitter and to instance the nCloth object onto the particles - but how? If I were to simply assign the nCloth object to the instance, then each instance would be identical. I wanted each instance to have a different starting point in the nCloth cache. Unfortunately Maya's instancer doesn't support this kind of connection. I tried to use a userScalarPP attribute along with a particle sampler to access the cacheStart attribute on the nCloth cache, but that didn't work.<br />
<br />
Here's how I did it:<br />
<br />
First cache the nCloth object<br />
<br />
Export the nCloth as a sequence of OBJs. I used a Python script called <a href="http://www.creativecrash.com/maya/downloads/scripts-plugins/utility-external/export/c/objs-exporter--2" target="_blank">objsExporter_v2</a> from Christos Parliaros.<br />
<br />
Re-import the OBJ sequence using Dave Girard's <a href="http://www.creativecrash.com/maya/downloads/scripts-plugins/data-management/c/obj-sequence-importer" target="_blank">objSequenceImporter</a><br />
<br />
Use the imported OBJs to create an instancer with cycling set to On<br />
<br />
On the particle object I set up a per-particle attribute, cyclePP, and used it to select the correct OBJ in the sequence. For example, if there are 30 OBJs in your sequence, in the creation expression:<br />
<br />
cyclePP=0;<br />
<br />
and in the Runtime before dynamics:<br />
<br />
cyclePP=(cyclePP+1) % 30;<br />
<br />
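The creation and runtime expressions above just advance an index around a fixed-length cycle. As a quick sanity check of the wrap-around, here is the same logic as a small Python function (a hypothetical `cycle_index` helper, assuming 30 OBJs as in the example):

```python
def cycle_index(frames_since_birth, num_objs=30):
    """Index into the OBJ sequence for a particle.

    Mirrors the MEL pair: cyclePP starts at 0 on the birth frame
    and advances by one each frame, wrapping with the modulo.
    """
    return frames_since_birth % num_objs
```

Because each particle is born on a different frame, each instance ends up reading a different point in the looped cache.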
so the nCloth object will loop through the cache and start from the beginning at the end of the loop.Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-823382765028902335.post-90719952158964743522012-01-03T00:39:00.000-08:002012-01-03T03:20:02.386-08:00Maya Strands**OK, not exactly the same as Softimage ICE strands (which are great, by the way), but here is a way to get a very basic approximation.<br /><br />I am basically taking <a href="http://www.sigillarium.com/blog/lang/en/228/">Sigillarium's</a> particle expression for making a uniform trail of particles and adding a line to take the seed particle's colour attribute and passing it to the emit command. This way, the trails have the same colour as the emitter particle, which is very handy if you are emitting from a surface and inheriting particle colour from that surface.<div><span><br /></span></div><div><span>Here is the expression:</span></div><div><br /></div><div><span>//runtime before dynamics</span></div><div><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>seed_particleShape.beforePosition = seed_particleShape.position;</span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><br /></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><br /></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>//runtime after dynamics</span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>string $trail_pt = "smoke_nParticle"; </span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>float $separ = seed_particleShape.separation;</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "></p> <p style="margin-top: 0px; margin-bottom: 0px; 
margin-left: 0px; margin-right: 0px; "><span>vector $lastPos = seed_particleShape.beforePosition;</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>vector $pos = seed_particleShape.position;</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>vector $move = <<(($pos.x)-($lastPos.x)), (($pos.y)-($lastPos.y)), (($pos.z)-($lastPos.z))>>;</span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span><br /></span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>//get colour info</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>vector $rgb=seed_particleShape.rgbPP;</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>float $r=$rgb.r;</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>float $g=$rgb.g;</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>float $b=$rgb.b;</span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span><br /></span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>//get number of particles to emit per frame</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>int $num = ceil( mag( $move ) / $separ );</span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span><br /></span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>//loop !</span></p><p 
style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span style="font-family: 'courier new'; font-size: small; ">if( $num != 0 ) {</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span> vector $step = $move / $num;</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span> for( $i = 1; $i <= $num; $i++ ) {</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span> vector $newPos = $lastPos + $step*$i;</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span> float $life = time - (1.0/25/$num * ($num-$i));</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span> emit -o $trail_pt -pos ($newPos.x) ($newPos.y) ($newPos.z) -at rgbPP -vv ($r) ($g) ($b) -at "birthTime" -fv $life;</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span> }</span></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>}</span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span><br /></span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span><br /></span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>There are a couple of things to note:</span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span><br /></span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>The seed particle shape node has an extra attribute added - separation. This is a float value that determines the distance between each particle in the trail. The number you use depends on the scale of the scene and velocity of the seed particle. 
It's handy to have this variable on the shape node rather than in the expression.</span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span><br /></span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>There is a per-particle vector attribute called beforePosition, which stores a particle's position from the previous frame. This attribute needs to be created using the Add Attribute dialogue:</span></p><br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQ_oQwqbjWw4LJH16mLxrpdInSu47mT6-ObbFRgyKYDT3bdKpGSsgJtcy2Z2Kcr8eAhGsGVkUssHAXxBZogr6io0WQ7KTAeWNap4C2wnKRVlPmsbgtiCaljPscnBqRD87RyoA0KPJquccE/s1600/trails_02.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 367px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQ_oQwqbjWw4LJH16mLxrpdInSu47mT6-ObbFRgyKYDT3bdKpGSsgJtcy2Z2Kcr8eAhGsGVkUssHAXxBZogr6io0WQ7KTAeWNap4C2wnKRVlPmsbgtiCaljPscnBqRD87RyoA0KPJquccE/s400/trails_02.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5693363786897143298" /></a><br /><br /><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span>Also, note the variable $trail_pt in the expression. This is just the name of the trail particle object. Create this object before running the expression either by a 'particle' or 'nParticle' MEL command or by creating an emitter via the menus and deleting the emitter and leaving the particle object behind. 
Set the $trail_pt variable to the name of your trail particle object.</span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span><br /></span></p><p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "><span><br /></span></p><p></p> <p style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; "></p></div><br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyDVpN4Epd-B4uL-_o4ylCIILYZeGuakjhfKKjsqom2Wwm0h4VCGq96Vn_yWinsiVWGwh2maAueXDAbkexjOjzSxc4n1alD93EVdyREJdLImip3YUeOG_wO-18AImrQHzbNoBxV9RetkBC/s1600/trails.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 244px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyDVpN4Epd-B4uL-_o4ylCIILYZeGuakjhfKKjsqom2Wwm0h4VCGq96Vn_yWinsiVWGwh2maAueXDAbkexjOjzSxc4n1alD93EVdyREJdLImip3YUeOG_wO-18AImrQHzbNoBxV9RetkBC/s400/trails.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5693328134091706162" /></a><br /><span></span><div>So, you can hopefully see that I have an image on a plane from which I am emitting some particles which are taking the colour from the plane. Those particles are then emitting more particles in a trail and inheriting the colour from the first particles. </div><div><br /></div><div>Thanks to Sigillarium for the excellent expression. Please check the <a href="http://www.sigillarium.com/blog/">Sigillarium blog</a> as it is outstanding and very clearly explains some difficult concepts.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-823382765028902335.post-179324765455117832011-11-29T02:02:00.000-08:002011-11-29T13:34:59.316-08:00Motion Vectors for hardware particlesI am trying to write an expression which will mimic the mv2DToxik motion vectors, but for hardware particles. 
Therefore removing the need to instance some geo to get the motion vectors to render.<br /><br />I have not got very far before I came across some vector maths...<br /><br /><br />Here is the setup which works for a camera pointing exactly down the z-axis.<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEAmJBLxyHr46Y4ke57BJ0LrW_sEC3X0FbEXu5SIHaYjsKJjAoxydLQbv1J1cV_H35ETXhW9ZGlQJ82BRj8GasNmvUMBSmx2K-HZP5DTlFn3zFnnWwFRbbg-zC3e2NrIUPNAKeHQ3b3u-Y/s1600/mv_01.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 250px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEAmJBLxyHr46Y4ke57BJ0LrW_sEC3X0FbEXu5SIHaYjsKJjAoxydLQbv1J1cV_H35ETXhW9ZGlQJ82BRj8GasNmvUMBSmx2K-HZP5DTlFn3zFnnWwFRbbg-zC3e2NrIUPNAKeHQ3b3u-Y/s400/mv_01.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5680356188648334658" /></a><br /><br /><br /><br /><br />Here is the render loaded into Nuke. Notice the RGB values.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRejXrbeP1SCP-I4ojDCfs_aMOmgV9yKw5Mg_-KbDX53Q8sHe0z6KszH5q-Sk8gaawxv1ifpdDQxidMsgF_DlPI1QD9dDfjT8yBje4fLTlK5oE9e30U2ceKSZSl4bZ-EJ3ezBq2bfOhGk0/s1600/mv_02.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 250px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRejXrbeP1SCP-I4ojDCfs_aMOmgV9yKw5Mg_-KbDX53Q8sHe0z6KszH5q-Sk8gaawxv1ifpdDQxidMsgF_DlPI1QD9dDfjT8yBje4fLTlK5oE9e30U2ceKSZSl4bZ-EJ3ezBq2bfOhGk0/s400/mv_02.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5680356190950091778" /></a><br /><br />I now need to find a way to convert World Velocity to Screen Space Velocity.<br />In a MEL expression. 
Hmmm, time to ask the <a href="http://forums.cgsociety.org/showthread.php?p=7180108#post7180108">Forum</a><br /><br />After some great advice from Zoharl, I grabbed some code which uses the camera's worldInverseMatrix to transform the velocity vector.<br /><br />Here is the expression:<br /><br />//multiplier<br />float $mult=0.5;<br /><br />//get the particle's World Space velocity<br />vector $vel=particleShape1.worldVelocity;<br />float $xVel=$vel.x;<br />float $yVel=$vel.y;<br />float $zVel=$vel.z;<br /><br />// create particle's velocity matrix which is in World Space<br />matrix $WSvel[1][4]=<<$xVel,$yVel,$zVel,1>>;<br /><br />// get the camera's World Inverse Matrix<br />float $v[]=`getAttr camera1.worldInverseMatrix`;<br />matrix $camWIM[4][4]=<< $v[ 0], $v[ 1], $v[ 2], $v[ 3]; $v[ 4], $v[ 5], $v[ 6], $v[ 7]; $v[ 8], $v[ 9], $v[10], $v[11]; $v[12], $v[13], $v[14], $v[15] >>;<br /><br />//multiply particle's velocity matrix by the camera's World Inverse Matrix to get the velocity in Screen Space<br />matrix $SSvel[1][4]=$WSvel * $camWIM;<br /><br />vector $result = <<$SSvel[0][0],$SSvel[0][1],$SSvel[0][2]>>;<br />float $xResult = $mult * $result.x;<br />float $yResult = $mult * $result.y;<br />float $zResult = $mult * $result.z;<br /><br />//rgbPP<br />particleShape1.rgbPP=<<$xResult,$yResult,0>>;<br /><br /><br />So far it seems to be working, but I will try to test it and see if it breaks down.<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJnYFJMNC8BRDqZXWn3BJFWE6j83FPfqLYYq5PKN7ZHa1evltCbrL0v-IMEteoxTAvoealI-ofcTwZ6Pzu2zaKoTvgRjsFui1RYmIQcDlBcX376YP4nSzr4iufTl_UDZUe6cf6SAo2hPeF/s1600/mv_03.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 250px;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJnYFJMNC8BRDqZXWn3BJFWE6j83FPfqLYYq5PKN7ZHa1evltCbrL0v-IMEteoxTAvoealI-ofcTwZ6Pzu2zaKoTvgRjsFui1RYmIQcDlBcX376YP4nSzr4iufTl_UDZUe6cf6SAo2hPeF/s400/mv_03.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5680381125201092930" /></a><br /><br /><br />Thanks to Zoharl on the CGTalk forum and to <a href="http://xyz2.net/mel/mel.087.htm">xyz2.net</a> and <a href="http://www.185vfx.com/2003/03/convert-a-3d-point-to-2d-screen-space-in-maya/">185vfx</a> who came up with the original matrix manipulation code.Unknownnoreply@blogger.com0
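The core of the expression above is a 1x4 row vector multiplied on the left by the camera's 4x4 worldInverseMatrix, as in $WSvel * $camWIM. Here is that step as a plain-Python sketch (hypothetical helpers, not Maya code). One caveat: a pure direction such as velocity is conventionally given w = 0, so the matrix's translation row drops out; the expression uses w = 1, which also mixes in the camera-relative position:

```python
def row_times_mat4(row, m):
    """Multiply a 1x4 row vector by a 4x4 matrix (row vector on the
    left, matching Maya's matrix convention)."""
    return [sum(row[k] * m[k][j] for k in range(4)) for j in range(4)]

def world_to_camera_velocity(vel, cam_world_inverse):
    """Transform a world-space velocity into camera space.

    w = 0 marks 'vel' as a direction, so the translation row of the
    worldInverseMatrix is ignored and only rotation/scale apply.
    """
    x, y, z, _ = row_times_mat4([vel[0], vel[1], vel[2], 0.0],
                                cam_world_inverse)
    return (x, y, z)
```

With an identity matrix the velocity passes through unchanged, and a camera translation has no effect on it, which is usually what you want when colouring particles by screen-space motion.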