upresFluid

< The upresFluid node resolves a couple of major issues in Maya's fluids workflow by separating dynamics and resolution from the "look", and by implementing a wavelet turbulence algorithm (based on Theodore Kim and Nils Thuerey's open source library) that adds extra detail as a post-process.



Fluid dynamic simulation strictly depends on the container resolution. If we change the resolution, the fluid will behave differently. This is not good in production: while designing a fluid we want to use low resolutions for fast turnarounds as we work out the dynamics, but later, once the dynamics are figured out and we start tweaking the look, we often want to increase the resolution to get more detail (or the opposite) and end up having to start over, re-tweaking the simulation to accommodate the new resolution.

The upresFluid node effectively eliminates the dependency between resolution+simulation and shading+look. For this purpose we use two containers. In the first one we focus on the fluid motion. There we can keep changing the fluid resolution in order to achieve the best motion, without worrying about the shading part. Once we are happy with that, we focus on the second container, which is used as a static display driver for shading and rendering purposes.

The best part of all this is that we can now change the resolution of the display driver container on the fly - at any point. It will inherit the basic data from the source container and interpolate it to fit its own resolution, as set by the upresFluid node.
Finally, we can turn on the wavelet turbulence feature to add plenty of additional detail that is impossible, or at least extremely hard, to achieve otherwise.

* The current version of the upresFluid node does not fully support auto-resizing of fluids. Technically it works but sometimes the wavelet turbulence pattern gets offset.
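
Below is a minimal setup sketch, assuming the SOuP plug-in is loaded and registers a node type called upresFluid; the plug names in the commented connections are hypothetical placeholders, so check the node's attribute editor for the real ones.

    # Hedged sketch: a low-res "simulation" container driving a high-res "display" container
    # through an upresFluid node. Plug names marked HYPOTHETICAL are illustrative only.
    import maya.cmds as cmds

    cmds.loadPlugin('SOuP', quiet=True)                                   # assumes the SOuP plug-in is installed

    sim_fluid = cmds.createNode('fluidShape', name='simFluidShape')       # keep this one low-res for fast iteration
    disp_fluid = cmds.createNode('fluidShape', name='displayFluidShape')  # resolution can be changed at any time
    upres = cmds.createNode('upresFluid')                                 # sits between the two containers

    # Set the resolution of each container in its Attribute Editor: low for the sim,
    # as high as needed for the display/render container.

    # HYPOTHETICAL plug names - the real attributes may be called differently:
    # cmds.connectAttr(sim_fluid + '.outGrid', upres + '.inFluid')
    # cmds.connectAttr(upres + '.outFluid', disp_fluid + '.inGrid')
    # cmds.setAttr(upres + '.waveletTurbulence', 1)   # enable the wavelet detail as a post-process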

 

Fire and explosion examples by David Schoneveld
You can find more information (video tutorials and other examples) on his Vimeo channel:
http://vimeo.com/17032492
http://vimeo.com/17033053

cage

< The cage node allows us to track which polygons of an object are outside another object. In the provided example video the list of polygons is piped into a deleteComponent node for procedural deletion.



voronoiTexture3D

< Voronoi procedural 3D texture node.

retarget

< Using the (geometry) retarget node we can transfer the shape of one geometry object onto another in a relative manner. There are multiple built-in methods that solve many general and specific cases. The node can be used as a standard or "relative" wrap deformer, a uvBlendShaper, or a mixture of both.

In the video example the yellow shirt and blue pants on the walking character are a cloth sim. The green shirt and blue pants on the dancing guy are basically the same cloth motion, but relatively transferred to accommodate the different body motion.
Later I will post more videos showing other applications of the retarget node, like transferring facial blendShapes between two heads with different topology and proportions while preserving the facial expressions and their specific details, etc.

Retarget node

smooth + boundary preservation
smooth + boundary + fast volume preservation
smooth + boundary + accurate volume preservation

smooth

< Smoothing of geometry using a Laplacian algorithm. There is an option for boundary preservation and two methods for volume preservation (fast and accurate).

The fast method is useful for objects with "simple" topology - notice how in the third video some verts in the eye corners and ears start to misbehave. The accurate method will take care of that, but at the price of additional calculations.



pfxToArray

< Maya PaintFX is a very powerful L-system, but it lacks the ability to instance custom geometry to its elements. To fill the gap we can use the pfxToArray node to extract the paintFX data for further modification and custom usage.

< In the first video we selectively read subsets of points.

 

 

 

 

 

< Here we pipe the extracted data into a geometry instancer using arrayToDynArrays nodes.



Particles projected onto polygonal cube
Shrink-wrapped sphere with box

rayProject

< The rayProject node projects point clouds (meshes, curves, surfaces and particles) onto mesh objects. There are multiple options for precise control over what gets projected, where and how. A subset of the effects that can be produced with this node is also known as shrink-wrapping.

< In the first example a paintFX tree gets projected onto a mesh sphere.

 

 

 

 

< Game-engine-like shadows - a set of polygons projected onto the ground surface that resembles the shape of a moving character.
An interesting twist is the part of the animation where the shadow polygons morph into the character and then go back into shadow mode.




peak

< The peak deformer can do miracles if you need to make a blobby-looking nParticles mesh more liquid-like.
On the left side we have a standard nParticles mesh; on the right side is the same geometry but with a peak deformer applied to it.
SPH sim by Ivan Turgeon - PFVE.

 

 

< In case liquid sims are not your "forte" and you are still not clear about the role of the peak node in the example above - here is another (simpler) one for you where the shrinking effect is exaggerated. Again - a standard nParticles mesh on the left and peak+smooth on the right.

morph

< Morph is a fast, multithreaded and memory-efficient blend shape deformer that can handle thousands of targets with ease and without degrading performance.
It can be used for pre-deformation blend shapes and post-deformation corrective shapes.

Key features:
- fast, multithreaded computation
- primary, inbetween and combination targets
- interactive per-frame caching for even better performance
- per-target world, dag_node and surface transformation spaces
- per-target inMesh connections
- paintable target weights
- comprehensive Python API
- streamlined GUI
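
A minimal application sketch, assuming the deformer registers with Maya's generic deformer command under the type name "morph"; the per-target plug path in the comments is hypothetical, so use the node's GUI or Python API for the real wiring.

    # Hedged sketch: apply a morph deformer to a base mesh and wire in one target.
    # The target plug path below is HYPOTHETICAL - check the node's documentation.
    import maya.cmds as cmds

    cmds.loadPlugin('SOuP', quiet=True)

    base = cmds.polySphere(name='head_base')[0]
    target = cmds.polySphere(name='head_smile')[0]        # stand-in blend shape target

    morph_node = cmds.deformer(base, type='morph')[0]     # assumes 'morph' is a registered deformer type

    # HYPOTHETICAL per-target inMesh connection and weight (the node supports these per the feature list):
    # target_shape = cmds.listRelatives(target, shapes=True)[0]
    # cmds.connectAttr(target_shape + '.worldMesh[0]', morph_node + '.target[0].inMesh')
    # cmds.setAttr(morph_node + '.target[0].weight', 1.0)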

Kwai Bun (manymany) provided two excellent videos showcasing some of the main Morph features:

workflow
modifiers
performance

shotmodel

< Shotmodeling is an important part of every project that involves character animation. For complex character shows it often becomes one of the most important pivots in production.
This GUI and the underlying API are designed to simplify and streamline the shotmodeling workflow from both artistic and pipeline standpoints.

BlendShapesManager

< A whole new approach to working with blend shapes.
This is a comprehensive toolset that can handle a huge amount of blend shapes with ease.

- loading/saving of targets and split maps to/from disk
- inverse targets
- auto-generation of "derivatives" for combinations
- targets presented in 2 sections - primaries and secondaries (inbetweens and combinations)
- flexible workflow with hotkeys and mouse actions via overloaded Maya widgets
- associate split maps at any time to any targets
- powerful tools for transferring of targets between objects with different shape and topology
- compile data and bake it to a Morph deformer for high-performance
- complete Python API
- plug-in API for seamless integration of custom tools into existing pipelines

video tutorial

mush

< The words "delta mush" are popular these days. Here is how it is done the SOuP way - using smooth+morph nodes.

video_tutorial_how_to_set_things_up







scatter

< The scatter node can generate point clouds on the surface of mesh geometry or inside its volume. Here a scatter node creates points inside the volume of a deforming mesh. This point cloud can be used in conjunction with a pointCloudFluidEmitter to emit fluid from the entire volume of a given geometry, not just from its surface.

The same goes for pointCloudField - we can affect the dynamic properties of objects using the entire volume of an object, not just its surface points. Don't forget that we can transfer point attributes from the mesh surface to the point cloud using the attributeTransfer node - things like point colors, point velocities, etc.
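
A hedged wiring sketch of the chain described above (scatter -> attributeTransfer -> pointCloudFluidEmitter). The node type names come from this page; every plug name in the commented connections is a hypothetical placeholder.

    # Hedged sketch: scatter points inside a mesh volume and use them to emit fluid.
    # All plug names in the connectAttr comments are HYPOTHETICAL placeholders.
    import maya.cmds as cmds

    cmds.loadPlugin('SOuP', quiet=True)

    emitter_geo = cmds.polyTorus(name='emitterGeo')[0]        # any mesh will do
    emitter_shape = cmds.listRelatives(emitter_geo, shapes=True)[0]

    scatter = cmds.createNode('scatter')                      # generates the point cloud (surface/volume)
    xfer = cmds.createNode('attributeTransfer')               # optional: adds density/color/velocity per point
    pc_emit = cmds.createNode('pointCloudFluidEmitter')       # emits fluid properties from the points

    # HYPOTHETICAL wiring - verify the real attribute names in each node's AE:
    # cmds.connectAttr(emitter_shape + '.worldMesh[0]', scatter + '.inGeometry')
    # cmds.connectAttr(scatter + '.outPositionPP', xfer + '.inPositionPP')
    # cmds.connectAttr(xfer + '.outPositionPP', pc_emit + '.inPositionPP')
    # Finally attach the emitter to a fluidShape container (see the example scenes for the exact wiring).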

 

 

< We can block out areas by painting weight maps (vertex colors) on the source mesh geometry; this way we can control where the scattered points go. In this example the scatter node creates points on the surface of a deforming object. Notice how the point cloud forms only around the white areas.

 

 

 

 

 

 

< Here we have a mesh cube with two faces deleted. The scatter node still figures out what the volume of the object is and does the right thing. The scatter node also has a feature that allows us to generate points only within a specified range from the geometry surface.

 

 

 

 

 

 

 

< In general, geometry is never prepared for fluid emission. Modelers model things based primarily on rigging, animation and lookdev needs. So we end up with too many, too few, or irregularly placed points.

In this example we have a box with 8 points only. If we decide to use the standard Maya fluid emitter, we have two options:
- emit from these 8 points - pretty useless
- emit from the entire surface

We could use fractal or bitmap textures to control the emission process, but they do not allow for localized control and do not react to other events in the scene. The solution is simple - we can use the scatter node to resample the geometry. The result is regularly placed points on the surface of the object, inside its volume, or both.

The point cloud can then be piped directly into a pointCloudFluidEmitter node, or first go through an attributeTransfer node that can assign additional bits of data such as emission rate, density, fuel, temperature, color, etc., for more precise control over the emission process. Notice how the cube gets filled with points and the emission happens from the entire volume, not just from the poly faces or vertices. Also, there is a local override of the color emission in the right corner.









scatter + projectors

< Using texture based distribution we can precisely shape the scattered points in many different ways, including "boolean" operations from multiple projection planes, textures and UV sets.
As you may already know, the scatter node can be used to directly drive particles, geometry instancers and procedural shatter nodes.
With texture based distribution we gain complete control over the scattered points and, in this way, over the systems mentioned above.

 

< basic projection

 

 

 

< boolean projections

 

 

 

 

 

 

 

 

 

< boolean projections + texture masking (checker + grid in this case)

 

 

 

 

 

 

 

 

< source geometry uv based




< The scatter node has an inPositionPP attribute that can be used to supply a custom point cloud to it, bypassing the internal generation of points. Many interesting effects can be achieved by supplying vertices, particles, voxel or pfx data to the scatter node for post-processing - for example, uniform filling of objects (as shown here).
The scatter node also outputs distance-to-surface data for each point - notice how in the two provided examples the voxel colors turn yellow when deep inside the object and dark when close to the surface.

 

 

< Data flow:
fluidAttributeToArray extracts voxel positions from the fluid container and passes them to the scatter node. The scatter node strips all points outside the mesh object. The remaining point positions and distance-to-surface data get passed to a pointCloudFluidEmitter node that uses them to emit fluid properties into the container.
In the provided examples the pointCloudFluidEmitter is in attribute transfer mode, which forces the container to resemble the shape of the input geometry.

With this technique we can easily achieve the best case scenario for fluid emission - always in the center of the voxels.




< Scatter nodes can be used to drive particles in a procedural manner. This way you don't have to rely on dynamic simulation if you want to stick particles to geometry, for example. You can freely scrub the timeline back and forth and things will just work.
Here a baked point cloud drives meshed nParticles to create the effect of mud sticking to the character. A peak deformer is used to offset the points of the generated mesh along their averaged normals to make it look more liquid-like.

A computeVelocity node calculates the velocities of the baked point cloud, then an attributeTransfer node passes them to the mud geometry. If you render with motion blur turned on, you will see that even though the mud geometry is changing all the time, the motion vectors stay consistent.

 

< Basic example showing instancing of "sprites" to a scattered point cloud on the surface (left) and inside an object (right).
An attributeTransfer node is used to properly orient the instances along the normals of the box vertices.











shatter

< Using the shatter node we can shatter mesh geometry, be it static or deforming.
It relies on an input point cloud generated by a scatter node, particles or a nurbs curve. Voronoi cells then get calculated and the geometry is cut on their boundaries.
In the example scenes (as in the shown videos) the shatter nodes are in "auto evaluate" mode, but generally you will be using the "bake result" button located inside the shatter node's AE. This way we get the shattered geometry only when needed. The shatter node can generate solid or surface shards.
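
A hedged sketch of the basic wiring, assuming the scatter and shatter node types from this page; the plug names (and the assumption that nMaxCutPP, discussed further down, lives on the shatter node) are hypothetical placeholders.

    # Hedged sketch: voronoi-shatter a wall using points generated by a scatter node.
    # Plug names in the comments are HYPOTHETICAL placeholders.
    import maya.cmds as cmds

    cmds.loadPlugin('SOuP', quiet=True)

    wall = cmds.polyCube(w=4, h=4, d=0.5, name='wall')[0]
    wall_shape = cmds.listRelatives(wall, shapes=True)[0]

    scatter = cmds.createNode('scatter')        # the point cloud defines where the voronoi cells go
    shatter = cmds.createNode('shatter')        # cuts the geometry on the cell boundaries

    # HYPOTHETICAL wiring:
    # cmds.connectAttr(wall_shape + '.worldMesh[0]', scatter + '.inGeometry')
    # cmds.connectAttr(wall_shape + '.worldMesh[0]', shatter + '.inGeometry')
    # cmds.connectAttr(scatter + '.outPositionPP', shatter + '.inPositionPP')

    # nMaxCutPP (see the note further down) limits each shard's cut lookups to the n closest points;
    # setting it too low can produce overlapping shards. Assumed to live on the shatter node:
    # cmds.setAttr(shatter + '.nMaxCutPP', 30)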

 

 

 

< Here I first animate the number of points within the point cloud - the output reacts accordingly on the fly. Then I animate the distance between the different shards. Finally I increase the resolution of the sphere.

Additional nodes can be used to further refine the shape and distribution of the scattered point cloud. This way we can precisely place or remove points.
In this example attributeTransfer and boundingObject nodes influence the positions of the scattered points. The red points are the original point cloud, the blue ones are the post-modified positions. Notice how the shards react to that - the closer the points, the finer the shards. In this example I used only one bounding object, but you can use more if needed.

 

 

 

< Shattering of deforming geometry. Notice how the shards stick to their relative positions. The trick here is to pre-cache the input point cloud coming from the scatter node. Inside the scatter node's AE there is a button that allows you to bake the point cloud to a nurbs curve. You can then deform that curve along with the geometry and feed it into the shatter node.

The nMaxCutPP attribute drastically improves performance by limiting the lookups needed to create a new shard to the closest n points. The denser the input pointCloud, the bigger the performance improvement.
Lowering the value of this attribute too much may lead to artifacts, such as overlapping shards. In this particular example, setting nMaxCutPP to 30 resulted in a 2.5x shorter time needed to shatter the entire geometry.

 

 

< Shattered geometry in action.

 

 

 

 

 

 

 

 

 

< Again we are using a combination of baked shatter objects from SOuP: convert them to an nCloth mesh using default settings, then transform-constrain all the vertices so the nCloth remains static in space. Now we can feed the mesh through an attributeTransfer node with 2 bounding objects - set one to envelope the whole nCloth with a weight of 1, and the second one to a weight of -1 - this one will be used to break the per-vertex constraint. Finally we connect outWeightPP to the per-vertex glue/strength on the nConstraint.

 

 

 

 

 

< This is a more involved example - here we have local shattering of geometry that grows over time. We split the data flow into two separate streams and combine them at the end.
The first stream is used to remove all ground faces that do not interact with the dancing character.
The second stream is used to generate a scatter mask (point colors) so we get points only where the character touches the ground. Notice that here we use the original ground geo, not the one from the first data stream where we remove faces at each evaluation step. This way we ensure static shards. Then we plug the scattered point cloud and the remaining faces into a shatter node to get the desired result.



computeVelocity

< ComputeVelocity calculates the point velocities of the dancing character and stores them in an array. ArrayToPointColor converts this array to point colors. AttributeTransfer transfers the colors from the character to the ground plane (hidden here) based on proximity between their points. PointCloudFluidEmitter emits fluid properties only from the area where the character contacts the ground, and the fluid is colored accordingly.

 

 

 

 

 

< ComputeVelocity calculates the velocity vectors for each point of the geometry before it gets modified (the original moving teapot). An attributeTransfer node transfers these values to the final geometry, so even though the point count changes over time we still get consistent motion vectors. Using the remapArray node we post-modify the velocity data. The velocity vector array gets converted to a set of point colors by the arrayToPointColor node - in this particular case the colorSet is named "velocity". Finally, the modified teapot mesh's "motionVectorColorSet" attribute points to that "velocity" colorSet and passes it directly to the renderer.
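
A hedged sketch of the chain just described; the node type names are taken from the text, while the plug names in the commented connections are hypothetical placeholders.

    # Hedged sketch: consistent motion vectors for a mesh whose point count changes over time.
    # Plug names in the comments are HYPOTHETICAL.
    import maya.cmds as cmds

    cmds.loadPlugin('SOuP', quiet=True)

    vel = cmds.createNode('computeVelocity')          # velocities of the pre-modification geometry
    xfer = cmds.createNode('attributeTransfer')       # copies them onto the final (changing) mesh
    remap = cmds.createNode('remapArray')             # optional post-modification of the velocity array
    to_colors = cmds.createNode('arrayToPointColor')  # writes the vectors into a colorSet

    # HYPOTHETICAL wiring from the original (unmodified) mesh through to the colorSet:
    # cmds.connectAttr('origTeapotShape.worldMesh[0]', vel + '.inGeometry')
    # cmds.connectAttr(vel + '.outVelocityPP', xfer + '.inVelocityPP')
    # cmds.connectAttr(xfer + '.outVelocityPP', remap + '.inArray')
    # cmds.connectAttr(remap + '.outArray', to_colors + '.inRgbaPP')
    # cmds.setAttr(to_colors + '.colorSetName', 'velocity', type='string')
    # Then point the render mesh's motionVectorColorSet attribute at the "velocity" colorSet
    # so the renderer picks up consistent motion vectors even when the point count changes.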








group

< How to render geometry with a changing point count with proper motion blur? Easy.

In this example we have particles falling onto a moving teapot. A boundingObject passes the particle positions and radii to a group node. The group node collects the face ids around the contact points where particles collide with the teapot surface.
This componentsList gets passed to polySmoothMesh and deleteComponent nodes. The polySmoothMesh subdivides the faces to get more resolution, so when the deleteComponent node does its thing we get round holes.
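
A hedged sketch of the boundingObject -> group -> deleteComponent part of that chain; the node types come from the text, while every plug name in the comments is a hypothetical placeholder.

    # Hedged sketch: particles punch holes into a mesh via boundingObject -> group -> deleteComponent.
    # Plug names in the comments are HYPOTHETICAL placeholders.
    import maya.cmds as cmds

    cmds.loadPlugin('SOuP', quiet=True)

    bo = cmds.createNode('boundingObject')        # reads particle positions and radii
    grp = cmds.createNode('group')                # collects the face ids around the contact points
    delete = cmds.createNode('deleteComponent')   # removes the collected faces from the mesh

    # HYPOTHETICAL wiring:
    # cmds.connectAttr('particleShape1.worldPosition', bo + '.inPositionPP')   # or via pointAttributeToArray
    # cmds.connectAttr(bo + '.outData', grp + '.boundingObjects[0]')
    # cmds.connectAttr('teapotShape.worldMesh[0]', grp + '.inGeometry')
    # cmds.connectAttr(grp + '.outComponents', delete + '.inputComponents')
    # A polySmoothMesh node inserted before deleteComponent adds resolution so the holes come out round.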

 

 

 

< How to render geometry with a changing point count with proper motion blur? Part 2.

This is a more involved version of the example above.
In addition to everything from the teapot setup, here we have group nodes that collect the boundary faces of the tearing surfaces. They pass the inverted componentsLists to deleteComponent nodes that are plugged into separate meshShapes - so we always get the boundary faces no matter what is happening to the upstream geometry. Then we emit particles from these faces. This way we get blood particles only where and when the geometry gets torn. Using this simple approach we can eliminate a lot of tedious work by hand needed to ensure proper particle emission from the right place and at the right time.

 

 

< Notice in the rendered video how, even though the point count and order change, we still get everything properly motion blurred. I used only one collision sample here, that's why some pieces get stuck inside the knives, and the blood could look a lot better. Good enough for a fast'n'dirty example.

interactive caching system

< The interactive caching system (ICS) is designed to improve the viewport performance of deforming geometry with a consistent point count over time.

Once applied to deforming objects it automatically begins to operate, tracking input conditions and internally caching geometry data for each frame we step on, without any further intervention by the user - which results in a fluid workflow.
If the input conditions don't change when we later step on the same frames, dependency graph evaluation is bypassed and the internally cached data is used instead.

The system is ideal for complex rigs, heavy geometry and slow-to-evaluate nodal networks because it does not cache to disk (slow) but uses the system memory instead.

 

Example video description:
- in the first part the raw rig performance is shown
- ICS gets applied to the rigged geometry and hooked to the rig controls
- after the first pass through the frames the performance improvement is over 7x
- one of the controls gets an animation change; the frames affected by the modified animation curves fall back to raw DG evaluation, but the second time we step on them things are fast again







boundingObject

< BoundingObject reads the particles' positionPP, rgbPP and radiusPP and feeds the group and attributeTransfer nodes with them.
The group node has an option to store componentsList and objectGroup data for the previous+current states (by default it considers only the current state). This data gets passed to a deleteComponent node that deletes faces from the leaves geometry.
An attributeTransfer node slightly attracts the leaves around each particle and recolors them (in red - all particles in this example are red). As a result we get an "acid rain" effect.

Mind, there is no transparency hack or anything like that. It is all procedural geometry manipulation.

 

 

< Procedurally deleting geometry. The group node collects the face ids inside the bounding object and passes them to a deleteComponent node.

 

 

 

 

 

 

 

 

< A boundingObject in pointCloud mode reads the particle positionPP and rgbPP attributes. An attributeTransfer node transfers them to the ground surface. The alpha channel is modulated by the "alpha" ramp attribute located on the boundingObject node - that's how we get multiple circles around each particle.
Transferring point positions produces the "swimming" effect - each particle attracts the ground points around itself.



point

< A point node randomizes grid points in the XZ plane and assigns random colors to them. An attributeTransfer node transfers the colors to another plane. The result is a Voronoi noise.
Here we "project" it onto a flat plane, but it can also be used for things like fracturing objects with complex topology.

 

 

 

 

 

 

< You can achieve the same result by simply spraying particles around.

Video2
Video3

bound

< The bound node creates a sparse voxel grid around static or deforming geometry with consistent or changing point count and order - a walking character in this case.
The blue wireframe is actually a mesh shape with "display shading" turned off.

Bound nodes can be used for effortless "down-resing" of complex objects for simulation purposes.
The first video shows an out-of-the-box simulation of the proxy geometry (1300 points) generated from the original tree (31000 points). The second video shows a simulation of the original geometry.
Notice the frame rates.


pointCloudField

< A polyCylinder is deformed by wave deformers and its position is animated. A pointAttributeToArray node passes the point positions and tangents to a pointCloudField node. The tangent vectors are interpreted as velocities and applied to the particles. A second pointCloudField node attracts the particles around each mesh vertex so they do not escape when pushed by the first pointCloudField.
Using pointCloudFields we can use any geometry or custom arrays to control dynamic objects in ways that are hard to achieve otherwise.

tensionMap

< This node measures how much the geometry stretches or contracts. There are multiple color coding methods. In this case red is compression, green is neutral, blue is stretching. You can use these color maps to control wrinkle, muscle, veins and whatever other maps you may need for your characters or other things.

There are two modes - distance based (shown here) and in-between-angle based. The first method measures distances between points (edge lengths); the second measures angles between edges - useful when we have deformation without stretching/contraction, for example the bending of a skinny elbow - points get closer, but their edges keep the same length.
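
A hedged sketch of how such a node might be wired, assuming the tensionMap type from this page; the rest/deformed input plug names and the mode values are hypothetical.

    # Hedged sketch: measure stretch/compression between a rest mesh and its deformed version.
    # Plug names and mode values in the comments are HYPOTHETICAL.
    import maya.cmds as cmds

    cmds.loadPlugin('SOuP', quiet=True)

    tension = cmds.createNode('tensionMap')

    # HYPOTHETICAL wiring: rest pose vs. deformed pose of the same mesh
    # cmds.connectAttr('restShape.worldMesh[0]', tension + '.inRestGeometry')
    # cmds.connectAttr('skinnedShape.worldMesh[0]', tension + '.inGeometry')
    # cmds.setAttr(tension + '.mode', 0)   # e.g. 0 = distance based, 1 = angle based (per the description above)
    # The resulting per-point colors/weights can then drive wrinkle, muscle or vein maps.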


scriptsManager

< Manage your Python and MEL scripts the easy way - execute, source/import, edit, tweak attributes in the UI, argument presets, etc.



pointCloudFluidEmitter

< Emit fluid from point clouds. This emitter node brings lots of flexibility to the table. Extract point clouds from meshes, curves, surfaces, particles, paintFX, fluids, etc., supply that to the node's input, and fluid attributes will be emitted according to the data on the input.

In the provided example videos: particles move through a fluidContainer, the pointCloudFluidEmitter reads their positions and emits fluid properties into the voxel grid. In this example I use only the particle positions, but in addition you can feed the pointCloudFluidEmitter with per-point radius, density, heat, fuel and color (optionally - from a specified colorSet). The node can use a pointCloud (arrays), swept geometry, or a regular mesh, surface, curve or particles as input. As mentioned - in this example we keep things simple - just positions.

If you play the first video you will notice that something is amiss. The fluid tries to do its own thing instead of following the particles - not looking very flamethrowerish.


< To make things better we slap on a pointCloudField node that uses the particle positions, radii and velocities to push the fluid in the desired direction. As a result, the second video looks a lot more like a flamethrower. On a similar note - the pointCloudField can use pointClouds (arrays), swept geometry, meshes, surfaces, curves or particles as input.
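
A hedged sketch of adding such a field, assuming the pointCloudField node type; the per-point input plugs are hypothetical, and the connectDynamic attachment only applies if the node behaves like a standard Maya field.

    # Hedged sketch: push a fluid along particle velocities using a pointCloudField.
    # The per-point input plug names are HYPOTHETICAL.
    import maya.cmds as cmds

    cmds.loadPlugin('SOuP', quiet=True)

    field = cmds.createNode('pointCloudField')

    # HYPOTHETICAL wiring - in practice the particle data usually goes through a pointAttributeToArray node:
    # cmds.connectAttr('particleShape1.worldPosition', field + '.inPositionPP')
    # cmds.connectAttr('particleShape1.velocity', field + '.inVelocityPP')

    # If the node behaves like a standard Maya field, attach it to the fluid with connectDynamic:
    # cmds.connectDynamic('fluidShape1', fields=field)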

 

 

 

 

 

< Using this node and a bit of creative thinking we can apply deformers to fluids much like any other geometry in Maya.

textureToArray

< Texture based fluid emission is good, but often we need more precise control over what we emit and where. In this example a textureToArray node converts an animated ramp texture to point colors. An attributeTransfer node uses a boundingObject to override the texture colors in a specific area of the surface. In this case we do it for colors, but it can be anything else - density, fuel, etc.
PointCloudFluidEmitter picks up the final colors and emits them into the voxel grid.

Using similar techniques we can build very precise and flexible fluid emission systems. For example, we can emit fluids based on the tensionMap values from the example above, or if you look at the example below, we can procedurally apply multiple textures based on proximity to (complex) geometry or pointCloud and then emit fluids based on that. Throw some extra boundingObjects in the mix to override/block/edit things and you get some pretty interesting stuff going on.

On a similar note:
take a look at the fluidAttributeToArray example - there one fluidContainer is used to emit properties into another fluidContainer.




peak

< A textureToArray node converts an animated ramp texture to point attributes (in this case - per-point weight). This weight data is passed to a peak deformer, which offsets points along their averaged normals.
This effect can be used for many things - static or animated wrinkles, liquidish looking deformations, bulging flesh, etc.


< In the first example we have a local override of the weight map calculated by the textureToArray node, so we don't get bulging for the points that are inside the boundingObject.

 

 

 

< TextureToArray feeds a peak deformer with pixel values from a procedural noise texture. A second peak deformer makes the blobby "mushroom" effect. AttributeTransfer adds point colors.


trajectory

< Using the trajectory system you can non-destructively manipulate animation paths directly in the viewport. By "non-destructive" I mean that you can work simultaneously in the graph editor and with the trajectory's manipulators in the viewport, and the animation curves will always stay intact. If you change something in the graph editor, the trajectory updates automatically in the viewport, and vice versa - if you edit the path in the viewport, the animation curves in the graph editor update accordingly.

The displayAttributes node allows you to display attribute values in the viewport. This is very handy when you want to debug things during playback or when you simply want to display things around.


timeOffset

< We can cook complex objects (meshes, curves, surfaces, etc.) at different times - effectively offsetting them in time.
In this example we have a running character. I inserted a timeOffset node between the skinCluster and the visible geometry and animated its offset value.







fluidAttributeToArray

< The fire component of the simulation exists in the small fluidContainer only. A fluidAttributeToArray node extracts the voxel properties from there (in this case position + density only) and passes them to a pointCloudFluidEmitter that emits smoke into the big fluidContainer.
Using this technique we can split the main elements of the fluid simulation (in this case - fire and smoke) between different containers for more precise and independent control over simulation and shading.

 

 

 

 

< I always wanted to be able to "voxelize" geometry and render it that way. ComputeVelocity calculates the point velocities of the dancing character and passes them to a pointCloudFluidEmitter node in attributeTransfer mode. At each step the pointCloudFluidEmitter empties the fluidContainer before emitting fluid properties, effectively transferring attributes from the input pointCloud or geometry to the fluid.

 

 

 

 

 

 

 

< Basic stuff. I painted some point colors that get emitted into the fluidContainer by the object.





multiAttributeTransfer

< MultiAttributeTransfer feeds a cluster deformer with point weights based on the proximity between the character and the ground geo.
The closer they are, the stronger the weight.
Point radius, falloff ramps and other attributes can be controlled globally for the entire set of points, or through weight maps for localized control.
The cluster handle is translated along -Y - that's how we get the ground deformations. You can use the peak node to offset points in the same manner for complex geometry (it will do it based on point normals instead of globally for the entire object like the cluster does).

 

 

 

< Similar to the example above, but in this case we "remember" the contact areas between character and ground plane. Mind, this is not the regular soft body trick - it is all procedural - no dynamic simulation involved.

 

 

 

 

 

 

 


< MultiAttributeTransfer allows for localized control over deformer weight maps. In this particular case we have 4 blendShape targets applied to a head geometry. Each boundingObject is connected to a multiAttributeTransfer node that controls the point weights of one of the four targets. The same result can be achieved by using attributeTransfer and arrayToMulti nodes - that's why there are two example scenes supplied.

Notice that blendShape weightMap attributes (much like the skinCluster's) do not react to "dirty" flags. That's why there is a point node at the very end of the chain with 4 getAttr lines in its pre-loop section to force-refresh the blendShapes.
I wonder if the developers will ever fix this problem to allow for procedural control without having to "hack" things all the time.
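
A hedged illustration of what those force-refresh lines could look like; the exact plugs to query depend on the rig, and the blendShape name/weight plugs below are just an example.

    # Hedged sketch: pulling the blendShape plugs every evaluation works around the
    # missing dirty propagation. These lines would live in the point node's pre-loop section.
    import maya.cmds as cmds

    for i in range(4):                                 # one query per blendShape target
        cmds.getAttr('blendShape1.weight[%d]' % i)     # querying the plug forces it to re-evaluate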

Credit goes to the guys at Cinemotion for providing the head geometry and blendShape targets for this example.


< Component texture attachments in Maya are based on object groups. Using the group node we can interactively control that otherwise implicit system.
In this example we have a ramp texture with a cranked-up noise attribute assigned to a couple of objects. A character with a different texture assigned walks around them. Based on proximity we "transfer" the ramp texture from the different objects to the character geometry.


attributeTransfer

< Basic "Summer and Autumn leaves" example where attributeTransfer node transfer point colors from boundingObjects to leaves geometry.

< As you may know, it is very difficult to query scene data from within particle expressions - basically nobody does this because the performance hit is huge. There are no out-of-the-box tools that bridge particles with the rest of the scene other than colliders and force fields. Using SOuP nodes you can easily do any of that.

In this example particle positions and velocities get altered by the point normals of another object in the scene. As a result the instanced geometry orients along the vertex normals. Using this approach we can create interesting effects by making particles play nice with the objects surrounding them.

< A combination of point, peak, arrayDataContainer and computeVelocity nodes can produce interesting motion based deformations.
This example shows the very basics of the idea, which can be easily extended to achieve much more complex and refined results.









arrayDataContainer

< Using arrayDataContainer nodes we can create interesting effects like wetmaps or accumulated damage. The generated data can be used to drive blendShapes (example above) and texture maps.
An attributeTransfer node transfers per-point weights from the fighter geo to the static guy. This data gets passed to an arrayDataContainer node and then to an arrayToPointColor node. A mental ray vertexColors texture pipes it into a shading network where it is used for blending between two textures.

 

 

 

 

< The arrayDataContainer node has an attribute called "sink". At each evaluation step it sinks a little bit of the data stored in the node, creating a "wetmap" effect.

 

 

 

 

 

 

 

 

< PositionPP, radiusPP, rgbPP and weightPP get transferred from the particles to the water surface. As a result, ripples form around every particle that hits the water. A peak deformer displaces the ripple points along Y. A pointCloudFluidEmitter also emits fluid properties from the white areas of the ripples.

 

 

 

 

 

 

 

< Here particles transfer weight over to the nCloth meshes via the arrayDataContainer, which maintains the values over time, allowing fluid emission. The pointCloudFluidEmitter gets its positionPP from a pointAttributeToArray and its inDensityPP comes from the arrayDataContainer.

 

 

 

 

 

 

 

< Very similar to the above method, except the reverse is happening here with the weight transference. The emitting mesh already has a weight of one, but as particles land on its surface the contact points turn black, which prevents fluid emission - hence we can "put out the fire", so to speak.


< With the ability to invert the weight transference, we can pipe a boundingObject's weight value through the nComponent node of a dynamic constraint and use it to control a weld's per-vertex weight attribute. So we could zip and unzip things, or cause breakages in constraints using particles, for example.


pointCloudToCurve

< Maya provides a simple way to "emit" geometry using particles, but there is this nasty cycling that happens to the geometry when the particles start dying. Also, there is no way to propagate per-particle attributes to the geometry points.
Here is how we create this effect the right way:
pointAttributeToArray nodes extract the particle positions and map them to the idIndex arrays. PointCloudToCurve nodes take this data and create nurbsCurves. A loft node creates the polygonal surface, and attributeTransfer maps the particle colors to the polySurface (optionally we can also transfer velocity for proper motion blur). A ramp controls the opacity of the surface along its length. Render :)
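
A hedged sketch of the start of that chain (particles -> pointAttributeToArray -> pointCloudToCurve); plug names in the comments are hypothetical placeholders.

    # Hedged sketch: build a nurbsCurve through live particle positions.
    # Plug names in the comments are HYPOTHETICAL.
    import maya.cmds as cmds

    cmds.loadPlugin('SOuP', quiet=True)

    pta = cmds.createNode('pointAttributeToArray')     # particle positions mapped against the idIndex array
    pc_to_crv = cmds.createNode('pointCloudToCurve')   # interpolates a nurbsCurve through the points
    curve_shape = cmds.createNode('nurbsCurve')        # empty shape that will receive the generated curve

    # HYPOTHETICAL wiring:
    # cmds.connectAttr('particleShape1.worldPosition', pta + '.inGeometry')
    # cmds.connectAttr(pta + '.outPositionPP', pc_to_crv + '.inPositionPP')
    # cmds.connectAttr(pc_to_crv + '.outCurve', curve_shape + '.create')
    # Two such curves lofted together (cmds.loft) give the polygonal ribbon, and an
    # attributeTransfer node copies the particle colors (and velocity) onto the lofted surface.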


pointCloudToMultiCurve

< Auto-generate multiple nurbs curves from a provided point cloud, with live garbage collection, etc.

arrayToDynArrays

< This is a more complex example showing procedurally instanced feathers. A scatter node generates randomly placed points on the character's geometry. AttributeTransfer nodes properly adjust their normals, which are used later to orient the instanced feathers. The data gets collected and passed to the instancer node by arrayToDynArrays nodes. As a result we get a guy fully covered with feathers.
There are actually two of these systems in the scene - one for the body and another for the scalp (the big feathers).
Notice how, unlike instancing to particles, you can scrub the timeline back and forth and things just work.

Using a similar approach we can easily create things like objects built from lego bricks, for example. Finally, don't forget that the instancer node has a built-in LOD where we can display the full-res geo, bounding boxes only, or nothing. Very useful when things start getting heavy.


Video2

< Not sure how to name this effect, but for now it goes by the name of sparse convex wrap.

rayProject

< The rayProject node can be very useful for creating permanent collision deformations. In addition we use a point node to apply vertex colors based on the amount of deformation.


by Jun Eun Kim



instanceManager

< Instancing made easy!

SOuP provides powerful tools for geometry instancing, but the workflow is demanding.
InstanceManager wraps it all in a simple GUI and a straightforward workflow.

video tutorial
- most of it focuses on the basics
- the last 5 minutes explain instancing to instances

ripple transforms
stretchy joints
texture controlled transforms

pyExpression

< Python scripting as an integral part of the dependency graph.
Programmatically and/or procedurally manipulate transform objects, much like how SOuP operates at the geometry level.



arrayToDynArrays

< Using arrayToDynArrays nodes we can build kDynArrayAttrsData structures to control the geometry instancer nodes in a procedural manner without the need to go through particles and expressions.

< In the first example a fluidAttributeToArray node extracts the fluid properties and passes them to a few arrayToDynArrays nodes that feed an instancer node. As a result we instance geometry to the fluid voxels. We can map voxel properties to the instances in many different ways. In this example I tried to keep things simple:
voxel density - instance scale
voxel velocity - instance aimDirection
If a voxel is empty (density = 0) the related instance gets hidden for better performance.
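
A hedged sketch of that voxel-instancing chain; the instancer's inputPoints/inputHierarchy plugs are standard Maya, while the SOuP plug names in the comments are hypothetical placeholders.

    # Hedged sketch: instance geometry onto fluid voxels through arrayToDynArrays.
    # SOuP plug names in the comments are HYPOTHETICAL placeholders.
    import maya.cmds as cmds

    cmds.loadPlugin('SOuP', quiet=True)

    f2a = cmds.createNode('fluidAttributeToArray')   # extracts voxel positions/densities/velocities
    dyn = cmds.createNode('arrayToDynArrays')        # packs the arrays into kDynArrayAttrsData
    inst = cmds.createNode('instancer')              # standard Maya geometry instancer
    proto = cmds.polyCube(name='voxelProto')[0]      # geometry to instance at each voxel

    # The instancer side uses the standard Maya plugs:
    cmds.connectAttr(proto + '.matrix', inst + '.inputHierarchy[0]')

    # HYPOTHETICAL SOuP wiring:
    # cmds.connectAttr('fluidShape1.outGrid', f2a + '.inFluid')
    # cmds.connectAttr(f2a + '.outPositionPP', dyn + '.inPositionPP')
    # cmds.connectAttr(f2a + '.outDensityPP',  dyn + '.inScalePP')          # density -> instance scale
    # cmds.connectAttr(f2a + '.outVelocityPP', dyn + '.inAimDirectionPP')   # velocity -> aimDirection
    # cmds.connectAttr(dyn + '.outDynArrays', inst + '.inputPoints')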

 

 

< This time using a 3D fluidContainer and instancing multiple objects. If you check the example scene, pay attention to how the multiple instances get randomized using a fractal texture and a textureToArray node.











< There is a simple way to turn any particle shape into a point cloud container reacting to input events.
Here one particle shape influences the size of another one.

By Sergey Tsyptsyn

 

 

 

 

 

 

 

< Transfer point colors from geometry to particles.
Remember how hard it was to make particles react to surface properties of the geometry surrounding them? Well, not anymore.
By Sergey Tsyptsyn

 

 

 

 

 

 

 

 

 

< An attributeTransfer node influences the radius of particles passing through a boundingObject.
By Sergey Tsyptsyn

 

 

 

 

 

 

 

< PfxToon color transfer to particles.
By Sergey Tsyptsyn

 

 

 

 

 

 

 

 

 

< Nucleus lacks one very useful feature we enjoyed in the old rigid body solver - collision detection. SOuP brings it back online.
By Sergey Tsyptsyn

 

 

 

 

 

 

 

 

< Another example of procedural control over particles from external events - notice how the particle colors always match the animated texture of the surface underneath.
By Sergey Tsyptsyn





audioToArray (maya audio node)

< Drive transform nodes or procedural networks in Maya with data from audio files (wav, aiff, aifc, snd, stk).
Particles, fluids, geometry generation or deformation, etc. can all benefit from this versatile node.

 

< Interpolate a nurbs curve through the audio bands and feed the result into a peak deformer that offsets the points of the curve. Revolve a nurbs surface from it to visualize the result in 3D.

 

 

 

 

< Represent the audio bands with scaling transforms.

 

 

 

 

 

 

 

 

 

< Deform a polygonal sphere with audio data. Colorize the vertices according to amplitude.

mapToMesh & meshToMap nodes

< Using these two nodes we gain ultimate control over the UV points. Convert the UVs to a mesh, apply deformers, animate, reposition vertices by hand or using other procedural approaches, then convert the final result back to UV points.



arrayToTexture2D

< ArrayToTexture2D converts array data on the fly to a standard 2D texture that can be plugged directly into any shading network. This gives us the ability to drive shaders interactively and make them react to events happening at the geometry level.
In the example images an attributeTransfer node generates a point weight map using a boundingObject. An arrayToTexture2D node converts that to texture data feeding a displacement node.



pointCloudParticleEmitter

< A particle emitter node that uses point cloud data as the source of the emission. This approach provides all the freedom, flexibility and precision one may need. Much like with the pointCloudFluidEmitter node, we supply point cloud data (extracted from meshes, curves, surfaces, particles, fluids, paintFX, etc.) on the input and particles are emitted according to it. The emitter node simplifies the often tedious job of managing PP attributes using standard methods like expressions and/or ramps. It can directly propagate rate, position, velocity, mass, lifespan, radius, rotation, color, opacity, 5 user scalar and 5 user vector attributes - all inherited from the supplied input data.

< Control particle emission using a bounding object. Vertex normals are used as velocities. Particles inherit the vertex colors.

 

< Particle emission from mesh surface with color inheritance.

 

 

 

 

 

 

 

 


< Emit particles from a fluid voxel grid. Particles inherit the voxel velocities. Voxel densities control the emission rate. Voxel densities are also piped to the particles as a userScalar1PP attribute that controls a ramp attached to the rgbPP attribute - that way we recolor the particles according to the densities.

 

 

 

 

 

 

 

< Making force shield effects is much simpler now.



slidersManager

< Easily create and manage a large number of animation sliders and selectors. These objects can be used to control any element or group of elements in your scene. Facial animators often use similar tools to streamline their workflow.
This tool adds lots of additional features and flexibility. For more information read the help tab.









< Included is "takes" system that allows making of "snapshots" for all/selected sliders and applying back % of the takes to the sliders.


bmesh

< Remember the zSpheres tool in ZBrush? Well, this is the same thing but directly in Maya and even more interactive.
How it works - select a joint and run the "bmesh" command from the SOuP shelf. All joints under the "root" one will be used to form a continuous mesh surface. The mesh generator is fully interactive - manipulating or deleting existing joints, or adding new ones, instantly updates the mesh. The "volume" of each joint is controlled by its radius attribute.
Big credit goes to Michael Tuttle for sharing his working Maya version based on Justin Ardini's open source project (http://justinardini.com/).
The modified open source code is included in the SOuP archive.


< An interesting example provided by Jeremy Raven shows how to assemble different SOuP nodes to achieve sophisticated control over your effect elements.


shell

< A complex node for dealing with mesh shells. It can extract per-shell data - points, normals, colors, weights, radii, component ids, bounding box.
In addition, it can control mesh shells with a point cloud supplied on the input, by remapping each point's attributes to the corresponding mesh shell.





video tutorial

cocoon

< Draw connections between points based on proximity. The node provides a lot of control over how the connection lines are drawn - input point cloud attributes can be mapped to the lines: color, transparency, thickness at both ends and along the lines, etc.

 

 

 

 

 

< Precise control over thickness and color along the length of the lines.

 

 

 

 

 

 


< Offset ramps allow displacing the links along their length independently in XYZ. This way we can create spiderwebs and other interesting shapes like the one shown in the image on the left.


tensionBlendShape

< Blend between different targets based on surface tension.


stickyLips

< Most high-quality head rigs need a system that mimics sticky lips. The solutions usually end up being cluttered and messy - large networks of nodes, constraints, expressions, deformers, etc.
This StickyLips system provides an alternative that consists of just one generic node (stickyCurves) which takes care of, and hides, all the involved complexity. As a result the viewport performance gets a boost and technical artists have one less thing to worry about.
A simple API is provided to allow for easy integration into any scripted rig system.
A GUI that streamlines the interactive workflow is included.


tensionMapSimple

< Similar to the tensionMap node, but here we can provide an explicit list of point pairs to calculate tension from. Each tension value can be remapped by a corresponding ramp. Additional options for limiting, absolute values, etc. are included.



voxel reveal

voxelGrid

< The bound node can generate a mesh cage around any geometry, but mesh generation is computationally expensive. It does not make sense to go through that if all we need is a point cloud representation of the (sparse) voxel grid.
The combination of voxelGrid + pointsOnMeshInfo can generate dense voxel grids quickly and easily. Like everything else in SOuP, this is a live data generator that provides many options for managing the data flow. For example - trim voxels located away from the base geometry, or extract surface properties like colors, normals, uvs, etc. and propagate them to the voxels.
Using the arrayToDynArrays node, voxels can be piped to Maya's instancer or SOuP's copier and rendered as arbitrary geometry.



copier

< Copier is an extremely powerful node that allows us to copy-stamp mesh geometry. It supports per-instance time offset, handling of vertex colors, uvs, soft/hard edges, shader attachments, etc. The node outputs data as a single mesh object or as instancer data.

Video tutorial here: http://vimeo.com/76754222



mesh2arrays

< Another very useful node, provided by Alex Smolenchuk. It performs uniform scattering of points on mesh objects. These points carry normals, tangents and other properties inherited from the underlying surface.

tetrahedralization and dynamic simulations

tessellation

tetrahedralization of complex geometry

"morph" between objects with different topology

geometry reconstruction (remeshing)

tetrahedralize

< This node offers plenty of options for constrained or conforming tetrahedralization, cellularization, triangulation, convex hull, tessellation and geometry reconstruction (remeshing) of arbitrary mesh geometry or oriented point clouds.

contour

< Shape the silhouette of mesh objects from a given viewport perspective in an intuitive and effortless way.

general workflow

blend direction

collide

< A powerful toolset for geometry collisions.
- high-quality collisions and bulging
- speed - extra care was taken to ensure maximum performance in every case
- complete control over every aspect of the workflow - global settings, per-deformed-object overrides, per-collision-object per-deformed-object overrides, the list goes on
- tight integration with the rest of SOuP nodes - normal/membership/weightmap modifiers, etc
- well structured UI makes all features convenient and easy to use

And best of all - you will rarely use any of them!
An intelligent algorithm takes care of all the details in most cases.

Provided here are 3 videos:
- tutorial
- comparison between Maya sculpt, Maya muscle and SOuP collide
- basic bulging

directional diffusion

< Generate organic patterns on the surface or inside the volume of mesh objects.

Peter Shipkov ©2011 (pshipkov@yahoo.com)