Miro Board Link:
https://miro.com/app/board/uXjVOKHnfj0=/?share_link_id=371325218573
showreel:
Animation
model two joints constrained to each other: one skinned to the main body, one to the tail.
how to make the tail movement dynamic: create a curve
add a spline IK
make the selected curve dynamic (hair simulation)
the hairSystem node drives the dynamics
the nucleus node handles the physics of the world
the follicle node drives the growth of the hair
if you play the simulation the curve drops.
with Point Lock set to Base, the free end drops along with the curve
use the Node Editor to make the dynamic curve drive the IK handle
Start Curve Attract. Determines the amount of attraction of the current hair position to the start position. This attribute is useful, for example, where you want to have stiff hair, or hair that moves with a character.
When Attraction Damp is 1, the motion of hair moving towards its start curves is fully damped, leaving only its Start Positions and field forces to dynamically influence its motion.
The Attraction Scale ramp attenuates the Start Curve Attract attribute value along the length of the hair clumps in your hair system. You can use the ramp graph to define a varied stiffness from root to tip for the hair clumps in your hair system.
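As a toy illustration of what Start Curve Attract does (my own sketch, an assumption for intuition only, not the Nucleus solver's actual code), each simulated hair point can be thought of as being pulled toward its start-curve position by the attract amount each step:

```python
# Toy sketch of Start Curve Attract (illustration only, not the real
# Nucleus solver): each simulated hair point is pulled toward its
# start-curve position by the attract amount.
def attract_step(position, start_position, attract):
    """attract = 0: fully dynamic; attract = 1: glued to the start curve."""
    return tuple(p + attract * (s - p)
                 for p, s in zip(position, start_position))
```

With attract at 1 the point snaps back onto the start curve, which is why a high Start Curve Attract reads as stiff hair that follows the character.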
create a control for the system
Use the interactive playback to move things in the viewport
nCloth Collision attributes:
Solver Display: specifies which Maya Nucleus solver information is displayed in the scene view for the current nCloth object. Solver Display can help you better diagnose and troubleshoot any problems you may be having with your nCloth.
Collision Thickness
When on, the collision volumes for the current nCloth object are displayed in the scene view. Collision Thickness helps you visualize an nCloth’s thickness, and it is useful when tweaking an nCloth’s collisions with other nCloth, nParticle, and passive objects. The appearance of the current nCloth’s collision volumes is determined by its Collision Flag.
delete the tail part that is affected by the system and set the rest of the body as a passive collider (nCloth settings). now the tail will move without colliding with the rest of the body.
auto skinning advanced skeleton:
deform option 2 will create the cage structure around your character
the green controls represent where the geometry falls
the red controls represent where the fall-off of the skinning is
copy skin weight to project the weights onto the character:
however the head skin weights and some other parts may not work too well. In order to edit this, the green controls can be adjusted to better project the weights onto the character.
once the corrections have been applied and the weights copied again, the head can still be problematic; in that case, manually painting the skin weight fall-off is the right solution.
face skinning
a symmetrical and clean model is needed to work with Advanced Skeleton
select a mask of the face
connect the geometry to the corresponding selection in the Advanced Skeleton menu
the Fit menu for the detail on the face is very important to set precisely, since the program builds the controls and skins the weights according to where you set the points on the face.
once it is built, by selecting the toggle switch you can edit the selection and rebuild the pose again.
There are either joint-based or blend shape-based approaches for rigging a face.
Joint-based systems
A joint-based facial rig utilises joints and skinning to affect areas of the model.
A joint-based system can be directly attached to the skinned (main-rig) model or offsite on a duplicate model and then connected to the main rig using a blend shape.
Joint systems can also be used on duplicate models as part of blend shape systems. For example, to help create a rotation for a blink or eyebrow behaviour.
Blend shape (on or off site) systems
Blend shape systems use duplicate models to create distinct shapes (targets) for movement.
You can either use fully duplicated models (offsite) or create (onsite) targets to create your system.
Offsite (full duplicate modelling) allows you to use sculpting tools because you are not directly altering the skinned mesh. A full duplicate can also be exported out to ZBrush for sculpting or to be archived.
Onsite (skinned model modelling) means you are unable to use full sculpting tools because you are altering a skinned mesh.
cheek thinning blend shape using the shape editor
select the face mesh, first create one empty blend shape, and afterwards add some targets to it. when editing the mesh for a target, the “edit” button should be turned on (red)
to make the edit definitive, use the Set Driven Key editor with the joint as the driver and the face shape as the driven, with the corrective shape selected
sculpt tools (shape authoring) designed to work with blend shapes: erase, smooth, and soft selection
pull-out and pull-in mouth targets: the same process can push the pose further while keeping the target value at 1. duplicate the target twice, set the first target to 0, the second to 1, and the third to a 0.3–0.4 value, then merge the second and third targets and delete the first one: now the pose is pushed further and the value is still 1.
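The target-merging trick is really just adding deltas: a copy at value 1 merged with a copy at 0.3 bakes a delta 1.3 times the original, so the stronger pose is reached with the slider still at 1. A plain-Python sketch of that arithmetic (my own illustration, not Maya's shape editor):

```python
# Illustration of pushing a blend shape target past 1 by merging weighted
# duplicates (plain Python, not the Maya API). A target stores per-vertex
# deltas; merging copies at weights 1.0 and 0.3 bakes a 1.3x delta.
def merge_targets(weighted_targets):
    """weighted_targets: list of (weight, deltas); deltas: list of (x, y, z).
    Returns one baked target meant to be used at weight 1."""
    merged = [[0.0, 0.0, 0.0] for _ in weighted_targets[0][1]]
    for weight, deltas in weighted_targets:
        for m, d in zip(merged, deltas):
            for axis in range(3):
                m[axis] += weight * d[axis]
    return [tuple(d) for d in merged]
```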
curl in/out and pull in/out lip blend shapes: once created, they can be added to the rig using the Set Driven Key editor, where the driver in this case is the mouth control and the driven attributes are the blend shapes. all the keys created have to be set to linear tangents in the Graph Editor.
Once every shape is created they should be connected to the face controls through the set driven key editor.
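Conceptually, a set driven key with linear tangents is just a piecewise-linear map from the driver attribute to the driven one. A minimal sketch of that evaluation (my own illustration, not Maya's animCurve node):

```python
# Minimal sketch of what a set driven key with linear tangents evaluates
# (illustration only, not the Maya API): the driven value is interpolated
# between keys set against driver values, and held flat outside the range.
def driven_value(driver, keys):
    """keys: list of (driver_value, driven_value) pairs."""
    keys = sorted(keys)
    if driver <= keys[0][0]:
        return keys[0][1]
    if driver >= keys[-1][0]:
        return keys[-1][1]
    for (d0, v0), (d1, v1) in zip(keys, keys[1:]):
        if driver <= d1:
            t = (driver - d0) / (d1 - d0)
            return v0 + t * (v1 - v0)
```

For example, a mouth control's translateY keyed at 0 → 0 and 10 → 1 drives the blend shape halfway at translateY = 5.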
This week I have animated a lip-sync animation.
I first animated the jaw bounce and the lips and cheeks afterwards:
After working on the face animation I have added some body poses starting from the root control and finally the arms and the head to add emphasis to the action.
This week we learned how to use animBot and apply it to our workflow.
animBot is the most powerful toolset for Maya animators, used by more than 90% of the greatest full feature and AAA game studios. These are some of the features I have used:
For situations where you need to increase or decrease precise values to a key or attribute.
Nudge selected keys to the left or right.
Tip: if you go to an empty frame and Nudge Left Right, the next right key will snap to that frame.
Nudge commands move keys around. You can nudge selected keys or even all keys existent in the scene.
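The nudge idea boils down to shifting key times by whole frames. A sketch of the behaviour (my own helper for illustration, not animBot's code):

```python
# Sketch of nudging keyframes (illustration, not animBot's implementation):
# shift the times of the selected keys left (negative) or right (positive).
def nudge_keys(key_times, selected, frames):
    """key_times: dict key_id -> frame number."""
    selected = set(selected)
    return {key: time + frames if key in selected else time
            for key, time in key_times.items()}
```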
Store key times from selection, to be pasted later.
That’s the classic Tween Machine taken to a whole new level.
Based on traditional animation techniques, with this one you can create precise inbetween keys, first pass breakdowns and save hours of work by basically not dealing with animation curve tangents.
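At its core the tween idea is a biased interpolation between the previous and next pose, so you never touch curve tangents by hand. A minimal sketch (assuming a plain linear bias; animBot's real tool does much more):

```python
# Minimal tween-machine-style inbetween (assumption: linear bias between
# neighbouring poses, illustration only). bias = 0 matches the previous
# key, bias = 1 the next key, bias = 0.5 a true middle.
def tween(prev_pose, next_pose, bias):
    """Poses are dicts of attribute -> value."""
    return {attr: prev_pose[attr] + bias * (next_pose[attr] - prev_pose[attr])
            for attr in prev_pose}
```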
Selection buttons to help select chunks of rig controls.
It’s meant to be a quick and simple way to organize and select stuff in your scene, and despite not being a picker substitute, if you don’t have one it will do a pretty decent job.
Copy (store) the animation of selected objects.
You can then paste the animation in the same or another Maya session, to the same objects, other objects, other channels and even to another namespace/character.
Transfer animation to a fresh new scene:
Afterwards I started my animation with a monster rig. I constrained some proxy geometry to better visualise the silhouette and movement of the character’s body. I imported reference footage for the animation, blocked the poses using stepped keys first, and splined them afterwards. This is the final playblast.
Miro Board Link:
https://miro.com/app/board/uXjVOKHnfj0=/?share_link_id=371325218573
This week I have finalised the compositing stage.
For the first shot, to add an introduction effect, I duplicated the trees from the background, positioned them in the foreground to give an impression of depth, and animated them so that the closer the camera pans to the character, the further the trees move out of the frame.
For the skeleton fish scene I had already added a fisheye effect in Maya, although I wanted to accentuate this effect even more, so I created a black solid layer with a mask set to subtract and feathered the edges to make the borders even darker.
As I mentioned last week, the depth map I had rendered was not working well, since the gradient looked very similar in the background and the foreground. So I created a black solid layer and applied a gradient map to it to select the area affected by the blur. On top of it I created an adjustment layer with a camera blur effect applied to it and the black solid layer selected as a mask.
compositing with sounds
After I have composited all the different shots I have added sounds to them based on the animatic.
This week I have also remade the intro in 3D, since I thought that the 2D version was not fitting with the style of the entire animation. I used the fishing rod model from the animation, created a 3D text, and parented it to the bait.
In order to make the animation more visually interesting I have added some 2D graphics:
some depression lines
comic text, inspired by the Deadpool video game, showcasing and indicating the time spent waiting.
Moreover, I have added a stars animation when he wakes up after he fell trying to catch the fish.
for some of the most crucial transitions in the animation I have created a comic slide transition to highlight the actions showcased in the scenes.
photoshop + illustrator
During the second experiment I carried out for the painting style of the background, I thought I could import the keyframes into Illustrator after I had painted them and apply an Image Trace effect to them: the Image Trace feature in Adobe Illustrator is a quick way to convert an image to vector format for high-quality printing at any size.
keyframe with the image trace effect applied to it.
This week I also rendered more keyframes to make the video longer (72 frames).
I painted the first key and every tenth keyframe from there, so that the transition would be smooth in the final output.
EBSynth processes the input keyframes, translating ten frames before and after those provided. This time I placed the sequences according to the keyframe order in After Effects, setting an opacity on them so that the frame transitions would look smoother.
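The opacity layering can be sketched as a linear cross-fade between the sequences generated from neighbouring keys (my own helper for illustration, not an EBSynth or After Effects feature):

```python
# Sketch of cross-fading EBSynth output sequences in the comp (my own
# illustration): with painted keys every `spacing` frames, each key's
# sequence is fully opaque at its key frame and fades out toward the
# neighbouring keys, so overlapping sequences blend smoothly.
def sequence_opacity(frame, key_frame, spacing=10):
    distance = abs(frame - key_frame)
    return max(0.0, 1.0 - distance / spacing)
```

At any frame between two keys the two opacities sum to 1, which is what hides the sudden jump in the painted effect.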
render grey model, wireframe and final render
For the project submission I thought that, other than the EBSynth video, I could render a wireframe of the model and the scene, as well as the beauty and the paint effect, to showcase the process behind the final result.
I have found this video to create the wireframe surface
I first applied a standard surface to it and created an aiWireframe node in Maya that I later linked to the standard surface’s base colour. I also changed the fill colour and the line colour to make them purple.
For editing I have used After Effects, and on each master layer I applied some effects to the beauty layer, such as exposure and hue and saturation. I also overlaid the AO layer on top to give some depth and definition to the shadows as well. To the AO layer I then applied a gamma effect to edit the RGB channels.
EBSynth picture resolution
In order for EBSynth to render the video, the pictures should not be in EXR format, which is what I had first exported them as. After I re-rendered them as JPGs, the painted keys and the keys from the render no longer matched in resolution, and EBSynth was not able to process them, repeatedly giving me a “resolution error”. So I manually matched the resolution of the painted keys to that of the footage in Illustrator. Once I did that, the program worked again.
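A tiny sanity check like the one below would have caught the mismatch before EBSynth raised its error (my own validation helper, not part of EBSynth, which only reports a generic resolution error):

```python
# Sketch of checking that every painted key matches the footage resolution
# (my own helper for illustration).
def mismatched_keys(key_resolutions, footage_resolution):
    """key_resolutions: dict filename -> (width, height).
    Returns the filenames whose resolution differs from the footage."""
    return sorted(name for name, size in key_resolutions.items()
                  if size != footage_resolution)
```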
submission and showreel
the initial version of the showreel had some presentation errors: the font I had used was too big and wrong; some of the bed renders appeared a bit too overwhelming, since I had animated not only the camera but the model itself; moreover, the resolution of the footage was not TV-safe. So I have created a second version of the showreel applying these corrections.
I also used a different wireframe process since the other one was not correct as it appeared way too dense in the blanket and pillows areas.
I applied a black surface, put the bed into a new layer, changed the colour to white, and set the shader to wireframe.
this is a frame of the final wireframe shader I have used.
Miro Board Link:
https://miro.com/app/board/uXjVOKHnfj0=/?share_link_id=371325218573
Once I had rendered the animation, I started the compositing stage both in Nuke and in After Effects. Both had the same colour issue with the beauty layers of the render layers.
The problem was related to the beauty and alpha channels, which I had previously rendered separately. In After Effects I positioned the different layers in the following way to make it work: the physical sky at the bottom, followed by the beauty from the master layer and the alpha channel from the master layer. The beauty’s mode should be set to alpha matte in order for the alpha to mask the black background and show the physical sky beneath.
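The alpha matte setup is standard alpha-over compositing. In per-pixel terms (my own sketch of the math, not After Effects internals):

```python
# Per-pixel sketch of what the alpha matte does (standard alpha-over math,
# illustration only): the beauty is shown where alpha = 1 and the physical
# sky shows through where alpha = 0.
def alpha_over(beauty_rgb, sky_rgb, alpha):
    return tuple(b * alpha + s * (1.0 - alpha)
                 for b, s in zip(beauty_rgb, sky_rgb))
```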
the picture here showcases the Nuke viewport. In this case the merge node was acting in screen mode, so the layers above would appear with a lower opacity. However, if I first merged the alpha channel and the beauty layer from the master layer, and afterwards created an additional merge node connecting those two with the physical sky, they would appear normal.
This week I also created the intro for the short animation. I first created the illustrations and text using Illustrator, so that they would be vector drawings and the quality would be high, and afterwards I imported the different assets into After Effects to animate them according to the animatic.
In order to add depth to the scene, especially for the trees in the background, I researched a way to render a depth map in Maya. One way of doing it is to add the Z render pass in the render settings. Once the image is rendered it can be imported into After Effects, although it will first appear completely white. Once exposure and levels effects are applied to it, it shows a greyscale of the rendered image, with the darker values being those in the foreground and the lighter values those in the far back.
This is the depth map from the first scene. However, as you can see, the difference between foreground and background is not that evident, so once I linked it to the “camera blur” effect on the beauty layer in After Effects, everything appeared blurred in the same way and the foreground and background were not differentiated.
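Remapping the flat gradient before driving the blur is essentially a levels operation: stretch the narrow depth range to the full 0–1 range, then scale by the maximum blur. A sketch of that mapping (my own helper, not the After Effects camera blur effect):

```python
# Sketch of mapping a flat Z-depth gradient to a blur amount (illustration
# only): stretch the [near, far] value range to [0, 1] first, then scale,
# so foreground and background separate properly.
def depth_to_blur(depth, near, far, max_blur):
    """depth: greyscale value in [0, 1]; dark = foreground, light = background."""
    normalized = (depth - near) / (far - near)
    normalized = min(1.0, max(0.0, normalized))
    return normalized * max_blur
```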
water reflection
Unfortunately, the water in some shots I had previously rendered appeared with a weird reflection, especially when the waves directly faced the sun, which came across as unnatural. At first I thought that the physical sky might have caused it, but I compared the scenes where this did not happen and the values for the physical sky were the same. The difference was in the water shader, specifically the colour and the IOR values, so I set all the scenes with this problem to the following settings.
color correction
Once all the renders were ready, I started to composite each shot. Other than the alpha channel and the beauty of the master layers, I also overlaid the AO layer, which gave a nice shadow effect to the items in the scene. To the beauty I applied hue and saturation and exposure effects; to the AO, a gamma effect.
clouds
While compositing I felt that the sky was a bit “empty” and that adding some clouds would add some depth to it as well. So I looked for some cloud OBJs on TurboSquid and found these by VARRRG, which are a bit stylised and fit with the overall style of the animation.
Once all the textures were imported and edited, I rendered the scene.
in the render layer settings I created a bed layer, a background layer, and a shadow matte as well. For the bed render layer I created two different collections: one with the bed model and a second one with the background but with primary visibility turned off, so that the shadows would still be displayed on the bed model but the background would not be visible. I did this because afterwards I am going to manually paint over the background to add a brush-stroke effect, which I will apply to the footage using EBSynth.
This is the first experiment I did painting the background in Photoshop: I set up three different layers, the bottom one being the rendered frame from the background; I then created a painted layer on top of it, and at the top the PNG of the bed. I chose not to paint the bed model, since I thought covering all the texture would make all the work I did in Substance Painter pointless.
After I was done with the painting stage of the frame, I imported the frame and the rendered footage from the master layer into EBSynth to apply the paint effect to the rest of the footage.
However, there was just one frame, and the program did not know how to fill the parts of the footage that were not visible in the painted key at first but appeared later in the video (the top of the bed, for instance).
Afterwards I painted two additional frames to add to the footage, which were far apart and covered everything shown throughout the footage.
this is a screenshot of the EBSynth settings: there are two directories where you import the original footage as an image sequence and the painted keyframes. Based on the directory of the folders with the images, the program automatically creates a project directory for the output footage, which is an image sequence as well. The names of the keyframes chosen to edit should be the same as those from the original footage.
this output, which I composited in After Effects, was not working well either: the keyframes were not enough and the change in effect was too sudden; the transition had to be smoother.
Afterwards I created an additional frame and added it into the program.
This is the final output for the first experimentation: