The Future of Artificial Intelligence (AI) In Animation

AI has made inroads into the world of 3D animation in many ways, and we expect it to go further still. Let's face it: animation is hard work. Almost every facet of animation is painstakingly difficult, so any help artists can get from AI is most welcome. Let's explore how AI will enter the animation production pipeline in the not-so-distant future.

Idea creation is, by its nature, a creative task. Even so, it is possible for AI to develop ideas based on a loose story arc. In the future we expect AI to take prompts describing a fictional reality in the future, past, or present and, based on those prompts, create a story with characters, locations, dialogue, and even conflicts and resolutions. A creative might, for instance, request a story set in the distant future around a fifth world war and the possible story arcs from that. The AI could generate several stories from that one prompt.

Scriptwriting lends itself fairly easily to AI because it is based on an already written story. In the future, AI will take the story and develop a script based on what it determines the story arc to be. It will write dialogue, create scenes and shots, and populate those shots with props and characters, all based on the story. The creative will only need to feed it the story's chapters to get the script.

Storyboarding is a pictorial representation of the script: a shot-by-shot image sequence of story flow, tempo, and development, as well as camera angles. We expect AI to take a script, suggest interesting camera angles to depict the action, create the drawings that match the story's descriptions, and set a tempo for the action. The AI will also be able to produce the animatic from the storyboard, complete with music and dialogue, if any.

In concept or character design, artists translate the idea of a character into that character's actual look and feel. Although involving AI here could be tricky because of how creative the process is, there are still ways it will be used. Artists will iterate designs by describing their creative ideas to an AI: the legs, the arms, the torso, and the face, and even the shape and form of the armour and weapons, if any. The AI will take a character description, by voice or text, and produce a number of initial looks the artist can work with. The artist will then fine-tune the result with increasingly specific instructions until each body part matches what he wants.

Let's talk about modeling and texturing. Currently you model a character from blueprints drawn in front, side, and top elevations. As you model in a 3D program, you have to visualize how the three views come together to form the whole. We predict that in the future AI will take over this work in spectacular ways. You will still need to prompt the AI by describing the character, especially facial characteristics such as eye color, race, scars, and deformities, if any, and the type of character; a picture of the character will help even more. The AI will take the front and side elevations and produce a faithful model of the required character.

Currently, AIs can take a photograph and create a rough 3D model, but the results are usually far too high-resolution and tend to capture only about half of the character's width.

As for texturing, we have until now had a lengthy process: unwrapping the parts of the body mesh, including accessories, then switching to an image editor to paint the various unwrapped parts, such as the face, hands, and props. We project that AI will take over this process; you will only have to describe the prop's characteristics, if any. If the prop is a firearm, you may need to specify which firearm; if it is a sword or spear, you may need to describe the texture you want on the weapon.

Rigging is the area 3D artists try to avoid if they can. One good thing is that over time we have had a number of plugins that assist with rigging, such as Advanced Skeleton 5. These plugins let you take a body mesh, fit a pre-created skeleton over it, and, with a keystroke, bind the mesh to the skeleton fairly efficiently. The skeleton comes with its own controllers, so you need not create any. Unfortunately, the approach has functional drawbacks. The first is that the character's two halves must be proportional, identical in faces and vertices; any discrepancy causes the plugin to fail. Typically, the process involves deleting half of your character, creating a mirrored clone of the remaining half, and welding the two together. Even then, the plugin may still fail.
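The mirror-and-weld step described above can be sketched in a few lines. This is a minimal illustration, assuming a mesh is just a list of (x, y, z) vertex positions; real tools such as Maya or Blender operate on full face and edge data, and the helper name is made up for this sketch.

```python
# Hypothetical sketch of the mirror-and-weld step a symmetry-dependent
# rigging plugin expects: keep one half, mirror it, weld the seam.

def mirror_and_weld(vertices, tol=1e-6):
    """Keep the +x half of a mesh, mirror it across x=0, weld seam verts."""
    half = [v for v in vertices if v[0] >= -tol]              # drop the -x half
    mirrored = [(-x, y, z) for (x, y, z) in half if x > tol]  # clone across x=0
    return half + mirrored                                    # seam verts appear once

verts = [(-1.0, 0.0, 0.0), (1.2, 0.5, 0.0), (0.0, 2.0, 0.0)]
symmetric = mirror_and_weld(verts)
# The result keeps (1.2, 0.5, 0.0), adds its mirror (-1.2, 0.5, 0.0),
# and keeps the seam vertex (0.0, 2.0, 0.0) exactly once.
```

Because vertices at exactly x=0 are kept but never cloned, the seam is "welded" by construction rather than by merging duplicates afterwards.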

AI will sort this out. An AI will be able to fully rig your character, create the bind, and apply the correct controllers, just from being presented with a body mesh. It is important to mention that a mesh may be deliberately asymmetric by the will of its creator, and may not even have the same elements on both sides. The AI will nevertheless rig these disproportionate characters perfectly.

Character animation consists of body mechanics and acting. For the most part this is done by hand. In recent times, however, we have seen assistance here too, with the introduction of motion capture: the mechanical capture of live motion and the mapping of that motion onto a digital subject such as a humanoid or monster mesh. Typically, a performer wears a body suit and makes the required movements; the data from those movements is then mapped onto a digital character, which repeats them. However, a lot of cleanup work has to be done on this data to remove unwanted movements. Motion capture is also best suited to realistic movement rather than cartoon movement. Animation is an art form that relies heavily on exaggeration, so there is a need to create movements that bring out that principle.
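The cleanup pass mentioned above is, at its simplest, a smoothing filter over the captured curves. Here is a toy sketch using a moving average on a single joint-rotation curve; production pipelines use stronger filters (e.g., Butterworth) on the full skeleton, and the frame values below are made up.

```python
# Toy sketch of mocap cleanup: damping capture jitter in one rotation
# curve with a simple moving average. Values are illustrative.

def smooth(curve, window=3):
    """Average each frame with its neighbours to damp capture jitter."""
    half = window // 2
    out = []
    for i in range(len(curve)):
        lo, hi = max(0, i - half), min(len(curve), i + half + 1)
        out.append(sum(curve[lo:hi]) / (hi - lo))
    return out

noisy = [10.0, 30.0, 10.0, 30.0, 10.0]   # jittery knee rotation, degrees
print(smooth(noisy))                     # visibly flatter curve
```

The trade-off is the classic one: a wider window removes more jitter but also softens the sharp poses an animator may want to keep, which is why cleanup cannot be fully blind.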

This is where AI comes in. The AI of the future will take a prompt from the animator describing the required movement, such as "an exaggerated jump from the high ledge onto the ground", and execute the animation perfectly. Facial expression and lip sync are another area AI can handle: given a typical mesh and a text document that clearly states the emotion and the word at each moment, the engine will animate the lip sync.
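In data terms, text-driven lip sync boils down to turning timed words into mouth-shape (viseme) keyframes. The sketch below illustrates only that mapping step; the word-to-viseme table is a made-up stand-in for real phoneme analysis, and the function name is hypothetical.

```python
# Toy sketch of text-driven lip sync: timed words in, viseme keyframes out.
# The first-letter lookup is a deliberate oversimplification of phoneme
# analysis, kept only to show the data flow.

VISEMES = {"a": "open", "o": "round", "m": "closed"}

def keyframes(timed_words):
    """Turn (time, word) pairs into (time, viseme) keyframes."""
    return [(t, VISEMES.get(w[0].lower(), "neutral")) for t, w in timed_words]

print(keyframes([(0.0, "Animation"), (0.5, "moves"), (1.0, "onward")]))
```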

Sometimes it is very difficult to light a scene by physically placing and manipulating lights. AI will take an input specifying which objects to light, the intensity of the light, and its source, and calculate the result accurately. Some objects, such as bulbs, will need to emit light into the scene; it will be a one-step process to designate an object as a light emitter and specify what kind of light it gives off. Alternatively, the AI will analyze your scene and determine how much light to add and where to put it. You will also be able to say, by voice prompt, where you want the light, what it should illuminate, and to what extent.
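The calculation behind "intensity of light, source of light" is well defined: a point light falls off with the square of the distance. Here is a minimal sketch of that arithmetic, with illustrative units and values; a real renderer also accounts for surface angle, shadowing, and colour.

```python
# Sketch of inverse-square falloff from a point light: the basic
# arithmetic any lighting calculation, AI-driven or not, rests on.
import math

def illumination(light_pos, light_intensity, point):
    """Intensity arriving at `point` from a point light, 1/d^2 falloff."""
    d2 = sum((a - b) ** 2 for a, b in zip(light_pos, point))
    return light_intensity / d2 if d2 > 0 else float("inf")

bulb = (0.0, 3.0, 0.0)                                # object designated as emitter
print(illumination(bulb, 100.0, (0.0, 0.0, 0.0)))     # floor point 3 units below
```

Doubling the distance quarters the received intensity, which is why an AI placing lights must trade distance against source intensity rather than treat them independently.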

Visual effects are where AI fits particularly well and can do great things. Visual effects in animation include the creation of hair, fur, water, fire, cloth, and dust; others include explosions and demolitions, basically any chaotic event you can think of. In the future, AI will receive a description of an effect such as a blizzard, a snowstorm, or an explosion and manufacture it. It will produce an effect based on parameters fed to it verbally, such as wind speed, intensity, flame volume, and so on. Similarly, effects such as water will be created based on volume, wave height, frequency, and so on.
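A water effect driven by "height of waves, the frequency and so on" can be sketched as a single travelling sine wave whose parameters are exactly those verbal inputs. This is a deliberately minimal illustration; production water solvers sum many such waves (e.g., Gerstner waves) or run full fluid simulations.

```python
# Sketch of a parameter-driven water surface: one travelling sine wave,
# with amplitude (wave height), frequency, and speed as the inputs the
# artist would describe verbally. Values are illustrative.
import math

def wave_height(x, t, amplitude=0.5, frequency=0.8, speed=2.0):
    """Surface height at position x and time t for one travelling wave."""
    return amplitude * math.sin(2 * math.pi * frequency * (x - speed * t))

# Sample the surface at the origin over a few time steps.
for t in (0.0, 0.1, 0.2):
    print(round(wave_height(0.0, t), 3))
```

Because every visual property maps to one named parameter, a prompt like "taller, slower waves" translates directly into raising `amplitude` and lowering `frequency`, which is what makes effects such a natural fit for prompt-driven AI.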

Rendering is the last step of the pipeline, where everything is brought together and fused into a single video. AI will be able to take voice instruction to render a given video in a certain look, e.g., cartoonish, live action, or Claymation, to name but a few. This should make things quite interesting, since the original footage may be 3D animation yet be rendered in a Claymation style.

Article by: James Kinyanjui