AU3576999A - Improvements in or relating to animation - Google Patents

Improvements in or relating to animation

Info

Publication number
AU3576999A
Authority
AU
Australia
Prior art keywords
animation
image
glove
computer
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU35769/99A
Inventor
Janette Dalgliesh
Jane Hollands
Michael Hollands
John Mciver
Shane Norris
Hugh Joseph Simpson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ACT III Pty Ltd
Original Assignee
ACT III Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AUPP4207A external-priority patent/AUPP420798A0/en
Application filed by ACT III Pty Ltd filed Critical ACT III Pty Ltd
Priority to AU35769/99A priority Critical patent/AU3576999A/en
Publication of AU3576999A publication Critical patent/AU3576999A/en
Abandoned legal-status Critical Current


Landscapes

  • Processing Or Creating Images (AREA)

Description

P/00/011 Regulation 3.2

AUSTRALIA

Patents Act 1990

ORIGINAL COMPLETE SPECIFICATION
STANDARD PATENT

Invention title: IMPROVEMENTS IN OR RELATING TO ANIMATION
The following statement is a full description of this invention, including the best method of performing it known to us:

IMPROVEMENTS IN OR RELATING TO ANIMATION

FIELD OF THE INVENTION

The present invention is directed to improvements in or relating to animation, and more particularly to improvements to real-time animation.
BACKGROUND OF THE INVENTION

The use of computers for animation in the production of motion picture films and videos is now widespread. Computer animation extends to both animate objects (such as cartoon characters) and inanimate objects, the enlivening of which has traditionally been achieved by hand painting stills, photographing them on a frame-by-frame basis, and generating a moving image by playing back the frames so photographed at a predetermined rate to give the illusion of motion.
Known computer animation techniques include motion capture, which is useful for the animation of structures or objects of a similar character. In this technique the motion of the character is logged and mapped onto an object of a similar structure. As a mapping process this technique is clumsy and it is difficult to match objects to render them animate.
The use of body suits and head-mounted devices to generate motion as a basis for animating objects of like structure is also known. Parametric animation is another known animation technique.
These known techniques suffer from the same general disadvantage that they require significant amounts of data and hence can be slow and unresponsive. Furthermore they do not as a general rule give feedback to the animator or to the director until an animation sequence has been completed.
SUMMARY OF THE INVENTION

The present invention accordingly provides in one embodiment an apparatus enabling real-time animation of a computer-generated image, the apparatus including at least one manually operable input means for generation of data to enable real-time manipulation of the image, a driver for the input means for evaluating data fed from the input means for input into animation software, and motion control means for controlling movement of the image in response to data from the input means and the animation software.
An apparatus according to the present invention is in a preferred embodiment capable of generating enhanced data as an input for existing animation software whereby to enhance the animation capability of the software.
The present invention provides in another embodiment a method for real-time animation of a computer-generated image. The method includes the steps of deforming in real-time one or more discrete zones of the image using a manually operable input means whereby to animate the image, recording the animation(s) in a plurality of passes, and overlaying the passes on one another, whereby to create a multi-track animated sequence.
The present invention involves an adaptation of existing animation software in which data is fed to the animation software and the development of the animation is monitored in real time. As the system is a real-time system, substantial productivity gains are achievable in long-form production.
The present invention has been developed to minimise the amount of raw data being captured from the input means by having as direct control as possible over manipulation of the image. This results in an extremely small data flow being needed to control the animatable parameters, which makes real-time animation possible on much more accessible and inexpensive hardware.
In an animation sequence, the director can talk directly to the animator, and a pass can be recorded and played back immediately to provide real-time feedback for both the director and the animator. Because the system makes object or character animation performance-based, directors are able to interact with objects or characters directly.
In a particularly preferred embodiment, the animation image is created in the animation software and animated using methods and apparatus according to the present invention.
The present inventors realised that the skills of puppeteers could be used to control animation of computer-generated images with a view to rendering control of the images as direct as possible.
In the context of the present specification, the term "frame" as used herein generally refers to a single image at a discrete unit of time. For example, a thirty second television commercial for broadcast in Australia will have a frame rate of 25 frames per second and will have a duration of 750 frames. An individual frame will have a duration of 1/25th of a second. The duration of individual sequences within the production will usually be referred to in frames. The term "frame" is mostly used herein to refer to a single image once the animation is complete and a final sequence of images has been generated.
Keyframing refers to an animation technique which involves animators drawing the set of extreme positions which define the motion in a sequence. The extremes are referred to as "keyframes". In animation by computer, when the keyframes have been set, the frames between keyframes, namely the "in-betweens", are generated by the computer by a process of interpolation between the extreme positions. Computer-generated in-betweens however have a mechanical perfection which looks unrealistic except perhaps for mechanical objects.
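By way of illustration only, the following short sketch (in Python, with hypothetical names) shows how a computer may generate in-betweens by interpolating between keyframe values; the linear formula shown is one simple possibility and is not asserted to be part of the invention.

```python
# Minimal sketch of computer-generated "in-betweening": keyframes store values
# at the extreme positions, and the frames between them are filled in by
# linear interpolation. Names and the linear formula are illustrative only;
# real packages typically offer spline as well as linear interpolation.

def in_betweens(keyframes):
    """keyframes: dict mapping frame number -> value (e.g. a joint angle).
    Returns a dict with a value for every frame between the first and last key."""
    frames = {}
    keys = sorted(keyframes)
    for a, b in zip(keys, keys[1:]):
        for f in range(a, b + 1):
            t = (f - a) / (b - a)                      # 0.0 at keyframe a, 1.0 at keyframe b
            frames[f] = (1 - t) * keyframes[a] + t * keyframes[b]
    return frames

# A head rotation keyed at frames 0, 12 and 25 (one second at 25 fps):
print(in_betweens({0: -30.0, 12: 0.0, 25: 30.0})[6])   # an interpolated in-between value
```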
In accordance with the present invention, animation information is stored for every frame created by the animator. The present invention however has the flexibility to permit the use of both "every frame" animation and keyframe animation and combinations of these.
A computer-generated image according to the invention may take any suitable form.
The image is preferably capable of being viewed by an animator on a computer screen so that the image may be manipulated by the animator in real-time. The image may be animate or inanimate. It may be an everyday object, such as a teapot. The image may be human or otherwise have humanoid characteristics. It may have an animal form. In a particularly preferred embodiment, the image is three-dimensional and capable of being manipulated in three dimensions. Other computer-generated images are envisaged within the scope of the present invention.
Character animation of three-dimensional computer-generated objects generally requires animated deformation of a model, commonly referred to as a "mesh" of the character. In this context, the objects are virtual, ie they are realised as virtual objects within the computer as spatial definitions of three-dimensional surfaces, but the images generated by them, and which give the animators information to control them, are actual. The model of the character is displayed on the computer screen in as low resolution as possible, with a view to giving the fastest possible screen updates, but with sufficient detail for the animator to see how the model is behaving in response to input instructions. The virtual camera, which generates the screen display, may be quite different from the virtual camera used in the final animation. The three-dimensional deformation information is recorded and attached to the model, which can then be viewed from any angle in the three-dimensional scene.
Manually operable input means according to the invention may take any suitable form.
The input means is preferably capable of continuously generating and sending data to the driver. In a preferred embodiment the input means is capable of providing real-time manipulation of the computer-generated image, thus providing immediate visual feedback to the animator. The input means may be a glove, a mouse, sliders, a body suit, joysticks or pedals. Other input means are envisaged within the scope of the present invention.
A preferred input means according to the present invention comprises one or more gloves capable of being operated by one or more animators. A glove according to the invention may include one or more deformation controllers whereby the glove can be configured to manipulate discrete zones of the computer-generated image. In one embodiment the glove is a modified virtual reality (VR) glove. In a particularly preferred embodiment the input means in the form of a glove is based on a lycra glove having seven separate controls, namely one for each finger, and one each for the pitch and roll of the hand.
The separate controls within the glove can be assigned through software to control any part of an animation. One advantage of the present invention is that an animator can rapidly change from controlling leg movements in a walking character to making speech movements.
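The following sketch is purely illustrative of how such reassignment of glove controls to animation parameters might be organised in software; the control and parameter names are hypothetical and do not describe the actual plug-in.

```python
# Illustrative sketch only: the seven glove controls (five fingers plus hand
# pitch and roll) held in a dictionary, with an assignment table that software
# can rebind at any time -- e.g. switching a finger from leg movement to
# mouth (speech) movement. Parameter names are hypothetical.

glove_controls = {"thumb": 0.0, "index": 0.0, "middle": 0.0,
                  "ring": 0.0, "little": 0.0, "pitch": 0.0, "roll": 0.0}

# Current assignment: control name -> animatable parameter of the model.
assignment = {"index": "left_leg_swing", "middle": "right_leg_swing"}

def apply_controls(model_params, controls, assignment):
    """Copy each assigned control value onto its animation parameter."""
    for control, parameter in assignment.items():
        model_params[parameter] = controls[control]
    return model_params

model = apply_controls({}, glove_controls, assignment)

# Rebinding takes one line: the same finger now drives mouth (speech) movement.
assignment["index"] = "mouth_open"
```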
In accordance with a preferred embodiment of the invention, discrete zones of the computer-generated image requiring expressiveness or movement, such as the mouth of an animate object, are identified and assigned to one or more deformation controllers connected to the input means, whereby to create expression or movement of the image in the zone concerned.
A particularly preferred input means according to the invention is a 5DT Dataglove, which is a glove capable of measuring finger bend. The glove is a light lycra glove which is unmechanical in feel, providing the animator with an animation tool responsive to normal hand and finger movements. The glove includes a fibre optic laser VR device which measures the amount of bend applied to fingers and thumb, as well as pitch and roll of the hand. In this embodiment the glove comprises five laser guns and receptors, whereby the fibre optics extend to the end of the finger and back to the base of the glove. As the fibre optics are bent in response to finger bend this reduces the packet information coming back to a counter arranged on the glove.
An input means according to the present invention may include calibration means. In a preferred embodiment the calibration means includes means for identifying zones of the image to be animated which correspond to the fingers of the glove, and means for adjusting the degree of movement of each zone by finger bend.
The calibration information generated by the calibration means is stored in the computer database so that the computer will recognise the manipulation instructions during an animation sequence. In the embodiment where the input means comprises a glove, calibration is achieved by the animator donning the glove and moving his or her fingers to set the sensitivity required on an individual basis, following which the fingers are attached to a movement of the object or character to be animated, for example rotation of the head. The controller ranges can also be set to limit movement, for example head movement from -30° to +30°. Other calibration means are envisaged within the scope of the present invention.
When used by an experienced animator, an input means according to the invention in the form of a glove can be calibrated in about one minute and can be provided with varying degrees of sensitivity depending on matters such as the degree of expressiveness required for the object or character.
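A minimal sketch of such a calibration step is given below, assuming raw bend counts from the glove and a controller range limiting head rotation to -30° to +30°; the numbers and function names are illustrative assumptions only.

```python
# A minimal calibration sketch, assuming raw bend counts from the glove and a
# hypothetical zone assignment. The animator's relaxed and fully bent readings
# set the sensitivity for a finger, and the controller range clamps the
# resulting movement (e.g. head rotation limited to -30 to +30 degrees).

def calibrate(raw_min, raw_max, out_min, out_max):
    """Return a function mapping a raw bend reading onto the allowed range."""
    def mapper(raw):
        t = (raw - raw_min) / float(raw_max - raw_min)   # normalise this animator's bend
        t = max(0.0, min(1.0, t))                        # clamp to the controller range
        return out_min + t * (out_max - out_min)
    return mapper

# Index finger attached to head rotation, limited to -30..+30 degrees.
head_rotation = calibrate(raw_min=180, raw_max=620, out_min=-30.0, out_max=30.0)
print(head_rotation(400))   # a part-way bend reading -> head angle within limits
```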
A driver according to the invention for the input means may take any suitable form.
In a preferred embodiment the driver is capable of receiving data fed to it from the input means and determining whether and in what form the data is to be sent to other parts of the system such as the motion control means and the animation software, and/or whether to sample the data it receives. Data may also be spooled so that any data not being used or required for a current animation sequence can be stored and/or updated by comparing it with other data sent to the driver. It is estimated in accordance with the present invention that data generated in the practice of the invention is updated about 100 times/sec, but that data is only needed about 30 times/sec.
In accordance with the invention the driver selects optimum data from the data fed to it. The driver includes a loop in the form of a stack that continually replaces old data values with updated data values. When the system is asked to evaluate itself it copies the data to another site where it is stored. Other drivers for the input means are envisaged within the scope of the present invention.
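The following is a hedged sketch of such a driver loop, in which each new glove reading simply replaces the previous one and evaluation copies the current values out; the threading arrangement and names are assumptions and not the actual driver implementation.

```python
# Sketch of the driver's "loop in the form of a stack" described above: the
# glove pushes updates roughly 100 times a second, each new value replacing
# the old one, and when the system asks the driver to evaluate itself the
# current values are copied out (about 30 times a second, once per screen
# update). Details such as the locking are illustrative assumptions.

import threading

class GloveDriver:
    def __init__(self):
        self._lock = threading.Lock()
        self._latest = {}          # control name -> most recent value

    def push(self, control, value):
        """Called by the glove thread ~100 times/sec; overwrites the old value."""
        with self._lock:
            self._latest[control] = value

    def evaluate(self):
        """Called by the animation loop ~30 times/sec; copies the data out."""
        with self._lock:
            return dict(self._latest)

driver = GloveDriver()
driver.push("index", 0.42)
driver.push("index", 0.57)         # the newer reading replaces the older one
print(driver.evaluate())           # {'index': 0.57}
```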
A driver according to the invention may include delay means. In conventional animation procedures, when the data is sampled normally it is set at a particular time. In accordance with the present invention a delay can be factored into the animation so that a feeling of inertia can be created, such as a tail wag on an animal or a time delay.
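One possible realisation of such delay means is sketched below as a simple first-order lag applied to a control value; it illustrates the inertia effect described above but is not asserted to be the specific delay means of the invention.

```python
# Hedged sketch of the delay means: instead of applying a sampled value at the
# instant it arrives, the applied value chases the target with a simple lag,
# giving secondary motion such as a tail that wags slightly behind the hand.
# The first-order filter below is one possibility, not the patented mechanism.

class DelayedControl:
    def __init__(self, lag=0.2):
        self.lag = lag             # 0.0 = no delay; closer to 1.0 = heavier inertia
        self.value = 0.0

    def update(self, target):
        """Move part of the way toward the target each frame."""
        self.value += (1.0 - self.lag) * (target - self.value)
        return self.value

tail = DelayedControl(lag=0.7)
for frame in range(5):
    print(round(tail.update(1.0), 3))   # rises gradually toward 1.0 over several frames
```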
Motion control means according to the invention can take any suitable form. The motion control means is preferably capable of controlling movement of the computer-generated image in response to data fed to it on an interactive basis between the animation software and the driver for the input means.
In a preferred embodiment the motion control means is in the form of a motion controller capable of receiving, processing, storing and feeding on an interactive basis data from the driver and the animation software whereby to control movement of the computer-generated image. The motion control means provides an ongoing evaluation of data transmitted to it for storing and/or dumping, and in one embodiment is capable of storing data values saved to create the image.
Enhanced data provided by an input means in the form of a glove is preferably fed to conventional animation software. One animation software product particularly preferred in accordance with the present invention is marketed under the name 3-D Studio Max by Kinetix. Other animation software packages are envisaged for use within the scope of the present invention.
In a preferred embodiment the software programmer builds surfaces for the image and the controllers for deformation of the characters. Considerations the programmer needs to take into account include the constraints of the animation system, and the part of the image and expressions the animators want to be controllable.
In accordance with the present invention the motion controller is interrogated to determine what the system hardware is doing at any given time. That information is then returned to the animation software. In a recording mode, in accordance with the invention, values are obtained at each frame and are stored in the track. This is a procedure the animator normally has to do manually, whereas controllers of the prior art take the information from memory values predetermined by the animator.
In accordance with the present invention the computer-generated image on the screen is continually being updated so the animator can see what is happening to the image at any instant. In a preferred control sequence the animator has a choice between storing and dumping data using the commands STOP, START and TEST. The TEST command will update to the screen (for practice sessions), whilst the START command records the values and updates the screen so that the image can be played back at any time.
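The recording modes described above may be illustrated by the following sketch, in which TEST only updates the screen while START also stores a value for every frame in the track; the command handling and function names are illustrative assumptions.

```python
# Illustrative sketch of the recording modes described above: TEST updates the
# on-screen image without keeping anything, START records the value at every
# frame into the track as well as updating the screen, and STOP ends the pass.
# The screen update is stubbed out; names are assumptions, not the actual plug-in.

class Recorder:
    def __init__(self):
        self.mode = "STOP"
        self.track = {}            # frame number -> recorded value

    def command(self, cmd):
        self.mode = cmd            # "STOP", "START" or "TEST"

    def on_frame(self, frame, value):
        if self.mode == "STOP":
            return
        update_screen(value)       # always show the animator what is happening
        if self.mode == "START":
            self.track[frame] = value   # store the value for this frame in the track

def update_screen(value):
    pass                           # stand-in for the real-time screen update

rec = Recorder()
rec.command("TEST")                # practice session: nothing is stored
rec.on_frame(0, 0.3)
rec.command("START")               # recording pass: every frame goes into the track
rec.on_frame(1, 0.5)
print(rec.track)                   # {1: 0.5}
```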
Methods and apparatus according to the present invention are intended to operate as a "plug-in", ie as an input, to existing animation software such as the 3-D Studio Max software referred to above. This involves utilising as described an input means and animation software, and creating software operating as a driver for the input means, as a "plug-in" to the animation software.
An apparatus according to the present invention may include recording means for the animated image. The apparatus may also include means for varying the angle from which the animated image is recorded. Accordingly the apparatus may include means for rotating the image through 180° to generate a visual image which is variable and capable of highlighting movement or expression of different parts of the image. In a preferred embodiment, recording of the animation and variation of the camera angles are achieved by the animation software. Performance information of the system can be recorded in, and edited within, the animation software such as the Kinetix 3-D Studio Max software. Performance recording with the system can be synchronised with audio play-back.
An apparatus according to the invention may include means for multi-tracking animation sequences. That is, a performance run can be recorded for, for example, head, mouth, eyes and eyelids in a single run. Then in a second run the arm and hand information can be recorded, and so on. This allows a single puppeteer to control all parameters of a character.
In an animation sequence, the animator(s) don the glove(s), the glove is calibrated and a pass recorded. The sequence is stored and then subsequent passes are overlaid on one another. The animation occurs in real-time, but the sequence is put together through a series of separate overlaid passes.
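The overlaying of separately recorded passes into a single multi-track sequence may be illustrated as follows; the per-frame dictionaries and parameter names are assumptions made for the purpose of the sketch.

```python
# A minimal sketch, assuming per-frame dictionaries of parameter values, of how
# separate passes can be overlaid: a first pass might record head and mouth
# values, a second pass the arm values, and the final sequence is the union of
# the passes frame by frame. Parameter names are hypothetical.

def overlay(*passes):
    """Merge recorded passes into one multi-track sequence.
    Each pass maps frame number -> {parameter: value}; later passes add to
    (and, where they touch the same parameter, replace) earlier ones."""
    sequence = {}
    for recorded_pass in passes:
        for frame, values in recorded_pass.items():
            sequence.setdefault(frame, {}).update(values)
    return sequence

pass_one = {0: {"mouth_open": 0.2, "head_turn": -10.0}}   # first run: head and mouth
pass_two = {0: {"arm_raise": 0.8}}                        # second run: arm information
print(overlay(pass_one, pass_two))
```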
One of the more difficult problems in character animation is lipsynching a cartoon or 3-D character. Using traditional methods, a quality 30 second animation can take up to two weeks to complete. Methods and apparatus according to the present invention can produce the same result in as little as 30 seconds.
One advantage of the present invention is that it is multi-layerable. Instead of using a team of thirty animators or puppeteers to work one character, one or two puppeteers can be used. For example, the lipsynch of the character can be done first, such as at a half speed, then the sequence can be repeated by connecting the input device to other parts of the character in order to move some other part so that all the animation of one character can be done by an individual. This results in increased productivity, and higher precision in gestures and animation. With present methods an experienced animator would take a whole day to produce 30 seconds of animation for one character. In accordance with the present invention productivity can increase substantially to the extent that one animator per workstation can increase output by up to thirty times.
A further advantage of the present invention is that it can be run on a good personal computer, and does not require significant capital or infrastructure to enable it to be used.
The present invention can be configured to control any parameter available to be animated in the animation software package. This includes, but is not limited to, 3-D models (including characters, objects and scenery), lighting effects, colours, surface textures and camera movements, simulated within the virtual environment of the computer.
For example, effects such as flickering fire light can be created in a mathematically random way by the computer, or in accordance with the invention can be controlled randomly by a human operator using an input device such as a glove. Other parameters, such as camera movements, can be done in complex, smooth paths by the computer, or in accordance with the invention can re-create a jerky, hand-held look if required by using a human operator to control the movement.
This allows for greater expressive and artistic use of the animation software. In most 3-D animation production, the computer is used to fill in details when necessary, extrapolating sections of movement or other elements. With the present invention, it is possible to select which elements of the animation to affect through the direct control of an input device, and which elements to leave for the computer to extrapolate.
The small data flow needed to control each 3-D model makes it possible to produce 3-D models which can be downloaded via the internet, then manipulated by performers remotely, resulting in the possibility of interactive, live characters available to internet users.
In a particularly preferred embodiment the present invention provides an apparatus enabling real-time animation of a computer-generated image, the apparatus including at least one manually operable VR glove generating data as an input for animation software and capable of directly manipulating the image in real-time, the glove including one or more deformation controllers for controlling the deformation of predetermined zones of the image to create expressiveness and/or movement in said zones, a glove driver for evaluating data fed from the glove for input into animation software, and a motion controller for controlling movement of the image in response to data from the glove and the animation software.
DESCRIPTION OF PREFERRED EMBODIMENT

The invention will now be described with reference to a particularly preferred embodiment in which:

Figure 1 is a diagrammatic representation of an apparatus for real-time animation of a computer-generated image according to one preferred embodiment of the present invention.
Turning to the drawing, Figure 1 shows generally an apparatus 10 for real-time animation of a computer-generated image. Apparatus 10 includes input means 11, a driver 12 for the input means and a motion controller 13. Data from motion controller 13 is fed to animation software 14 providing instructions to manipulate the image on the screen of computer 15. Apparatus 10 in the embodiment shown also includes delay means 16. In conventional animation procedures, when the data is sampled normally it is set at a particular time. In accordance with the present invention a delay can be factored into the animation so that a feeling of inertia can be created, such as a tail wag on an animal or a time delay.
In the embodiment shown input means 11 is in the form of a manually operable VR glove generating data as an input for the animation software and capable of directly manipulating the image in real-time. The glove includes a fibre optic laser VR device which measures the amount of bend applied to fingers and thumb, as well as pitch and roll of the hand. In this embodiment the glove comprises five laser guns and receptors, whereby the fibre optics extend to the end of the finger and back to the base of the glove. As the fibre optics are bent in response to finger bend this reduces the packet information coming back to a counter arranged on the glove.
The glove is capable of being calibrated to suit the requirements of the individual animator, and includes one or more deformation controllers linked to the fingers for controlling the deformation of predetermined zones of the image to create expressiveness and/or movement in the zones.
Driver 12 for the input means is capable of receiving data fed to it from the input means 11 in the form of a VR glove and determining whether and in what form the data is to be sent to other parts of the system such as the motion controller 13 and the animation software 14, and/or whether to sample the data it receives. Data may also be spooled so that any data not being used or required for a current animation sequence can be stored and/or updated by comparing it with other data sent to the driver.
Motion controller 13 is capable of controlling movement of the computer-generated image in response to data fed to it on an interactive basis between the animation software 14 and the driver 12 for the input means.
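The interaction of these components may be illustrated by the following simplified sketch, in which a stub stands in for driver 12, a motion controller routes evaluated values to a stand-in for animation software 14, and one step corresponds to one real-time screen update; the class and method names are assumptions and not the actual interfaces of the plug-in or of 3-D Studio Max.

```python
# A hedged sketch of how the components of Figure 1 might be wired together:
# glove data enters driver 12, motion controller 13 feeds it interactively to
# animation software 14, which updates the image on the screen of computer 15.
# All names are assumptions; the driver is a stub for the earlier driver sketch.

class GloveDriverStub:
    """Stand-in driver returning the latest glove control values."""
    def evaluate(self):
        return {"index": 0.6, "pitch": 0.1}

class AnimationSoftware:
    """Stand-in for the proprietary animation package holding the model."""
    def set_parameter(self, name, value):
        print(f"screen update: {name} = {value:.2f}")   # redraw the low-resolution model

class MotionController:
    """Routes evaluated glove data to the animation software, frame by frame."""
    def __init__(self, driver, software, assignment):
        self.driver, self.software, self.assignment = driver, software, assignment

    def step(self):
        values = self.driver.evaluate()                  # latest data from driver 12
        for control, parameter in self.assignment.items():
            if control in values:
                self.software.set_parameter(parameter, values[control])

controller = MotionController(GloveDriverStub(), AnimationSoftware(),
                              assignment={"index": "mouth_open", "pitch": "head_nod"})
controller.step()                                        # one real-time update of the image
```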
In use, an animator dons input means 11 in the form of a VR glove and undertakes a calibration step. The calibration involves identifying zones of the image to be animated which correspond to the fingers of the glove, and adjusting the degree of movement of each zone by finger bend. The calibration information is stored in the database so that the computer will recognise the manipulation instructions during an animation sequence.
A pass may now be recorded. This may involve the animation of one or more parts of a model of the character, such as the mouth in the case of an animate object. The animation is recorded, and the process repeated for another part of the model. The or each pass may be overlaid onto another pass to generate a multi-track sequence. Sound, such as a voiceover, lipsynch or music, can be overlaid on the animation sequence to produce a finished animated sequence.
The present invention accordingly provides a "plug-in" which creates an interface between an existing proprietary 3-D animation software package and various peripheral input devices, including but not limited to gloves, mice, sliders, body suits, joysticks and pedals.
The invention provides in a preferred embodiment a glove-based software system that marries traditional puppeteering with computer graphics. The preferred embodiment uses a fibre-optic, laser-based glove that breathes life into computer animations, ranging from cartoon characters to three-dimensional screen characters.
The present invention is essentially a puppetry-controlled real-time manipulation of 3-D computer-generated images, and requires only a desktop workstation with operator and puppeteer to function.
The system plugs the information from the glove into an animation software package such as the Kinetix 3-D animation software 3-D Studio Max. This means that the animation is done on the actual models and in the actual scenes which will be rendered in the final production.
In relation to its benefits to the animation industry, through greater artistic expression and efficiencies, the present invention also makes "live" animation performance accessible to artists who would not otherwise have access to it. For example, the present invention could be used by a theatre company or an individual live performer to create a performance without the financial investment which would make the use of previously existing systems prohibitive.
Existing real-time animation software packages are limited by their complexity. They generally involve purpose-built studio space or equipment and display software, and do not allow licence holders of the packages to create their own 3-D models. Existing systems which allow for real-time 3-D animation are built as stand-alone packages, which are capable of moving and/or deforming 3-D models (characters, objects and scenery) within a computer's virtual environment. The 3-D models are moved by measuring data from a variety of input devices, then applying the data to the models through complex processes using high-end hardware. The present invention allows any animation house or individual animator to use the proprietary animation software to create and build their own models, which they can animate using the "plug-in" of the present invention, or any of the other features of the proprietary animation package.
The word "comprising" and forms of that word as used in this description and in the claims do not limit the invention claimed to exclude variants or additions which are obvious to the person skilled in the art and do not have a material effect on the invention.
Whilst it has been convenient to describe the invention herein in relation to particularly preferred embodiments, it is to be appreciated that other constructions and arrangements are also considered as falling within the scope of the invention. Various modifications, alterations, variations and/or additions to the constructions and arrangements described herein are also considered as falling within the scope and ambit of the present invention.

Claims (16)

1. An apparatus enabling real-time animation of a computer-generated image, said apparatus comprising at least one manually operable input means for generation of data to enable real-time manipulation of the image, a driver for said input means for evaluating data fed from said input means for input into animation software, and motion control means for controlling movement of the image in response to data from said input means and said animation software.
2. An apparatus enabling real-time animation of a computer-generated image, said apparatus comprising at least one manually operable virtual reality glove generating data as an input for animation software and capable of directly manipulating the image in real-time, the glove including one or more deformation controllers for controlling the deformation of predetermined zones of said image to create expressiveness and/or movement in said zones, a glove driver for evaluating data fed from said glove for input into animation software, and a motion controller for controlling movement of said image in response to data from said glove(s) and said animation software.
3. An apparatus according to claim 1 or 2 wherein said input means comprises one or more gloves capable of being operated by one or more animators to manipulate said computer-generated image.
4. An apparatus according to claim 3, wherein said glove(s) include one or more deformation controllers whereby to manipulate discrete zones of said computer- generated image.
5. An apparatus according to any one of claims 1 to 4 wherein said glove comprises calibration means including means for identifying zones of said image to be animated which correspond to the fingers of said glove, and means for adjusting the degree of movement of each zone by finger bend of the fingers of said glove.
6. An apparatus according to any one of claims 1 to 5, wherein said motion control means comprises a motion controller capable of receiving, processing, storing and feeding on an interactive basis data from said driver and said animation software whereby to control movement of said computer-generated image.
7. An apparatus according to any one of claims 1 to 6, wherein said driver includes delay means wherein a delay can be factored into the animation image so that a feeling of inertia can be created in said image.
8. An apparatus according to any one of claims 1 to 7, further comprising recording means for the animated image.
9. An apparatus according to claim 8, further comprising means for varying the angle from which the animated image is recorded.
10. An apparatus according to claim 8 or 9, further comprising means for multi-tracking animation sequences, wherein a pass is recorded and the sequence created thereby is stored, and wherein subsequent passes are overlaid on one another to create an animation sequence.
11. A method for real-time animation of a computer-generated image, said method comprising the steps of deforming in real-time one or more discrete zones of said image using a manually operable input means whereby to animate said image, recording said animation(s) in a plurality of passes and overlaying said passes on one another, whereby to create a multi-track animated sequence.
12. A method according to claim 11, wherein said manually operable input means comprises at least one virtual reality glove capable of measuring finger bend.
13. A method according to claim 11 or 12, further comprising the step of calibrating said glove whereby to identify zones of said image to be animated which correspond to the fingers of said glove, and further comprising the step of adjusting the degree of movement of each zone by finger bend.
14. A method according to claim 13, wherein in an animation sequence, said glove is calibrated and a pass recorded, and wherein the sequence created by said pass is stored and subsequent passes are overlaid on one another, wherein said animation occurs in real-time but said animation sequence is assembled through a series of separate overlaid passes.
15. An apparatus enabling real-time animation of a computer-generated image, substantially as hereinbefore described and with reference to the accompanying drawing.
16. A method for real-time animation of a computer-generated image, substantially as hereinbefore described and with reference to the accompanying drawing.

DATED this 18th day of June 1999

McMASTER OBERIN ARTHUR ROBINSON HEDDERWICKS
Patent Attorneys for ACT III PTY LTD
AU35769/99A 1998-06-18 1999-06-18 Improvements in or relating to animation Abandoned AU3576999A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU35769/99A AU3576999A (en) 1998-06-18 1999-06-18 Improvements in or relating to animation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AUPP4207 1998-06-18
AUPP4207A AUPP420798A0 (en) 1998-06-18 1998-06-18 Improvements in or relating to animation
AU35769/99A AU3576999A (en) 1998-06-18 1999-06-18 Improvements in or relating to animation

Publications (1)

Publication Number Publication Date
AU3576999A true AU3576999A (en) 2000-01-06

Family

ID=25623426

Family Applications (1)

Application Number Title Priority Date Filing Date
AU35769/99A Abandoned AU3576999A (en) 1998-06-18 1999-06-18 Improvements in or relating to animation

Country Status (1)

Country Link
AU (1) AU3576999A (en)

Similar Documents

Publication Publication Date Title
Gonzalez-Franco et al. The rocketbox library and the utility of freely available rigged avatars
US20130100141A1 (en) System and method of producing an animated performance utilizing multiple cameras
US7791608B2 (en) System and method of animating a character through a single person performance
US9939887B2 (en) Avatar control system
US10134179B2 (en) Visual music synthesizer
US7053915B1 (en) Method and system for enhancing virtual stage experience
US20030001834A1 (en) Methods and apparatuses for controlling transformation of two and three-dimensional images
JP2011238291A (en) System and method for animating digital facial model
US11475647B2 (en) Augmenting a physical object with virtual components
CN107197385A (en) A kind of real-time virtual idol live broadcasting method and system
JP2000502823A (en) Computer-based animation production system and method and user interface
Thalmann Using virtual reality techniques in the animation process
Niewiadomski et al. Human and virtual agent expressive gesture quality analysis and synthesis
Thalmann A new generation of synthetic actors: the real-time and interactive perceptive actors
WO1998035320A1 (en) Animation system and method
Baptista et al. MotionDesigner: Augmented artistic performances with kinect-based human body motion tracking
AU3576999A (en) Improvements in or relating to animation
Thalmann The virtual human as a multimodal interface
US20220405997A1 (en) Systems and methods for animated figure media projection
Brusi Making a game character move: Animation and motion capture for video games
KR100684401B1 (en) Apparatus for educating golf based on virtual reality, method and recording medium thereof
Magnenat Thalmann et al. 3-D devices and virtual reality in human animation
WO2022266432A1 (en) Systems and methods for animated figure media projection
Lupiac et al. Expanded Virtual Puppeteering
Balet et al. The VISIONS project

Legal Events

Date Code Title Description
DA3 Amendments made section 104

Free format text: THE NATURE OF THE AMENDMENT IS: AMEND THE NAME OF THE APPLICANT TO INCLUDE JANETTE DALGLIESH AND HUGH JOSEPH SIMPSON

MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period