CN101573959A - Segment tracking in motion picture - Google Patents


Publication number
CN101573959A
CN101573959A (application CN200780049022A / CNA2007800490225A)
Authority
CN
China
Prior art keywords
mark
pattern
known pattern
sequence
picture frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2007800490225A
Other languages
Chinese (zh)
Inventor
德曼·乔丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Sony Pictures Entertainment Inc
Original Assignee
Sony Corp
Sony Pictures Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp, Sony Pictures Entertainment Inc
Publication of CN101573959A

Abstract

The present invention discloses segment tracking in motion picture, including: applying a marking material having a known pattern to a surface; acquiring a sequence of image frames, each image frame of the sequence including a plurality of images of the known pattern covering the surface; deriving position and orientation information regarding the known pattern for each image frame of the sequence; and generating animation data incorporating the position and orientation information.

Description

Segment tracking in motion picture
Cross-reference to related applications
This application claims the benefit under 35 U.S.C. § 119 of co-pending U.S. Provisional Patent Application No. 60/856,201, entitled "Segment Tracking in Motion Picture," filed November 1, 2006, the disclosure of which is hereby incorporated by reference.
This application also incorporates by reference the disclosures of the following commonly assigned applications: U.S. Patent Application 10/427,114, entitled "System and Method for Capturing Facial and Body Motion," filed May 1, 2003; U.S. Patent Application 11/467,503, entitled "Labeling Used in Motion Capture," filed August 25, 2006; U.S. Patent Application 11/829,711, entitled "FACS Cleaning in Motion Capture," filed July 27, 2007; and U.S. Patent Application 11/776,358, entitled "Motion Capture Using Quantum Nanodots," filed July 11, 2007. The disclosures of the above applications are hereby incorporated by reference.
Technical field
The present invention relates generally to motion capture, and more specifically to segment tracking using motion marker data.
Background
Motion capture systems are used to capture the movement of an actor or object and map it onto a computer-generated actor/object as a way of animating it. These systems are often used in the production of motion pictures and video games to create a digital representation of an actor or object, which serves as source data for creating computer graphics ("CG") animation. In a typical system, an actor wears a suit to which markers (e.g., small reflective markers) are attached at various locations on the body and limbs. Appropriately placed digital cameras then record the actor's body movements in a capture space from different angles while the markers are illuminated. The system later analyzes the images to determine the locations (e.g., spatial coordinates) of the markers on the suit in each frame. By tracking the marker locations, the system creates a spatial representation of the markers over time and builds a digital representation of the actor in motion. The motion is then applied to a mathematical model in virtual space, which may subsequently be textured and colored to produce a complete CG representation of the actor and/or the performance. This technique has been used by special effects companies to produce realistic animation in many popular films.
Summary of the invention
Certain implementations disclosed herein provide methods, systems, and computer programs for providing segment tracking in motion capture.
In one aspect, a method disclosed herein provides segment tracking. The method includes: applying a marking material having a known pattern to a surface; acquiring a sequence of image frames, each image frame of the sequence including a plurality of images of the known pattern covering the surface; deriving position and orientation information regarding the known pattern for each image frame of the sequence; and generating animation data incorporating the position and orientation information.
In another aspect, a system for segment tracking is disclosed. The system includes: an image acquisition module configured to generate a sequence of image frames, each image frame including a plurality of synchronized images of a known pattern disposed on a surface; and a segment tracking module configured to receive the sequence of image frames and generate animation data based on the known pattern disposed on the surface.
Other features and advantages of the present invention will become more readily apparent to those of ordinary skill in the art after reviewing the following detailed description and accompanying drawings.
Brief description of the drawings
Details of the structure and operation of the present invention may be gleaned in part by study of the accompanying drawings, in which:
Fig. 1 is a block diagram of a motion capture system according to one implementation;
Fig. 2 shows a sample collection of markers according to one implementation of the present invention;
Fig. 3 is an illustration of a human figure with marker placement positions according to one implementation;
Fig. 4 presents a front view of a human model equipped with markers;
Fig. 5 shows a rear quarter view of the human model equipped with markers;
Fig. 6 is a flowchart describing a method of segment tracking;
Fig. 7A shows a marker inscribed or patterned to represent the capital letter A;
Fig. 7B shows a known pattern comprising a plurality of markers that form the letter A as a pattern;
Fig. 7C shows a marker comprising a textual representation of the letter A;
Fig. 8 is a flowchart illustrating an example method of using bands of marking material;
Fig. 9 is a functional block diagram of one implementation of a segment tracking system;
Fig. 10A shows a representation of a computer system and a user;
Fig. 10B is a functional block diagram of the computer system hosting the segment tracking system; and
Fig. 11 is a functional block diagram illustrating an implementation of the segment tracking module.
Detailed description
Certain implementations disclosed herein provide segment tracking in motion capture. Implementations include using one or more known marker patterns applied to an actor and/or object. The marker (or markers) making up a pattern is tracked as a group rather than individually. In this way, a pattern can provide information such as identification, position/translation, and orientation/rotation, which greatly aids marker tracking.
After reading this description, it will become apparent to one skilled in the art how to implement the invention in various alternative implementations and alternative applications. However, although various implementations of the present invention will be described herein, it is understood that these embodiments are presented by way of example only, and not limitation. As such, this detailed description of various alternative implementations should not be construed to limit the scope or breadth of the present invention as defined by the appended claims.
According to implementations of the present invention, markers encoded with known patterns are applied to an actor and/or object, typically covering the various surfaces of the actor and/or object. Identification, position/translation, and orientation/rotation of the actor and/or object patterns are obtained through image recording and digitization. The patterns of the markers can be selected to mitigate "marker dropout" effects caused by marker occlusion during motion capture. In another implementation, an identifiable random pattern is used as the "known pattern."
Known (and random) patterns can be generated, for example, using materials including: quantum nanodots, glow-in-the-dark material (i.e., fluorescent material), tattoos, and virtually any visible material, infrared or ultraviolet ink, paint, or material that can be applied in a sufficiently identifiable pattern. A pattern may also comprise a pattern of inherent features of the actor and/or object (e.g., moles or wrinkles).
In one implementation, a pattern comprises a plurality of markers or features coupled to an actor's body. In another implementation, a pattern comprises a single marker (e.g., a marker band) or feature. Patterns can be applied or affixed onto or around the surfaces of the actor's limbs, hands, and feet. In one example, centroid information related to a pattern can be derived from the ring-like configuration of a pattern band wrapped around a limb (i.e., an appendage). By applying known patterns to the actor and/or object and then recording their motion with cameras, not only position but also marker identity and spatial orientation can be obtained.
Fig. 1 is a functional block diagram of a motion capture system 100 according to one implementation. The motion capture system 100 includes a motion capture processor 110, motion capture cameras 120, 122, 124, a user workstation 130, and an actor's body 140 and face 150 appropriately equipped with marking material 160 in predetermined patterns. Although Fig. 1 shows only thirteen markers 160A-160F, many more markers can be used on the body 140 and face 150. The motion capture processor 110 is connected to the workstation 130 by wire or wirelessly, and is typically configured to receive control data packets from the workstation 130.
As shown, three motion capture cameras 120, 122, 124 are connected to the motion capture processor 110. Typically, more than three motion capture cameras are needed, according to various user- and animation-related needs and requirements. The motion capture cameras 120, 122, 124 are focused on the actor's body 140 and face 150, to which the markers 160A-160F have been applied.
The placement of the markers 160A-160F is configured to capture motions of interest, including, for example, motions of the actor's body 140, face 150, hands 170, arms 172, legs 174, 178, and feet 176. In the implementation shown in Fig. 1, markers 160A capture the motion of the face 150; markers 160B capture the motion of the arms 172; markers 160C capture the motion of the body 140; markers 160D, 160E capture the motion of the legs 174; and markers 160F capture the motion of the feet 176. Moreover, the uniqueness of the patterns on the markers 160A-160F provides information that can be used to derive the identity and orientation of the markers. Markers 160D are configured as pattern bands wrapped around the actor's legs.
The motion capture cameras 120, 122, 124 are controlled by the motion capture processor 110 to capture synchronized sequences of two-dimensional ("2-D") images of the markers. The synchronized images are integrated into image frames, each image frame representing one frame of a temporal sequence of image frames. That is, each image frame comprises an integrated plurality of simultaneously acquired 2-D images, each 2-D image generated by one of the motion capture cameras 120, 122, 124. The 2-D images thus captured can typically be stored at the user workstation 130, viewed in real time at the user workstation 130, or both.
The motion capture processor 110 performs the integration (i.e., performs a "reconstruction") of the 2-D images to generate a sequence of frames of three-dimensional ("3-D," or "volumetric") marker data. The sequence of volumetric frames is often referred to as a "beat," which can also be thought of as a "take" in cinematography. Conventionally, markers are discrete objects or points of visibility, and the reconstructed marker data comprise a plurality of discrete marker data points, where each marker data point represents the spatial (i.e., 3-D) position of a marker coupled to a target (e.g., the actor 140). Each volumetric frame thus comprises a plurality of marker data points representing a spatial model of the target. The motion capture processor 110 retrieves the volumetric frame sequence and performs a tracking function to associate (in other words, "map") the marker data points of each frame with the marker data points of the preceding and subsequent frames in the sequence.
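As a rough illustration of the reconstruction step (not taken from the patent), a single marker's 3-D position can be recovered from synchronized 2-D views by direct linear transformation (DLT) triangulation, assuming calibrated 3x4 camera projection matrices. The function name and matrix conventions here are illustrative assumptions:

```python
import numpy as np

def triangulate(projections, points_2d):
    """Reconstruct one 3-D marker position from synchronized 2-D views.

    projections: list of 3x4 camera projection matrices (assumed calibrated).
    points_2d:   list of (u, v) image coordinates of the same marker.
    Builds the homogeneous DLT system A x = 0 and solves it by SVD.
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])  # each view contributes two
        rows.append(v * P[2] - P[1])  # linear constraints on x
    _, _, vt = np.linalg.svd(np.asarray(rows))
    x = vt[-1]                 # null-space vector = homogeneous point
    return x[:3] / x[3]        # de-homogenize to (X, Y, Z)
```

With two or more cameras seeing the marker, the least-squares solution degrades gracefully when image measurements are noisy.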
For example, each marker data point in a first volumetric frame corresponds to a single marker disposed on the actor's body 140. A unique label is assigned to each such marker data point of the first volumetric frame. The marker data points are then associated with corresponding marker data points in a second volumetric frame, and the unique labels of the marker data points of the first volumetric frame are assigned to the corresponding marker data points of the second volumetric frame. When the labeling (i.e., tracking) process is completed for the entire volumetric frame sequence, the marker data points of the first volumetric frame are traceable through the sequence, yielding an individual trajectory for each marker data point.
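The labeling pass described above can be sketched as a simple frame-to-frame association. The greedy nearest-neighbour rule and the `max_dist` gate below are illustrative assumptions, not the patent's method (the patent's point is precisely that pattern identity makes this association more robust):

```python
import numpy as np

def propagate_labels(prev_points, prev_labels, curr_points, max_dist=0.05):
    """Carry marker labels from one volumetric frame to the next.

    prev_points/curr_points: (N, 3) arrays of marker positions.
    Greedy nearest-neighbour association; a marker that moved farther
    than max_dist between frames stays unlabeled (None).
    """
    prev = np.asarray(prev_points, dtype=float)
    curr_labels = [None] * len(curr_points)
    used = set()
    for i, p in enumerate(np.asarray(curr_points, dtype=float)):
        d = np.linalg.norm(prev - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist and j not in used:
            curr_labels[i] = prev_labels[j]
            used.add(j)
    return curr_labels
```

Purely positional association like this is what fails under occlusion and fast motion, which motivates the encoded-pattern markers described next.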
Discrete markers have conventionally been used to capture the motion of rigid objects, or of segments of an object or body. For example, rigid markers attached at the elbow and wrist define the positions of the ends of a forearm. As the forearm moves, the motions of the elbow and wrist markers are tracked and resolved through the sequence of volumetric frames as described above. The motion of the forearm is thus modeled as a rigid body (e.g., a shaft) whose ends alone are defined by the elbow and wrist markers. However, while such translational motion of the forearm is easily resolved by analyzing changes in the spatial positions of the elbow and wrist markers, the common twisting motion of the forearm is difficult to detect because the twist can occur with little movement of the wrist or elbow.
In one implementation, rather than using conventional discrete markers, markers deployed as a pattern are used, enabling the motion capture processor 110 to track the pattern as a group rather than tracking the markers individually. Because a pattern provides identification information, the motion of the markers of one pattern can be computed relative to another pattern. In one implementation, a pattern so tracked is reconstructed in each volumetric frame as an individual object with spatial position information. The object is tracked through the sequence of volumetric frames, yielding a virtual animation representing, for example, the various spatial translations, rotations, and distortions of the part of the actor to which the pattern has been applied.
In one implementation, one or more known patterns are printed on bands 160D. The bands 160D are then wrapped around each of the actor's limbs (i.e., appendages) such that each limb bears at least two bands. For example, Fig. 1 shows two bands 160D wrapped around the actor's left thigh. However, a single band may be sufficient to mark each end effector (e.g., hand, foot, head). As discussed above, once captured, the printed patterns of the wrapped bands 160D enable the motion capture processor 110 to track the position and orientation of each "segment" representing an actor's limb from any angle, even when as few as one marker on a segment is visible. As shown in Fig. 1, the actor's thigh 178 is treated as a segment at the motion capture processor 110. By wrapping a band 160D patterned with a plurality of markers around a limb in a substantially ring-like manner, a "centroid" of the limb (i.e., segment) can be determined. Using a plurality of marker-patterned bands 160D, centroids can be determined to provide an estimate or simulation of the bone within the limb. In addition, orientation, translation, and rotation information about the entire segment can be determined from the marker(s) and/or bands applied to the segment, if visible.
In one implementation, the motion capture processor 110 performs segment tracking according to the techniques disclosed herein, by which identification, position/translation, and orientation/rotation information is generated for a group of markers (or a marked region). Whereas conventional optical motion capture systems typically record only the positions of markers, segment tracking enables the motion capture processor 110 to identify which marker or markers are being captured and to locate the position and orientation of the captured markers of a segment. Once markers are detected and identified, position and orientation/rotation information about the segment can be derived from the identified markers. As more markers are detected and identified, confidence in the determination of the position and orientation information for the segment increases.
Markers or marking material applied in a known pattern (or an identifiable random pattern) effectively encode identification and orientation information that facilitates efficient segment tracking. In one implementation, sub-markers of a known pattern constitute a single marker or marked region. For example, the marker inscribed or patterned to represent the capital letter A shown in Fig. 7A actually comprises six sub-markers A, B, C, D, E, and F. In another implementation, a known pattern comprises a plurality of markers or marked regions that form the letter A as a pattern, as shown in Fig. 7B. The eight dots forming the letter A act as a single larger marker. Alternatively, a marker can comprise a textual representation of the letter A, as shown in Fig. 7C. A common characteristic of the markers shown in Figs. 7A-C is that they all have an identifiable orientation. In each case, the marker can be rotated and still be recognized as the letter A, which greatly improves marker tracking efficiency because the marker is easily tracked from frame to frame of the captured image data. In addition to identification, orientation can be determined from frame to frame for each marker instance. For example, if a marker 160B (see Fig. 1) is coupled to the actor's forearm, then when the actor moves the forearm from a hanging position to an upward raised position, the marker rotates substantially 180 degrees. Tracking the marker reveals not only that it has moved spatially upward, but also that the orientation of the forearm segment differs by 180 degrees from before. Thus, marker tracking is enhanced and improved using markers or marker groups encoding identification and orientation information.
Fig. 2 shows a sample collection of markers according to one implementation of the present invention. Each marker comprises a 6x6 matrix of small black and white squares. Identification and orientation information is encoded in each marker by a unique arrangement of the black and white squares in the 6x6 matrix. In each case, rotation of the marker causes no ambiguity in determining the identity and orientation of the marker, demonstrating the effectiveness of this scheme for encoding the information. It will be appreciated that encoding schemes using arrangements other than the 6x6 matrix of black and white elements disclosed here as an example can also be implemented.
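The rotation-unambiguity property described for the 6x6 codes can be checked mechanically. The sketch below (an illustration under assumed conventions, not the patent's code assignment) tests whether a candidate binary matrix has four mutually distinct 90-degree rotations and does not collide, under any rotation, with codes already in use:

```python
import numpy as np

def rotations(code):
    """All four 90-degree rotations of a square binary marker matrix."""
    return [np.rot90(code, k) for k in range(4)]

def is_usable(code, library):
    """A candidate 6x6 code is usable if its four rotations are mutually
    distinct (so orientation is unambiguous) and none of them collides
    with any rotation of a code already in the library."""
    rots = rotations(code)
    for a in range(4):
        for b in range(a + 1, 4):
            if np.array_equal(rots[a], rots[b]):
                return False  # rotationally symmetric: orientation ambiguous
    for other in library:
        for r in rots:
            for s in rotations(other):
                if np.array_equal(r, s):
                    return False  # identity collision with an existing code
    return True
```

A code passing both checks yields both the marker's identity and its in-plane rotation from a single detection, which is the property the patent relies on.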
Fig. 3 is an illustration of a human figure with marker placement positions according to one implementation. The markers shown encode identification and orientation information using a scheme similar to that shown in Fig. 2. They are positioned substantially symmetrically so that each major extremity (i.e., segment) of the body is defined by at least one marker. At least half of the markers shown are positioned on surfaces of the body that are not visible in the front view shown, and are instead indicated by arrows pointing to their approximate occluded positions. Referring to Fig. 1, the motion capture cameras 120, 122, 124 (typically representing many more cameras) surround a capture space in which the actor's body 140 and face 150 are in motion. Even when some subset of the motion capture cameras 120, 122, 124 cannot see a marker because of occlusion, another subset will still see and capture the motion of the occluded marker. Thus, virtually any motion of an actor so equipped with markers can be captured using the system described in connection with Fig. 1 and tracked using the segment tracking method.
Fig. 4 presents a front view of a human model equipped with the markers depicted in Fig. 3. As shown, only the markers on the forward-facing surfaces of the model are visible. The remaining markers are partly or wholly occluded. Fig. 5 shows a rear quarter view of the same human model in substantially the same pose as in Fig. 4. In this view, the markers positioned on the front in Fig. 4 are occluded, but many of the markers occluded in Fig. 4 are now visible. Thus, at any given time, substantially all of the markers are visible to some subset of the plurality of motion capture cameras 120, 122, 124 arranged around the capture space.
Moreover, the marker placements on the 3-D model shown in Figs. 4 and 5 substantially define the major extremities (segments) and regions of expressive movement on the body (e.g., head, shoulders, hips, ankles, and so on). When segment tracking is performed on the captured data, the body positions at which markers are placed are locatable and their orientations determinable. In addition, a body segment defined by marker placement, for example the upper arm segment between the elbow and the shoulder, is also locatable by virtue of the markers positioned at each end of the segment. The position and orientation of the upper arm segment can also be determined from the orientations derived from the individual markers that define the upper arm.
Fig. 6 is a flowchart describing a method 600 of segment tracking according to one implementation. At 610, a marking material having a known pattern or an identifiable random pattern is applied to a surface. In one implementation, the surface is a surface of an actor's body, and the pattern comprises a plurality of markers coupled to the actor's body. In another implementation, the pattern comprises a single marker (e.g., a marker band) coupled to the actor's body. The pattern can also be formed into a band 160D and affixed by wrapping it around the actor's limbs, hands, and feet, as described in connection with Fig. 1. Markers can also include reflective spheres, tattoos adhered to the actor's body, material printed on the actor's body, or inherent features of the actor (e.g., moles or wrinkles). In one implementation, a temporary tattoo can be used to apply a pattern of markers to the actor.
Next, at 620, a sequence of image frames is acquired according to the methods and systems for motion capture described herein and above in connection with Fig. 1. Once captured, the image data are used, for example, to reconstruct a 3-D model of the actor or object equipped with the markers. In one implementation, this includes deriving, at 630, position and orientation information of the marker pattern for each image frame. As described in relation to Figs. 4 and 5, position information is aided by the unique identification information encoded in the markers, provided by patterns exhibiting a characteristic rotational invariance. Orientation information can be derived by determining the amount of rotation of the markers making up a pattern. Marker rotation and orientation can generally be defined by an affine representation in 3-D space that aids marker tracking. For example, orientation can be represented by values for the six degrees of freedom ("6DOF") of an object in 3-D space. Thus, an orientation can have three values representing displacement (translation) relative to the origin of a coordinate system (e.g., Euclidean space) and three values representing angular displacement (rotation) about the primary axes of the coordinate system.
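A common way to recover a segment's 6DOF pose from its identified marker positions is the Kabsch algorithm, which fits the rotation and translation mapping a reference configuration onto the current frame. This is a standard technique offered as an illustration, not the patent's specific procedure:

```python
import numpy as np

def rigid_transform(ref_points, cur_points):
    """Estimate rotation R and translation t mapping a segment's marker
    positions in a reference frame onto the current frame (Kabsch
    algorithm): cur_i ~= R @ ref_i + t.  Together R (3 rotational DOF)
    and t (3 translational DOF) give the segment's 6DOF pose."""
    ref = np.asarray(ref_points, dtype=float)
    cur = np.asarray(cur_points, dtype=float)
    ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - ref_c).T @ (cur - cur_c)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur_c - R @ ref_c
    return R, t
```

Three or more non-collinear identified markers on a segment suffice to determine the pose uniquely; additional markers add redundancy and, as the text notes, confidence.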
At 640, animation data are generated based on the motion of the markers. For example, data derived from the motion of the actor captured in the image data drive a virtual digital model. During a performance, the actor swings a leg according to the script. The actor's leg is equipped with markers at each major joint (e.g., hip, knee, and ankle). The motion of the markers is determined and used to define the motion of the segments of the actor's leg. The segments of the actor's leg correspond to segments of the leg of the digital model, so the motion of the segments of the actor's leg is mapped onto the leg of the virtual digital model to animate it.
As described in connection with Fig. 6, at 610 the pattern can be applied to a band 160D (see Fig. 1) and affixed by wrapping it around the actor's limbs, hands, and feet. A centroid can then be determined from the ring-like configuration of the band wrapped around a limb. For example, two such bands positioned at each end of a limb are then used to determine a longitudinal centroid for the limb (segment), thereby approximating the underlying bone element.
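The band-centroid idea above reduces to simple averaging: because a wrapped band is approximately a ring around the limb, the mean of its reconstructed marker positions falls near the bone axis. A minimal sketch under that assumption (function and variable names are illustrative):

```python
import numpy as np

def bone_axis_from_bands(band_a_points, band_b_points):
    """Approximate a limb's underlying bone from two wrapped marker bands.

    band_*_points: (N, 3) arrays of reconstructed marker positions on one
    ring-like band.  Each band's centroid estimates one end of the bone;
    returns both centroids and the unit direction of the bone segment.
    """
    c_a = np.asarray(band_a_points, dtype=float).mean(axis=0)
    c_b = np.asarray(band_b_points, dtype=float).mean(axis=0)
    axis = c_b - c_a
    return c_a, c_b, axis / np.linalg.norm(axis)
```

Tracking the two centroids over the frame sequence then gives the motion of the approximated bone, which is how the band configuration supports the skeletal modeling described in Fig. 8.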
A flowchart of a method 800 of using bands of marking material is provided in Fig. 8. At 810, a band of marking material having a known pattern is applied to one of the actor's limbs, typically by wrapping the band of marking material around the limb. Because the wrapped band of marking material is deployed in a substantially ring-like configuration, the centroid of the band can be determined and used to express the motion of the body part around which the band is wound. In addition, the motion of the bone element of the limb can thereby be approximated and used for skeletal modeling.
Next, at 820, a sequence of image frames is acquired according to the methods and systems for motion capture described above in connection with Fig. 1. Once captured, the image data are used, for example, to reconstruct a 3-D model of the actor equipped with the markers, in substantially the same manner described at 620 of Fig. 6. In one implementation, this includes deriving, at 830, position and orientation information of the known pattern for each band in each image frame, in substantially the same manner described at 630 of Fig. 6.
At 840, centroid information is derived for an actor's limb equipped with one or more bands, according to the position and orientation information. As discussed above, the centroids of the one or more bands can be used to approximate the skeletal structure (i.e., bone) in the actor's limb. The motion of the centroids of the marker bands is then used to indicate the motion of the skeletal structure, thereby expressing the motion of the limb.
At 850, animation data are generated based on the motion of the marker bands. For example, during a performance, the actor swings a leg according to the script. The actor's leg is equipped with wrapped bands of marking material as described above. That is, for example, one band can be wrapped around the upper thigh and another band can be wrapped near the knee. The motion of the centroids of the marker bands is thus determined and used to define the skeletal structure of the thigh, and used to define the bone element in the thigh of the virtual digital model corresponding to the actor. The motion of the centroids can therefore be used to animate the leg of the virtual digital model.
In one implementation, the markers are printed or formed using quantum nanodots (also referred to as "quantum dots," or "QDs"). The system can be configured similarly to a conventional retroreflective system, but with the addition of an excitation source of a specific frequency (e.g., from an existing filtered ring light, or from another source) and a narrow band-gap filter placed behind the lens of an existing camera, thereby tuning the camera to the wavelength of light emitted by the QDs.
QDs are configured to be excited by light of a specific wavelength, causing them to emit light (i.e., fluoresce) at a different wavelength. Because they emit light that is quantum-shifted up the spectrum relative to the excitation wavelength, the excitation light can be filtered out at the camera. This causes substantially any light falling outside the specific emission spectrum of the QDs to be blocked at the camera.
QDs can be tuned to almost any wavelength in the visible or invisible spectrum. This means that a group of cameras can be filtered to see only one specific group of markers and no others, significantly reducing the workload required of a given camera and allowing the cameras to operate "wide open" within the narrow response range of the QDs.
In one implementation, quantum nanodots are applied to a medium such as ink, paint, plastic, blank temporary tattoos, and so on.
Fig. 9 is a functional block diagram of one implementation of a segment tracking system 900. An image acquisition module 910 generates image frames of motion capture image data, and a segment tracking module 920 receives the image frames and generates animation data.
The image acquisition module 910 operates according to the methods discussed for the motion capture system 100 described above in connection with Fig. 1. In one implementation, the image frames comprise volumetric data including untracked marker data. That is, the marker data are present in each frame in the form of unlabeled spatial data and are not integrated with the marker data of other frames.
The segment tracking module 920 operates according to the methods and schemes described above in connection with Figs. 2 through 9.
Figure 11 is a functional block diagram showing an example configuration of the segment tracking module 920. As shown, the segment tracking module 920 includes an identification module 1100, an orientation module 1110, a tracking module 1120, and an animation module 1130.
The identification module 1100 is capable of identifying markers having known patterns (e.g., the markers 160A-160F shown in Fig. 1). In one implementation, the identification module 1100 performs pattern matching to locate markers 160 of known and random patterns from frame to frame. In another implementation, the identification module 1100 is configured to recognize a single large marker comprising a group of smaller markers, as described in connection with Figs. 7A-7C. The markers 160 are designed so that they can be identified regardless of their rotational state, and the identification module 1100 is accordingly configured to perform rotation-invariant identification of the markers 160. Once the markers 160 are uniquely identified, they are associated with the specific parts of, for example, the actor's body to which they are coupled. As shown in Fig. 1, the marker 160B is associated with the actor's upper arm 172, the marker 160D (a marker wrapped around a leg segment) is associated with the upper thigh 178, and the marker 160C is associated with the torso. The identification information is then passed to the orientation module 1110 and the tracking module 1120.
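One simple way to make identification invariant to a marker's rotational state is to describe each pattern by statistics that do not change under rotation. The sketch below is an illustration of that general idea, not the patent's actual matching method; the marker names and pattern data are invented:

```python
import numpy as np

def radial_descriptor(patch, n_bins=8):
    """Rotation-invariant descriptor: mean intensity in concentric rings
    around the patch center. Rotating the patch only permutes pixels
    within each ring, so the ring means are unchanged."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    edges = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    return np.array([patch[(r >= edges[i]) & (r < edges[i + 1])].mean()
                     for i in range(n_bins)])

def identify(patch, catalog):
    """Match a patch against known patterns by nearest descriptor."""
    d = radial_descriptor(patch)
    return min(catalog, key=lambda name: np.linalg.norm(d - catalog[name]))

# Example: a rotated copy of a known pattern still matches.
rng = np.random.default_rng(0)
pattern = rng.random((15, 15))
other = rng.random((15, 15))
catalog = {"marker_160B": radial_descriptor(pattern),
           "marker_160C": radial_descriptor(other)}
print(identify(np.rot90(pattern), catalog))  # marker_160B
```

A production matcher would combine such a descriptor with spatial search across the frame, but the invariance property is the point here.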
In the example configuration shown in Figure 11, the orientation module 1110 receives the identification information and generates orientation information. In one example, the orientation information includes six degree-of-freedom (6-DOF) data. Each marker 160 is then evaluated in each frame to determine its evolving position and rotation (i.e., affine) state. In one implementation, a 3-D affine transformation is formed for each marker 160 to describe the changes in these states from frame to frame. Where strip markers 160D are wrapped around each end of a segment of the actor's limb (e.g., the thigh 178; see Fig. 1), a centroid approximating the underlying skeletal structure (i.e., the bone) can be determined for that limb segment. The orientation information (6-DOF data, affine transformations) is passed to the animation module 1130.
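A common way to recover such a frame-to-frame rigid transform (a special case of the affine transforms described above) for an identified pattern is a least-squares fit of rotation and translation. The following is a minimal sketch, assuming corresponding pattern points have already been identified in both frames; the specific method used (the Kabsch algorithm) is this sketch's choice, not one named in the patent:

```python
import numpy as np

def rigid_transform(p, q):
    """Least-squares R, t with R @ p[i] + t ~= q[i] (Kabsch algorithm).
    p, q: (N, 3) arrays of corresponding marker points in two frames."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)     # per-frame centroids
    H = (p - cp).T @ (q - cq)                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def segment_centroid(band_a, band_b):
    """Approximate bone centroid of a limb segment from two wrapped
    band markers, one at each end (cf. strip markers 160D)."""
    return (band_a.mean(axis=0) + band_b.mean(axis=0)) / 2.0
```

With noise-free correspondences the fit is exact; with real capture data it gives the best rigid approximation in the least-squares sense.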
The tracking module 1120 receives the identification information and generates marker tracking information. That is, the markers are tracked through the sequence of image frames, labeled, and their trajectories determined. In one implementation, the labeling and FACS methods of U.S. Patent Application No. 11/467,503, entitled "Labeling Used in Motion Capture," filed August 25, 2006, and U.S. Patent Application No. 11/829,711, entitled "FACS Cleaning in Motion Capture," filed July 27, 2007, are employed. The labeling information (i.e., trajectory data) is passed to the animation module 1130.
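The referenced labeling methods are not detailed here, but the basic idea of linking a marker's per-frame positions into a trajectory can be sketched with simple nearest-neighbor association (a stand-in for illustration, not a reproduction of the patent's labeling approach; the coordinates below are invented):

```python
import numpy as np

def link_tracks(frames, max_jump=0.5):
    """Greedily link marker positions across frames into trajectories.
    frames: list of (N, 3) arrays of marker positions, one per image frame.
    Returns one list of per-frame positions per marker (None = lost)."""
    tracks = [[pos] for pos in frames[0]]
    for frame in frames[1:]:
        unused = list(range(len(frame)))
        for track in tracks:
            last = next(p for p in reversed(track) if p is not None)
            if unused:
                j = min(unused, key=lambda i: np.linalg.norm(frame[i] - last))
                if np.linalg.norm(frame[j] - last) <= max_jump:
                    track.append(frame[j])
                    unused.remove(j)
                    continue
            track.append(None)  # no plausible match in this frame
    return tracks

# Two markers drifting along x; each trajectory follows its own marker.
frames = [np.array([[0.0, 0, 0], [5.0, 5, 5]]),
          np.array([[0.1, 0, 0], [5.1, 5, 5]]),
          np.array([[0.2, 0, 0], [5.2, 5, 5]])]
tracks = link_tracks(frames)
```

Real trackers add pattern identity and motion prediction to resolve ambiguous associations, which is what the identification information feeding this module provides.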
The animation module 1130 receives the orientation and labeling information and generates animation data. In one implementation, marker positions and orientations are mapped onto a (virtual) digital character model at positions corresponding to where the markers were coupled on the actor. Similarly, the segments delimited by markers on the actor's body are mapped to the corresponding portions of the digital character model. For example, the motion of the centroid of the skeletal structure approximating the actor's limb also simulates the motion of that portion of the actor's body. Transformations describing the motion of the markers 160 and the associated centroids are then suitably formatted, and animation data for animating the corresponding segments of the digital character is generated.
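As a loose sketch of this mapping step (the segment names, rest-pose coordinates, and transforms below are invented for illustration), each segment's recovered transform can be applied to the corresponding part of the digital character model:

```python
import numpy as np

# Hypothetical rest-pose centroids of the digital character's segments,
# keyed by the body part each marker was associated with (cf. Fig. 1).
rest_pose = {
    "upper_arm": np.array([0.0, 1.4, 0.0]),   # marker 160B
    "thigh":     np.array([0.0, 0.8, 0.0]),   # marker 160D
    "torso":     np.array([0.0, 1.1, 0.0]),   # marker 160C
}

def animate_frame(transforms, pose=rest_pose):
    """Apply each segment's (R, t) transform to the character's
    corresponding rest-pose centroid, producing one animation frame."""
    return {seg: R @ pose[seg] + t for seg, (R, t) in transforms.items()}

# Example: move the thigh centroid forward by 0.2 along x.
frame = animate_frame({"thigh": (np.eye(3), np.array([0.2, 0.0, 0.0]))})
```

A full retargeting pipeline would drive joint angles of a skeleton rather than bare centroids, but the transform-per-segment structure is the same.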
The animation module 1130 receives the animation information and applies it to the digital character, thereby animating it. The animation is typically reviewed to assess the fidelity of its movement to the actor's, and to determine whether any reprocessing is needed, and how much, to achieve the desired result.
Figure 10A shows a representation of a computer system 1000 and a user 1002. The user 1002 uses the computer system 1000 to perform segment tracking. The computer system 1000 stores and executes a segment tracking system 1090, which processes image frame data.
Figure 10B is a functional block diagram illustrating the computer system 1000 hosting the segment tracking system 1090. The controller 1010 is a programmable processor that controls the operation of the computer system 1000 and its components. The controller 1010 loads instructions (e.g., in the form of a computer program) from the memory 1020 or an embedded controller memory (not shown) and executes these instructions to control the system. In its execution, the controller 1010 provides the segment tracking system 1090 as a software system. Alternatively, this service can be implemented as a separate component in the controller 1010 or the computer system 1000.
The memory 1020 stores data temporarily for use by the other components of the computer system 1000. In one implementation, the memory 1020 is implemented as RAM. In one implementation, the memory 1020 also includes long-term or permanent memory, such as flash memory and/or ROM.
The storage device 1030 stores data temporarily or long-term for use by the other components of the computer system 1000, for example, to store data used by the segment tracking system 1090. In one implementation, the storage device 1030 is a hard disk drive.
The media device 1040 receives removable media and reads and/or writes data to the inserted media. In one implementation, the media device 1040 is, for example, an optical disc drive.
The user interface 1050 includes components for accepting input from the user of the computer system 1000 and presenting information to the user. In one implementation, the user interface 1050 includes a keyboard, a mouse, audio speakers, and a display. The controller 1010 uses input from the user to adjust the operation of the computer system 1000.
The I/O interface 1060 includes one or more I/O ports for connecting to corresponding I/O devices, such as external storage or peripheral devices (e.g., a printer or a PDA). In one implementation, the ports of the I/O interface 1060 include ports such as: a USB port, a PCMCIA port, a serial port, and/or a parallel port. In another implementation, the I/O interface 1060 includes a wireless interface for wireless communication with external devices.
The network interface 1070 includes a wired and/or wireless network connection, such as an RJ-45 or "Wi-Fi" interface (including, but not limited to, 802.11) supporting an Ethernet connection.
The computer system 1000 includes other hardware and software typical of computer systems (e.g., power, cooling, an operating system), though these components are not specifically shown in Figure 10B for simplicity. In other implementations, different configurations of the computer system can be used (e.g., different bus or storage configurations, or a multi-processor configuration).
Various illustrative implementations of the present invention have been described. However, one of ordinary skill in the art will recognize that other implementations are also possible and are within the scope of the present invention. For example, known and identifiable random patterns can be printed, painted, or inked onto the surfaces of actors or objects. In addition, any combination of printing, painting, inking, temporary tattoos, quantum nanodots, and inherent physical features can be used to obtain the desired pattern.
It should further be appreciated that functions are grouped into a module or block for ease of description. Specific functions can be moved from one module or block to another without departing from the invention.
Accordingly, the present invention is not limited to only those embodiments described above.

Claims (22)

1. A method, comprising:
applying a marking material having a known pattern to a surface;
acquiring a sequence of image frames, each image frame of the sequence including a plurality of images of the known pattern covering the surface;
deriving position and orientation information regarding the known pattern for each image frame of the sequence; and
generating animation data incorporating the position and orientation information.
2. The method of claim 1, wherein the marking material conforms to the surface.
3. The method of claim 1, wherein the known pattern is disposed on the marking material using quantum nanodots.
4. The method of claim 1, wherein the quantum nanodots are supported in a medium including ink, paint, and plastic.
5. The method of claim 1, wherein the marking material includes a temporary tattoo.
6. The method of claim 5, wherein the temporary tattoo includes a facial tattoo.
7. The method of claim 6, wherein the facial tattoo includes a plurality of separate tattoos.
8. The method of claim 1, wherein the marking material includes a reflective material.
9. The method of claim 1, wherein the marking material includes a fluorescent material.
10. The method of claim 1, wherein the surface includes a surface of a person.
11. The method of claim 10, wherein the surface of the person includes at least one of a face, a body, a hand, a foot, an arm, and a leg.
12. The method of claim 1, wherein the surface includes a surface of an object.
13. The method of claim 12, wherein the object includes at least one of a set and a prop.
14. The method of claim 1, wherein the pattern is a predetermined pattern.
15. The method of claim 1, wherein the pattern is a random pattern.
16. The method of claim 1, wherein applying a marking material having a known pattern to a surface includes wrapping at least one strip printed with a known pattern around an appendage of an actor.
17. The method of claim 16, wherein deriving position and orientation information includes determining a centroid of the strip.
18. The method of claim 1, wherein applying a marking material having a known pattern to a surface includes wrapping at least two strips printed with known patterns around an appendage of an actor.
19. The method of claim 18, wherein deriving position and orientation information includes determining a centroid of the actor's appendage.
20. A system, comprising:
an image acquisition module configured to generate a sequence of image frames, each image frame including a plurality of synchronized images of a known pattern disposed on a surface; and
a segment tracking module configured to receive the sequence of image frames and generate animation data based on the known pattern disposed on the surface.
21. The system of claim 20, wherein the segment tracking module includes:
an identification module configured to receive the sequence of image frames and generate identification information regarding the known pattern in each image frame of the sequence;
an orientation module configured to receive the sequence of image frames and the identification information and generate orientation information;
a tracking module configured to receive the identification information and generate marker tracking information; and
an animation module configured to receive the orientation information and the marker tracking information and generate animation data.
22. A computer program, stored in a computer-readable storage medium, the program comprising executable instructions that cause a computer to:
acquire a sequence of image frames, each image frame of the sequence including a plurality of images of a known pattern covering a surface;
derive position and orientation information regarding the known pattern; and
generate animation data incorporating the position and orientation information.
CNA2007800490225A 2006-11-01 2007-11-01 Segment tracking in motion picture Pending CN101573959A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US85620106P 2006-11-01 2006-11-01
US60/856,201 2006-11-01
US11/849,916 2007-09-04

Publications (1)

Publication Number Publication Date
CN101573959A true CN101573959A (en) 2009-11-04

Family

ID=41232314

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2007800490225A Pending CN101573959A (en) 2006-11-01 2007-11-01 Segment tracking in motion picture

Country Status (1)

Country Link
CN (1) CN101573959A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252712A (en) * 2013-06-27 2014-12-31 卡西欧计算机株式会社 Image generating apparatus and image generating method
CN104252712B (en) * 2013-06-27 2018-11-16 卡西欧计算机株式会社 Video generation device, image generating method and recording medium
CN106061571A (en) * 2014-04-08 2016-10-26 Eon现实公司 Interactive virtual reality systems and methods
CN109313710A (en) * 2018-02-02 2019-02-05 深圳蓝胖子机器人有限公司 Model of Target Recognition training method, target identification method, equipment and robot
WO2019148453A1 (en) * 2018-02-02 2019-08-08 深圳蓝胖子机器人有限公司 Method for training target recognition model, target recognition method, apparatus, and robot

Similar Documents

Publication Publication Date Title
EP2078419B1 (en) Segment tracking in motion picture
CN101573733A (en) Capturing surface in motion picture
CN101310289B (en) Capturing and processing facial motion data
CN103930944B (en) Adaptive tracking system for space input equipment
US9984285B2 (en) Adaptive tracking system for spatial input devices
CN101681423B (en) Method of capturing, processing, and rendering images
CN101536494B (en) System and method for genture based control system
JP5202316B2 (en) Motion capture using primary and secondary markers
US8941590B2 (en) Adaptive tracking system for spatial input devices
US10699165B2 (en) System and method using augmented reality for efficient collection of training data for machine learning
CN102460510B (en) For the space multi-mode opertaing device used together with spatial operation system
KR100782974B1 (en) Method for embodying 3d animation based on motion capture
US20130076522A1 (en) Adaptive tracking system for spatial input devices
CN101796545A (en) Integrated motion capture
JP2010519629A (en) Method and device for determining the pose of a three-dimensional object in an image and method and device for creating at least one key image for object tracking
CN108257177A (en) Alignment system and method based on space identification
CN109800645A (en) A kind of motion capture system and its method
CN110140100A (en) Three-dimensional enhanced reality object user's interface function
CN101573959A (en) Segment tracking in motion picture
JP2005339363A (en) Device and method for automatically dividing human body part
AU2012203097B2 (en) Segment tracking in motion picture
JP2011092657A (en) Game system for performing operation by using a plurality of light sources
CN102804206B (en) The control system based on posture representing including data, operate and exchanging

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20091104