CN101799927B - Cartoon role contour tracing method based on key frame - Google Patents

Cartoon role contour tracing method based on key frame

Info

Publication number
CN101799927B
CN101799927B · CN2010101311832A · CN201010131183A
Authority
CN
China
Prior art keywords
frame
role
profile
subsequence
sigma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010101311832A
Other languages
Chinese (zh)
Other versions
CN101799927A (en)
Inventor
肖俊
周春銮
庄越挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2010101311832A priority Critical patent/CN101799927B/en
Publication of CN101799927A publication Critical patent/CN101799927A/en
Application granted granted Critical
Publication of CN101799927B publication Critical patent/CN101799927B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a cartoon role contour tracing method based on key frame, comprising the following steps: first, key frames are manually selected from a motion sequence of a cartoon character, and the contours of the cartoon character are drawn with curves on each key frame; then, an optimization model is constructed from the contour information labeled on the key frames; finally, the optimization model is transformed into a nonlinear least-squares problem and solved, tracking the positions of the cartoon character's contours on the other frames of the motion sequence. By making effective use of temporal and spatial constraint information, the method can accurately track the contours of a cartoon character through a motion sequence, and the contours of each part captured by the method can be used to analyze the deformation that each part of the character undergoes over the motion sequence.

Description

Cartoon role contour tracing method based on key frame
Technical field
The present invention relates to a keyframe-based method for tracking character contours in cartoon character motion sequences, and belongs to the field of two-dimensional computer animation.
Background technology
Traditional cartoon production is usually time-consuming and labor-intensive. As traditional cartoons are widely used in many fields, a large amount of existing cartoon material is available for reuse. Reusing existing cartoon material can reduce the complexity of cartoon production to a certain extent and thereby improve its efficiency.
Cartoon character motion capture and retargeting is one way of reusing existing cartoon material. In this approach, the motion style of a cartoon character is extracted from an existing cartoon sequence and applied to a new cartoon character to generate a new motion sequence. The article "Turning to the Masters: Motion Capturing Cartoons", published at SIGGRAPH 2002, reuses existing cartoon material in exactly this way. Given a motion sequence of a cartoon character, key frames of the character are specified manually, and the character's motion style is abstracted into affine deformation coefficients for each part together with interpolation coefficients of the key frames; the user then provides a group of key frames of a new character whose poses are similar to the specified key frames, and applying the corresponding coefficients to the new character's key frames yields a new motion sequence. A main difficulty in reusing cartoon material this way is capturing the character's motion from the motion sequence. The motion capture method adopted in that article first decomposes the character into several parts, then determines the affine deformation coefficient of each part through region-based tracking, and finally uses the affine deformation coefficients to determine the interpolation coefficients of the key frames. Because this capture method tracks each part of the character separately and does not exploit the character's overall structure or the constraints between parts over the motion sequence, the tracking results are often not robust enough. The article "Cartoon Motion Capture by Shape Matching", published at the Pacific Conference on Computer Graphics and Applications in 2002, proposed a cartoon character motion capture method based on shape matching. This method obtains the character's motion style by analyzing the deformation of the cartoon character's contour over the motion sequence. First, a number of points are sampled on the character's contour in each frame; robust point matching is then used to establish correspondences between the sampled contour points of two different frames, and the deformation coefficients between the character contours of the two frames are determined from the established point correspondences. To use this method, however, the cartoon character must be segmented from the motion sequence in advance, and the method fails when some parts are occluded in some frames of the motion sequence.
Contour tracking can serve as a means of capturing cartoon character motion. Contour tracking represents a contour by a set of points on it and determines the positions of these points over the whole motion sequence by tracking. Because the correspondence between the points in different frames is established by tracking, the result can be used to analyze the deformation that each part of the cartoon character undergoes between frames. The article "Keyframe-Based Tracking for Rotoscoping and Animation", published at SIGGRAPH 2004, proposed a keyframe-based method for tracking curves in ordinary video sequences. The curves to be tracked are modeled as cubic Bezier curves whose control points are discrete points on the curve. The control points of the curves are first marked on the key frames; an optimization model is then built from the labels on the key frames, and the position of each control point in the other frames is determined by solving this optimization model. However, in cartoon motion sequences the change between two consecutive frames is large, and the contours of character parts often undergo large deformation during motion, whereas this method assumes that the tracked curves change only slightly between adjacent frames; it is therefore not well suited to tracking character contours in cartoon motion sequences.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and provide a cartoon role contour tracing method based on key frame.
The cartoon role contour tracing method based on key frame comprises the following steps:
1) selecting key frames from a cartoon motion sequence, the frame sequence between two adjacent key frames constituting a motion subsequence;
2) for each motion subsequence, placing key points on the contour of each character part on the first frame and the last frame, connecting adjacent key points with straight line segments so that each part is represented as a polygon, the number of key points placed on the first frame and on the last frame being the same and in one-to-one correspondence, and copying the contour marked on the first frame to all other frames of the subsequence except the last frame;
3) for each motion subsequence, building an optimization model that takes all key points on the subsequence as optimization variables and includes temporal and spatial constraints;
4) converting the above optimization model into a nonlinear least-squares problem and solving it with the damped least-squares method; after the position of each key point on the intermediate frames of the subsequence has been obtained, connecting adjacent key points on each frame with line segments to obtain the character contour.
Building, for each motion subsequence, the optimization model that takes the set of key points on the subsequence as variables and includes temporal and spatial constraints proceeds as follows: an objective function E is constructed for the motion subsequence between each pair of adjacent key frames. This objective function contains 6 constraint terms, of which the first 4 are temporal constraint terms and the last 2 are spatial constraint terms:

E = w_L E_L + w_A E_A + w_S E_S + w_V E_V + w_E E_E + w_R E_R    (1)

where w_L, w_A, w_S, w_V, w_E and w_R are the weight coefficients of the respective terms. The definition and role of each term are introduced in turn below.
The 1st constraint term is the length term E_L, which reflects how the length of each line segment on the character contour changes over the motion subsequence:

E_L = Σ_{i=1}^{N_f-1} Σ_{j=1}^{N_s} (||s_i(j)|| - ||s_{i+1}(j)||)^2    (2)

where s_i(j) is the j-th line segment on the character contour in frame i, N_f is the number of frames in the motion subsequence, and N_s is the number of line segments on the character contour.
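For illustration, the length term can be evaluated directly from the key-point coordinates. The Python sketch below is an assumption about one possible implementation, not part of the patent: it assumes a single closed contour whose segment j joins key points j and (j+1) mod N_c, and the function names and array layout are illustrative.

```python
import numpy as np

def segment_lengths(points):
    # points: (N_c, 2) key-point coordinates of one frame; segment j is
    # assumed to join key point j and key point (j + 1) % N_c.
    return np.linalg.norm(np.roll(points, -1, axis=0) - points, axis=1)

def length_term(points_seq):
    # points_seq: (N_f, N_c, 2) key points of every frame of the subsequence.
    # E_L: squared change of every segment length between consecutive frames.
    lengths = np.stack([segment_lengths(p) for p in points_seq])  # (N_f, N_s)
    return float(np.sum((lengths[:-1] - lengths[1:]) ** 2))
```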
The 2nd constraint term is the angle term E_A, which reflects how the angle at each key point on the character contour changes over the motion subsequence:

E_A = Σ_{i=1}^{N_f-1} Σ_{j=1}^{N_c} ||(c_i(a_j(1)) - 2c_i(j) + c_i(a_j(2))) - (c_{i+1}(a_j(1)) - 2c_{i+1}(j) + c_{i+1}(a_j(2)))||^2    (3)

where c_i(j) is the j-th key point on the character contour in frame i, a_j(1) and a_j(2) denote the two key points adjacent to key point j on the contour, and N_c is the number of key points on the character contour.
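Under the same assumptions (closed contour, with the neighbours a_j(1) and a_j(2) of key point j taken as j-1 and j+1 modulo N_c), a minimal sketch of the angle term; the function name is illustrative:

```python
import numpy as np

def angle_term(points_seq):
    # points_seq: (N_f, N_c, 2) key points of every frame of the subsequence.
    # Discrete second difference at each key point, compared between frames.
    prev_pt = np.roll(points_seq, 1, axis=1)    # c_i(a_j(1))
    next_pt = np.roll(points_seq, -1, axis=1)   # c_i(a_j(2))
    lap = prev_pt - 2.0 * points_seq + next_pt  # (N_f, N_c, 2)
    return float(np.sum((lap[:-1] - lap[1:]) ** 2))
```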
The 3rd constraint term is the area term E_S, which reflects how the area of each character part changes over the motion subsequence:

E_S = Σ_{i=1}^{N_f-1} Σ_{j=1}^{N_p} (f_S(p_i(j)) - f_S(p_{i+1}(j)))^2    (4)

where p_i(j) is the polygon representing the j-th character part in frame i, f_S is the function that computes the area of a polygon, and N_p is the number of character parts.
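The polygon-area function f_S can be realised with the standard shoelace formula. The sketch below is illustrative; representing the parts as `parts[i][j]` vertex arrays and the function names are assumptions:

```python
import numpy as np

def polygon_area(vertices):
    # f_S: area of a simple polygon given as an (M, 2) vertex array
    # (shoelace formula).
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def area_term(parts):
    # parts[i][j]: (M_j, 2) vertex array of the polygon of part j in frame i.
    # E_S: squared change of every part's area between consecutive frames.
    total = 0.0
    for cur, nxt in zip(parts[:-1], parts[1:]):
        for p_cur, p_nxt in zip(cur, nxt):
            total += (polygon_area(p_cur) - polygon_area(p_nxt)) ** 2
    return total
```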
The 4th constraint term is the velocity term E_V, which reflects how the displacement of each key point on the character contour between consecutive frames changes over the motion subsequence:

E_V = Σ_{i=1}^{N_f-2} Σ_{j=1}^{N_c} ((c_{i+1}(j) - c_i(j)) - (c_{i+2}(j) - c_{i+1}(j)))^2    (5)
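The velocity term compares each key point's frame-to-frame displacement with the displacement of the following frame pair; a sketch under the same array layout (function name assumed):

```python
import numpy as np

def velocity_term(points_seq):
    # points_seq: (N_f, N_c, 2) key points of every frame of the subsequence.
    disp = points_seq[1:] - points_seq[:-1]   # c_{i+1}(j) - c_i(j)
    accel = disp[1:] - disp[:-1]              # change of displacement
    return float(np.sum(accel ** 2))
```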
The 5th constraint term is the boundary term E_E, which pulls the character contour on each frame of the motion subsequence toward the image edges. The boundary term is computed as follows:

First, for each frame image F_i in the motion subsequence, image edges are extracted with the Canny edge detection algorithm, yielding an edge map E_i. Then, the Euclidean distance transform of each edge map E_i is computed, yielding an H × W distance-transform matrix D_i, where H and W are the height and width of the image and D_i(x, y) is the Euclidean distance from the pixel at image coordinate (x, y) to the nearest edge pixel in E_i. Next, the distance transform is used to define the distance from a line segment on the character contour to the image edges: t pixels are sampled at equal intervals on each line segment s_i(j); denoting their image coordinates by {(x_k, y_k) | 1 ≤ k ≤ t}, the distance from the segment to the image edges is computed with the following formula:

f_seg(s_i(j)) = Σ_{k=1}^{t} D_i(x_k, y_k)    (6)

Finally, the boundary term is defined as:

E_E = Σ_{i=2}^{N_f-1} Σ_{j=1}^{N_s} f_seg(s_i(j))^2    (7)
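One possible realisation of the boundary term uses OpenCV's Canny detector and distance transform; this is a sketch under assumptions (the Canny thresholds, the straight-line sampling of t points per segment, and all function names are illustrative and not prescribed by the patent):

```python
import cv2
import numpy as np

def distance_map(frame_gray):
    # D_i: Euclidean distance from every pixel to the nearest Canny edge.
    edges = cv2.Canny(frame_gray, 50, 150)   # E_i, edge pixels = 255
    # distanceTransform measures distance to the nearest zero pixel,
    # so the edge map is inverted first.
    return cv2.distanceTransform(255 - edges, cv2.DIST_L2, 5)

def segment_to_edge_distance(dist_map, a, b, t=7):
    # f_seg: sum of distance-map values at t equally spaced samples on
    # the segment from point a to point b (points as (x, y)).
    total = 0.0
    for alpha in np.linspace(0.0, 1.0, t):
        x, y = (1.0 - alpha) * np.asarray(a) + alpha * np.asarray(b)
        total += dist_map[int(round(y)), int(round(x))]
    return total

def boundary_term(points_seq, dist_maps, t=7):
    # E_E over the intermediate frames (frames 2 .. N_f-1, 1-based).
    total = 0.0
    for i in range(1, len(points_seq) - 1):
        pts = points_seq[i]
        for j in range(len(pts)):
            a, b = pts[j], pts[(j + 1) % len(pts)]
            total += segment_to_edge_distance(dist_maps[i], a, b, t) ** 2
    return total
```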
The 6th constraint term is the region term E_R, which makes each character part fall on the correct image region in every frame of the motion subsequence. The region term is computed as follows:

First, the labels on the key frames are used to estimate the approximate location of each character part on the intermediate frames: from the part p_1(j) marked on the first key frame and the corresponding part marked on the last key frame, linear interpolation gives the approximate shape of the j-th character part in frame i, denoted p_i'(j), and the affine filling algorithm described below is used to fill p_i'(j) with color from the texture of p_1(j). The center cp_i'(j) of p_i'(j) is obtained by averaging the coordinates of its vertices, and the optimal position cp_i*(j) of cp_i'(j) in frame i is searched for within a search window centered at cp_i'(j): cp_i*(j) is the position such that, after the center cp_i'(j) of part p_i'(j) is moved to cp_i*(j), the color difference between p_i'(j) and the image region it covers is minimal.

Then, for the polygon p_i(j) that represents the j-th character part in frame i, the center cp_i(j) of the part is obtained by averaging the coordinates of its vertices.

Finally, the region term is defined with the following formula:

E_R = Σ_{i=2}^{N_f-1} Σ_{j=1}^{N_p} ||cp_i(j) - cp_i*(j)||^2    (8)
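A sketch of the region-term evaluation, assuming the optimal centers cp_i*(j) have already been found by the search-window color matching described above; the helper names and data layout are assumptions:

```python
import numpy as np

def polygon_center(vertices):
    # cp: mean of the polygon's vertex coordinates.
    return np.mean(vertices, axis=0)

def region_term(parts, optimal_centers):
    # parts[i][j]: (M_j, 2) vertex array of part j in frame i.
    # optimal_centers[i][j]: pre-computed optimal center cp_i*(j).
    # E_R over the intermediate frames (frames 2 .. N_f-1, 1-based).
    total = 0.0
    for i in range(1, len(parts) - 1):
        for j, poly in enumerate(parts[i]):
            diff = polygon_center(poly) - np.asarray(optimal_centers[i][j])
            total += float(np.dot(diff, diff))
    return total
```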
The affine filling algorithm that uses the texture of p_1(j) to fill p_i'(j) with color comprises the following steps:
1) representing the polygons p_1(j) and p_i'(j) as 2 × M matrices A and B, where the k-th column of A and the k-th column of B are the image coordinates of the k-th vertex of p_1(j) and of p_i'(j), respectively;
2) computing the affine transformation AT from polygon p_i'(j) to polygon p_1(j) by minimizing the following expression with linear least squares, where [B; 1] denotes B augmented with a row of ones:

error = || A - AT · [B; 1] ||    (9)

3) for each pixel px' in polygon p_i'(j), writing the coordinate of px' as (x, y)^T and computing a pixel px as:

px = AT · [x, y, 1]^T    (10)

if the coordinate of px lies inside polygon p_1(j), the color of the pixel of p_1(j) at that coordinate is assigned to px';
4) for each pixel in polygon p_i'(j) that is still uncolored, taking the color of the nearest colored pixel as its color value.
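The affine estimation of equation (9) and the pixel mapping of equation (10) can be sketched with an ordinary linear least-squares solve on homogeneous coordinates; the color transfer and nearest-neighbour filling steps are omitted here, and the function names are illustrative assumptions:

```python
import numpy as np

def estimate_affine(A, B):
    # Least-squares 2x3 affine transform AT with A ≈ AT @ [B; 1] (equation 9).
    # A, B: 2 x M arrays of corresponding vertex coordinates of p_1(j)
    # and p_i'(j).
    M = A.shape[1]
    B_h = np.vstack([B, np.ones((1, M))])            # 3 x M homogeneous
    AT_T, *_ = np.linalg.lstsq(B_h.T, A.T, rcond=None)
    return AT_T.T                                    # 2 x 3

def map_pixel(AT, px_prime):
    # Equation 10: map a pixel coordinate (x, y) of p_i'(j) into p_1(j).
    x, y = px_prime
    return AT @ np.array([x, y, 1.0])
```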
The beneficial effect of the present invention is that, using the contours marked on the key frames of a motion sequence of a cartoon character, an optimization model that includes temporal and spatial constraints is built, the optimization model is converted into a nonlinear least-squares problem, and the problem is solved with the damped least-squares method, so that the positions of the character contour on the other frames of the motion sequence can be determined relatively accurately and quickly. When contour tracking is used to capture the motion of a cartoon character, the correspondence between the contours of each part in different frames is obtained directly, which avoids having to extract contours from the tracking results.
Description of drawings
Fig. 1 is a schematic diagram of the character contours marked by the present invention on a cartoon character motion sequence;
Fig. 2 is a schematic diagram of the computation of the distance from a line segment to the image edges as defined by the present invention;
Fig. 3 is a schematic diagram of the result of tracking a character's upper-arm part on a cartoon character motion sequence with the present invention;
Fig. 4 is a schematic diagram of the working principle of the region term defined by the present invention;
Fig. 5 is a schematic diagram of the result of tracking a character contour on a cartoon character motion sequence with the present invention.
Embodiment
In the cartoon role contour tracing method proposed by the present invention, the character contour is first marked on the key frames of the cartoon character motion sequence; an optimization model is then built from the labels on the key frames, and the positions of the character contour on the other frames of the motion sequence are tracked by solving this optimization model. Because traditional cartoons are produced at relatively low frame rates, the character contour changes considerably between adjacent frames, and the parts of a cartoon character often undergo exaggerated deformation during motion. The optimization model makes effective use of temporal and spatial constraints and can therefore track the character contour from a cartoon character motion sequence relatively accurately.
The concrete technical solution and implementation steps of the present invention are described below with reference to an example in which the character contour is tracked on a 15-frame motion sequence of the cartoon character Sun Wukong (the Monkey King):
(1) Key frames are selected from the input cartoon motion sequence, and the frame sequence between two adjacent key frames forms a motion subsequence. In the 15-frame motion sequence of the character Sun Wukong shown in Fig. 1, the 1st frame and the 15th frame are chosen as key frames;
(2) For each motion subsequence, key points are placed on the contour of each character part on the first frame and the last frame, and adjacent key points are connected with straight line segments so that each part is represented as a polygon; the number of key points placed on the first frame and on the last frame is the same and the key points correspond one to one, and the contour marked on the first frame is copied to all other frames of the subsequence except the last frame. Fig. 1 gives an example of the contours marked on a motion subsequence: the red circles are key points and the blue lines form the character contour;
(3) For each motion subsequence, an optimization model that takes all key points on the subsequence as optimization variables and includes temporal and spatial constraints is built. That is, an objective function E is constructed for the motion subsequence; this objective function contains 6 constraint terms, of which the first 4 are temporal constraint terms and the last 2 are spatial constraint terms:

E = w_L E_L + w_A E_A + w_S E_S + w_V E_V + w_E E_E + w_R E_R    (1)

where w_L, w_A, w_S, w_V, w_E and w_R are the weight coefficients of the respective terms; in this embodiment these parameters are set to 8, 9, 3, 4, 4 and 32, respectively. The definition and role of each term are introduced in turn below.
The 1st constraint term is the length term E_L, which reflects how the length of each line segment on the character contour changes over the motion subsequence:

E_L = Σ_{i=1}^{N_f-1} Σ_{j=1}^{N_s} (||s_i(j)|| - ||s_{i+1}(j)||)^2    (2)

where s_i(j) is the j-th line segment on the character contour in frame i, N_f is the number of frames in the motion subsequence, and N_s is the number of line segments on the character contour.
The 2nd constraint term is the angle term E_A, which reflects how the angle at each key point on the character contour changes over the motion subsequence:

E_A = Σ_{i=1}^{N_f-1} Σ_{j=1}^{N_c} ||(c_i(a_j(1)) - 2c_i(j) + c_i(a_j(2))) - (c_{i+1}(a_j(1)) - 2c_{i+1}(j) + c_{i+1}(a_j(2)))||^2    (3)

where c_i(j) is the j-th key point on the character contour in frame i, a_j(1) and a_j(2) denote the two key points adjacent to key point j on the contour, and N_c is the number of key points on the character contour.
The 3rd constraint term is the area term E_S, which reflects how the area of each character part changes over the motion subsequence:

E_S = Σ_{i=1}^{N_f-1} Σ_{j=1}^{N_p} (f_S(p_i(j)) - f_S(p_{i+1}(j)))^2    (4)

where p_i(j) is the polygon representing the j-th character part in frame i, f_S is the function that computes the area of a polygon, and N_p is the number of character parts.
The 4th constraint term is the velocity term E_V, which reflects how the displacement of each key point on the character contour between consecutive frames changes over the motion subsequence:

E_V = Σ_{i=1}^{N_f-2} Σ_{j=1}^{N_c} ((c_{i+1}(j) - c_i(j)) - (c_{i+2}(j) - c_{i+1}(j)))^2    (5)
The 5th constraint term is the boundary term E_E, which pulls the character contour on each frame of the motion subsequence toward the image edges. The boundary term is computed as follows:

First, for each frame image F_i in the motion subsequence, image edges are extracted with the Canny edge detection algorithm, yielding an edge map E_i. Then, the Euclidean distance transform of each edge map E_i is computed, yielding an H × W distance-transform matrix D_i, where H and W are the height and width of the image and D_i(x, y) is the Euclidean distance from the pixel at image coordinate (x, y) to the nearest edge pixel in E_i. Next, the distance transform is used to define the distance from a line segment on the character contour to the image edges: t pixels are sampled at equal intervals on each line segment s_i(j); denoting their image coordinates by {(x_k, y_k) | 1 ≤ k ≤ t}, the distance from the segment to the image edges is computed with the following formula:

f_seg(s_i(j)) = Σ_{k=1}^{t} D_i(x_k, y_k)    (6)

In general the value of t has little influence on the result; considering computational efficiency it can be set to a relatively small value between 5 and 10, and in this embodiment t is 7. Fig. 2 illustrates how the distance from a line segment to the image edges is computed: Fig. 2(a) is the edge map, and Fig. 2(b) shows the segment-to-edge distance, where the green line represents the line segment, the red circles are the pixels sampled on the segment, and the yellow lines represent the minimum distance from each sampled pixel to the image edges. Finally, the boundary term is defined as:

E_E = Σ_{i=2}^{N_f-1} Σ_{j=1}^{N_s} f_seg(s_i(j))^2    (7)
The 6th constraint term is the region term E_R, which makes each character part fall on the correct image region in every frame of the motion subsequence. The region term is computed as follows:

First, the labels on the key frames are used to estimate the approximate location of each character part on the intermediate frames. From the part p_1(j) marked on the first key frame and the corresponding part marked on the last key frame, linear interpolation gives the approximate shape of the j-th character part in frame i, denoted p_i'(j), and the affine filling algorithm described below fills p_i'(j) with color from the texture of p_1(j). The center cp_i'(j) of p_i'(j) is obtained by averaging the coordinates of its vertices, and the optimal position cp_i*(j) of cp_i'(j) in frame i is searched for within a search window centered at cp_i'(j): cp_i*(j) is the position such that, after the center cp_i'(j) of part p_i'(j) is moved to cp_i*(j), the color difference between p_i'(j) and the image region it covers is minimal. Fig. 3 shows the tracked approximate locations of the right upper arm of the character Sun Wukong on each frame of the motion sequence; the green circles represent the part centers.

Then, for the polygon p_i(j) that represents the j-th character part in frame i, the center cp_i(j) of the part is obtained by averaging the coordinates of its vertices, and the region term is defined with the following formula:

E_R = Σ_{i=2}^{N_f-1} Σ_{j=1}^{N_p} ||cp_i(j) - cp_i*(j)||^2    (8)

Fig. 4 illustrates how the region term works. In Fig. 4(a), the yellow circle represents the optimal position of the center of the character's left palm and the blue square is the current center of the left palm; in Fig. 4(b), under the effect of the region term, the center of the left palm moves steadily toward its optimal position until the two coincide, so that the curve representing the left palm approaches its correct position in the image;
The affine filling algorithm that uses the texture of p_1(j) to fill p_i'(j) with color comprises the following steps:
1. the polygons p_1(j) and p_i'(j) are represented as 2 × M matrices A and B, where the k-th column of A and the k-th column of B are the image coordinates of the k-th vertex of p_1(j) and of p_i'(j), respectively;
2. the affine transformation AT from polygon p_i'(j) to polygon p_1(j) is computed by minimizing the following expression with linear least squares, where [B; 1] denotes B augmented with a row of ones:

error = || A - AT · [B; 1] ||    (9)

3. for each pixel px' in polygon p_i'(j), the coordinate of px' is written as (x, y)^T and a pixel px is computed as:

px = AT · [x, y, 1]^T    (10)

if the coordinate of px lies inside polygon p_1(j), the color of the pixel of p_1(j) at that coordinate is assigned to px';
4. for each pixel in polygon p_i'(j) that is still uncolored, the color of the nearest colored pixel is taken as its color value.
(4) The above optimization model is converted into a nonlinear least-squares problem and solved with the damped least-squares method. After the positions, on each intermediate frame of the subsequence, of the key points marked on the character contour of the first frame have been obtained, adjacent key points on each frame are connected with line segments to obtain the character contour. Fig. 5 shows the contour tracking result of this embodiment.
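For illustration, the objective of this embodiment can be stacked into a residual vector, with each squared summand scaled by the square root of its weight, and handed to a damped (Levenberg-Marquardt) least-squares solver. The sketch below uses scipy.optimize.least_squares as one possible stand-in for the damped least-squares solver named in the text; for brevity it stacks only the length, angle and velocity residuals of a single closed contour (the remaining terms would be appended analogously), and the function names, array layout, and the embodiment weights used here (8, 9 and 4 for these three terms) are purely an assumed illustration.

```python
import numpy as np
from scipy.optimize import least_squares

W_L, W_A, W_V = 8.0, 9.0, 4.0  # embodiment weights for the terms stacked here

def residuals(x, first_frame, last_frame, n_inter):
    # x: flattened key points of the intermediate frames, (n_inter*N_c*2,).
    # The squared sum of the returned vector equals w_L*E_L + w_A*E_A + w_V*E_V;
    # the area, boundary and region residuals would be appended analogously.
    n_c = first_frame.shape[0]
    inter = x.reshape(n_inter, n_c, 2)
    pts = np.concatenate([first_frame[None], inter, last_frame[None]])

    seg = np.roll(pts, -1, axis=1) - pts                      # contour segments
    lengths = np.linalg.norm(seg, axis=2)
    r_len = np.sqrt(W_L) * (lengths[:-1] - lengths[1:]).ravel()

    lap = np.roll(pts, 1, axis=1) - 2.0 * pts + np.roll(pts, -1, axis=1)
    r_ang = np.sqrt(W_A) * (lap[:-1] - lap[1:]).ravel()

    disp = pts[1:] - pts[:-1]
    r_vel = np.sqrt(W_V) * (disp[1:] - disp[:-1]).ravel()

    return np.concatenate([r_len, r_ang, r_vel])

def track_subsequence(first_frame, last_frame, n_inter):
    # first_frame, last_frame: (N_c, 2) key points marked on the two key frames.
    # Initialise the intermediate frames with the first frame's contour, as the
    # method copies the first-frame contour to the other frames.
    x0 = np.tile(first_frame, (n_inter, 1, 1)).ravel()
    sol = least_squares(residuals, x0, method='lm',
                        args=(first_frame, last_frame, n_inter))
    return sol.x.reshape(n_inter, first_frame.shape[0], 2)
```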

Claims (1)

1. A cartoon role contour tracing method based on key frame, characterized in that it comprises the following steps:
1) selecting key frames from a cartoon motion sequence, the frame sequence between two adjacent key frames constituting a motion subsequence;
2) for each motion subsequence, placing key points on the contour of each character part on the first frame and the last frame, connecting adjacent key points with straight line segments so that each part is represented as a polygon, the number of key points placed on the first frame and on the last frame being the same and in one-to-one correspondence, and copying the contour marked on the first frame to all other frames of the subsequence except the last frame;
3) for each motion subsequence, building an optimization model that takes all key points on the subsequence as optimization variables and includes temporal and spatial constraints;
4) converting the above optimization model into a nonlinear least-squares problem and solving it with the damped least-squares method; after the position of each key point on the intermediate frames of the subsequence has been obtained, connecting adjacent key points on each frame with line segments to obtain the character contour;
wherein building, for each motion subsequence, the optimization model that takes the set of key points on the subsequence as variables and includes temporal and spatial constraints comprises: constructing an objective function E for the motion subsequence between each pair of adjacent key frames, the objective function containing 6 constraint terms, of which the first 4 are temporal constraint terms and the last 2 are spatial constraint terms:

E = w_L E_L + w_A E_A + w_S E_S + w_V E_V + w_E E_E + w_R E_R    (1)

where w_L, w_A, w_S, w_V, w_E and w_R are the weight coefficients of the respective terms; the definition and role of each term are introduced in turn below;
the 1st constraint term is the length term E_L, which reflects how the length of each line segment on the character contour changes over the motion subsequence:

E_L = Σ_{i=1}^{N_f-1} Σ_{j=1}^{N_s} (||s_i(j)|| - ||s_{i+1}(j)||)^2    (2)

where s_i(j) is the j-th line segment on the character contour in frame i, N_f is the number of frames in the motion subsequence, and N_s is the number of line segments on the character contour;
the 2nd constraint term is the angle term E_A, which reflects how the angle at each key point on the character contour changes over the motion subsequence:

E_A = Σ_{i=1}^{N_f-1} Σ_{j=1}^{N_c} ||(c_i(a_j(1)) - 2c_i(j) + c_i(a_j(2))) - (c_{i+1}(a_j(1)) - 2c_{i+1}(j) + c_{i+1}(a_j(2)))||^2    (3)

where c_i(j) is the j-th key point on the character contour in frame i, a_j(1) and a_j(2) denote the two key points adjacent to key point j on the contour, and N_c is the number of key points on the character contour;
the 3rd constraint term is the area term E_S, which reflects how the area of each character part changes over the motion subsequence:

E_S = Σ_{i=1}^{N_f-1} Σ_{j=1}^{N_p} (f_S(p_i(j)) - f_S(p_{i+1}(j)))^2    (4)

where p_i(j) is the polygon representing the j-th character part in frame i, f_S is the function that computes the area of a polygon, and N_p is the number of character parts;
the 4th constraint term is the velocity term E_V, which reflects how the displacement of each key point on the character contour between consecutive frames changes over the motion subsequence:

E_V = Σ_{i=1}^{N_f-2} Σ_{j=1}^{N_c} ((c_{i+1}(j) - c_i(j)) - (c_{i+2}(j) - c_{i+1}(j)))^2    (5)
the 5th constraint term is the boundary term E_E, which pulls the character contour on each frame of the motion subsequence toward the image edges; the boundary term is computed as follows:

first, for each frame image F_i in the motion subsequence, image edges are extracted with the Canny edge detection algorithm, yielding an edge map E_i; then, the Euclidean distance transform of each edge map E_i is computed, yielding an H × W distance-transform matrix D_i, where H and W are the height and width of the image and D_i(x, y) is the Euclidean distance from the pixel at image coordinate (x, y) to the nearest edge pixel in E_i; next, the distance transform is used to define the distance from a line segment on the character contour to the image edges: t pixels are sampled at equal intervals on each line segment s_i(j), and denoting their image coordinates by {(x_k, y_k) | 1 ≤ k ≤ t}, the distance from the segment to the image edges is computed with the following formula:

f_seg(s_i(j)) = Σ_{k=1}^{t} D_i(x_k, y_k)    (6)

finally, the boundary term is defined as:

E_E = Σ_{i=2}^{N_f-1} Σ_{j=1}^{N_s} f_seg(s_i(j))^2    (7)
the 6th constraint term is the region term E_R, which makes each character part fall on the correct image region in every frame of the motion subsequence; the region term is computed as follows:

first, the labels on the key frames are used to estimate the approximate location of each character part on the intermediate frames: from the part p_1(j) marked on the first key frame and the corresponding part marked on the last key frame, linear interpolation gives the approximate shape of the j-th character part in frame i, denoted p_i'(j), and the affine filling algorithm described below is used to fill p_i'(j) with color from the texture of p_1(j); the center cp_i'(j) of p_i'(j) is obtained by averaging the coordinates of its vertices, and the optimal position cp_i*(j) of cp_i'(j) in frame i is searched for within a search window centered at cp_i'(j), cp_i*(j) being the position such that, after the center cp_i'(j) of part p_i'(j) is moved to cp_i*(j), the color difference between p_i'(j) and the image region it covers is minimal;

then, for the polygon p_i(j) that represents the j-th character part in frame i, the center cp_i(j) of the part is obtained by averaging the coordinates of its vertices, and the region term is defined with the following formula:

E_R = Σ_{i=2}^{N_f-1} Σ_{j=1}^{N_p} ||cp_i(j) - cp_i*(j)||^2    (8);
the affine filling algorithm that uses the texture of p_1(j) to fill p_i'(j) with color comprises the following steps:
1) representing the polygons p_1(j) and p_i'(j) as 2 × M matrices A and B, where the k-th column of A and the k-th column of B are the image coordinates of the k-th vertex of p_1(j) and of p_i'(j), respectively;
2) computing the affine transformation AT from polygon p_i'(j) to polygon p_1(j) by minimizing the following expression with linear least squares, where [B; 1] denotes B augmented with a row of ones:

error = || A - AT · [B; 1] ||    (9)

3) for each pixel px' in polygon p_i'(j), writing the coordinate of px' as (x, y)^T and computing a pixel px as:

px = AT · [x, y, 1]^T    (10)

if the coordinate of px lies inside polygon p_1(j), the color of the pixel of p_1(j) at that coordinate is assigned to px';
4) for each pixel in polygon p_i'(j) that is still uncolored, taking the color of the nearest colored pixel as its color value.
CN2010101311832A 2010-03-23 2010-03-23 Cartoon role contour tracing method based on key frame Expired - Fee Related CN101799927B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101311832A CN101799927B (en) 2010-03-23 2010-03-23 Cartoon role contour tracing method based on key frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101311832A CN101799927B (en) 2010-03-23 2010-03-23 Cartoon role contour tracing method based on key frame

Publications (2)

Publication Number Publication Date
CN101799927A CN101799927A (en) 2010-08-11
CN101799927B true CN101799927B (en) 2012-05-09

Family

ID=42595599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101311832A Expired - Fee Related CN101799927B (en) 2010-03-23 2010-03-23 Cartoon role contour tracing method based on key frame

Country Status (1)

Country Link
CN (1) CN101799927B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930555B (en) * 2011-08-11 2016-09-14 深圳迈瑞生物医疗电子股份有限公司 A kind of method and device that area-of-interest in ultrasonoscopy is tracked
JP6073562B2 (en) * 2012-02-24 2017-02-01 東芝メディカルシステムズ株式会社 Medical image processing device
CN103198510B (en) * 2013-04-18 2015-09-30 清华大学 The model gradual deformation method of data-driven
CN106682595A (en) * 2016-12-14 2017-05-17 南方科技大学 Image content marking method and apparatus thereof
CN108389162B (en) * 2018-01-09 2021-06-25 浙江大学 Image edge preserving filtering method based on self-adaptive neighborhood shape
CN110580715B (en) * 2019-08-06 2022-02-01 武汉大学 Image alignment method based on illumination constraint and grid deformation
CN110687929B (en) * 2019-10-10 2022-08-12 辽宁科技大学 Aircraft three-dimensional space target searching system based on monocular vision and motor imagery


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216948A (en) * 2008-01-14 2008-07-09 浙江大学 Cartoon animation fabrication method based on video extracting and reusing
CN101329768A (en) * 2008-07-29 2008-12-24 浙江大学 Method for rapidly modeling of urban street based on image sequence

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A. Agarwala et al. Keyframe-Based Tracking for Rotoscoping and Animation. ACM Transactions on Graphics, vol. 23, no. 3, 2004, pp. 584-591. *
C. Bregler et al. Turning to the Masters: Motion Capturing Cartoons. ACM Transactions on Graphics, vol. 21, no. 3, 2002, pp. 399-407. *
H. Wang et al. Cartoon Motion Capture by Shape Matching. Proceedings of the 10th Pacific Conference on Computer Graphics and Applications, 2002, pp. 454-456. *

Also Published As

Publication number Publication date
CN101799927A (en) 2010-08-11


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120509

Termination date: 20180323

CF01 Termination of patent right due to non-payment of annual fee