CN101799927A - Cartoon role contour tracing method based on key frame - Google Patents

Cartoon role contour tracing method based on key frame

Info

Publication number
CN101799927A
CN101799927A (application CN201010131183)
Authority
CN
China
Prior art keywords
frame
role
profile
subsequence
sigma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010131183
Other languages
Chinese (zh)
Other versions
CN101799927B (en)
Inventor
肖俊
周春銮
庄越挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2010101311832A priority Critical patent/CN101799927B/en
Publication of CN101799927A publication Critical patent/CN101799927A/en
Application granted granted Critical
Publication of CN101799927B publication Critical patent/CN101799927B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a keyframe-based cartoon character contour tracing method, comprising the following steps: first, key frames are selected manually from a motion sequence of a cartoon character and the character's contours are drawn with curves on each key frame; next, an optimization model is constructed from the contour information labeled on the key frames; finally, the optimization model is converted into a non-linear least-squares problem, and solving it yields the positions of the character's contours on the other frames of the motion sequence. By effectively exploiting temporal and spatial constraint information, the method can accurately trace the contours of a cartoon character through a motion sequence, and the contours of each part captured by the method on every frame of the sequence can be used to analyze the deformation of the character's parts over the sequence.

Description

Cartoon role contour tracing method based on key frame
Technical field
The present invention relates to a keyframe-based method for tracing character contours from the motion sequence of a cartoon character, and belongs to the field of two-dimensional computer animation.
Background technology
Traditional cartoon production is usually time-consuming and labor-intensive. As traditional cartoons have come to be used widely in many fields, a large amount of existing cartoon material has become available for reuse. Reusing existing cartoon material can reduce the complexity of cartoon production to a certain extent and thereby improve its efficiency.
Cartoon character motion capture and retargeting is one way of reusing existing cartoon material. In this approach, the motion style of a cartoon character is extracted from an existing cartoon sequence and transferred to a new character, generating a new motion sequence. For example, the paper "Turning to the Masters: Motion Capturing Cartoons" (SIGGRAPH 2002) reuses existing cartoon material in this way: given a motion sequence of a cartoon character, key frames of the character are specified manually, and the character's motion style is abstracted into affine deformation coefficients of each body part together with interpolation coefficients of the key frames; the user then provides a set of key frames of a new character whose poses are similar to the specified key frames, and applying the corresponding coefficients to the new character's key frames yields a new motion sequence. A main difficulty of this kind of reuse is capturing the character's motion from the motion sequence. The capture method adopted in that paper first decomposes the character into several parts, then determines the affine deformation coefficients of each part by region-based tracking, and finally uses the affine deformation coefficients to determine the interpolation coefficients of the key frames. Because each part is tracked independently, without exploiting the character's overall structure or the constraints among the parts over the motion sequence, the tracking results are often not robust. The paper "Cartoon Motion Capture by Shape Matching" (Pacific Conference on Computer Graphics and Applications 2002) proposed a cartoon character motion-capture method based on shape matching. It obtains the character's motion style by analyzing how the character's contour deforms over the motion sequence: points are sampled on the character's contour in each frame, robust point matching is used to establish correspondences between the sampled contour points of two different frames, and the deformation coefficients between the character contours of the two frames are then determined from these correspondences. To use this method, the cartoon character must first be segmented out of the motion sequence; moreover, the method fails when some parts are occluded in some frames of the sequence.
Contour tracking can also serve as a means of capturing cartoon character motion. A contour is represented by a set of points on it, and the positions of these points over the whole motion sequence are determined by tracking. Because tracking establishes correspondences between the points in different frames, the result can be used to analyze how each part of the character deforms between frames. The paper "Keyframe-Based Tracking for Rotoscoping and Animation" (SIGGRAPH 2004) proposed a keyframe-based method for tracking curves in ordinary video sequences. The curve to be tracked is modeled as cubic Bezier curves whose control points are discrete points on the curve. The control points of the curve are first labeled on the key frames; an optimization model is then built from the labels on the key frames, and the positions of the control points in the other frames are determined by solving this model. However, in cartoon motion sequences the change between two consecutive frames is large, and a character's parts often undergo large deformations during motion, whereas this method assumes that the tracked curve changes little between adjacent frames; it is therefore not suited to tracking character contours in cartoon motion sequences.
Summary of the invention
The objective of the invention is to overcome the deficiencies of the prior art and provide a keyframe-based cartoon character contour tracing method.
The keyframe-based cartoon character contour tracing method comprises the following steps:
1) Select key frames from a cartoon motion sequence; the frame sequence between two adjacent key frames constitutes a motion subsequence;
2) For each motion subsequence, place key points on the contour of each part of the character on the first frame and on the last frame, and connect adjacent key points with straight lines so that each part is represented as a polygon; the numbers of key points placed on the first frame and on the last frame are identical and in one-to-one correspondence; copy the contour marked on the first frame to the other frames of the subsequence except the last frame;
3) For each motion subsequence, construct an optimization model that contains temporal constraints and spatial constraints and takes all key points on the subsequence as optimization variables;
4) Convert the optimization model into a non-linear least-squares problem and solve it with the damped least-squares method; after the positions of the key points on the intermediate frames of the subsequence have been solved, connect adjacent key points on each frame with line segments to obtain the character's contour.
The optimization model for each motion subsequence, which takes the set of key points on the subsequence as its variables and contains temporal and spatial constraints, is constructed as follows: an objective function E is built for the motion subsequence between every two adjacent key frames; the objective function comprises six constraint terms, of which the first four are temporal constraints and the last two are spatial constraints:
E = w_L E_L + w_A E_A + w_S E_S + w_V E_V + w_E E_E + w_R E_R    (1)
where w_L, w_A, w_S, w_V, w_E and w_R are the weight coefficients of the respective terms; the definition and effect of each term are introduced in turn below.
The first constraint term is the length term E_L, which reflects the change in length of each line segment of the character's contour over the motion subsequence:
E_L = \sum_{i=1}^{N_f-1} \sum_{j=1}^{N_s} \left( \| s_i(j) \| - \| s_{i+1}(j) \| \right)^2    (2)
where s_i(j) is the j-th line segment of the character's contour in frame i, N_f is the number of frames of the motion subsequence, and N_s is the number of line segments on the character's contour;
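For illustration only, the following minimal sketch shows how residuals whose squared sum is E_L of formula (2) could be evaluated with NumPy; the array layout, the variable names and the helper functions are assumptions introduced here and are not part of the patented method.

```python
import numpy as np

def seg_lengths(frame_pts, seg_idx):
    """Length of every contour segment in one frame.

    frame_pts : (Nc, 2) array of key-point coordinates c_i(j)
    seg_idx   : (Ns, 2) array, each row holding the two key-point
                indices that bound segment s_i(j)
    """
    return np.linalg.norm(frame_pts[seg_idx[:, 0]] - frame_pts[seg_idx[:, 1]], axis=1)

def length_residuals(pts, seg_idx):
    """Residuals whose squared sum is E_L (formula 2).

    pts : (Nf, Nc, 2) key points of all frames of the subsequence
    """
    lens = np.stack([seg_lengths(f, seg_idx) for f in pts])   # (Nf, Ns)
    return (lens[:-1] - lens[1:]).ravel()                     # one residual per (i, j)
```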
The second constraint term is the angle term E_A, which reflects the change of the angle at each key point of the character's contour over the motion subsequence:
E_A = \sum_{i=1}^{N_f-1} \sum_{j=1}^{N_c} \left\| \left( c_i(a_j(1)) - 2 c_i(j) + c_i(a_j(2)) \right) - \left( c_{i+1}(a_j(1)) - 2 c_{i+1}(j) + c_{i+1}(a_j(2)) \right) \right\|^2    (3)
where c_i(j) is the j-th key point of the character's contour in frame i, a_j(1) and a_j(2) are the indices of the two key points adjacent to key point j, and N_c is the number of key points on the character's contour;
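A corresponding sketch for the angle-term residuals of formula (3), under the same assumed data layout; the array adj is assumed to hold the neighbour indices a_j(1) and a_j(2) of each key point.

```python
import numpy as np

def angle_residuals(pts, adj):
    """Residuals whose squared sum is E_A (formula 3).

    pts : (Nf, Nc, 2) key points of all frames
    adj : (Nc, 2) indices of the two key points adjacent to each key point
    """
    # Discrete second difference (local bending) at every key point in every frame
    lap = pts[:, adj[:, 0]] - 2.0 * pts + pts[:, adj[:, 1]]   # (Nf, Nc, 2)
    diff = lap[:-1] - lap[1:]                                 # change between consecutive frames
    return diff.reshape(-1)                                   # both coordinates enter the norm
```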
The third constraint term is the area term E_S, which reflects the change in area of each of the character's parts over the motion subsequence:
E_S = \sum_{i=1}^{N_f-1} \sum_{j=1}^{N_p} \left( f_S(p_i(j)) - f_S(p_{i+1}(j)) \right)^2    (4)
where p_i(j) is the polygon representing the j-th part of the character in frame i, f_S is the function that computes the area of a polygon, and N_p is the number of the character's parts.
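The polygon-area function f_S can be realised, for example, with the standard shoelace formula; the sketch below assumes each part is described by an index list into the key-point array, which is an illustrative convention rather than part of the patented method.

```python
import numpy as np

def polygon_area(verts):
    """Shoelace formula for the area of a simple polygon given as an (M, 2) array."""
    x, y = verts[:, 0], verts[:, 1]
    return 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def area_residuals(pts, part_idx):
    """Residuals whose squared sum is E_S (formula 4).

    pts      : (Nf, Nc, 2) key points of all frames
    part_idx : list of index arrays, one per part, giving that part's polygon vertices
    """
    areas = np.array([[polygon_area(f[idx]) for idx in part_idx] for f in pts])  # (Nf, Np)
    return (areas[:-1] - areas[1:]).ravel()
```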
The fourth constraint term is the velocity term E_V, which reflects the change, over the motion subsequence, of each contour key point's position offset between consecutive frames:
E_V = \sum_{i=1}^{N_f-2} \sum_{j=1}^{N_c} \left\| \left( c_{i+1}(j) - c_i(j) \right) - \left( c_{i+2}(j) - c_{i+1}(j) \right) \right\|^2    (5)
The fifth constraint term is the boundary term E_E, which keeps the character's contour close to the image edges on each frame of the motion subsequence. The boundary term is computed as follows:
First, for each frame image F_i of the motion subsequence, extract the image edges with the Canny edge detector to obtain an edge map E_i. Then compute the Euclidean distance transform of each edge map E_i, yielding an H × W distance-transform matrix D_i, where H and W are the height and width of the image and D_i(x, y) is the Euclidean distance from the pixel at image coordinate (x, y) to the nearest edge pixel in E_i. The distance transform is then used to define the distance from a line segment of the character's contour to the image edges: sample t pixels at equal intervals along each line segment s_i(j); letting their image coordinates be {(x_k, y_k) | 1 ≤ k ≤ t}, the distance from the segment to the image edges is computed by the following formula:
f_{seg}(s_i(j)) = \sum_{k=1}^{t} D_i(x_k, y_k)    (6)
Finally, the boundary term is defined as:
E_E = \sum_{i=2}^{N_f-1} \sum_{j=1}^{N_s} f_{seg}(s_i(j))^2    (7)
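A sketch of the boundary-term computation using OpenCV: cv2.Canny and cv2.distanceTransform are standard OpenCV calls, while the Canny thresholds, the sample count t and the data layout are illustrative assumptions.

```python
import cv2
import numpy as np

def distance_map(frame_bgr):
    """Euclidean distance D_i from every pixel to the nearest Canny edge pixel."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # edge map E_i, edge pixels are 255
    # distanceTransform measures distance to the nearest zero pixel, so invert the edge map
    return cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

def segment_edge_distance(dist, p0, p1, t=7):
    """f_seg of formula (6): sum of D_i at t equally spaced samples of segment (p0, p1)."""
    xs = np.linspace(p0[0], p1[0], t)
    ys = np.linspace(p0[1], p1[1], t)
    # D_i is indexed as (row, column) = (y, x)
    return sum(dist[int(round(y)), int(round(x))] for x, y in zip(xs, ys))
```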
The sixth constraint term is the region term E_R, which keeps each of the character's parts in the correct image region in each frame of the motion subsequence. The region term is computed as follows:
First, the labels on the key frames are used to track the approximate position of each of the character's parts on the intermediate frames: from the part p_1(j) marked on the first key frame and the corresponding part marked on the last key frame, linear interpolation gives the rough shape of the character's j-th part in frame i, denoted p_i'(j), and the affine filling algorithm fills p_i'(j) with colors from the texture of p_1(j). The center cp_i'(j) of p_i'(j) is obtained by averaging the coordinates of its vertices, and within a search window centered at cp_i'(j) the optimal position cp_i^*(j) of cp_i'(j) in frame i is found, i.e. the position such that, after the center cp_i'(j) of part p_i'(j) is moved to cp_i^*(j), the color difference between p_i'(j) and the image region it covers is minimal;
Then, for the polygon p_i(j) representing the j-th part of the character in frame i, the center cp_i(j) of the part is obtained by averaging the coordinates of its vertices;
Finally, the region term is defined by the following formula:
E_R = \sum_{i=2}^{N_f-1} \sum_{j=1}^{N_p} \left\| cp_i(j) - cp_i^*(j) \right\|^2    (8)
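The window search for the optimal centre cp_i^*(j) can be sketched as an exhaustive comparison of the filled part template (produced by the affine filling algorithm described next) against the frame at every offset inside the search window; the window radius, the template and mask representation and the function name are assumptions, not prescribed by the method.

```python
import numpy as np

def best_center(frame, template, mask, center, radius=10):
    """Search the window around `center` for the placement of the filled part
    template that minimises the colour difference with the underlying image,
    i.e. the cp_i^*(j) used in formula (8).

    frame    : (H, W, 3) float image of frame i
    template : (h, w, 3) colours filled into p_i'(j)
    mask     : (h, w) boolean array, True inside the part polygon
    center   : (cx, cy) current centre cp_i'(j)
    """
    H, W = frame.shape[:2]
    h, w = mask.shape
    cx, cy = int(round(center[0])), int(round(center[1]))
    best, best_pos = np.inf, (cx, cy)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x0, y0 = cx + dx - w // 2, cy + dy - h // 2
            if x0 < 0 or y0 < 0 or x0 + w > W or y0 + h > H:
                continue                                     # window falls outside the image
            patch = frame[y0:y0 + h, x0:x0 + w]
            diff = ((patch - template) ** 2)[mask].sum()     # colour difference inside the part
            if diff < best:
                best, best_pos = diff, (cx + dx, cy + dy)
    return best_pos
```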
The affine filling algorithm that fills p_i'(j) with colors from the texture of p_1(j) comprises the following steps:
1) Represent the polygons p_1(j) and p_i'(j) as 2 × M matrices A and B, where the k-th column of A and the k-th column of B hold the image coordinates of the k-th vertex of p_1(j) and of p_i'(j), respectively;
2) Compute the affine transformation AT from polygon p_i'(j) to polygon p_1(j) by minimizing the following expression with linear least squares:
error = \left\| A - AT \cdot \begin{pmatrix} B \\ \mathbf{1} \end{pmatrix} \right\|    (9)
3) For each pixel px' of polygon p_i'(j), with its image coordinate written as a column vector, compute a pixel px by the following formula:
px = AT \cdot \begin{pmatrix} px' \\ 1 \end{pmatrix}    (10)
If the coordinate of px lies inside polygon p_1(j), assign the color of the pixel of p_1(j) at that coordinate to px';
4) For each pixel of polygon p_i'(j) that is still uncolored, take the color of the nearest colored pixel as its color value.
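A sketch of the affine filling algorithm in NumPy: the 2 × 3 affine matrix AT is estimated by linear least squares as in formula (9), and each pixel is mapped through formula (10) to pull a colour from p_1(j). The inside-polygon test is simplified here to an image-bounds check, and the nearest-neighbour fill of step 4) is only indicated in the comments; both simplifications are assumptions for illustration.

```python
import numpy as np

def estimate_affine(A, B):
    """Least-squares affine transform AT (2x3) with A ≈ AT · [B; 1]  (formula 9).

    A, B : (2, M) vertex coordinates of p_1(j) and p_i'(j)
    """
    B_h = np.vstack([B, np.ones(B.shape[1])])            # (3, M) homogeneous coordinates
    # Solve B_h^T · AT^T ≈ A^T column-wise by linear least squares
    AT_T, *_ = np.linalg.lstsq(B_h.T, A.T, rcond=None)
    return AT_T.T                                        # (2, 3)

def affine_fill(src_img, AT, pixels):
    """Map each pixel px' of p_i'(j) through formula (10) and copy the colour from p_1(j).

    pixels : iterable of integer (x', y') coordinates inside p_i'(j)
    Returns {(x', y'): colour}; pixels that map outside src_img stay unfilled and
    would receive the colour of the nearest filled pixel in a final pass (step 4).
    """
    h, w = src_img.shape[:2]
    filled = {}
    for x_p, y_p in pixels:
        x, y = AT @ np.array([x_p, y_p, 1.0])            # formula (10)
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            filled[(x_p, y_p)] = src_img[yi, xi]
    return filled
```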
The beneficial effect of the present invention is: using the contours marked on the key frames of a cartoon character's motion sequence, an optimization model containing temporal and spatial constraints is constructed; the model is then converted into a non-linear least-squares problem and solved with the damped least-squares method, so that the positions of the character's contours on the other frames of the motion sequence are determined relatively accurately and quickly. Because contour tracking is used to capture the cartoon character's motion, the correspondences between the contours of each part on different frames are obtained directly from the tracking result, and no separate contour-matching step is needed.
Description of drawings
Fig. 1 is a schematic diagram of the character contours marked by the present invention on a cartoon character motion sequence;
Fig. 2 is a schematic diagram of the way the distance from a line segment to the image edges is computed as defined by the present invention;
Fig. 3 is a schematic diagram of the result of tracking the character's upper-arm part on a cartoon character motion sequence with the present invention;
Fig. 4 is a schematic diagram of the working principle of the region term defined by the present invention;
Fig. 5 is a schematic diagram of the result of tracing character contours on a cartoon character motion sequence with the present invention.
Embodiment
The cartoon character contour tracing method proposed by the present invention first marks the character's contours on the key frames of the cartoon character's motion sequence, constructs an optimization model from the labels on the key frames, and traces the positions of the character's contours on the other frames of the motion sequence by solving this model. Because traditional cartoons are produced at low frame rates, the character's contour changes considerably between adjacent frames; moreover, the character's parts often undergo rather exaggerated deformations during motion. The optimization model effectively exploits temporal and spatial constraints and can therefore track the character's contours from the cartoon character motion sequence relatively accurately.
The specific technical scheme and implementation steps of the present invention are described below with an implementation example that traces the character's contours on a 15-frame motion sequence of the cartoon character Sun Wukong:
(1) Select key frames from the input cartoon motion sequence; the frame sequence between two adjacent key frames forms a motion subsequence. Fig. 1 shows a 15-frame motion sequence of the character Sun Wukong in which the 1st frame and the 15th frame are chosen as key frames;
(2) For each motion subsequence, place key points on the contour of each part of the character on the first frame and on the last frame, and connect adjacent key points with straight lines so that each part is represented as a polygon; the numbers of key points on the first frame and on the last frame are identical and in one-to-one correspondence; copy the contour marked on the first frame to the other frames of the subsequence except the last frame. Fig. 1 gives an example of the contours marked on a motion subsequence: the red circles are key points and the blue lines form the character's contour;
(3) For each motion subsequence, construct an optimization model that contains temporal and spatial constraints and takes all key points on the subsequence as optimization variables. That is, build an objective function E for the motion subsequence; the objective function comprises six constraint terms, of which the first four are temporal constraints and the last two are spatial constraints:
E = w_L E_L + w_A E_A + w_S E_S + w_V E_V + w_E E_E + w_R E_R    (1)
where w_L, w_A, w_S, w_V, w_E and w_R are the weight coefficients of the respective terms; in this implementation example they are set to 8, 9, 3, 4, 4 and 32, respectively. The definition and effect of each term are introduced in turn below.
The first constraint term is the length term E_L, which reflects the change in length of each line segment of the character's contour over the motion subsequence:
E_L = \sum_{i=1}^{N_f-1} \sum_{j=1}^{N_s} \left( \| s_i(j) \| - \| s_{i+1}(j) \| \right)^2    (2)
where s_i(j) is the j-th line segment of the character's contour in frame i, N_f is the number of frames of the motion subsequence, and N_s is the number of line segments on the character's contour;
The second constraint term is the angle term E_A, which reflects the change of the angle at each key point of the character's contour over the motion subsequence:
E_A = \sum_{i=1}^{N_f-1} \sum_{j=1}^{N_c} \left\| \left( c_i(a_j(1)) - 2 c_i(j) + c_i(a_j(2)) \right) - \left( c_{i+1}(a_j(1)) - 2 c_{i+1}(j) + c_{i+1}(a_j(2)) \right) \right\|^2    (3)
where c_i(j) is the j-th key point of the character's contour in frame i, a_j(1) and a_j(2) are the indices of the two key points adjacent to key point j, and N_c is the number of key points on the character's contour;
The third constraint term is the area term E_S, which reflects the change in area of each of the character's parts over the motion subsequence:
E_S = \sum_{i=1}^{N_f-1} \sum_{j=1}^{N_p} \left( f_S(p_i(j)) - f_S(p_{i+1}(j)) \right)^2    (4)
where p_i(j) is the polygon representing the j-th part of the character in frame i, f_S is the function that computes the area of a polygon, and N_p is the number of the character's parts.
The fourth constraint term is the velocity term E_V, which reflects the change, over the motion subsequence, of each contour key point's position offset between consecutive frames:
E_V = \sum_{i=1}^{N_f-2} \sum_{j=1}^{N_c} \left\| \left( c_{i+1}(j) - c_i(j) \right) - \left( c_{i+2}(j) - c_{i+1}(j) \right) \right\|^2    (5)
The fifth constraint term is the boundary term E_E, which keeps the character's contour close to the image edges on each frame of the motion subsequence. The boundary term is computed as follows:
First, for each frame image F_i of the motion subsequence, extract the image edges with the Canny edge detector to obtain an edge map E_i. Then compute the Euclidean distance transform of each edge map E_i, yielding an H × W distance-transform matrix D_i, where H and W are the height and width of the image and D_i(x, y) is the Euclidean distance from the pixel at image coordinate (x, y) to the nearest edge pixel in E_i. The distance transform is then used to define the distance from a line segment of the character's contour to the image edges: sample t pixels at equal intervals along each line segment s_i(j); letting their image coordinates be {(x_k, y_k) | 1 ≤ k ≤ t}, the distance from the segment to the image edges is computed by the following formula:
f_{seg}(s_i(j)) = \sum_{k=1}^{t} D_i(x_k, y_k)    (6)
In general the value of t has little influence on the result; considering computational efficiency, a relatively small value between 5 and 10 can be used, and in this implementation example t is 7. Fig. 2 illustrates how the distance from a line segment to the image edges is computed: Fig. 2(a) is the edge map, and Fig. 2(b) illustrates the segment-to-edge distance, where the green line represents the line segment, the red circles are the pixels sampled on the segment, and the yellow segments indicate the minimum distance from each sampled pixel to the image edges. Finally, the boundary term is defined as:
E_E = \sum_{i=2}^{N_f-1} \sum_{j=1}^{N_s} f_{seg}(s_i(j))^2    (7)
The sixth constraint term is the region term E_R, which keeps each of the character's parts in the correct image region in each frame of the motion subsequence. The region term is computed as follows:
First, the labels on the key frames are used to track the approximate position of each of the character's parts on the intermediate frames: from the part p_1(j) marked on the first key frame and the corresponding part marked on the last key frame, linear interpolation gives the rough shape of the character's j-th part in frame i, denoted p_i'(j), and the affine filling algorithm fills p_i'(j) with colors from the texture of p_1(j). The center cp_i'(j) of p_i'(j) is obtained by averaging the coordinates of its vertices, and within a search window centered at cp_i'(j) the optimal position cp_i^*(j) of cp_i'(j) in frame i is found, i.e. the position such that, after the center cp_i'(j) of part p_i'(j) is moved to cp_i^*(j), the color difference between p_i'(j) and the image region it covers is minimal. Fig. 3 shows the tracked approximate positions of Sun Wukong's right upper arm on each frame of the motion sequence; the green circles indicate the part centers;
Then, for the polygon p_i(j) representing the j-th part of the character in frame i, the center cp_i(j) of the part is obtained by averaging the coordinates of its vertices, and the region term is defined by the following formula:
E_R = \sum_{i=2}^{N_f-1} \sum_{j=1}^{N_p} \left\| cp_i(j) - cp_i^*(j) \right\|^2    (8)
Fig. 4 illustrates the working principle of the region term. In Fig. 4(a), the yellow circle indicates the optimal position of the center of the character's left palm, and the blue square is the current center of the left palm; in Fig. 4(b), under the effect of the region term, the center of the left palm keeps approaching its optimal position until they finally coincide, so that the curve representing the left palm is drawn close to its correct position in the image;
The affine filling algorithm that fills p_i'(j) with colors from the texture of p_1(j) comprises the following steps:
1. Represent the polygons p_1(j) and p_i'(j) as 2 × M matrices A and B, where the k-th column of A and the k-th column of B hold the image coordinates of the k-th vertex of p_1(j) and of p_i'(j), respectively;
2. Compute the affine transformation AT from polygon p_i'(j) to polygon p_1(j) by minimizing the following expression with linear least squares:
error = \left\| A - AT \cdot \begin{pmatrix} B \\ \mathbf{1} \end{pmatrix} \right\|    (9)
3. For each pixel px' of polygon p_i'(j), with its image coordinate written as a column vector, compute a pixel px by the following formula:
px = AT \cdot \begin{pmatrix} px' \\ 1 \end{pmatrix}    (10)
If the coordinate of px lies inside polygon p_1(j), assign the color of the pixel of p_1(j) at that coordinate to px';
4. For each pixel of polygon p_i'(j) that is still uncolored, take the color of the nearest colored pixel as its color value.
(4) Convert the optimization model above into the form of a non-linear least-squares problem and solve it with the damped least-squares method; after the positions on each intermediate frame of the subsequence of the key points marked on the first frame have been obtained, connect adjacent key points on each frame with line segments to obtain the character's contour. Fig. 5 shows the contour-tracing result of this implementation example.
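For illustration, a minimal sketch of step (4) with SciPy: the key points of the intermediate frames are flattened into one variable vector, the six weighted terms are stacked into a single residual vector, and scipy.optimize.least_squares with method='lm' performs the damped least-squares (Levenberg-Marquardt) solve. The residual helpers and the weight handling are assumptions that refer back to the sketches given above, not the exact formulation of the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_subsequence(first, last, n_inner, residual_fns, weights):
    """Solve the key-point positions on the intermediate frames of one subsequence.

    first, last  : (Nc, 2) key points marked on the two key frames
    n_inner      : number of intermediate frames
    residual_fns : callables, each mapping the full (Nf, Nc, 2) key-point array to a
                   1-D residual vector (length, angle, area, velocity, boundary, region)
    weights      : the weight coefficients w_L ... w_R
    """
    nc = first.shape[0]

    def unpack(x):
        inner = x.reshape(n_inner, nc, 2)
        return np.concatenate([first[None], inner, last[None]], axis=0)

    def residuals(x):
        pts = unpack(x)
        # sqrt(w) on each residual so the squared sum reproduces the weighted objective E
        return np.concatenate([np.sqrt(w) * fn(pts) for w, fn in zip(weights, residual_fns)])

    # Initial guess: the contour of the first key frame copied to every intermediate frame
    x0 = np.tile(first, (n_inner, 1, 1)).ravel()
    sol = least_squares(residuals, x0, method='lm')      # damped least squares (Levenberg-Marquardt)
    return unpack(sol.x)
```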

Claims (3)

1. A keyframe-based cartoon character contour tracing method, characterized by comprising the following steps:
1) selecting key frames from a cartoon motion sequence, the frame sequence between two adjacent key frames constituting a motion subsequence;
2) for each motion subsequence, placing key points on the contour of each part of the character on the first frame and on the last frame, and connecting adjacent key points with straight lines so that each part is represented as a polygon, the numbers of key points placed on the first frame and on the last frame being identical and in one-to-one correspondence, and copying the contour marked on the first frame to the other frames of the subsequence except the last frame;
3) for each motion subsequence, constructing an optimization model that contains temporal constraints and spatial constraints and takes all key points on the subsequence as optimization variables;
4) converting the optimization model into a non-linear least-squares problem and solving it with the damped least-squares method; after the positions of the key points on the intermediate frames of the subsequence have been solved, connecting adjacent key points on each frame with line segments to obtain the character's contour.
2. The keyframe-based cartoon character contour tracing method according to claim 1, characterized in that the optimization model for each motion subsequence, which takes the set of key points on the subsequence as its variables and contains temporal and spatial constraints, is constructed as follows: an objective function E is built for the motion subsequence between every two adjacent key frames; the objective function comprises six constraint terms, of which the first four are temporal constraints and the last two are spatial constraints:
E = w_L E_L + w_A E_A + w_S E_S + w_V E_V + w_E E_E + w_R E_R    (1)
where w_L, w_A, w_S, w_V, w_E and w_R are the weight coefficients of the respective terms; the definition and effect of each term are as follows:
the first constraint term is the length term E_L, which reflects the change in length of each line segment of the character's contour over the motion subsequence:
E_L = \sum_{i=1}^{N_f-1} \sum_{j=1}^{N_s} \left( \| s_i(j) \| - \| s_{i+1}(j) \| \right)^2    (2)
where s_i(j) is the j-th line segment of the character's contour in frame i, N_f is the number of frames of the motion subsequence, and N_s is the number of line segments on the character's contour;
the second constraint term is the angle term E_A, which reflects the change of the angle at each key point of the character's contour over the motion subsequence:
E_A = \sum_{i=1}^{N_f-1} \sum_{j=1}^{N_c} \left\| \left( c_i(a_j(1)) - 2 c_i(j) + c_i(a_j(2)) \right) - \left( c_{i+1}(a_j(1)) - 2 c_{i+1}(j) + c_{i+1}(a_j(2)) \right) \right\|^2    (3)
where c_i(j) is the j-th key point of the character's contour in frame i, a_j(1) and a_j(2) are the indices of the two key points adjacent to key point j, and N_c is the number of key points on the character's contour;
the third constraint term is the area term E_S, which reflects the change in area of each of the character's parts over the motion subsequence:
E_S = \sum_{i=1}^{N_f-1} \sum_{j=1}^{N_p} \left( f_S(p_i(j)) - f_S(p_{i+1}(j)) \right)^2    (4)
where p_i(j) is the polygon representing the j-th part of the character in frame i, f_S is the function that computes the area of a polygon, and N_p is the number of the character's parts;
the fourth constraint term is the velocity term E_V, which reflects the change, over the motion subsequence, of each contour key point's position offset between consecutive frames:
E_V = \sum_{i=1}^{N_f-2} \sum_{j=1}^{N_c} \left\| \left( c_{i+1}(j) - c_i(j) \right) - \left( c_{i+2}(j) - c_{i+1}(j) \right) \right\|^2    (5)
the fifth constraint term is the boundary term E_E, which keeps the character's contour close to the image edges on each frame of the motion subsequence; the boundary term is computed as follows:
first, for each frame image F_i of the motion subsequence, extract the image edges with the Canny edge detector to obtain an edge map E_i; then compute the Euclidean distance transform of each edge map E_i, yielding an H × W distance-transform matrix D_i, where H and W are the height and width of the image and D_i(x, y) is the Euclidean distance from the pixel at image coordinate (x, y) to the nearest edge pixel in E_i; the distance transform is then used to define the distance from a line segment of the character's contour to the image edges: sample t pixels at equal intervals along each line segment s_i(j); letting their image coordinates be {(x_k, y_k) | 1 ≤ k ≤ t}, the distance from the segment to the image edges is computed by the following formula:
f_{seg}(s_i(j)) = \sum_{k=1}^{t} D_i(x_k, y_k)    (6)
finally, the boundary term is defined as:
E_E = \sum_{i=2}^{N_f-1} \sum_{j=1}^{N_s} f_{seg}(s_i(j))^2    (7)
the sixth constraint term is the region term E_R, which keeps each of the character's parts in the correct image region in each frame of the motion subsequence; the region term is computed as follows:
first, the labels on the key frames are used to track the approximate position of each of the character's parts on the intermediate frames: from the part p_1(j) marked on the first key frame and the corresponding part marked on the last key frame, linear interpolation gives the rough shape of the character's j-th part in frame i, denoted p_i'(j), and the affine filling algorithm fills p_i'(j) with colors from the texture of p_1(j); the center cp_i'(j) of p_i'(j) is obtained by averaging the coordinates of its vertices, and within a search window centered at cp_i'(j) the optimal position cp_i^*(j) of cp_i'(j) in frame i is found, such that, after the center cp_i'(j) of part p_i'(j) is moved to cp_i^*(j), the color difference between p_i'(j) and the image region it covers is minimal;
then, for the polygon p_i(j) representing the j-th part of the character in frame i, the center cp_i(j) of the part is obtained by averaging the coordinates of its vertices, and the region term is defined by the following formula:
E_R = \sum_{i=2}^{N_f-1} \sum_{j=1}^{N_p} \left\| cp_i(j) - cp_i^*(j) \right\|^2    (8)
3. The keyframe-based cartoon character contour tracing method according to claim 2, characterized in that the affine filling algorithm that fills p_i'(j) with colors from the texture of p_1(j) comprises the following steps:
1) representing the polygons p_1(j) and p_i'(j) as 2 × M matrices A and B, where the k-th column of A and the k-th column of B hold the image coordinates of the k-th vertex of p_1(j) and of p_i'(j), respectively;
2) computing the affine transformation AT from polygon p_i'(j) to polygon p_1(j) by minimizing the following expression with linear least squares:
error = \left\| A - AT \cdot \begin{pmatrix} B \\ \mathbf{1} \end{pmatrix} \right\|    (9)
3) for each pixel px' of polygon p_i'(j), with its image coordinate written as a column vector, computing a pixel px by the following formula:
px = AT \cdot \begin{pmatrix} px' \\ 1 \end{pmatrix}    (10)
and, if the coordinate of px lies inside polygon p_1(j), assigning the color of the pixel of p_1(j) at that coordinate to px';
4) for each pixel of polygon p_i'(j) that is still uncolored, taking the color of the nearest colored pixel as its color value.
CN2010101311832A 2010-03-23 2010-03-23 Cartoon role contour tracing method based on key frame Expired - Fee Related CN101799927B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101311832A CN101799927B (en) 2010-03-23 2010-03-23 Cartoon role contour tracing method based on key frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101311832A CN101799927B (en) 2010-03-23 2010-03-23 Cartoon role contour tracing method based on key frame

Publications (2)

Publication Number Publication Date
CN101799927A true CN101799927A (en) 2010-08-11
CN101799927B CN101799927B (en) 2012-05-09

Family

ID=42595599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101311832A Expired - Fee Related CN101799927B (en) 2010-03-23 2010-03-23 Cartoon role contour tracing method based on key frame

Country Status (1)

Country Link
CN (1) CN101799927B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216948A (en) * 2008-01-14 2008-07-09 浙江大学 Cartoon animation fabrication method based on video extracting and reusing
CN101329768A (en) * 2008-07-29 2008-12-24 浙江大学 Method for rapidly modeling of urban street based on image sequence

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
C. Bregler et al., "Turning to the Masters: Motion Capturing Cartoons", ACM Transactions on Graphics, vol. 21, no. 3, 2002, pp. 399-407. *
A. Agarwala et al., "Keyframe-Based Tracking for Rotoscoping and Animation", ACM Transactions on Graphics, vol. 23, no. 3, 2004, pp. 584-591. *
H. Wang et al., "Cartoon Motion Capture by Shape Matching", Proceedings of the 10th Pacific Conference on Computer Graphics and Applications, 2002, pp. 454-456. *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930555A (en) * 2011-08-11 2013-02-13 深圳迈瑞生物医疗电子股份有限公司 Method and device for tracking interested areas in ultrasonic pictures
CN102930555B (en) * 2011-08-11 2016-09-14 深圳迈瑞生物医疗电子股份有限公司 A kind of method and device that area-of-interest in ultrasonoscopy is tracked
CN103284737A (en) * 2012-02-24 2013-09-11 株式会社东芝 Medical image processing apparatus
CN103198510A (en) * 2013-04-18 2013-07-10 清华大学 Data-driven model gradual deformation method
CN103198510B (en) * 2013-04-18 2015-09-30 清华大学 The model gradual deformation method of data-driven
CN106682595A (en) * 2016-12-14 2017-05-17 南方科技大学 Image content marking method and apparatus thereof
CN108389162A (en) * 2018-01-09 2018-08-10 浙江大学 A kind of image border holding filtering method based on adaptive neighborhood shape
CN110580715A (en) * 2019-08-06 2019-12-17 武汉大学 Image alignment method based on illumination constraint and grid deformation
CN110580715B (en) * 2019-08-06 2022-02-01 武汉大学 Image alignment method based on illumination constraint and grid deformation
CN110687929A (en) * 2019-10-10 2020-01-14 辽宁科技大学 Aircraft three-dimensional space target searching system based on monocular vision and motor imagery
CN110687929B (en) * 2019-10-10 2022-08-12 辽宁科技大学 Aircraft three-dimensional space target searching system based on monocular vision and motor imagery

Also Published As

Publication number Publication date
CN101799927B (en) 2012-05-09

Similar Documents

Publication Publication Date Title
CN101799927B (en) Cartoon role contour tracing method based on key frame
CN101833786B (en) Method and system for capturing and rebuilding three-dimensional model
CN100583158C (en) Cartoon animation fabrication method based on video extracting and reusing
CN105158272B (en) A kind of method for detecting textile defect
CN103389799B (en) A kind of opponent's fingertip motions track carries out the method for following the tracks of
CN102880866B (en) Method for extracting face features
CN101751569B (en) Character segmentation method for offline handwriting Uighur words
CN103136393B (en) A kind of areal coverage computing method based on stress and strain model
CN109816725A (en) A kind of monocular camera object pose estimation method and device based on deep learning
CN106251353A (en) Weak texture workpiece and the recognition detection method and system of three-dimensional pose thereof
CN104992441A (en) Real human body three-dimensional modeling method specific to personalized virtual fitting
CN102222361A (en) Method and system for capturing and reconstructing 3D model
CN105718989A (en) Bar counting method based on machine vision
CN102521869B (en) Three-dimensional model surface texture empty filling method guided by geometrical characteristic
CN105469084A (en) Rapid extraction method and system for target central point
CN102521884A (en) Three-dimensional roof reconstruction method based on LiDAR data and ortho images
CN101329768B (en) Method for synthesizing cartoon based on background view
CN101026759A (en) Visual tracking method and system based on particle filtering
CN102306386A (en) Method for quickly constructing third dimension tree model from single tree image
CN105069751A (en) Depth image missing data interpolation method
CN103443826A (en) Mesh animation
CN103218827A (en) Contour tracing method based on shape-transmitting united division and image-matching correction
CN103440667A (en) Automatic device for stably tracing moving targets under shielding states
CN104299246A (en) Production line object part motion detection and tracking method based on videos
CN102799646A (en) Multi-view video-oriented semantic object segmentation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120509

Termination date: 20180323