CN104574266B - Morphing based on contour line - Google Patents

Morphing based on contour line

Info

Publication number
CN104574266B
CN104574266B (application CN201410451363.7A)
Authority
CN
China
Prior art keywords
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410451363.7A
Other languages
Chinese (zh)
Other versions
CN104574266A (en
Inventor
陈鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201410451363.7A priority Critical patent/CN104574266B/en
Publication of CN104574266A publication Critical patent/CN104574266A/en
Application granted granted Critical
Publication of CN104574266B publication Critical patent/CN104574266B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20096Interactive definition of curve of interest

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses an image deformation method based on contour lines, comprising the following steps. First, edge detection is performed on the image to be deformed to obtain an edge image, and the edge curves on the edge image are taken as contour lines. The user selects the contour line to be deformed and drags it, and a group of one-to-one corresponding bases is established on the two-dimensional plane before and after the drag. Points on the contour line and in the deformation region have coordinates in the corresponding bases before and after the drag. The deformed contour line is found from the principle that the coordinates of each point on the pre-deformation contour line, scaled by the drag ratio, maximally approach the coordinates of the corresponding point in the post-drag bases, and the deformation region is thereby determined. Each point of the deformation region corresponds to a point in the pre-deformation picture found from the principle that its coordinates in the post-drag bases, scaled by the drag ratio, maximally approach the coordinates of the corresponding point in the pre-drag bases when not moved; the deformation region is filled with these points to obtain the deformed picture.

Description

Contour-line-based image deformation technique
Technical Field
The invention relates to the technical field of image processing, in particular to a contour line-based image deformation method which can perform deformation operation on an image.
Background
Image deformation technology has important applications in animation production, image special effects, medical image processing and other fields. By performing a deformation operation on an image, a user can modify unsatisfactory areas and obtain a pleasing result by exaggerating certain features. In addition, animation can be generated from a sequence of gradually changing pictures, and image deformation algorithms can also be used to generate special-effect images. In medical cosmetic treatment, a doctor can generate a target image through image deformation before an operation and carry out the operation according to it, thereby reducing the operation risk.
Existing image deformation algorithms do not perform image segmentation, so a user's deformation operation deforms the whole picture, and regions outside the target deformation region are inevitably distorted. The FFD technique is the earliest algorithm applied in the field of image deformation (see MacCracken R., Joy K. I.: Free-form deformations with lattices of arbitrary topology [C]. SIGGRAPH '96: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, 1996, pp. 181-188) and is widely used in software. The algorithm embeds a deformation mesh into the whole picture; the user changes the shape of the mesh by dragging it, and the movement of all points on the picture is then constrained by the mapping between the meshes before and after dragging, thereby deforming the picture. This method cannot control the specific deformation area: if the deformation area is large, the whole picture is seriously distorted. The MLS image warping algorithm attempts to reduce the impact of the warping operation on regions outside the warped region by setting weights (see Schaefer S., McPhail T., Warren J.: Image deformation using moving least squares [J]. ACM Transactions on Graphics (Proceedings of SIGGRAPH '06), 2006, 25(3): pp. 533-540), but this approach still has a significant impact on regions close to the warped region.
Existing deformation methods for a specific region can segment the image once the region is determined and produce a good deformation effect; a three-dimensional face reconstruction method (Chinese patent No. CN101751689B, publication date: 2012-02-22) is a practical application to face deformation. But such a method for a specific region cannot be applied to other regions of the picture. Furthermore, the deformation of a specific region deforms the whole region at once, so details inside the region cannot be fine-tuned.
In addition, what best reflects the morphological characteristics of an object in a picture is its contour line, and existing methods cannot accurately adjust the shape of the contour line to produce the deformation effect the user desires, which greatly degrades the user's deformation experience.
Disclosure of Invention
In order to solve the above problems, the invention provides an image deformation method based on contour lines. Edge curves divide the image according to its gray information, so an edge curve can be approximately regarded as a contour line. Since deforming the image is an interactive process, the user may also add or erase contour lines. By segmenting the image with contour lines, the influence of the deformation operation on regions outside the deformation region can be effectively eliminated. At the same time, the user's deformation operation acts directly on the contour line, so the shape the user desires can be obtained accurately.
The technical scheme of the invention is to provide an image deformation method based on contour lines, and the implementation steps are explained below.
Step 1: the color space of the image is converted from RGB to Lab.
Step 2: the image was filtered bilaterally in Lab color space using a bilateral filter.
Step 3: The color space of the filtered image is converted from Lab back to RGB.
Step 4: After converting the RGB image into a grayscale image, edge detection is performed with the Canny edge detection operator to obtain an edge image; the edge curves on the edge image are taken as contour lines, and the user may add or erase contour lines.
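A hedged sketch of the edge-detection step: the patent uses the Canny operator, but a plain Sobel gradient-magnitude threshold (an assumption standing in for full Canny, which adds non-maximum suppression and hysteresis) already shows how edge curves arise from the grayscale image:

```python
import numpy as np

def gradient_edges(gray, thresh=0.5):
    """Simplified edge detector: Sobel gradients + global threshold.

    Stands in for Canny (no non-maximum suppression or hysteresis);
    returns a boolean edge map whose curves serve as contour lines.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    pad = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.empty((h, w))
    gy = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 3, x:x + 3]
            gx[y, x] = (kx * patch).sum()
            gy[y, x] = (ky * patch).sum()
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros((h, w), dtype=bool)
    return mag > thresh * mag.max()
```

On a vertical intensity step, the detector marks the two pixel columns straddling the step and leaves flat areas untouched.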
Step 5: The user selects two points on the edge picture as anchor points to determine the contour line to be dragged.
Step 6: the user clicks a certain point on the outline to be a dragging point, the anchor point and the dragging point are connected pairwise to obtain a group of vectors, each vector and the respective orthogonal vector construct a two-dimensional plane base, and the base is recorded asWherein, KiIs a vector of the unit,is KiThe orthogonal unit vector of (2).
And 7: the user stretches or compresses the dragging point to obtain a target dragging point, another group of bases can be obtained by the same method as the step 6, and the ith base is marked asAnd in step 6And (7) corresponding.
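Steps 6 and 7 can be illustrated with a small numpy helper (the function name and the sample coordinates are hypothetical) that builds a base (K_i, K_i^⊥) from an anchor point and a dragging point, used once for the pre-drag and once for the post-drag configuration:

```python
import numpy as np

def make_base(anchor, point):
    """Base (K, K_perp) of the 2-D plane from the vector anchor -> point.

    K is the unit vector along the connection, K_perp its 90-degree
    counter-clockwise rotation; the anchor serves as the base origin M_i.
    """
    v = np.asarray(point, dtype=float) - np.asarray(anchor, dtype=float)
    k = v / np.linalg.norm(v)
    k_perp = np.array([-k[1], k[0]])  # orthogonal unit vector
    return k, k_perp

# Anchors A, B and dragging point C (coordinates are illustrative)
A, B, C = (0.0, 0.0), (4.0, 0.0), (2.0, 2.0)
K1, K1_perp = make_base(A, C)   # base before dragging
D = (2.0, 3.0)                  # target dragging point after the drag
I1, I1_perp = make_base(A, D)   # corresponding base after dragging
```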
Step 8: Let M_i be the origin of the i-th base. Any point P on the contour line before deformation has, in the i-th base (K_i, K_i^⊥) before dragging, the coordinates (K_i^T·(P − M_i), K_i^{⊥T}·(P − M_i)); the corresponding deformed point P' has, in the i-th base (I_i, I_i^⊥) after dragging, the coordinates (I_i^T·(P' − M_i), I_i^{⊥T}·(P' − M_i)), where P and P' are both two-dimensional column vectors. The requirement that the coordinates of P, scaled by the drag ratio, maximally approach the coordinates of P' is to find

r_i K_i^T·(P − M_i) ≈ I_i^T·(P' − M_i),  α r_i^⊥ K_i^{⊥T}·(P − M_i) ≈ I_i^{⊥T}·(P' − M_i)   (1)

where r_i is the drag ratio of the i-th vector, K_i and I_i denote the corresponding non-unitized vectors, r_i^⊥ is the scaling ratio between the vectors perpendicular to I_i and K_i, α is a user deformation parameter for adjusting the scaling in the perpendicular direction, and ω_i are the weights of the bases, given by formula (2), in which σ is a user deformation parameter for adjusting the smoothness after deformation. Since M_i is the origin of each base, finding the deformed position P' translates into solving the following least-squares problem:

arg min_{P'} Σ_i ω_i ( ‖ r_i K_i^T·(P − M_i) − I_i^T·(P' − M_i) ‖² + ‖ α r_i^⊥ K_i^{⊥T}·(P − M_i) − I_i^{⊥T}·(P' − M_i) ‖² )   (3)

After P' is obtained it is taken as a point on the dragged contour line; once all the points are obtained they are connected in sequence and smoothed to give the deformed contour line.
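The least-squares problem of step 8 has only the two components of P' as unknowns, so it can be solved by stacking two weighted residual rows per base and calling a standard solver. The sketch below is an assumed implementation (the function name and the tuple layout of `bases` are not from the patent); each row is scaled by √ω_i so that the squared residuals carry the weights of formula (3):

```python
import numpy as np

def deform_point(P, bases, alpha=1.0):
    """Solve for the dragged position P' of a contour point P.

    `bases` is a list of tuples (M, K, Kp, I, Ip, r, rp, w): base origin,
    pre-drag base vectors, post-drag base vectors, drag ratio,
    perpendicular drag ratio, and base weight, matching formula (3).
    """
    P = np.asarray(P, dtype=float)
    rows, rhs = [], []
    for (M, K, Kp, I, Ip, r, rp, w) in bases:
        M, K, Kp, I, Ip = (np.asarray(v, dtype=float) for v in (M, K, Kp, I, Ip))
        s = np.sqrt(w)
        # residual along the drag direction: r*K^T(P-M) - I^T(P'-M)
        rows.append(s * I)
        rhs.append(s * (r * (K @ (P - M)) + I @ M))
        # residual perpendicular to it: alpha*rp*Kp^T(P-M) - Ip^T(P'-M)
        rows.append(s * Ip)
        rhs.append(s * (alpha * rp * (Kp @ (P - M)) + Ip @ M))
    A = np.vstack(rows)
    b = np.asarray(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With a single base whose post-drag vectors equal the pre-drag ones and r = r^⊥ = 1, the solution is P itself; a drag ratio r = 2 along K doubles the coordinate along the drag direction while leaving the perpendicular coordinate unchanged.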
Step 9: The user adjusts the deformation parameters in formula (3) to adjust the shape of the contour line and obtain the desired deformed shape.
Step 10: determining a deformation area according to the dragged contour line, wherein the method for determining the deformation area comprises the following steps:
the method comprises the following steps: linearly connecting the two anchor points in the step 5 to obtain a straight line and a deformed contour line which form a closed area together, and taking the area as a deformed area;
the method 2 comprises the following steps: the user repeats steps 6-9 to obtain another deformed contour, wherein the anchor point is still the anchor point in step 5, and the contour and the previous contour form a closed region as a deformed region.
Step 11: Any point P' in the deformed region has, in the i-th base (I_i, I_i^⊥) after dragging, the coordinates (I_i^T·(P' − M_i), I_i^{⊥T}·(P' − M_i)). This point corresponds to a point P on the picture before deformation, whose coordinates in the i-th base (K_i, K_i^⊥) before dragging are (K_i^T·(P − M_i), K_i^{⊥T}·(P − M_i)). It is likewise required that the coordinates of P, scaled by the drag ratio, maximally approach the coordinates of P', namely:

r_i K_i^T·(P − M_i) ≈ I_i^T·(P' − M_i),  α r_i^⊥ K_i^{⊥T}·(P − M_i) ≈ I_i^{⊥T}·(P' − M_i)   (4)

The above can also be written as

arg min_P Σ_i ω_i' ( ‖ r_i K_i^T·(P − M_i) − I_i^T·(P' − M_i) ‖² + ‖ α r_i^⊥ K_i^{⊥T}·(P − M_i) − I_i^{⊥T}·(P' − M_i) ‖² )   (5)

where ω_i' are the weights of the bases after dragging and all other variables have the same meaning as in formulas (1), (2) and (3). Once P is determined, its pixel value is interpolated into P'; the deformed image is generated after pixel interpolation has been completed for every point in the deformed region. When the deformed region determined by the user's drag does not completely cover the region determined by the original contour line, a blank area appears on the image. Since the contour lines divide the image into regions according to the gray scale, color and texture within one region differ little. The algorithm therefore mean-fills the blank area with pixels of the same region: V_P is the pixel value of a point P in the blank area, P_i are the points in the 8-neighbourhood of P, V_{P_i} is the pixel value of P_i, and R_P is the region containing P; when P_i and P belong to the same region, V_{P_i} contributes to the weighted fill of V_P.
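The blank-area filling of step 11 can be sketched as follows: a minimal numpy version that averages same-region 8-neighbours. The patent speaks of weighted filling, so the uniform weights here are an assumption, as are the function name and the integer region-label encoding:

```python
import numpy as np

def fill_blank(img, blank, region):
    """Fill blank pixels with the mean of known 8-neighbours of the same region.

    img    : 2-D float array of pixel values
    blank  : 2-D bool array, True where the pixel must be filled
    region : 2-D int array, region label of each pixel
    Pixels are filled in passes, so values propagate inwards from the
    region border when the blank area is more than one pixel wide.
    """
    out = img.astype(float).copy()
    h, w = img.shape
    known = ~blank
    todo = list(zip(*np.nonzero(blank)))
    while todo:
        remaining = []
        progressed = False
        for y, x in todo:
            vals = [out[y + dy, x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)
                    and 0 <= y + dy < h and 0 <= x + dx < w
                    and known[y + dy, x + dx]
                    and region[y + dy, x + dx] == region[y, x]]
            if vals:
                out[y, x] = np.mean(vals)
                known[y, x] = True
                progressed = True
            else:
                remaining.append((y, x))
        if not progressed:  # no same-region neighbour anywhere: give up
            break
        todo = remaining
    return out
```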
Step 12: The user adjusts the parameters of step 9 to bring the color and texture in the deformation region to the clearest state.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention deforms by means of contour lines and can segment the image with them: the deformation region is determined by the deformed contour line, so non-deformed regions are unaffected, eliminating the serious defect of existing methods that distort the image outside the deformation region;
2. The invention deforms the contour line directly, and the user adjusts its shape through the deformation parameters, so the optimal deformed shape can be obtained, overcoming the inability of existing methods to adjust the deformed shape accurately.
Drawings
Fig. 1 is a flow chart of the execution of the method.
In fig. 2, (a) is an image to be deformed, and (b) is an edge image thereof.
Fig. 3 is a schematic diagram of the dragging principle with one dragging point.
Fig. 4 is a schematic diagram of the dragging principle with two dragging points.
Fig. 5(a), 5(b) and 5(c) show the shape of the contour line and the deformed image when σ is varied and the other deformation parameters are unchanged.
Fig. 6(a), 6(b) and 6(c) show the shape of the contour line and the deformed image when the parameter α is varied and the other parameters are unchanged.
Fig. 7(a), 7(b) and 7(c) show the shape of the contour line and the deformed image when the parameter β is varied and the other parameters are unchanged.
Fig. 8(a) shows the blank area to be mean-filled, the picture having been segmented with contour lines, and fig. 8(b) shows the result after mean-filling.
Fig. 9(a) shows the deformed outline, and fig. 9(b) shows the deformed image.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and a specific example. The invention deforms an image by letting the user drag a contour line; fig. 1 shows the specific implementation steps, which are demonstrated below using the picture in fig. 2(a).
Step 1: the color space of the picture is converted from RGB to Lab.
Step 2: bilateral filtering was performed on the pictures in Lab color space using a bilateral filter.
Step 3: The color space of the filtered picture is converted from Lab back to RGB.
Step 4: After the RGB picture is converted into a grayscale picture, a Canny edge detection operator is used to perform edge detection and obtain an edge image, as shown in fig. 2(b); the edge curves on the edge image are taken as contour lines.
Step 5: The user selects two points A and B on the edge picture as anchor points to determine the contour line to be dragged. The deformation principle is demonstrated with fig. 3 and fig. 4. Fig. 3 shows dragging the nose contour line of fig. 2(b): two points on the nose contour line determine the contour line to be dragged. Fig. 4 shows dragging the eye contour line of fig. 2(b): two points on the eye contour line determine it, and because the eye contour line is closed, the upper and lower contour lines can be dragged separately.
Step 6: the user clicks a certain point on the outline to be a dragging point, a dragging point C is selected in the drawing 3, and the anchor point and the dragging point are connected in pairs to obtain three vectorsFIG. 4 shows two dragging points C and D, which can be obtained Let KiIs a unit vector of each vector, and its orthogonal vectorBases which together form a two-dimensional plane, denoted
And 7: the user stretches or compresses the dragging point to obtain the target dragging point, such as point D in FIG. 3 and point E in FIG. 4And F, connecting each anchor point and the target dragging point in pairs, obtaining another group of bases by the same method as the step 6, and marking the ith base asAnd in step 6And (7) corresponding.
And 8: according to the requirement that the coordinate of any point P on the contour line before deformation in each base before dragging is zoomed in the dragging proportion and approaches to the coordinate of the corresponding point P 'after dragging to the maximum extent, namely the coordinate of the corresponding point P' after dragging in each base after dragging
And obtaining the position of the point after dragging. Wherein,is P in the radicalThe projected coordinates of (a) are,is P' in the radicalDue to projected coordinates in So solving P' translates to solving the following least squares problem:
in the formula,is the drag ratio of the ith vector,Kiis corresponding to KiThe non-unitized vector of (a) is,to correspond toOf the non-unitized vector, ri Is composed ofAnd KiVector in the vertical directionAnd Ki The scaling ratio between the two parts is changed,in FIG. 3, in the calculationAndcorresponding toWhen the temperature of the water is higher than the set temperature,are respectively asr1 =r1In calculatingCorresponding toWhen the temperature of the water is higher than the set temperature,(i.e., dragging point D toDistance and drag point C toThe ratio of the distances of). In the context of figure 4, it is shown,Miα is a user distortion parameter for adjusting the vertical scaling omegaiAre weights of the bases, here, we takeWherein, sigma is a user deformation parameter used for adjusting the smoothness after deformation. After all the points are obtained, all the adjacent points are connected in sequence by straight lines, and then the whole line is smoothed to obtain continuous contour lines after dragging, such as a nose contour line after dragging with A and B as end points in fig. 3 and two eye contour lines after dragging with A and B as end points in fig. 4.
Step 9: The user adjusts the shape of the contour line by adjusting the deformation parameters to obtain the desired deformed shape. The specific parameters are as follows:
parameter 1: σ in the formula (2). The user can adjust the smoothness of the contour line by using the parameter, and the effect of adjusting σ is as shown in fig. 5(a), 5(b), and 5(c) when the other parameters are the same, and the deformed pictures are obtained when σ is 1, 0.1, and 0.01 in the order of fig. 5(a), 5(b), and 5 (c);
parameter 2: α in equation (1) is set to enable the user to adjust the zoom ratio in the drag vertical direction, the user can adjust the width of the deformed shape by adjusting the parameter, the larger the value α is, the wider the deformed shape is, the same other parameters are, the effect when adjusting α is as shown in fig. 6(a), 6(b), and 6(c), and fig. 6(a), 6(b), and 6(c) are deformed pictures when α is 0, α is 1, and α is 3 in sequence;
parameter 3: to maximize the user's deformation experience, the present invention introduces the rigid concept of contour lines, FIG. 3 and FIG. 4The destination drag point is determined after it has been determined. Here, we will turn KiSlowly rotate toWhen P' in the formula (3) is calculated, the farther the position of P is from the drag point,and KiThe smaller the angle, and thus the smaller the amount of rotation of the point at the position farther from the trailing point, the more flexible the contour line is, the greater the amount of rotation of the point at the position closer to the trailing point, i.e. the rigidity is, for any point P, the corresponding point P is setIs arranged asThen
WhereinGamma is from KiToTurning angle, using β as the deformation parameter for the user, the greater β, the more flexible the contour line, the smaller β, the more rigid the contour line, when β is 0,the final position is always the one, flexibility is not exhibited, other parameters are the same, and the effect when the β is transformed is as shown in fig. 7(a), 7(b), and 7(c), and the deformed pictures when β is 0, β is 1, and β is 2 in the order of fig. 7(a), 7(b), and 7 (c).
Step 10: determining a deformation area according to the stretched contour line, directly connecting two anchor points A and B to obtain a line segment AB under the condition that the nose has a dragging point C in the figure 3, forming a closed area together with the deformed contour line, and taking the closed area as the deformation area of the nose; for the case of the eye in fig. 4 having two dragging points C and D, the two deformed contour lines are combined together to form a closed region as the deformed region of the eye.
Step 11: according to the principle that the coordinate of any point in the deformation area in each base after dragging is zoomed in the dragging proportion and approaches to the coordinate of the base before dragging to the maximum extent, namely the coordinate of any point in the deformation area in each base before dragging corresponds to the coordinate of any point in the deformation area when not dragged
The position of any point in the deformed area corresponding to the picture before deformation is obtained according to the step 8. In the above formulaAll other variables have the same meaning as step 8. When the deformed area determined by the user dragging the contour line cannot completely cover the area determined by the original contour line, a blank area appears on the image. As shown by the white regions 1 in fig. 8 (a). Since the contour lines divide the image into regions according to the gray scale, the color and texture in the same region have little difference. In this regard, the present algorithm averages the blank region with pixels of the same regionFilling, i.e.
Here, ,VPis the pixel value of any point P in the blank area, PiThe points in the 8 neighbourhood of P,is PiPixel value of RPIs the region where P is located. When P is presentiWhen P belongs to the same region, the sum is weighted and filled, and after the nose contour is compressed in fig. 8(a), the region 1 and the region 2 become the same region, and the region 1 is filled with the pixel mean value in the region 2, as a result, as shown in the right graph in fig. 8 (b).
Step 12: The user adjusts the parameters of step 9 to bring the color and texture in the deformed region to the clearest state, finally obtaining, after filling, the deformed picture of fig. 9(b) from the deformed contour line of fig. 9(a).
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (1)

1. A method for deforming an image based on contour lines, which can change the shape of an object in the image, is characterized by the following steps:
step 1, filtering a picture to be deformed, carrying out edge detection to obtain an edge image, taking an edge curve on the edge image as a contour line, and simultaneously adding or erasing the contour line on the edge image by a user;
step 2, clicking two anchor points and one dragging point by a user, determining a contour line to be dragged by the anchor points, connecting each anchor point and the dragging point in pairs to determine a group of vectors, and establishing a base on a two-dimensional plane by each vector and an orthogonal vector thereof;
step 3, the user drags the dragging point to obtain a target dragging point, and another group of bases is established by the target dragging point and the anchor point by the same method as the step 2 and is in one-to-one correspondence with each base in the step 2;
step 4, requiring that the coordinates of any point P on the pre-deformation contour line in each base (K_i, K_i^⊥) before dragging, after being scaled by the drag ratio, maximally approach the coordinates of the deformed point P' in each base (I_i, I_i^⊥) after dragging, that is
arg min_{P'} Σ_i ω_i ( ‖ r_i K_i^T·(P − M_i) − I_i^T·(P' − M_i) ‖² + ‖ α r_i^⊥ K_i^{⊥T}·(P − M_i) − I_i^{⊥T}·(P' − M_i) ‖² )
to determine the deformed contour line, where ω_i is the weight of each base before dragging, r_i is the drag ratio of the i-th vector, r_i^⊥ is the drag ratio in the direction perpendicular to the i-th vector, M_i is the origin of the i-th base, and α is the user deformation parameter adjusting the scaling in the perpendicular direction; thereby obtaining the deformed contour line and determining the deformation region from it;
step 5, requiring that the coordinates of any point in the deformation region in each base after dragging, scaled by the drag ratio, maximally approach the coordinates of the corresponding point in each base before dragging, that is
arg min_P Σ_i ω_i' ( ‖ r_i K_i^T·(P − M_i) − I_i^T·(P' − M_i) ‖² + ‖ α r_i^⊥ K_i^{⊥T}·(P − M_i) − I_i^{⊥T}·(P' − M_i) ‖² )
to find the corresponding point P in the pre-deformation image, where ω_i' is the weight of each base after dragging; the pixel at that position is interpolated into the deformation region to obtain the final deformed image.
CN201410451363.7A 2013-12-19 2014-09-07 Morphing based on contour line Active CN104574266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410451363.7A CN104574266B (en) 2013-12-19 2014-09-07 Morphing based on contour line

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2013106997955 2013-12-19
CN201310699795 2013-12-19
CN201410451363.7A CN104574266B (en) 2013-12-19 2014-09-07 Morphing based on contour line

Publications (2)

Publication Number Publication Date
CN104574266A CN104574266A (en) 2015-04-29
CN104574266B true CN104574266B (en) 2018-02-16

Family

ID=53090254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410451363.7A Active CN104574266B (en) 2013-12-19 2014-09-07 Morphing based on contour line

Country Status (1)

Country Link
CN (1) CN104574266B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105023284B (en) * 2015-07-16 2018-01-16 山东济宁如意毛纺织股份有限公司 A fabric texture-filling deformation method for two-dimensional virtual garment display
CN105824907A (en) * 2016-03-15 2016-08-03 西安建筑科技大学 Method and system for analyzing digital information of ancient frescoes
CN107169976A (en) * 2017-04-20 2017-09-15 温州市鹿城区中津先进科技研究院 The product contour line method for drafting of electric business platform exhibiting pictures big data
CN110134921B (en) * 2018-02-09 2020-12-04 北大方正集团有限公司 Method and device for checking whether font outline is deformed
CN110390630A (en) * 2018-04-17 2019-10-29 上海碧虎网络科技有限公司 Image distortion method, device, storage medium, display system and automobile
CN110443745B (en) * 2019-07-03 2024-03-19 平安科技(深圳)有限公司 Image generation method, device, computer equipment and storage medium
CN110443751B (en) * 2019-07-10 2022-09-23 广东智媒云图科技股份有限公司 Image deformation method, device and equipment based on drawing lines and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1617174A (en) * 2004-12-09 2005-05-18 上海交通大学 Human limb three-dimensional model building method based on image cutline
CN101504768A (en) * 2009-03-20 2009-08-12 陕西师范大学 Color image fast partition method based on deformation contour model and graph cut

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2438668B (en) * 2006-06-02 2008-07-30 Siemens Molecular Imaging Ltd Deformation of mask-based images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1617174A (en) * 2004-12-09 2005-05-18 上海交通大学 Human limb three-dimensional model building method based on image cutline
CN101504768A (en) * 2009-03-20 2009-08-12 陕西师范大学 Color image fast partition method based on deformation contour model and graph cut

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Personalized Face Modeling Technology; Yue Zhen; China Master's Theses Full-text Database, Information Science and Technology, No. 07, 2008-07-15; see pp. 23-39 *
Research on Leaf Image Extraction and Visualization of Virtual Plants; Li Yunfeng; China Doctoral Dissertations Full-text Database, Information Science and Technology, No. 01, 2007-01-15; see pp. 60-102 *

Also Published As

Publication number Publication date
CN104574266A (en) 2015-04-29

Similar Documents

Publication Publication Date Title
CN104574266B (en) Morphing based on contour line
JP4966431B2 (en) Image processing device
CN111194550B (en) Processing 3D video content
Wei et al. Fisheye video correction
US8830236B2 (en) Method for estimating a pose of an articulated object model
CN103826032B (en) Depth map post-processing method
CN103443826B (en) mesh animation
CN102436671B (en) Virtual viewpoint drawing method based on depth value non-linear transformation
Zeng et al. Region-based bas-relief generation from a single image
CN102592275A (en) Virtual viewpoint rendering method
TW201101226A (en) Image processing method and related apparatus for rendering two-dimensional image to show three-dimensional effect
Pan et al. Sketch-based skeleton-driven 2D animation and motion capture
WO2015188666A1 (en) Three-dimensional video filtering method and device
CN107105214B (en) A kind of 3 d video images method for relocating
CN117011493B (en) Three-dimensional face reconstruction method, device and equipment based on symbol distance function representation
Liu et al. A new model-based method for multi-view human body tracking and its application to view transfer in image-based rendering
CN109461197B (en) Cloud real-time drawing optimization method based on spherical UV and re-projection
CN106780383A (en) The depth image enhancement method of TOF camera
CN104822030B (en) A kind of squaring antidote of irregular video based on anamorphose
CN104978707A (en) Image deformation technique based on contour
Wang et al. Compressibility-aware media retargeting with structure preserving
Koh et al. View-dependent adaptive cloth simulation
US20210074076A1 (en) Method and system of rendering a 3d image for automated facial morphing
JP7155670B2 (en) Medical image processing apparatus, medical image processing method, program, and data creation method
Islam et al. Warping-based stereoscopic 3d video retargeting with depth remapping

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant