CN107154030B - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN107154030B
Authority
CN
China
Prior art keywords: deformation, image, pixel, point, determining
Prior art date
Legal status
Active
Application number
CN201710348772.8A
Other languages
Chinese (zh)
Other versions
CN107154030A
Inventor
吴磊 (Wu Lei)
Current Assignee
Tencent Technology Shanghai Co Ltd
Original Assignee
Tencent Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shanghai Co Ltd
Priority to CN201710348772.8A
Publication of CN107154030A
Application granted
Publication of CN107154030B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses an image processing method and device, an electronic device and a storage medium. The method includes: acquiring an original image; determining a deformation region based on feature points of a first target object to be deformed in the original image, where the feature points reflect the contour and/or texture features of the first target object; selecting a plurality of pixel points from the deformation region as fixed deformation constraint source points; determining target points based on the deformation constraint source points and a deformation strength, where each target point is a pixel point of the deformed image formed after the first target object is deformed, and the pixel parameter of a target point equals the pixel parameter of its deformation constraint source point; determining the deformed pixel parameter of each pixel in the deformation region based on the original pixel parameter of each pixel in the deformation region and the target points, thereby obtaining a deformation region image; and fusing the deformation region image into the deformation region of the original image to obtain a deformed image.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of information technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Image processing includes deforming partial regions of an existing image, for example for image beautification. A common case is face beautification, which may involve deforming a target organ during the beautification process. Existing face beautification processing generally suffers from a large amount of computation, high complexity, long response delay, low processing efficiency, and poor quality of the processed image.
Disclosure of Invention
In view of this, embodiments of the present invention are expected to provide an image processing method and apparatus, an electronic device, and a storage medium that solve the problems of poor image quality from image processing and/or the large amount of computation and high complexity of the image processing procedure.
In order to achieve the above purpose, the technical scheme of the invention is realized as follows:
a first aspect of an embodiment of the present invention provides an image processing method, including:
acquiring an original image;
determining a deformation region based on feature points of a first target object to be deformed in an original image, wherein the feature points are used for reflecting the outline and/or texture features of the first target object;
selecting a plurality of pixel points from the deformation area as fixed deformation constraint source points;
determining a target point based on the deformation constraint source points and a deformation strength, wherein the target point is a pixel point of the deformed image formed after the first target object is deformed, and the pixel parameter of the target point is equal to the pixel parameter of its deformation constraint source point;
determining pixel parameters of each pixel in the deformation area after deformation based on original pixel parameters of each pixel in the deformation area and the target point, so as to obtain a deformation area image;
and fusing the deformation region image into the deformation region of the original image to obtain a deformed image.
Based on the above scheme, the determining the deformation area based on the feature points of the first target object to be deformed in the original image includes:
acquiring a plurality of characteristic points of the first target object;
selecting an intermediate feature point of the first target object as a center point of the deformation region according to coordinate parameters of the feature points in the original image, wherein the intermediate feature point is the feature point positioned at the most intermediate position among the feature points;
Obtaining deformation size parameters;
and determining the deformation area based on the deformation size parameter and the center point.
Based on the above scheme, the obtaining the deformation dimension parameter includes:
determining a first deformation radius according to the edge characteristic points and the center characteristic points of the first target object, wherein the edge characteristic points are the characteristic points positioned at edge positions in a plurality of characteristic points;
determining a second deformation radius according to the first deformation radius and the first adjustment parameter;
the determining the deformation region based on the deformation dimension parameter and the center point includes:
the deformed region is determined based on the second deformed radius and the center point.
Based on the above scheme, the method further comprises:
determining a first deformation strength according to the characteristic points of a second target object where the first target object is located;
determining a second deformation strength according to the first deformation strength and the second adjustment parameter;
the determining the target point based on the deformation constraint source point and the deformation strength comprises the following steps:
and determining a target point based on the deformation constraint source points and the second deformation strength.
Based on the above scheme, the second target object is a face; the first target object is a nose.
Based on the above scheme, the method further comprises:
acquiring the circumscribed rectangle of the deformation area;
intercepting the original image according to the circumscribed rectangle to obtain an intercepted image;
converting the intercepted image based on the deformation area to obtain a mask image, wherein the pixel value of a pixel point in the deformation area of the mask image is a first value, and the pixel value of a pixel point outside the deformation area is a second value;
the determining the pixel parameter after each pixel in the deformation area is deformed based on the original pixel parameter of each pixel in the deformation area and the target point, thereby obtaining a deformation area image, including:
and obtaining the deformation region image based on the mask image, the original pixel parameters of the intercepted image and the target point.
Based on the above scheme, the method further comprises:
performing fuzzy processing on the mask map to obtain a gradient map of the intercepted image;
the fusing the deformed region image into the deformed region of the original image to obtain a deformed image, including:
acquiring fusion weight parameters based on the gradient map;
and fusing the original image and the deformation region image based on the fusion weight parameters.
Based on the above scheme, the obtaining the deformation region image based on the mask map, the original pixel parameters of the intercepted image and the target point includes:
determining the pixels to be processed of the intercepted image based on the mask map;
and acquiring the deformation region image based on the original pixel parameters of the pixels to be processed and the target point.
A second aspect of an embodiment of the present invention provides an image processing apparatus including:
a first acquisition unit configured to acquire an original image;
the first determining unit is used for determining a deformation area based on characteristic points of a first target object to be deformed in an original image, wherein the characteristic points are used for reflecting the outline and/or texture characteristics of the first target object;
a selecting unit, configured to select a plurality of pixel points from the deformation region as fixed deformation constraint source points;
the second determining unit is used for determining a target point based on the deformation constraint source points and the deformation strength, wherein the target point is a pixel point of the deformed image formed after the first target object is deformed, and the pixel parameter of the target point is equal to the pixel parameter of its deformation constraint source point;
the forming unit is used for determining pixel parameters of each pixel in the deformation area after deformation based on original pixel parameters of each pixel in the deformation area and the target point, so that a deformation area image is obtained;
And the fusion unit is used for fusing the deformation area image into the deformation area of the original image to obtain a deformed image.
Based on the above-mentioned scheme, the first determining unit is configured to obtain a plurality of the feature points of the first target object; selecting an intermediate feature point of the first target object as a center point of the deformation region according to coordinate parameters of the feature points in the original image, wherein the intermediate feature point is the feature point positioned at the most intermediate position among the feature points; obtaining deformation size parameters; and determining the deformation area based on the deformation size parameter and the center point.
Based on the above scheme, the first determining unit is configured to determine a first deformation radius according to an edge feature point of the first target object and the center feature point, where the edge feature point is the feature point located at an edge position among the feature points; determining a second deformation radius according to the first deformation radius and the first adjustment parameter; the deformed region is determined based on the second deformed radius and the center point.
Based on the above scheme, the device further comprises:
the third determining unit is used for determining the first deformation strength according to the characteristic points of the second target object where the first target object is located; determining a second deformation strength according to the first deformation strength and the second adjustment parameter;
the second determining unit is specifically configured to determine a target point based on the deformation constraint source points and the second deformation strength.
Based on the above scheme, the second target object is a face; the first target object is a nose.
Based on the above scheme, the device further comprises:
a second obtaining unit, configured to obtain the circumscribed rectangle of the deformation region;
the intercepting unit is used for intercepting the original image according to the circumscribed rectangle to obtain an intercepted image;
the conversion unit is used for converting the intercepted image based on the deformation area to obtain a mask image, wherein the pixel value of a pixel point of the mask image positioned in the deformation area is a first value, and the pixel value of a pixel point outside the deformation area is a second value;
the forming unit is used for obtaining the deformation region image based on the mask map, the original pixel parameters of the intercepted image and the target point.
Based on the above scheme, the device further comprises:
the blurring processing unit is used for blurring the mask image to obtain a gradient image of the intercepted image;
the fusion unit is used for acquiring fusion weight parameters based on the gradient map, and fusing the original image and the deformation region image based on the fusion weight parameters.
Based on the above scheme, the forming unit is specifically configured to determine, based on the mask map, the pixels to be processed of the intercepted image, and to acquire the deformation region image based on the original pixel parameters of the pixels to be processed and the target point.
A third aspect of an embodiment of the present invention provides an electronic device, including:
a memory for storing a computer program;
and the processor is connected with the memory and is used for realizing the image processing method provided by any one of the previous items by executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer storage medium storing a computer program capable of implementing the image processing method provided in any one of the foregoing aspects, when the computer program is executed by a processor.
When the image processing method and apparatus, electronic device, and storage medium of these embodiments perform image processing, the feature points of the first target object to be deformed are first obtained, and the deformation range is determined based on these feature points, so the deformation region containing the first target object can be located accurately. Deformation is then performed within the deformation region using the correspondence between the deformation constraint source points and the target points, the deformation region image is obtained, and the deformed image is fused into the original image. Compared with image processing that cannot accurately locate the deformation region, a better image effect is obtained; the undeformed parts keep their original appearance during processing, and the computational complexity is low, so the resource consumption of the image processing device is small.
Drawings
Fig. 1 is a flowchart of a first image processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating how the deformation region is determined according to an embodiment of the present invention;
FIG. 3 is a schematic view of a nose feature point provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a change in deformation area based on user input according to an embodiment of the present invention;
fig. 5 is a schematic display diagram of an image processing method according to an embodiment of the present invention;
FIG. 6 is a schematic view of the image of FIG. 5 after nose thinning;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 9 is a flowchart of another image processing method according to an embodiment of the present invention;
fig. 10 is a schematic diagram of the evolution between images according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further elaborated below by referring to the drawings in the specification and the specific embodiments.
As shown in fig. 1, the present embodiment provides an image processing method, including:
step S110: acquiring an original image;
step S120: determining a deformation region based on feature points of a first target object to be deformed in an original image, wherein the feature points are used for reflecting the outline and/or texture features of the first target object;
Step S130: selecting a plurality of pixel points from the deformation area as fixed deformation constraint source points;
step S140: determining a target point based on the deformation constraint source points and a deformation strength, wherein the target point is a pixel point of the deformed image formed after the first target object is deformed, and the pixel parameter of the target point is equal to the pixel parameter of its deformation constraint source point;
step S150: determining pixel parameters of each pixel in the deformation area after deformation based on original pixel parameters of each pixel in the deformation area and the target point, so as to obtain a deformation area image;
step S160: and fusing the deformation region image into the deformation region of the original image to obtain a deformed image.
The image processing method provided by this embodiment can be applied to various image processing devices, such as mobile phones, tablet computers, wearable devices, or other electronic devices running image processing applications.
Acquiring the original image in step S110 may include: capturing the original image through a camera, receiving it from another electronic device through a communication interface, or reading it from a local storage medium of the image processing device. The original image in this embodiment includes the pixel parameters of each pixel point, which may include color values and a transparency value; the color values may include the values of the three primary colors red (r), green (g) and blue (b).
In step S120, the feature points of the first target object to be deformed are extracted, and the deformation region is obtained from them. The feature points of the first target object may be pixel points that characterize the contour and/or texture of the first target object, extracted with any of various feature point extraction methods; for example, the feature points may be extracted with the FAST feature extraction algorithm. FAST is short for Features from Accelerated Segment Test, but the extraction is not limited to this algorithm. In some embodiments, a feature point is a pixel point whose gray value differs from the surrounding gray values by a preset range.
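As an illustration only, the following minimal sketch extracts such feature points with OpenCV's FAST detector; the image path, the threshold value, and the use of OpenCV are assumptions for illustration, since the embodiment only requires some feature point extraction method:

    # Hedged sketch: FAST corner detection as one possible feature extractor.
    import cv2

    img = cv2.imread("imgA.jpg")                         # placeholder path for the original image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # FAST flags pixels whose gray value differs sufficiently from the pixels
    # on a circle around them, matching the feature-point notion above.
    fast = cv2.FastFeatureDetector_create(threshold=20)  # threshold is illustrative
    keypoints = fast.detect(gray, None)
    feature_points = [kp.pt for kp in keypoints]         # (x, y) pixel coordinates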
The feature points of the first target object include feature points characterizing the edge positions of the first target object and feature points at intermediate positions. In this embodiment, the edges of the deformation region may be determined at least from the feature points at the edge positions; in some embodiments, the deformation region needs to contain at least all feature points of the first target object.
On the one hand, when determining the deformation region in this embodiment, the feature points of the first target object are acquired and the deformation region is determined based on their distribution positions or pixel coordinates. Compared with random cropping, or cropping based on a region outlined by the user as in the prior art, an accurate deformation region is obtained even when the first target object is in different postures or forms, so images in different postures all obtain a good processing effect.
On the other hand, since the deformation region is determined from the feature points of the first target object, even if the first target object presents different postures in the original image due to the shooting angle and the like, the image region containing it can be adaptively extracted as the deformation region, improving the stability and accuracy of the deformation.
In some embodiments, step S130 may determine the deformation constraint source points based on the deformation region; the number and positions of the deformation constraint source points may be determined according to the area of the deformation region. In some embodiments, the deformation region may be a region of the original image that contains only the first target object. For example, if the first target object is a nose, the deformation region may be a region that contains the entire nose and only the nose; if the first target object is an eye, the deformation region may be a region of the original image that contains only the eye.
For example, if the deformation region is a circular region, the deformation constraint source points can be selected from its periphery at equal angular intervals set by a preset angle. When the area of the deformation region is larger than a first area, or the deformation radius is larger than a first radius, the deformation constraint source points are taken at equal intervals of a first preset angle; otherwise they are taken at intervals of a second preset angle, where the first preset angle is smaller than the second preset angle. Of course, the circular deformation region may also be an elliptical region.
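A minimal sketch of this source-point selection follows; the concrete 30 and 45 degree steps and the radius threshold are assumptions, since the embodiment only fixes that the larger region uses the smaller angular step:

    import math

    def constraint_source_points(cx, cy, radius, first_radius=100.0):
        # Denser sampling (smaller angular step) for a large region, coarser
        # otherwise; 30/45 degrees and first_radius are illustrative values.
        step = 30.0 if radius > first_radius else 45.0
        points = []
        angle = 0.0
        while angle < 360.0:
            rad = math.radians(angle)
            points.append((cx + radius * math.cos(rad),
                           cy + radius * math.sin(rad)))
            angle += step
        return points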
In step S140 the target point is determined from the deformation constraint source points and the deformation strength, and in step S150 the deformation of the target object is then performed accordingly. Since the deformation region is located from the feature points and the deformation only combines the coordinates of the deformation constraint source points and the target points, the algorithm is simple and its computational complexity low, so the image processing method of this embodiment is simple and convenient to implement.
The deformation strength may be a parameter indicating the degree to which the first target object is deformed. It may include a scaling ratio of the first target object, an enlargement or reduction value for the first target object, a radian adjustment value for part of the arc of the first target object's contour, and the like. In short, the specific parameters corresponding to the deformation strength are various and are not limited to any of the above.
When the deformation strength is a scaling factor: if the first target object is a nose, it is the factor by which the nose is scaled; if the first target object is a face, it is the factor by which the face is scaled.
In this embodiment, each target point takes the pixel parameter of its deformation constraint source point; the source point in the original image corresponds to the target point after deformation. The pixel parameters here may include color values, transparency values, and the like.
In step S150 the pixel parameters of each pixel within the deformation region are redetermined based on the original pixel parameters and the target points, thereby obtaining the image of the region after deformation, which this embodiment calls the deformation region image.
In step S150 only the deformation region is deformed; after the deformation region image is obtained, it is fused into the position of the deformation region in the original image. This avoids any graphics processing on regions of the original image other than the deformation region, so the undeformed parts of the original image keep their original presentation, protecting the graphic objects other than the first target object.
In step S160 the original image and the deformation region image are fused to obtain the deformed image. Step S160 may simply replace the deformation region of the original image with the deformation region image, which directly yields the deformed image. However, such replacement can leave a sharp seam at the replacement boundary, so in this embodiment, after the deformation region of the original image is replaced by the deformation region image, an edge blurring process is performed so that the transition across the edge region is gentle, further improving the image quality of the deformed image.
Optionally, as shown in fig. 2, the S120 may include:
step S121: acquiring a plurality of characteristic points of the first target object;
step S122: selecting an intermediate feature point of the first target object as a center point of the deformation region according to coordinate parameters of the feature points in the original image, wherein the intermediate feature point is the feature point positioned at the most intermediate position among the feature points;
step S123: obtaining deformation size parameters;
step S124: and determining the deformation area based on the deformation size parameter and the center point.
Obtaining the feature points in this embodiment includes acquiring the pixel coordinates of each feature point of the first target object in the original image. After the pixel coordinates are obtained, the feature point located at the most intermediate position among the feature points, called the intermediate feature point, can be selected based on these coordinates; in this embodiment it is taken as the center point of the deformation region. For example, if the deformation region is a circular region, the pixel coordinates of the intermediate feature point are the center of the circle; if the deformation region is a rectangular region, they are the center point of the rectangle; and if the deformation region is an elliptical region, they are the center of the ellipse.
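As an illustration, one way to pick the intermediate feature point is to take the feature point nearest the centroid of all feature points; the centroid criterion is an assumption, since the text only asks for the most intermediate point:

    import numpy as np

    def intermediate_feature_point(points):
        # points: list of (x, y) feature point coordinates.
        pts = np.asarray(points, dtype=np.float64)
        centroid = pts.mean(axis=0)
        idx = np.argmin(np.linalg.norm(pts - centroid, axis=1))
        return tuple(pts[idx])        # center point of the deformation region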
In fig. 3, the first target object is the nose of a face; the feature points of the nose describe the contour of the nose in the face image. They include edge feature points located at the edge of the nose and an intermediate feature point located at the middle of the nose; typically the edge feature points surround the intermediate feature point. The dotted circle shown in fig. 3 may be a circular deformation region centered on the intermediate feature point.
In this embodiment, the preset shape of the deformed region corresponds to the first target object. If the current first target object is a nose or eyes, etc., the preset shape of the deformation area is a circle; if the first target object is a face or a lip, the preset shape of the deformation area is an ellipse or the like. Of course, the above is merely an example, and the specific implementation is not limited to these cases.
The deformation strength may be determined by the electronic device according to a preset rule; for example, when deforming an organ of a face, the preset rule may be a deformation rule set according to human aesthetics.
The deformation size parameter here may be any parameter that describes the size of the deformation region, for example the radius of a circular deformation region, the side lengths of a rectangular deformation region, the pixel coordinates of the vertex pixels of a rectangular deformation region, or the values of the major and minor axes of an elliptical deformation region.
After the deformation size parameter is determined in step S123, the deformation region can evidently be determined precisely from it and the center point, giving a deformation region of high accuracy.
Alternatively, the step S123 may include:
determining a first deformation radius according to the edge characteristic points and the center characteristic points of the first target object, wherein the edge characteristic points are the characteristic points positioned at edge positions in a plurality of characteristic points;
determining a second deformation radius according to the first deformation radius and the first adjustment parameter;
the step S124 may include:
the deformed region is determined based on the second deformed radius and the center point.
The first adjustment parameter in this embodiment may be input based on a user instruction.
After the center point and the first deformation radius are determined, as shown in fig. 4, a reference deformation region corresponding to the first deformation radius is displayed superimposed on the original image shown by the electronic device, together with an adjustment control. In fig. 4 an adjustment bar is displayed, consisting of a guide rail and a slider on the rail; the user moves the slider along the rail by touch or mouse operation, and the electronic device determines the first adjustment parameter (or the second adjustment parameter) from the movement of the slider. For example, the scaling of the first deformation radius is determined from the movement amount and/or the movement direction of the slider: the movement amount determines a scaling increment B, and the movement direction determines its sign. When the movement direction is the first direction, the first adjustment parameter A = 1 + B; when it is the second direction, A = 1 - B. The first direction is opposite to the second direction, and the slider's initial position is the middle of the rail. Of course this is only an example; in a specific implementation the first adjustment parameter may also be formed from a movement operation in which the user drags the edge of the reference deformation region, for example clicking the edge and pushing it on the screen, the first adjustment parameter being determined from the push amount when pushing stops, and the second deformation radius determined from it.
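The slider-to-parameter mapping A = 1 +/- B can be sketched as follows; the direction encoding and the pixels-to-increment gain are assumed UI details:

    def first_adjustment_parameter(move_amount, move_direction, gain=0.01):
        # B is the scaling increment derived from the slider movement amount;
        # 'gain' (pixels-to-increment factor) is an assumed UI constant.
        B = move_amount * gain
        return 1.0 + B if move_direction == "first" else 1.0 - B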
The adjustment bar in fig. 4 is a second adjustment control that can be used to adjust the deformation strength. Fig. 4 also shows a first adjustment control for adjusting the extent of the deformation region.
In the left-hand view of fig. 4 the slider is at the far left of the rail, and the dashed circle indicates the deformation region determined from the first deformation radius; in the right-hand view the slider has moved to the middle of the rail, which can be understood as an increased deformation strength. The first adjustment control consists of sub-controls of different degrees: in the left view its fourth sub-control is selected and the corresponding deformation region is the dashed circle, while in the right view its fifth sub-control is selected and the resulting deformation region is determined from the second deformation radius. Evidently the area of the deformation region in the left view is smaller than that in the right view.
In some embodiments, the method further comprises:
determining a first deformation strength according to the characteristic points of a second target object where the first target object is located;
Determining a second deformation strength according to the first deformation strength and the second adjustment parameter;
the step S140 may include:
and determining a target point based on the deformation constraint source points and the second deformation strength.
In some embodiments, the deformation strength may be specified directly. In this embodiment, the electronic device first recommends a deformation strength, the first deformation strength, according to a preset rule; then, according to the user's individual requirement, it is adjusted to obtain the second deformation strength.
The second adjustment parameter here may be an adjustment parameter determined in various ways.
For example, if the current approach is used to perform nose reduction, the deformation strength may be used to determine the ratio of nose reduction, the area of the reduced nose, and the like.
In this embodiment, the target point corresponding to the deformation constraint source point is determined based on the deformation strength.
For example, the second target object is a human face and the first target object is a nose. Based on popular aesthetics relating face size to nose size, the electronic device can give a recommended deformation strength, that is, the first deformation strength, according to a preset proportional relationship. Some users then have their own preferences for how strong the deformation should be, so in this embodiment the first deformation strength is adjusted based on the second adjustment parameter to obtain the second deformation strength.
In some embodiments, an optimal deformation strength and a recommended range may be determined from the preset proportional relationship between the face and the nose; typically the optimal deformation strength is the median of the recommended range. The optimal deformation strength is used as the first deformation strength, and the second adjustment parameter then adjusts it within the recommended range. This prevents a user who is unfamiliar with the operation from making the nose too large or too small and producing a photo that does not accord with human aesthetics.
For fun-effect photos the method applies equally: a recommended strength and a recommended range for the amusing deformation serve as the first deformation strength, and the second adjustment parameter varies the recommended strength within that range, ensuring the deformed image is deformed enough to form an amusing image.
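A minimal sketch of this strength adjustment follows; the multiplicative combination and the clamping to the recommended range are assumptions consistent with the description above:

    def second_deformation_strength(first_strength, second_adjustment,
                                    rec_min, rec_max):
        # Adjust the recommended (first) strength by the user's second
        # adjustment parameter, then clamp to the recommended range so the
        # result cannot leave the aesthetically plausible interval.
        return min(max(first_strength * second_adjustment, rec_min), rec_max)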
In a specific implementation, if the second target object is a face, the first target object may be any organ of the face; it is not limited to the nose and may be an organ or part such as the eyes, the lips, or the forehead.
Optionally, the method further comprises:
acquiring the circumscribed rectangle of the deformation area;
intercepting the original image according to the circumscribed rectangle to obtain an intercepted image;
converting the intercepted image based on the deformation area to obtain a mask image, wherein the pixel value of a pixel point in the deformation area of the mask image is a first value, and the pixel value of a pixel point outside the deformation area is a second value;
the step S150 may include:
and obtaining the deformation region image based on the mask image, the original pixel parameters of the intercepted image and the target point.
As shown in fig. 10, the original image (imgA) is cropped: the region where the deformation region of imgA is located is cut out, forming the intercepted image (imgI).
In order to keep the portion of the original image outside the deformation region unchanged, in this embodiment only the intercepted image is subjected to subsequent processing, forming the deformation region image that replaces the deformation region of the original image.
As shown in fig. 10, the mask map (imgM) may be a binarized image in this embodiment: the gray values of its pixels take only the first value and the second value. In this embodiment the gray value of the pixels inside the deformation region of the mask map may be 255 and that of the pixels outside it 0. Thus, once the electronic device obtains the mask map, it knows which pixels of the intercepted image need their pixel parameters converted.
Of course, the first value and the second value may be different, and are not limited to 255 and 0, and may be 0 and 1, etc.
Optionally, the method further comprises: performing fuzzy processing on the mask map to obtain a gradient map of the intercepted image;
the step S160 may include: acquiring fusion weight parameters based on the gradient map;
and fusing the original image and the deformation region image based on the fusion weight parameters.
A gradient map is also obtained in this embodiment, derived from the mask map. For example, with the gray values of pixels inside the deformation region set to 255 and those outside set to 0, the blurring in this embodiment makes the gray values of pixels at the edge of the deformation region change gradually, so that at the edge the gray value progresses smoothly from 255 inside the region to 0 outside it.
In this embodiment a weight parameter is determined from the gray value of each pixel in the gradient map. For example, if the gray value at pixel coordinate (a, b) in the gradient map is c, then c determines the weights with which the pixel (a, b) of the deformation region image and the corresponding pixel of the original image are fused. The weight parameters may include a first weight parameter, the influence of the original pixel parameter of the corresponding pixel of the original image in the fusion, and a second weight parameter, the influence of the pixel parameter of the corresponding pixel of the deformation region image. The pixel parameter of the corresponding pixel of the fused image is the pixel parameter of the original image multiplied by the first weight parameter, plus the pixel parameter of the corresponding pixel of the deformation region image multiplied by the second weight parameter. Of course, this is only an example and the fusion is not limited to it.
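A sketch of the gradient-map generation follows; Gaussian blur and the kernel size are assumed choices of the blurring, and the weight normalization mirrors the weighting just described:

    import cv2

    def gradient_map(imgM, ksize=15):
        # Blur the binary mask (0/255) so the region edge becomes a gradual
        # ramp; ksize is illustrative and must be odd for GaussianBlur.
        return cv2.GaussianBlur(imgM, (ksize, ksize), 0)

    # Per-pixel fusion weights derived from the gradient map imgAlpha with
    # gray value G: second weight (deformed image) = G / 255,
    # first weight (original image) = 1 - G / 255.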
Optionally, the obtaining the deformation region image based on the mask map, the original pixel parameters of the intercepted image and the target point includes:
determining the pixels to be processed of the intercepted image based on the mask map;
and acquiring the deformation area image based on the original pixel parameters of the pixel to be processed and the target point.
In this embodiment the mask map is used not only for determining the weight parameters but also for delineating the pixel points whose pixel parameters are to be converted; one mask map thus serves two functions, reusing data and simplifying the processing flow of the device.
Fig. 5 is a schematic view of a face image before the nose is thinned; the broken line at the nose in fig. 5 represents the contour of the nose after thinning, and the solid line represents the contour before thinning.
Fig. 6 is a schematic view of the face image after the nose is thinned.
As shown in fig. 7, the present embodiment provides an image processing apparatus including:
a first acquisition unit 110 for acquiring an original image;
a first determining unit 120, configured to determine a deformation area based on feature points of a first target object to be deformed in an original image, where the feature points are used to embody a contour and/or texture feature of the first target object;
A selecting unit 130, configured to select a plurality of pixel points from the deformation region as fixed deformation constraint source points;
a second determining unit 140, configured to determine a target point based on the deformation constraint source points and the deformation strength, where the target point is a pixel point of the deformed image formed after the first target object is deformed, and the pixel parameter of the target point is equal to the pixel parameter of its deformation constraint source point;
a forming unit 150, configured to determine pixel parameters after each pixel in the deformation area is deformed based on original pixel parameters of each pixel in the deformation area and the target point, so as to obtain a deformation area image;
and a fusion unit 160, configured to fuse the deformed region image into the deformed region of the original image, so as to obtain a deformed image.
The image processing apparatus provided in this embodiment may be applied to various image processing devices. The first acquisition unit 110 may include a communication interface for receiving the original image from a peripheral, and may also include a camera that can capture the original image directly.
The first determining unit 120, the selecting unit 130, the second determining unit 140, the forming unit 150, and the fusing unit 160 may correspond to a processor or a processing circuit. The processor may be a central processor, a microprocessor, a digital signal processor, an application processor, or a programmable array. The processing circuit may include: an application specific integrated circuit.
The processor or processing circuit implements the above functions by execution of executable code such as a computer program.
Optionally, the first determining unit 120 is configured to obtain a plurality of the feature points of the first target object; selecting an intermediate feature point of the first target object as a center point of the deformation region according to coordinate parameters of the feature points in the original image, wherein the intermediate feature point is the feature point positioned at the most intermediate position among the feature points; obtaining deformation size parameters; and determining the deformation area based on the deformation size parameter and the center point.
In this embodiment a plurality of feature points of the first target object are first acquired, including edge feature points located at edge positions and intermediate feature points located in the intermediate region of the first target object. The deformation region is always determined from the distribution of the feature points: the intermediate feature point serves as the center point of the deformation region, and the deformation size parameter is then formed from the edge feature points, forming a deformation region surrounding at least all of the feature points.
Optionally, the first determining unit 120 is configured to determine a first deformation radius according to an edge feature point of the first target object and the center feature point, where the edge feature point is the feature point located at an edge position among the feature points; determining a second deformation radius according to the first deformation radius and the first adjustment parameter; the deformed region is determined based on the second deformed radius and the center point.
In this embodiment, the first adjustment parameter may be determined based on user input, so that the user may conveniently control the deformation area corresponding to the first target object by itself, thereby meeting the individual requirement of the user.
Optionally, the apparatus further comprises:
the third determining unit is used for determining the first deformation strength according to the characteristic points of the second target object where the first target object is located; determining a second deformation strength according to the first deformation strength and the second adjustment parameter;
the second determining unit 140 is specifically configured to determine the target point based on the deformation constraint point and the second deformation strength.
The third determining unit in this embodiment may also correspond to a processor or a processing circuit, which can implement the acquisition of the second deformation strength by executing code.
The second determination unit 140 determines the target point, in particular based on the second deformation strength.
Optionally, the second target object is a face; the first target object is a nose.
In some embodiments, the apparatus further comprises:
a second obtaining unit, configured to obtain the circumscribed rectangle of the deformation region;
the intercepting unit is used for intercepting the original image according to the circumscribed rectangle to obtain an intercepted image;
the conversion unit is used for converting the intercepted image based on the deformation area to obtain a mask image, wherein the pixel value of a pixel point of the mask image positioned in the deformation area is a first value, and the pixel value of a pixel point outside the deformation area is a second value;
the forming unit 150 is configured to obtain the deformation region image based on the mask map, the original pixel parameters of the intercepted image, and the target point.
In this embodiment, the second obtaining unit, the intercepting unit and the converting unit may correspond to a processor or a processing circuit, and may simply implement the above functions by executing corresponding codes.
The forming unit 150 determines the deformation region from the mask map and redetermines the pixel parameters pixel by pixel within the corresponding deformation region, thereby obtaining the deformation region image.
Optionally, the apparatus further comprises:
the blurring processing unit is used for blurring the mask image to obtain a gradient image of the intercepted image;
the fusion unit 160 is configured to obtain fusion weight parameters based on the gradient map, and to fuse the original image and the deformation region image based on the fusion weight parameters.
The blurring processing unit may also correspond to a processor or processing circuit, which implements the generation of the gradient map by executing code.
Further, the forming unit 150 is specifically configured to determine, based on the mask map, the pixels to be processed of the intercepted image, and to acquire the deformation region image based on the original pixel parameters of the pixels to be processed and the target point.
As shown in fig. 8, this embodiment further provides an electronic device, comprising:
a memory 210 for storing a computer program;
and a processor 220, coupled to the memory 210, for implementing the image processing method provided in any of the foregoing embodiments by executing the computer program.
The memory 210 may include various types of storage media, which may be non-transitory storage media such as read-only storage media; the memory may also include flash memory and the like.
The processor 220 may include: a central processing unit, a microprocessor, a digital signal processor, an application processor or a programmable array, etc.
The processor 220 and the memory 210 are connected via a bus 230. The bus 230 may be an Inter-Integrated Circuit (IIC) bus or a Peripheral Component Interconnect (PCI) bus, and may be used for information interaction between the memory and the processor.
In some embodiments, the electronic device further comprises a display 240 for displaying image information and/or text information, conveniently showing the original image, the intercepted image, the deformation region image, the fused deformed image, and the like.
In some embodiments, as shown, the electronic device further comprises: a communication interface 250, the communication interface 250 being operable to interact with other electronic devices.
The present embodiment provides a computer storage medium storing a computer program capable of implementing the image processing method provided in any one of the foregoing embodiments, when executed by a processor.
The following provides a specific solution in connection with any of the above embodiments:
the present example provides an image processing method including: inputting a face image imgA; and outputting a result image imgR.
The first step: face feature positioning is performed on imgA to obtain face feature points Fi, which represent the contour information of the face in imgA and/or the position information of each organ of the face. For example, Fi contains M points, where M may equal 80, and points 56 to 64 are the feature points of the nose, i.e. Fi (i = 56, ..., 64). In this example the 9 points obtained by the face recognition algorithm are used as the nose-related locating points, so i = 56 to 64.
The second step: obtaining the deformation radius parameter R and the deformation strength parameter M.
The third step: determining the deformation region, which may include: from the input locating point information Fi (i=64) and the deformation radius R, calculating the rectangular region Rect (x, y, w, h) corresponding to the thin-nose operation and the corresponding region image imgI, where the coordinates (x, y) give one vertex of the rectangle, w its width and h its height. In this example (x, y) may be the coordinates of the top-left vertex of the rectangular region.
The rectangular region corresponds to the pixel region in imgA as follows:
R0 = R * ratio1, where ratio1 is an empirical test constant, for example 1.3; ratio1 is the first adjustment parameter and may be an empirical value or a value determined from user input;
x = Fi.x - R0; y = Fi.y - R0; w = 2.0 * R0; h = 2.0 * R0, where Fi.x is the abscissa of the i-th nose feature point and Fi.y its ordinate.
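A direct transcription of this region computation into code, with ratio1 = 1.3 as the example value; the integer rounding and the bounds handling are assumptions:

    def nose_rect(fx, fy, R, ratio1=1.3):
        # R0 = R * ratio1; the rectangle circumscribes the deformation circle.
        R0 = R * ratio1
        x, y = fx - R0, fy - R0          # top-left vertex
        w = h = 2.0 * R0
        return int(x), int(y), int(w), int(h)

    # Cropping imgI out of imgA with numpy slicing (bounds checks omitted):
    # x, y, w, h = nose_rect(Fx, Fy, R)
    # imgI = imgA[y:y + h, x:x + w].copy()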
The fourth step: deforming the deformation region, which may include:
a mask map imgM of the thin-nose deformation region is obtained from the Fi position information, as follows:
an image imgM is created: single channel, with the same width and height as Rect (w, h), and all pixel values initialized to 0. A circle of radius r0 is then drawn on imgM with Fi (i=64) as its center, and the pixel values of the pixels inside the circular region are set to 255. The pixel parameter here includes at least the gray value.
Here r0 = R * c0, where c0 is a fixed parameter value taken from test experience, for example 0.9.
The processing result is the deformation region mask map imgM: single channel, with pixel value G at any position (x, y), where G > 0 marks the deformation region and G = 0 the non-deformation region.
The deformation constraint source points Sj (j = 0, 1, 2, ... 7) are calculated from the deformation radius parameter R and the feature point Fi (i=64), and the target points Dj (j = 0, 1, 2, ... 7) are calculated from them. Write Dj = (dx, dy), Fi (i=64) = (fx, fy) and Sj = (sx, sy), where dx and dy denote the abscissa and ordinate of the target point, and fx and fy the abscissa and ordinate of the feature point.
The following gives the calculation, taking the j-th point as an example:
sx = fx + R1 * cos(Aj), sy = fy + R1 * sin(Aj);
dx = fx + R2 * cos(Aj), dy = fy + R2 * sin(Aj);
R1 is calculated as R1 = R * c1, where c1 is a fixed parameter value derived from test experience, for example 0.8;
R2 is calculated as R2 = R * (1.0 - M * ratio2), where ratio2 is an empirical constant, for example 0.6, and M is the input deformation strength parameter.
The angles Aj of the deformation constraint source points take the values 0 degrees, 45 degrees, 90 degrees and so on, increasing in 45-degree steps.
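These formulas transcribe directly into code; only the loop structure is added:

    import math

    def constraint_and_target_points(fx, fy, R, M, c1=0.8, ratio2=0.6):
        # Eight source points Sj on radius R1 = R * c1 and their targets Dj on
        # radius R2 = R * (1.0 - M * ratio2), at 45-degree steps; a positive
        # strength M pulls the constraints inward, thinning the nose.
        R1 = R * c1
        R2 = R * (1.0 - M * ratio2)
        S, D = [], []
        for j in range(8):
            Aj = math.radians(45.0 * j)
            S.append((fx + R1 * math.cos(Aj), fy + R1 * math.sin(Aj)))
            D.append((fx + R2 * math.cos(Aj), fy + R2 * math.sin(Aj)))
        return S, D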
Based on the input image imgI, Sj, Fi (i=64), Dj and imgM, the deformation result map imgR0 is calculated with a deformation algorithm.
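The deformation algorithm itself is not specified here; moving least squares warping is a common choice for constraint pairs (Sj, Dj). Because Sj and Dj here are radially related, an equivalent effect can be sketched as a piecewise-linear radial warp, given below purely as an illustration (it assumes 0 < R2 < r0 and R1 < r0):

    import cv2
    import numpy as np

    def radial_warp(imgI, cx, cy, r0, R1, R2):
        # Pixels at radius R1 land at radius R2, while the center and the
        # circle boundary r0 stay fixed. Implemented as an inverse mapping:
        # for each destination radius r, compute the source radius s.
        h, w = imgI.shape[:2]
        ys, xs = np.indices((h, w), dtype=np.float32)
        dx, dy = xs - cx, ys - cy
        r = np.sqrt(dx * dx + dy * dy)

        s = np.where(r <= R2,
                     r * (R1 / R2),
                     R1 + (r - R2) * (r0 - R1) / (r0 - R2))
        s = np.where(r < r0, s, r)                     # identity outside r0

        scale = np.where(r > 1e-6, s / np.maximum(r, 1e-6), 1.0)
        map_x = (cx + dx * scale).astype(np.float32)
        map_y = (cy + dy * scale).astype(np.float32)
        return cv2.remap(imgI, map_x, map_y, cv2.INTER_LINEAR)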
The fifth step: fusing the deformed image and the original image, including:
the imgM is blurred; the result is the gradient map imgAlpha. Using imgAlpha as the weight parameter, imgR0 is fused into the input image imgA to obtain the result image imgR. The calculation proceeds as follows:
let the pixel values at input image position (x, y) be imgAlpha(G), imgA(r, g, b), imgR(r, g, b) and imgR0(r, g, b); the calculation formula is:
R(r, g, b) = (A(r, g, b) * (255 - G) + R0(r, g, b) * G) / 255;
where r, g and b are the red, green and blue color values of a pixel, each ranging from 0 to 255, and G is the gray value of the pixel in imgAlpha, also ranging from 0 to 255.
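A vectorized sketch of this fusion follows; the division by 255 normalizes the weights so the result stays in range, and pasting the blended crop back at Rect's offset (x, y) is an assumed detail:

    import numpy as np

    def fuse(imgA, imgR0, imgAlpha, x, y):
        # Blend the deformed crop imgR0 back into imgA, weighting per pixel by
        # the gradient map G: full weight for imgR0 where G = 255, full weight
        # for the original where G = 0, a smooth mix along the blurred edge.
        h, w = imgAlpha.shape[:2]
        G = imgAlpha.astype(np.float32)[..., None]   # (h, w, 1), broadcasts over BGR
        roi = imgA[y:y + h, x:x + w].astype(np.float32)
        blended = (roi * (255.0 - G) + imgR0.astype(np.float32) * G) / 255.0
        imgR = imgA.copy()
        imgR[y:y + h, x:x + w] = blended.astype(np.uint8)
        return imgR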
As shown in fig. 9, the image processing method provided in this example includes:
step S1: inputting a graph imgA, a nose feature point Fi (i=64), a deformation radius R and a deformation strength mag;
step S2: taking Fi (i=64) as the center and R * ratio1 as the radius, take 8 points at 45-degree angular intervals as the deformation constraint source points Sj (j=0, ..., 7);
step S3: calculating the deformed target points Dj (j=0, ..., 7) based on the deformation constraint source points and the deformation strength mag;
step S4: calculating the deformation rectangular region Rect (x, y, w, h), and copying the image of Rect (x, y, w, h) from imgA as imgI, to be used as the deformation input;
step S5: based on Rect (x, y, w, h) and imgI, a thin nose deformation mask map is calculated: imgM;
step S6: based on imgM, calculating the single-channel gray image imgAlpha used for fusing the deformation output;
step S7: inputting Sj, Dj, imgI and imgM, and applying the deformation algorithm to obtain the output image imgR0;
step S8: based on imgAlpha, fusing imgR0 and imgA, and outputting the result image imgR;
step S9: and outputting a result graph imgR.
Fig. 10 shows the evolution relationship between the images during processing: first imgI is intercepted from the original image imgA and processed to obtain the mask map imgM; the mask map imgM is blurred to obtain the gradient map imgAlpha; imgR0, the image after nose thinning, is obtained from imgI and imgM; finally imgA, imgAlpha and imgR0 are fused to obtain imgR.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and other divisions are possible in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing module, or each unit may exist as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by program instructions running on related hardware; the foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes: a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The foregoing is merely a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any variation or substitution readily conceivable by a person skilled in the art within the scope disclosed herein shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. An image processing method, comprising:
acquiring an original image;
determining a deformation region based on the distribution position or pixel coordinates of feature points of a first target object to be deformed in an original image, wherein the feature points are pixel points representing the outline and/or texture features of the first target object, and the feature points comprise a middle feature point and edge feature points;
displaying a first adjustment control superimposed on the original image, wherein the first adjustment control is used for adjusting the range of the deformation region;
selecting a plurality of pixel points from the deformation area as fixed deformation constraint source points;
determining a first deformation strength according to the feature points of a second target object where the first target object is located and the proportional relation between the second target object and the first target object, wherein the first deformation strength is a recommended deformation strength;
displaying a second adjustment control superimposed on the original image, wherein the second adjustment control is used for determining a second adjustment parameter, and the second adjustment parameter is the adjusted proportion or area of the second target object;
adjusting the first deformation strength according to the second adjustment parameter to obtain a second deformation strength;
determining a target point based on the deformation constraint source point and the second deformation strength, wherein the target point is a pixel point of the deformed image formed after the first target object is deformed, and the pixel parameter of the target point is equal to the pixel parameter of the deformation constraint source point;
determining pixel parameters of each pixel in the deformation area after deformation based on original pixel parameters of each pixel in the deformation area and the target point, so as to obtain a deformation area image;
And fusing the deformation region image into the deformation region of the original image to obtain a deformed image.
2. The method of claim 1, wherein
the determining a deformation area based on the distribution position or the pixel coordinates of the feature points of the first target object to be deformed in the original image includes:
acquiring a plurality of characteristic points of the first target object;
selecting the middle feature point of the first target object as the center point of the deformation region according to the distribution positions or pixel coordinates of the feature points in the original image, wherein the middle feature point is the feature point located at the centermost position among the plurality of feature points;
obtaining deformation size parameters;
and determining the deformation area based on the deformation size parameter and the center point.
3. The method of claim 2, wherein
the obtaining deformation dimension parameters includes:
determining a first deformation radius according to the edge feature points and the central feature point of the first target object, wherein the edge feature points are the feature points located at edge positions among the plurality of feature points;
Determining a second deformation radius according to the first deformation radius and the first adjustment parameter;
the determining the deformation region based on the deformation dimension parameter and the center point includes:
the deformed region is determined based on the second deformed radius and the center point.
4. The method of claim 1, wherein
the second target object is a human face; the first target object is a nose.
5. The method according to claim 1, 2 or 3, wherein
the method further comprises the steps of:
acquiring the circumscribed rectangle of the deformation area;
intercepting the original image according to the circumscribed rectangle to obtain an intercepted image;
converting the intercepted image based on the deformation area to obtain a mask image, wherein the pixel value of a pixel point in the deformation area of the mask image is a first value, and the pixel value of a pixel point outside the deformation area is a second value;
the determining pixel parameters of each pixel in the deformation region after deformation based on the original pixel parameters of each pixel in the deformation region and the target point, thereby obtaining a deformation region image, comprises:
and obtaining the deformation region image based on the mask image, the original pixel parameters of the intercepted image and the target point.
6. The method of claim 5, wherein
the method further comprises the steps of:
performing blurring processing on the mask map to obtain a gradual-change map of the intercepted image;
the fusing the deformation region image into the deformation region of the original image to obtain a deformed image comprises:
acquiring fusion weight parameters based on the gradual change map;
and fusing the original image and the deformation region image based on the fusion weight parameters.
7. The method of claim 5, wherein
the obtaining the deformation region image based on the mask image, the original pixel parameters of the intercepted image and the target point comprises:
determining pixels to be processed of the intercepted image based on the mask map;
and acquiring the deformation region image based on the original pixel parameters of the pixels to be processed and the target point.
8. An image processing apparatus, comprising:
a first acquisition unit configured to acquire an original image;
the first determining unit is used for determining a deformation region based on the distribution position or pixel coordinates of feature points of a first target object to be deformed in an original image, wherein the feature points are pixel points representing the outline and/or texture features of the first target object, and the feature points comprise a middle feature point and edge feature points; and for displaying a first adjustment control superimposed on the original image, the first adjustment control being used for adjusting the range of the deformation region;
A selecting unit, configured to select a plurality of pixel points from the deformation region as fixed deformation constraint source points;
the second determining unit is used for determining a first deformation strength according to the feature points of a second target object where the first target object is located and the proportional relation between the second target object and the first target object, the first deformation strength being a recommended deformation strength; displaying a second adjustment control superimposed on the original image, the second adjustment control being used for determining a second adjustment parameter, and the second adjustment parameter being the adjusted proportion or area of the second target object; adjusting the first deformation strength according to the second adjustment parameter to obtain a second deformation strength; and determining a target point based on the deformation constraint source point and the second deformation strength, wherein the target point is a pixel point of the deformed image formed after the first target object is deformed, and the pixel parameter of the target point is equal to the pixel parameter of the deformation constraint source point;
the third determining unit is used for determining the first deformation strength according to the characteristic points of the second target object where the first target object is located; determining a second deformation strength according to the first deformation strength and the second adjustment parameter;
the second determining unit is further configured to determine the target point based on the deformation constraint source point and the second deformation strength;
the forming unit is used for determining pixel parameters of each pixel in the deformation area after deformation based on original pixel parameters of each pixel in the deformation area and the target point, so that a deformation area image is obtained;
and the fusion unit is used for fusing the deformation area image into the deformation area of the original image to obtain a deformed image.
9. The apparatus of claim 8, wherein
the first determining unit is used for acquiring a plurality of characteristic points of the first target object; selecting an intermediate feature point of the first target object as a center point of the deformation region according to coordinate parameters of the feature points in the original image, wherein the intermediate feature point is the feature point positioned at the most intermediate position among the feature points; obtaining deformation size parameters; and determining the deformation area based on the deformation size parameter and the center point.
10. The apparatus of claim 9, wherein
the first determining unit is configured to determine a first deformation radius according to an edge feature point of the first target object and the center feature point, where the edge feature point is the feature point located at an edge position among the feature points; determining a second deformation radius according to the first deformation radius and the first adjustment parameter; the deformed region is determined based on the second deformed radius and the center point.
11. The apparatus according to claim 8, 9 or 10, wherein
the apparatus further comprises:
a second obtaining unit, configured to obtain a circumscribed rectangle of the deformation region;
the intercepting unit is used for intercepting the original image according to the circumscribed rectangle to obtain an intercepted image;
the conversion unit is used for converting the intercepted image based on the deformation area to obtain a mask image, wherein the pixel value of a pixel point of the mask image positioned in the deformation area is a first value, and the pixel value of a pixel point outside the deformation area is a second value;
the forming unit is used for obtaining the deformation region image based on the mask image, the original pixel parameters of the intercepted image and the target point.
12. An electronic device, comprising:
a memory for storing a computer program;
a processor, coupled to the memory, for implementing the image processing method provided in any one of claims 1 to 7 by executing the computer program.
13. A computer storage medium storing a computer program, characterized in that the computer program, when executed by a processor, is capable of implementing the image processing method provided in any one of claims 1 to 7.
CN201710348772.8A 2017-05-17 2017-05-17 Image processing method and device, electronic equipment and storage medium Active CN107154030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710348772.8A CN107154030B (en) 2017-05-17 2017-05-17 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710348772.8A CN107154030B (en) 2017-05-17 2017-05-17 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107154030A CN107154030A (en) 2017-09-12
CN107154030B true CN107154030B (en) 2023-06-09

Family

ID=59792877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710348772.8A Active CN107154030B (en) 2017-05-17 2017-05-17 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107154030B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203963B (en) * 2016-03-17 2019-03-15 腾讯科技(深圳)有限公司 A kind of image processing method and device, electronic equipment
CN107526504B (en) * 2017-08-10 2020-03-17 广州酷狗计算机科技有限公司 Image display method and device, terminal and storage medium
CN107707818B (en) * 2017-09-27 2020-09-29 努比亚技术有限公司 Image processing method, image processing apparatus, and computer-readable storage medium
CN107730445B (en) * 2017-10-31 2022-02-18 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, storage medium, and electronic device
KR102466998B1 (en) * 2018-02-09 2022-11-14 삼성전자주식회사 Method and apparatus for image fusion
CN108765274A (en) * 2018-05-31 2018-11-06 北京市商汤科技开发有限公司 A kind of image processing method, device and computer storage media
CN108830784A (en) * 2018-05-31 2018-11-16 北京市商汤科技开发有限公司 A kind of image processing method, device and computer storage medium
CN110555794B (en) * 2018-05-31 2021-07-23 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109146769A (en) * 2018-07-24 2019-01-04 北京市商汤科技开发有限公司 Image processing method and device, image processing equipment and storage medium
CN110766603B (en) * 2018-07-25 2024-04-12 北京市商汤科技开发有限公司 Image processing method, device and computer storage medium
CN110852932B (en) * 2018-08-21 2024-03-08 北京市商汤科技开发有限公司 Image processing method and device, image equipment and storage medium
CN109242765B (en) * 2018-08-31 2023-03-10 腾讯科技(深圳)有限公司 Face image processing method and device and storage medium
CN110956679B (en) * 2018-09-26 2023-07-14 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN111028137B (en) * 2018-10-10 2023-08-15 Oppo广东移动通信有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium
CN109472753B (en) * 2018-10-30 2021-09-07 北京市商汤科技开发有限公司 Image processing method and device, computer equipment and computer storage medium
CN109658360B (en) * 2018-12-25 2021-06-22 北京旷视科技有限公司 Image processing method and device, electronic equipment and computer storage medium
CN109685015B (en) * 2018-12-25 2021-01-08 北京旷视科技有限公司 Image processing method and device, electronic equipment and computer storage medium
CN110111240A (en) * 2019-04-30 2019-08-09 北京市商汤科技开发有限公司 A kind of image processing method based on strong structure, device and storage medium
CN112087648B (en) * 2019-06-14 2022-02-25 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110555796B (en) * 2019-07-24 2021-07-06 广州视源电子科技股份有限公司 Image adjusting method, device, storage medium and equipment
CN113096022B (en) * 2019-12-23 2022-12-30 RealMe重庆移动通信有限公司 Image blurring processing method and device, storage medium and electronic device
CN113706369A (en) * 2020-05-21 2021-11-26 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111736788A (en) * 2020-06-28 2020-10-02 广州励丰文化科技股份有限公司 Image processing method, electronic device, and storage medium
CN111968050B (en) * 2020-08-07 2024-02-20 Oppo(重庆)智能科技有限公司 Human body image processing method and related products

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040097200A (en) * 2002-03-26 2004-11-17 김소운 System and Method for 3-Dimension Simulation of Glasses
FR2920560A1 (en) * 2007-09-05 2009-03-06 Botton Up Soc Responsabilite L Three-dimensional synthetic actor i.e. avatar, constructing and immersing method, involves constructing psychic profile from characteristic points and features, and fabricating animated scene from head of profile and animation base
CN102221954B (en) * 2010-04-15 2014-01-29 中国移动通信集团公司 Zooming displayer as well as electronic device comprising same and zoom displaying method
US9495582B2 (en) * 2011-12-04 2016-11-15 Digital Makeup Ltd. Digital makeup
CN102999929A (en) * 2012-11-08 2013-03-27 大连理工大学 Triangular gridding based human image face-lift processing method
JP6355315B2 (en) * 2013-10-29 2018-07-11 キヤノン株式会社 Image processing apparatus, image processing method, and program
CN104637078B (en) * 2013-11-14 2017-12-15 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN104657974A (en) * 2013-11-25 2015-05-27 腾讯科技(上海)有限公司 Image processing method and device
US10339685B2 (en) * 2014-02-23 2019-07-02 Northeastern University System for beauty, cosmetic, and fashion analysis
CN104036453A (en) * 2014-07-03 2014-09-10 上海斐讯数据通信技术有限公司 Image local deformation method and image local deformation system and mobile phone with image local deformation method
CN106296590B (en) * 2015-05-11 2019-05-07 福建天晴数码有限公司 Skin roughness adaptively grinds skin method, system and client
CN106303153B (en) * 2015-05-29 2019-08-23 腾讯科技(深圳)有限公司 A kind of image processing method and device

Also Published As

Publication number Publication date
CN107154030A (en) 2017-09-12

Similar Documents

Publication Publication Date Title
CN107154030B (en) Image processing method and device, electronic equipment and storage medium
EP3323249B1 (en) Three dimensional content generating apparatus and three dimensional content generating method thereof
EP3614340B1 (en) Methods and devices for acquiring 3d face, and computer readable storage media
CN109584151B (en) Face beautifying method, device, terminal and storage medium
CN109829930B (en) Face image processing method and device, computer equipment and readable storage medium
CN107484428B (en) Method for displaying objects
CN107564080B (en) Face image replacement system
CN107507216B (en) Method and device for replacing local area in image and storage medium
CN111754415B (en) Face image processing method and device, image equipment and storage medium
CN107610202B (en) Face image replacement method, device and storage medium
JP6685827B2 (en) Image processing apparatus, image processing method and program
JP7031697B2 (en) Information processing device and recognition support method
JP2017059235A (en) Apparatus and method for adjusting brightness of image
US20130120451A1 (en) Image processing device, image processing method, and program
US10169891B2 (en) Producing three-dimensional representation based on images of a person
EP3633606B1 (en) Information processing device, information processing method, and program
US10430967B2 (en) Information processing apparatus, method, and program
US20180213156A1 (en) Method for displaying on a screen at least one representation of an object, related computer program, electronic display device and apparatus
CN113724391A (en) Three-dimensional model construction method and device, electronic equipment and computer readable medium
JP6552266B2 (en) Image processing apparatus, image processing method, and program
CN112686820A (en) Virtual makeup method and device and electronic equipment
Moeslund et al. A natural interface to a virtual environment through computer vision-estimated pointing gestures
US20180213215A1 (en) Method and device for displaying a three-dimensional scene on display surface having an arbitrary non-planar shape
CN111742352A (en) 3D object modeling method and related device and computer program product
JP2009251634A (en) Image processor, image processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant