CN108399599B - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN108399599B
CN108399599B · Application CN201810229288.8A
Authority
CN
China
Prior art keywords
target
processing
pixel point
region
center
Prior art date
Legal status
Active
Application number
CN201810229288.8A
Other languages
Chinese (zh)
Other versions
CN108399599A (en)
Inventor
李艳杰
眭一帆
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201810229288.8A priority Critical patent/CN108399599B/en
Publication of CN108399599A publication Critical patent/CN108399599A/en
Application granted granted Critical
Publication of CN108399599B publication Critical patent/CN108399599B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/04: Context-preserving transformations, e.g. by using an importance map
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method, an image processing apparatus, and an electronic device. The method comprises the following steps: detecting a plurality of key points corresponding to a processing object in an image; determining, according to the plurality of key points, a region to be processed that corresponds to the processing object; for each pixel in the image, determining whether the pixel belongs to the region to be processed; and, if the pixel is determined to belong to the region to be processed, processing the pixel according to a preset object processing rule. Because the region to be processed is determined from the processing object itself, it fits the contour or extent of the processing object more closely. Furthermore, since only the pixels inside the region to be processed are processed rather than every pixel in the image, the amount of computation is reduced, the image processing speed is improved, and the real-time performance of the image processing is guaranteed.

Description

Image processing method and device and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to an image processing method and device and electronic equipment.
Background
With the development of computer image-processing technology, image beautification has become increasingly convenient and popular. For example, micro-shaping special-effect processing of a face image includes techniques such as face thinning, eye enlargement, nose-bridge raising, and nose-wing narrowing. Applied to scenes such as image post-processing, live video streaming, and video recording, these techniques improve the interest and aesthetic appeal of an image, so micro-shaping special-effect processing for image beautification has attracted wide attention and popularity.
However, in the course of implementing the present invention, the inventors found that the prior art has at least the following problem: micro-shaping special-effect processing typically processes every pixel in the image without regard to the actual deformation region, which adds a large amount of unnecessary computation and reduces the real-time performance of the processing. In short, traversing all pixels in the image to perform micro-shaping special-effect processing is computationally expensive, and no prior-art solution adequately addresses this problem.
Disclosure of Invention
The present invention has been made in view of the above problems, and has an object to provide an image processing method, apparatus, and electronic device that overcome the above problems or at least partially solve the above problems.
According to an aspect of the present invention, there is provided an image processing method comprising: detecting a plurality of key points corresponding to a processing object in an image; determining, according to the plurality of key points, a region to be processed that corresponds to the processing object; for each pixel in the image, determining whether the pixel belongs to the region to be processed; and, if so, processing the pixel according to a preset object processing rule.
Optionally, the region to be processed is an elliptical region to be processed, determined by an ellipse center, an ellipse transverse axis, and an ellipse longitudinal axis.
Optionally, the step of determining, for each pixel in the image, whether the pixel belongs to the region to be processed specifically includes:
for each pixel in the image, determining whether the distance between the pixel and the ellipse center is greater than the length of the ellipse transverse axis;
if not, determining that the pixel belongs to the region to be processed.
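As a minimal sketch of this membership test (names and parameterization are illustrative, not from the patent): the region is assumed to be given by a center `(cx, cy)` and semi-axis lengths `a` (transverse) and `b` (longitudinal). For a circular region (`a == b`) the normalized test below reduces to the plain distance-to-center comparison described above.

```python
def in_ellipse(px, py, cx, cy, a, b):
    """Return True if pixel (px, py) lies inside the ellipse
    centered at (cx, cy) with transverse semi-axis a and
    longitudinal semi-axis b."""
    dx = (px - cx) / a
    dy = (py - cy) / b
    return dx * dx + dy * dy <= 1.0
```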
Optionally, the region to be processed further includes a target processing region and an environment processing region, and the step of determining, according to the plurality of key points, the region to be processed corresponding to the processing object in the image specifically includes:
determining, according to the plurality of key points, a target processing region corresponding to the processing object in the image;
determining, according to the target processing region, an environment processing region located at the periphery of the target processing region.
Correspondingly, the object processing rule further includes a target processing rule and an environment processing rule, and the step of processing the pixel according to the preset object processing rule specifically includes:
determining whether the pixel belongs to the target processing region; if so, processing the pixel according to the target processing rule; if not, processing the pixel according to the environment processing rule.
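The two-tier dispatch above can be sketched as follows. This is a hypothetical outline: the region predicates and rule functions are placeholders for whatever tests and deformations an implementation actually uses.

```python
def process_pixel(px, py, target_region, env_region, target_rule, env_rule):
    """Dispatch a pixel to the target rule, the environment rule,
    or leave it untouched, per the two-tier region scheme.

    target_region / env_region: predicates (px, py) -> bool
    target_rule / env_rule:     deformations (px, py) -> new coords
    """
    if target_region(px, py):
        return target_rule(px, py)       # inner region: full effect
    if env_region(px, py):
        return env_rule(px, py)          # surrounding ring: softer effect
    return (px, py)                      # outside the region to be processed
```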
Optionally, the target processing region is an elliptical target processing region, and the environment processing region is an elliptical annular environment processing region located at the periphery of the elliptical target processing region.
The step of determining, according to the plurality of key points, the target processing region corresponding to the processing object in the image specifically includes:
determining, according to the plurality of key points, a target circle center and a target transverse axis and/or a target longitudinal axis passing through the target circle center, and determining the elliptical target processing region from them.
The step of determining, according to the target processing region, the environment processing region located at its periphery specifically includes:
taking the target circle center as the ellipse center, determining the ellipse transverse axis and/or the ellipse longitudinal axis from the target transverse axis and/or the target longitudinal axis, and determining the elliptical region to be processed from the ellipse center, the ellipse transverse axis, and the ellipse longitudinal axis; and determining, from the elliptical region to be processed and the elliptical target processing region, the elliptical annular environment processing region located at the periphery of the elliptical target processing region.
Optionally, the elliptical target processing region is a circular target processing region, and the elliptical annular environment processing region is a circular annular environment processing region; in this case the length of the target transverse axis equals the length of the target longitudinal axis, and the length of the ellipse transverse axis equals the length of the ellipse longitudinal axis.
Optionally, the step of determining whether the pixel belongs to the target processing region specifically includes:
determining whether the distance between the pixel and the target circle center is greater than the length of the target transverse axis; if not, determining that the pixel belongs to the target processing region.
Optionally, the length of the ellipse transverse axis is a first preset multiple of the length of the target transverse axis, and/or the length of the ellipse longitudinal axis is a second preset multiple of the length of the target longitudinal axis, where the first preset multiple and/or the second preset multiple is not less than 1.
Optionally, the method further comprises:
predetermining, for the case in which the distance from a pixel to the target circle center is not greater than the length of the target transverse axis, a first mapping between the deformation coefficient of the pixel and that distance, and determining the target processing rule from the first mapping;
and predetermining, for the case in which the distance from a pixel to the target circle center is greater than the length of the target transverse axis, a second mapping between the deformation coefficient of the pixel and that distance, and determining the environment processing rule from the second mapping.
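One way to realize these two mappings is a piecewise deformation coefficient that depends on the distance d from the target circle center and decays to zero at the outer boundary, so the deformation fades smoothly into the untouched background. The linear falloff below is an illustrative assumption; the patent only requires that the coefficient be some predetermined function of d in each of the two distance ranges.

```python
def deformation_coeff(d, target_axis, ellipse_axis, strength=1.0):
    """Piecewise deformation coefficient for a pixel at distance d
    from the target circle center.

    d <= target_axis:               first mapping (target rule)
    target_axis < d <= ellipse_axis: second mapping (environment rule),
                                     fading to 0 at the outer boundary
    d > ellipse_axis:               0 (pixel left untouched)
    """
    if d <= target_axis:
        return strength
    if d <= ellipse_axis:
        # linear fade from full strength at the target boundary
        # to zero at the outer (ellipse) boundary
        return strength * (ellipse_axis - d) / (ellipse_axis - target_axis)
    return 0.0
```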
Optionally, the object processing rule includes: translation type processing rules, rotation type processing rules, and compression type processing rules.
Optionally, the processing object includes: a facial region, a facial contour, and/or facial feature parts, where the facial feature parts include at least one of: eyes, nose, eyebrows, mouth, and ears.
Optionally, when the processing object is a facial region, the step of detecting the plurality of key points corresponding to the processing object further includes: detecting a face-center key point and a chin-center key point in the facial region;
the target circle center is then determined from the face-center key point, and the target longitudinal axis is determined from the distance between the face-center key point and the chin-center key point.
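For the face-region case, this step might look like the following sketch, assuming each key point is an (x, y) coordinate pair; the function name and return convention are invented for illustration.

```python
import math

def face_target_region(face_center, chin_center):
    """Derive the target circle center and the target longitudinal
    semi-axis from the face-center and chin-center key points.

    face_center, chin_center: (x, y) key-point coordinates
    Returns (cx, cy, semi_axis)."""
    cx, cy = face_center
    dx = chin_center[0] - cx
    dy = chin_center[1] - cy
    semi_axis = math.hypot(dx, dy)  # face center to chin distance
    return (cx, cy, semi_axis)
```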
Optionally, the method is implemented by a graphics processor.
According to another aspect of the present invention, there is provided an image processing apparatus comprising: a key-point detection module adapted to detect a plurality of key points corresponding to a processing object in an image; a region-to-be-processed determination module adapted to determine, according to the plurality of key points, a region to be processed corresponding to the processing object; a judgment module adapted to determine, for each pixel in the image, whether the pixel belongs to the region to be processed; and a processing module adapted to process the pixel according to a preset object processing rule if the pixel is determined to belong to the region to be processed.
Optionally, the region to be processed is an elliptical region to be processed, determined by an ellipse center, an ellipse transverse axis, and an ellipse longitudinal axis.
Optionally, the judgment module is further adapted to:
for each pixel in the image, determine whether the distance between the pixel and the ellipse center is greater than the length of the ellipse transverse axis;
if not, determine that the pixel belongs to the region to be processed.
Optionally, the region to be processed further includes a target processing region and an environment processing region, and the region-to-be-processed determination module is further adapted to:
determine, according to the plurality of key points, a target processing region corresponding to the processing object in the image;
determine, according to the target processing region, an environment processing region located at the periphery of the target processing region.
Correspondingly, the object processing rule further includes a target processing rule and an environment processing rule, and the processing module is further adapted to:
determine whether the pixel belongs to the target processing region; if so, process the pixel according to the target processing rule; if not, process the pixel according to the environment processing rule.
Optionally, the target processing region is an elliptical target processing region, and the environment processing region is an elliptical annular environment processing region located at the periphery of the elliptical target processing region.
The region-to-be-processed determination module is further adapted to:
determine, according to the plurality of key points, a target circle center and a target transverse axis and/or a target longitudinal axis passing through the target circle center, and determine the elliptical target processing region from them.
The region-to-be-processed determination module is further adapted to:
take the target circle center as the ellipse center, determine the ellipse transverse axis and/or the ellipse longitudinal axis from the target transverse axis and/or the target longitudinal axis, and determine the elliptical region to be processed from the ellipse center, the ellipse transverse axis, and the ellipse longitudinal axis; and determine, from the elliptical region to be processed and the elliptical target processing region, the elliptical annular environment processing region located at the periphery of the elliptical target processing region.
Optionally, the elliptical target processing region is a circular target processing region, and the elliptical annular environment processing region is a circular annular environment processing region; in this case the length of the target transverse axis equals the length of the target longitudinal axis, and the length of the ellipse transverse axis equals the length of the ellipse longitudinal axis.
Optionally, the judgment module is further adapted to:
determine whether the distance between the pixel and the target circle center is greater than the length of the target transverse axis; if not, determine that the pixel belongs to the target processing region.
Optionally, the length of the ellipse transverse axis is a first preset multiple of the length of the target transverse axis, and/or the length of the ellipse longitudinal axis is a second preset multiple of the length of the target longitudinal axis, where the first preset multiple and/or the second preset multiple is not less than 1.
Optionally, the apparatus further comprises:
a mapping determination module adapted to predetermine, for the case in which the distance from a pixel to the target circle center is not greater than the length of the target transverse axis, a first mapping between the deformation coefficient of the pixel and that distance, the target processing rule being determined from the first mapping;
the mapping determination module being further adapted to predetermine, for the case in which the distance from a pixel to the target circle center is greater than the length of the target transverse axis, a second mapping between the deformation coefficient of the pixel and that distance, the environment processing rule being determined from the second mapping.
Optionally, the object processing rule includes: translation type processing rules, rotation type processing rules, and compression type processing rules.
Optionally, the processing object includes: a facial region, a facial contour, and/or facial feature parts, where the facial feature parts include at least one of: eyes, nose, eyebrows, mouth, and ears.
Optionally, the key-point detection module is further adapted to:
when the processing object is a facial region, detect a face-center key point and a chin-center key point in the facial region;
the target circle center is then determined from the face-center key point, and the target longitudinal axis is determined from the distance between the face-center key point and the chin-center key point.
According to still another aspect of the present invention, there is provided an electronic device comprising: a processor, a memory, a communication interface, and a communication bus, the processor, the memory, and the communication interface communicating with one another through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations corresponding to the image processing method described above.
According to still another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the image processing method as described above.
In the image processing method, the image processing apparatus, and the electronic device described above, a plurality of key points corresponding to a processing object in an image are detected; a region to be processed corresponding to the processing object is determined according to the plurality of key points; for each pixel in the image, it is determined whether the pixel belongs to the region to be processed; and, if so, the pixel is processed according to a preset object processing rule. Because the region to be processed is determined from the processing object itself, it fits the contour or extent of the processing object more closely; furthermore, since only the pixels inside the region to be processed are processed rather than every pixel in the image, the amount of computation is reduced, the image processing speed is improved, and real-time performance is guaranteed.
The foregoing is only an overview of the technical solutions of the present invention. Embodiments of the invention are described below so that the technical means of the invention can be understood more clearly and the above and other objects, features, and advantages of the invention become more readily apparent.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a schematic flow diagram of an image processing method according to an embodiment of the invention;
FIG. 2 shows a schematic flow diagram of an image processing method according to another embodiment of the invention;
FIG. 3 is a schematic configuration diagram showing an image processing apparatus according to still another embodiment of the present invention;
FIG. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the invention;
FIG. 5 shows a schematic diagram of one form of look-up table.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a flow diagram of an image processing method according to an embodiment of the invention. As shown in fig. 1, the method comprises the steps of:
in step S110, a plurality of key points corresponding to the processing target in the image are detected.
The image may be a photograph taken by a camera or an image frame in a captured video stream, and the processing object may be a facial region, a facial feature part, or the like; the invention is not limited in this respect. A plurality of key points are detected for the processing object in the image. In practical applications, the processing objects may be preset, meaning that only those preset objects are processed; alternatively, the user may select the processing object during processing, in which case the object selected in real time can be adjusted as processing proceeds. The invention is not limited in this respect either.
For example, in an application scenario in which a face image is processed, the processing object may be the face or a facial feature part, and the key points may be feature points corresponding to the facial features and/or the facial contour: feature points at facial-contour positions, feature points at facial-feature positions, and feature points of other facial characteristics. The invention does not limit the manner in which the key points are detected.
Step S120, determining a region to be processed corresponding to the processing object in the image according to the plurality of key points.
A region to be processed corresponding to the processing object is determined according to the detected key points; the shape, contour, and extent of the region can all be derived from the key points of the processing object. For example, if the processing object is an eye, the region to be processed corresponding to the eye is determined from the detected key points of the eye.
Step S130, for each pixel point in the image, determine whether the pixel point belongs to the region to be processed.
The region to be processed corresponding to the processing object is obtained in step S120, and the pixels inside it are the pixels that need processing. In a specific application, a coordinate system may be established in the image and the coordinate of each pixel computed; whether a pixel belongs to the region to be processed can then be determined either from the pixel's coordinate or from the distance between the pixel and the center of the region to be processed.
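A sketch of the full-image membership pass in the distance-to-center variant, under the simplifying assumption of a circular region (NumPy is used here for the whole-image sweep; the function name and parameters are illustrative):

```python
import numpy as np

def region_mask(h, w, cx, cy, radius):
    """Boolean mask over an h x w image marking pixels whose
    distance to the region center (cx, cy) is at most `radius`,
    i.e. the pixels that need processing."""
    ys, xs = np.mgrid[0:h, 0:w]            # pixel coordinate grids
    dist2 = (xs - cx) ** 2 + (ys - cy) ** 2
    return dist2 <= radius * radius        # squared compare avoids sqrt
```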
Step S140, if it is determined that the pixel belongs to the to-be-processed region, processing the pixel according to a preset object processing rule.
In practice, micro-shaping special-effect processing usually needs to modify only some of the pixels in the image; there is no need to traverse every pixel, and the pixels to be processed are determined by the region to be processed obtained in the preceding steps.
According to the image processing method provided by this embodiment, a plurality of key points corresponding to a processing object in an image are detected; a region to be processed corresponding to the processing object is determined according to the plurality of key points; for each pixel in the image, it is determined whether the pixel belongs to the region to be processed; and, if so, the pixel is processed according to a preset object processing rule. Because the region to be processed is determined from the processing object itself, it fits the contour or extent of the processing object more closely; furthermore, since only the pixels inside the region to be processed are processed, the amount of computation is reduced, the image processing speed is improved, and real-time performance is guaranteed.
Fig. 2 is a flow chart of an image processing method according to another embodiment of the present invention. The method of this embodiment may be implemented by a graphics processor, although it may also be implemented in other ways; the invention is not limited in this respect. As shown in Fig. 2, the method comprises the following steps:
in step S210, a plurality of key points corresponding to the processing target in the image are detected.
The image may be a photograph taken by a camera or an image frame in a captured video stream; the invention is not limited in this respect. In practical applications, the system may preselect the processing objects in the image, meaning that only certain fixed objects are processed, or the user may select the processing objects according to actual needs.
In an application scenario in which a face image is processed, the processing object includes: a facial region, a facial contour, and/or facial feature parts, where the facial feature parts include at least one of: eyes, nose, eyebrows, mouth, and ears. The number, distribution, and detection method of the key points are not limited; any method capable of detecting the key points falls within the protection scope of the invention.
In step S220, a region to be processed corresponding to the processing object in the image is determined according to the plurality of key points, the region to be processed being an elliptical region determined by an ellipse center, an ellipse transverse axis, and an ellipse longitudinal axis.
In existing micro-shaping special-effect techniques, every pixel in the image is typically processed, even though the effect actually concerns only a partial region. For example, when micro-shaping special-effect processing is applied to an image containing a human face, only the pixels in the face region need to be processed; pixels outside the face region need no processing, or at most adaptive adjustment.
In an application scenario in which a face image is processed, if the region to be processed is an elliptical region, the ellipse center, the ellipse transverse axis, and the ellipse longitudinal axis are determined from the key points of the processing object, and the elliptical region is determined from them. Because the contours of the face and of the facial feature parts are close to ellipses, making the region to be processed elliptical allows it to fit the contour of the face or facial feature part more closely, so that the region contains as few unnecessary pixels as possible, reducing the amount of computation and increasing processing speed. The ellipse transverse axis and longitudinal axis are perpendicular to each other, but the invention does not limit their specific orientation, which those skilled in the art may adjust as needed. Note that the elliptical region to be processed in this embodiment is not restricted to the case in which the transverse-axis length differs from the longitudinal-axis length: because a circle is a special ellipse, the elliptical region may also be a circular region in which the two axis lengths are equal, depending on the actual situation.
Optionally, the region to be processed further includes a target processing region and an environment processing region, and the step of determining, according to the plurality of key points, the region to be processed corresponding to the processing object specifically includes: determining, according to the plurality of key points, a target processing region corresponding to the processing object in the image; and determining, according to the target processing region, an environment processing region located at the periphery of the target processing region.
In practice, if the pixels inside a certain region of the image are processed while the pixels outside it are left in their original state, the processed image may look unnatural and show obvious traces of the change; some of the pixels outside the region therefore also need corresponding processing.
Therefore, the method of this embodiment further divides the region to be processed into a target processing area and an environment processing area. Since the target processing area is determined according to a plurality of key points of the processing object, it is the area that fits the processing object most closely, and a partial area located at its periphery is determined as the environment processing area. Different processing rules can then be adopted for the pixel points in the target processing area and those in the environment processing area, so as to weaken the traces of change.
Further, the target processing area is an elliptical target processing area, and the environment processing area is an elliptical ring-shaped environment processing area located at the periphery of the elliptical target processing area. Therefore, the to-be-processed area comprises an elliptical target processing area and an elliptical annular environment processing area.
The step of determining a target processing region corresponding to the processing object included in the image according to the plurality of key points specifically includes: and determining the center of a target circle and a target transverse axis and/or a target longitudinal axis passing through the center of the target circle according to the plurality of key points, and determining an elliptical target processing area according to the center of the target circle and the target transverse axis and/or the target longitudinal axis passing through the center of the target circle.
The step of determining the environmental processing region located at the periphery of the target processing region according to the target processing region specifically includes: determining the center of a target circle as the center of an ellipse, determining an ellipse transverse axis and/or an ellipse longitudinal axis according to a target transverse axis and/or a target longitudinal axis, and determining an ellipse to-be-processed area according to the center of the ellipse, the ellipse transverse axis and the ellipse longitudinal axis; and determining an elliptical annular environment processing area positioned at the periphery of the elliptical target processing area according to the elliptical to-be-processed area and the elliptical target processing area. The length of the horizontal axis of the ellipse is a first preset multiple of the length of the target horizontal axis, and/or the length of the longitudinal axis of the ellipse is a second preset multiple of the length of the target longitudinal axis; and the first preset multiple and/or the second preset multiple are/is not less than 1.
According to the plurality of key points corresponding to the processing object, the target circle center, which is the center of the target processing area, is determined first; then, according to the positional relation and the distance between the target circle center and each key point, the direction and length of the target transverse axis and of the target longitudinal axis are determined. For example, when the processing object is an eye, the central key point of the eye is determined as the target circle center, the straight line connecting the central key point of the eye to an eye-corner key point on either side is determined as the target transverse axis, and the straight line connecting the central key point of the eye to an upper or lower eye-boundary key point is determined as the target longitudinal axis. Finally, the elliptical target processing area is determined according to the target circle center, the target transverse axis and the target longitudinal axis.
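As an illustrative sketch only (the key-point representation as (x, y) tuples and the function name are assumptions, not part of the patent), the eye example above might look like this:

```python
import math

def target_ellipse_from_eye(eye_center, eye_corner, eye_boundary):
    """Sketch of determining the target ellipse for an eye: the eye-center
    key point is the target circle center, its distance to an eye-corner
    key point gives the target transverse-axis length, and its distance to
    an upper/lower eye-boundary key point gives the target longitudinal-axis
    length. Key points are (x, y) tuples (an assumed format)."""
    cx, cy = eye_center
    a = math.hypot(eye_corner[0] - cx, eye_corner[1] - cy)      # transverse axis
    b = math.hypot(eye_boundary[0] - cx, eye_boundary[1] - cy)  # longitudinal axis
    return (cx, cy), a, b
```

The returned center and axis lengths then define the elliptical target processing area as described above.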
Then, the elliptical to-be-processed area is determined according to the elliptical target processing area. As can be seen from the above, the elliptical to-be-processed area and the elliptical target processing area are concentric ellipses: specifically, the target transverse axis may coincide with the ellipse transverse axis and the target longitudinal axis may coincide with the ellipse longitudinal axis, the length of the ellipse transverse axis is set to a first preset multiple of the length of the target transverse axis, and the length of the ellipse longitudinal axis is set to a second preset multiple of the length of the target longitudinal axis. The first preset multiple and the second preset multiple may be the same or different, but at least one of them must not be less than 1; that is, the region covered by the elliptical to-be-processed area is larger than that of the target processing area. The elliptical ring-shaped environment processing area located at the periphery of the elliptical target processing area is then determined according to the elliptical to-be-processed area and the elliptical target processing area; that is, the elliptical ring-shaped environment area is the area lying between the elliptical target processing area and the elliptical to-be-processed area.
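The concentric-ellipse construction above can be sketched as follows; the concrete multiple values are illustrative assumptions, since the text only requires that at least one preset multiple be no less than 1:

```python
def to_be_processed_axes(target_a, target_b, k1=1.5, k2=1.5):
    """Derive the ellipse transverse/longitudinal axis lengths of the
    to-be-processed area from the target axes via the first (k1) and
    second (k2) preset multiples. k1 = k2 = 1.5 are illustrative values,
    not values prescribed by the patent."""
    if k1 < 1 and k2 < 1:
        raise ValueError("at least one preset multiple must be >= 1")
    return k1 * target_a, k2 * target_b
```

The area between the target ellipse (target_a, target_b) and the derived outer ellipse is then the elliptical ring-shaped environment processing area.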
In a specific application, when the processing object is a face region, the step of detecting a plurality of key points corresponding to the processing object in the image further includes: detecting a face-center key point and a chin-center key point in the face region; the target circle center is determined according to the face-center key point, and the target transverse axis and/or target longitudinal axis are determined according to the distance from the face-center key point to the chin-center key point.
When the processing object is a face region, the face-center key point is determined as the target circle center, the straight line connecting the face-center key point and the chin-center key point is determined as the target longitudinal axis, and, optionally, the straight line connecting the face-center key point and a temple key point on either side is determined as the target transverse axis.
In another embodiment, the to-be-processed area may be set as a circular to-be-processed area, and may likewise be divided into a target processing area and an environment processing area; specifically, the elliptical target processing area is a circular target processing area, and the elliptical ring-shaped environment processing area is a ring-shaped environment processing area. In that case the length of the target transverse axis equals that of the target longitudinal axis, and the length of the ellipse transverse axis equals that of the ellipse longitudinal axis. That is, the elliptical to-be-processed area in this embodiment does not refer only to one whose transverse-axis length differs from its longitudinal-axis length; since a circle is a special ellipse, the elliptical to-be-processed area may also be a circular to-be-processed area in which the two axis lengths are equal, as determined by the actual situation.
Step S230, for each pixel point in the image, judging whether the distance between the pixel point and the ellipse center is greater than the length of the ellipse transverse axis; if not, executing step S240; if so, the method ends.
Specifically, a coordinate system may be established in the image, the coordinate values of each pixel point and of the ellipse center are calculated in that coordinate system, and the distance between the pixel point and the ellipse center is calculated from those coordinate values. Alternatively, a coordinate system is established with the ellipse center as the origin and the ellipse transverse and longitudinal axes as its axes; the coordinate value of each pixel point is calculated in this coordinate system, the distance between the pixel point and the ellipse center is obtained from it, and it is then judged whether that distance is greater than the length of the ellipse transverse axis.
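Taken literally, the judgment of step S230 compares the Euclidean pixel-to-center distance with the ellipse transverse-axis length. A minimal sketch under an assumed (x, y) coordinate convention:

```python
import math

def in_to_be_processed_region(pixel, ellipse_center, ellipse_transverse_len):
    """Step S230 as described in the text: a pixel point belongs to the
    region to be processed when its distance to the ellipse center does not
    exceed the length of the ellipse transverse axis."""
    d = math.hypot(pixel[0] - ellipse_center[0], pixel[1] - ellipse_center[1])
    return d <= ellipse_transverse_len
```

This single comparison per pixel is what makes the per-pixel screening cheap, which matches the stated goal of reducing the amount of computation.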
Step S240, determining that the pixel point belongs to the region to be processed, and processing the pixel point according to a preset object processing rule.
If it is judged that the distance between the pixel point and the ellipse center is not greater than the length of the ellipse transverse axis, it is determined that the pixel point belongs to the region to be processed; if the distance is greater than the length of the ellipse transverse axis, the pixel point does not belong to the region to be processed, it requires no processing, and the method ends.
Wherein the object processing rules include: a translation-type processing rule, a rotation-type processing rule, and a compression-type processing rule. The translation-type processing rule may be a rule that translates a pixel point according to its coordinate values; the rotation-type processing rule may be a rule that rotates a pixel point according to its coordinate values; the compression-type processing rule may be a rule that compresses the range of the region to be processed according to the coordinates of the pixel points and the boundary of the processing object. The invention does not limit the object processing rules, and a person skilled in the art can set them according to actual needs.
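As one hedged illustration of a rotation-type rule (the angle parameter, coordinate convention, and function name are assumptions; the patent deliberately leaves the concrete rule open):

```python
import math

def rotate_pixel(pixel, center, angle):
    """One possible rotation-type processing rule: rotate a pixel point's
    coordinates about a given center by `angle` radians. This is only a
    sketch of the rule family, not the patent's prescribed implementation."""
    x, y = pixel[0] - center[0], pixel[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    return (center[0] + c * x - s * y, center[1] + s * x + c * y)
```

A translation-type rule would similarly offset the coordinates, and a compression-type rule would scale them toward the processing object's boundary.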
As described above, in order to weaken the traces of image change, the method of this embodiment further divides the to-be-processed area into a target processing area and an environment processing area, and processes the pixel points in each with a different rule; therefore, after a pixel point is determined to belong to the to-be-processed area, it is necessary to further determine which processing area it belongs to. Correspondingly, an object processing rule can be set for each processing area, so the object processing rules further include: a target processing rule and an environment processing rule. The step of processing the pixel point according to the preset object processing rule specifically includes:
judging whether the pixel point belongs to the target processing area; if yes, processing the pixel point according to the target processing rule; if not, processing the pixel point according to the environment processing rule.
That is, if it is determined that the distance between the pixel point and the ellipse center is not greater than the length of the ellipse transverse axis, i.e., the pixel point belongs to the region to be processed, it is further judged whether the pixel point belongs to the target processing area; specifically, it is judged whether the distance between the pixel point and the target circle center is not greater than the length of the target transverse axis. If so, it is determined that the pixel point belongs to the target processing area, and the pixel point is processed according to the preset target processing rule; if not, it is determined that the pixel point belongs to the environment processing area, and the pixel point is processed according to the preset environment processing rule.
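Putting the two judgments together, a sketch under the same assumed conventions (the two rules are placeholder callables taking the pixel and its distance to the target circle center):

```python
import math

def process_pixel(pixel, target_center, target_axis_len, ellipse_axis_len,
                  target_rule, environment_rule):
    """Dispatch a pixel point: outside the to-be-processed area it is left
    untouched; within the target transverse-axis distance the target rule
    applies; otherwise the environment rule applies."""
    d = math.hypot(pixel[0] - target_center[0], pixel[1] - target_center[1])
    if d > ellipse_axis_len:
        return pixel                   # not in the region to be processed
    if d <= target_axis_len:
        return target_rule(pixel, d)   # target processing area
    return environment_rule(pixel, d)  # environment processing area
```

Because each pixel point is dispatched independently, this per-pixel structure maps naturally onto the GPU parallelism mentioned later in the text.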
Wherein the target processing rule and the environment processing rule can be determined by the following method:
A first mapping relation between the deformation coefficient of a pixel point and the distance from the pixel point to the target circle center, applicable when that distance is not greater than the length of the target transverse axis, is determined in advance, and the target processing rule is determined according to the first mapping relation; likewise, a second mapping relation between the deformation coefficient of a pixel point and the distance from the pixel point to the target circle center, applicable when that distance is greater than the length of the target transverse axis, is determined in advance, and the environment processing rule is determined according to the second mapping relation.
The first mapping relation can be determined from a first lookup table, whose horizontal axis represents the distance between a pixel point and the target circle center and whose vertical axis represents the deformation coefficient of the pixel point. The mapping rule of the first mapping relation is: when the distance from the pixel point to the target circle center is not greater than the length of the target transverse axis, as that distance increases, the deformation coefficient of the pixel point first remains at a fixed value and then decreases gradually from that fixed value; this ensures a smooth transition between the unprocessed area and the processed area.
The second mapping relation can be determined from a second lookup table, whose horizontal axis represents the distance between a pixel point and the target circle center and whose vertical axis represents the deformation coefficient of the pixel point. The mapping rule of the second mapping relation is: when the distance from the pixel point to the target circle center is greater than the target transverse axis but not greater than the ellipse transverse axis, the deformation coefficient of the pixel point decreases gradually from a fixed value as that distance increases; this likewise ensures a smooth transition between the unprocessed area and the processed area. In addition, by setting object processing rules corresponding to the different processing areas, the method of this embodiment can weaken the traces of change in the image and improve its aesthetic quality. For ease of understanding, fig. 5 shows a schematic diagram of one form of lookup table. As shown in fig. 5, for a pixel point in the region to be processed, as the equivalent circumferential distance between the pixel point and the target circle center increases, the rotation angle of the pixel point is first maintained at a fixed value and then decreases gradually from that fixed value to zero, where R may specifically refer to the length of the ellipse longitudinal axis or ellipse transverse axis of the region to be processed.
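A minimal sketch of the two mapping relations realized as one piecewise-linear curve (the fixed value, the knee position, and the linear falloff are illustrative assumptions; the text only requires "hold, then decrease" inside the target area and a continued decrease in the environment ring):

```python
def deformation_coefficient(d, target_axis_len, ellipse_axis_len,
                            fixed=1.0, knee=0.5):
    """First mapping (d <= target axis): hold `fixed`, then start decreasing.
    Second mapping (target axis < d <= ellipse axis): keep decreasing to 0 at
    the outer boundary. `fixed` and `knee` are illustrative parameters; in
    practice these curves would come from the first/second lookup tables."""
    hold_until = knee * target_axis_len  # assumed end of the plateau
    if d <= hold_until:
        return fixed
    if d >= ellipse_axis_len:
        return 0.0                       # outside the region: no deformation
    # linear falloff from the plateau edge to the outer ellipse boundary
    return fixed * (ellipse_axis_len - d) / (ellipse_axis_len - hold_until)
```

The continuity of the curve at the target-area boundary is what produces the smooth transition between the processed and unprocessed areas.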
In addition, as can be understood by those skilled in the art, the purpose of setting the target processing rule and the environment processing rule in the present invention is to distinguish the target processing area from the environment processing area so that a smooth transition can be better achieved; therefore, the invention does not limit the specific contents of the target processing rule and the environment processing rule (i.e., the specific forms of the first mapping relation and the second mapping relation can be adjusted flexibly).
In addition, the inventor finds out in the process of implementing the invention that: a first mapping relation exists between the deformation coefficient of the pixel point in the target processing area and the distance from the pixel point to the center of the target circle, and a second mapping relation exists between the deformation coefficient of the pixel point in the environment processing area and the distance from the pixel point to the center of the target circle. Based on the first mapping relationship being different from the second mapping relationship, in the present embodiment, the region to be processed is further divided into the target processing region and the environment processing region, and the target processing rule and the environment processing rule are set, so that different processes can be respectively performed on the actual processing object and the peripheral region of the processing object, thereby further improving the processing effect. In addition, when the method is realized by the GPU, the processing efficiency can be improved by utilizing the parallel characteristic of the GPU.
In this embodiment, the method of the present invention is described by taking an elliptical to-be-processed area as an example; as can be understood by those skilled in the art, the present invention does not limit the shape of the to-be-processed area, which in practical application is determined according to the plurality of key points corresponding to the processing object. According to the image processing method provided by this embodiment, a plurality of key points corresponding to a processing object in an image are detected; a region to be processed corresponding to the processing object is determined according to the plurality of key points, the region being an elliptical region to be processed determined by an ellipse center, an ellipse transverse axis and an ellipse longitudinal axis; for each pixel point in the image, it is judged whether the distance between the pixel point and the ellipse center is greater than the length of the ellipse transverse axis; if not, the pixel point is determined to belong to the region to be processed and is processed according to a preset object processing rule.
Therefore, the method determines the area to be processed according to the processing object in the image, and sets the shape of the area to be processed to be elliptical, so that the area to be processed can be more fit with the outline or the range of the processing object; secondly, all pixel points in the image do not need to be processed in the mode, only the pixel points in the region to be processed are processed, the calculation amount can be reduced, the image processing speed is increased, and the real-time performance of the image processing is ensured; in addition, the to-be-processed area is divided into a target processing area and an environment processing area, and different object processing rules are adopted for the pixel points in different processing areas, so that the change trace of the image can be weakened, and the aesthetic feeling of the image can be improved.
Fig. 3 shows a schematic configuration diagram of an image processing apparatus according to still another embodiment of the present invention, as shown in fig. 3, the apparatus including:
a key point detection module 31 adapted to detect a plurality of key points corresponding to a processing object in an image;
a to-be-processed region determining module 32, adapted to determine a to-be-processed region corresponding to the processing object in the image according to the plurality of key points;
the judging module 33 is adapted to judge, for each pixel point in the image, whether the pixel point belongs to a region to be processed;
and the processing module 34 is adapted to process the pixel point according to a preset object processing rule if the pixel point is determined to belong to the region to be processed.
Optionally, the to-be-processed region is an elliptical to-be-processed region, and the elliptical to-be-processed region is determined by an ellipse center, an ellipse horizontal axis and an ellipse vertical axis.
Optionally, the determining module 33 is further adapted to:
aiming at each pixel point in the image, judging whether the distance between the pixel point and the ellipse center is greater than the length of the ellipse transverse axis;
if not, determining that the pixel point belongs to the area to be processed.
Optionally, the area to be processed further comprises: a target processing area and an environment processing area; the to-be-processed region determining module 32 is further adapted to:
determining a target processing area corresponding to a processing object contained in the image according to the plurality of key points;
determining an environment processing area positioned at the periphery of the target processing area according to the target processing area;
and, the object processing rule further includes: target processing rules and environment processing rules; and the processing module 34 is further adapted to:
judging whether the pixel point belongs to the target processing area; if yes, processing the pixel point according to the target processing rule; if not, processing the pixel point according to the environment processing rule.
Optionally, the target processing area is an elliptical target processing area, and the environment processing area is an elliptical ring-shaped environment processing area located at the periphery of the elliptical target processing area;
the to-be-processed region determining module 32 is further adapted to:
determining a target circle center and a target transverse axis and/or a target longitudinal axis passing through the target circle center according to the plurality of key points, and determining an elliptical target processing area according to the target circle center and the target transverse axis and/or the target longitudinal axis passing through the target circle center;
the to-be-processed region determining module 32 is further adapted to:
determining the center of a target circle as the center of an ellipse, determining an ellipse transverse axis and/or an ellipse longitudinal axis according to a target transverse axis and/or a target longitudinal axis, and determining an ellipse to-be-processed area according to the center of the ellipse, the ellipse transverse axis and the ellipse longitudinal axis; and determining an elliptical annular environment processing area positioned at the periphery of the elliptical target processing area according to the elliptical to-be-processed area and the elliptical target processing area.
Optionally, the elliptical target processing region is a circular target processing region, and the elliptical ring-shaped environmental processing region is a ring-shaped environmental processing region; the length of the target horizontal axis is the same as that of the target vertical axis, and the length of the ellipse horizontal axis is the same as that of the ellipse vertical axis.
Optionally, the determining module 33 is further adapted to:
judging whether the distance between the pixel point and the target circle center is greater than the length of the target transverse axis; if not, determining that the pixel point belongs to the target processing area.
Optionally, the length of the horizontal axis of the ellipse is a first preset multiple of the length of the target horizontal axis, and/or the length of the longitudinal axis of the ellipse is a second preset multiple of the length of the target longitudinal axis; and the first preset multiple and/or the second preset multiple are/is not less than 1.
Optionally, the apparatus further comprises:
the mapping relation determining module 35 is adapted to determine in advance a first mapping relation between the deformation coefficient of a pixel point and the distance from the pixel point to the target circle center when that distance is not greater than the length of the target transverse axis, and to determine the target processing rule according to the first mapping relation;
the mapping relation determination module 35 is further adapted to: and predetermining a second mapping relation between the deformation coefficient of the pixel point and the distance from the pixel point to the center of the target circle when the distance from the pixel point to the center of the target circle is greater than the length of the target horizontal axis, and determining the environment processing rule according to the second mapping relation.
Optionally, the object processing rule includes: translation type processing rules, rotation type processing rules, and compression type processing rules.
Optionally, the processing object includes: a face region, a face contour, and/or facial feature parts; wherein the facial feature parts include at least one of: eyes, nose, eyebrows, mouth, and ears.
Optionally, the keypoint detection module 31 is further adapted to:
when the processing object is a face area, detecting a face center key point and a chin center key point in the face area;
the center of the target circle is determined according to the key point of the face center, and the longitudinal and transverse axes of the target are determined according to the distance between the key point of the face center and the key point of the chin center.
The specific structure and the working principle of each module may refer to the description of the corresponding step in the method embodiment, and are not described herein again.
Yet another embodiment of the present application provides a non-volatile computer storage medium storing at least one executable instruction that can perform the image processing method in any of the above method embodiments.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
As shown in fig. 4, the electronic device may include: a processor (processor)402, a Communications Interface 404, a memory 406, and a Communications bus 408.
Wherein:
the processor 402, communication interface 404, and memory 406 communicate with each other via a communication bus 408.
A communication interface 404 for communicating with network elements of other devices, such as clients or other servers.
The processor 402 is configured to execute the program 410, and may specifically perform relevant steps in the above-described embodiment of the image processing method.
In particular, program 410 may include program code comprising computer operating instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The electronic device comprises one or more processors, which can be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And a memory 406 for storing a program 410. The memory 406 may comprise a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
The program 410 may specifically be configured to cause the processor 402 to perform the following operations: detecting a plurality of key points corresponding to a processing object in an image; determining a region to be processed corresponding to a processing object in the image according to the plurality of key points; aiming at each pixel point in the image, judging whether the pixel point belongs to a region to be processed; if yes, processing the pixel point according to a preset object processing rule.
In an alternative mode, the to-be-processed region is an elliptical to-be-processed region, and the elliptical to-be-processed region is determined by the center of an ellipse, the transverse axis of the ellipse and the longitudinal axis of the ellipse.
The program 410 may be further specifically configured to cause the processor 402 to perform the following operations: for each pixel point in the image, judging whether the distance between the pixel point and the ellipse center is greater than the length of the ellipse transverse axis; if not, determining that the pixel point belongs to the region to be processed.
In an optional manner, the area to be processed further comprises: the target processing area and the environment processing area, the program 410 may be further specifically configured to cause the processor 402 to perform the following operations: the step of determining the region to be processed corresponding to the processing object included in the image according to the plurality of key points specifically includes: determining a target processing area corresponding to a processing object contained in the image according to the plurality of key points; determining an environment processing area positioned at the periphery of the target processing area according to the target processing area;
and, the object processing rule further includes: the target processing rule and the environment processing rule, the program 410 may be further specifically configured to cause the processor 402 to perform the following operations: judging whether the pixel belongs to a target processing area or not; if yes, processing the pixel point according to a target processing rule; if not, processing the pixel point according to the environment processing rule.
In an alternative manner, if the target processing area is an elliptical target processing area, and the environment processing area is an elliptical ring-shaped environment processing area located at the periphery of the elliptical target processing area, the program 410 may be specifically further configured to cause the processor 402 to perform the following operations: determining a target circle center and a target transverse axis and/or a target longitudinal axis passing through the target circle center according to the plurality of key points, and determining an elliptical target processing area according to the target circle center and the target transverse axis and/or the target longitudinal axis passing through the target circle center;
the program 410 may be further specifically configured to cause the processor 402 to perform the following operations: determining the center of a target circle as the center of an ellipse, determining an ellipse transverse axis and/or an ellipse longitudinal axis according to a target transverse axis and/or a target longitudinal axis, and determining an ellipse to-be-processed area according to the center of the ellipse, the ellipse transverse axis and the ellipse longitudinal axis; and determining an elliptical annular environment processing area positioned at the periphery of the elliptical target processing area according to the elliptical to-be-processed area and the elliptical target processing area.
In an alternative mode, the elliptical target processing region is a circular target processing region, and the elliptical ring-shaped environmental processing region is a ring-shaped environmental processing region; the length of the target horizontal axis is the same as that of the target vertical axis, and the length of the ellipse horizontal axis is the same as that of the ellipse vertical axis.
In an optional manner, the program 410 may be specifically further configured to cause the processor 402 to perform the following operations: judging whether the distance between the pixel point and the target circle center is greater than the length of the target transverse axis; if not, determining that the pixel point belongs to the target processing area.
In an alternative mode, the length of the ellipse transverse axis is a first preset multiple of the length of the target transverse axis, and/or the length of the ellipse longitudinal axis is a second preset multiple of the length of the target longitudinal axis; the first preset multiple and/or the second preset multiple is not less than 1.
In an optional manner, the program 410 may further be configured to cause the processor 402 to perform the following operations: predetermining a first mapping relation between the deformation coefficient of a pixel point and the distance from the pixel point to the target circle center for the case where that distance is not greater than the length of the target transverse axis, and determining the target processing rule according to the first mapping relation;
and predetermining a second mapping relation between the deformation coefficient of a pixel point and the distance from the pixel point to the target circle center for the case where that distance is greater than the length of the target transverse axis, and determining the environment processing rule according to the second mapping relation.
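The two mapping relations can be pictured as one piecewise function of the pixel-to-center distance. The concrete curve below (linear ramps that peak at the target boundary and fall to zero at the outer boundary, so the deformation blends smoothly into the unprocessed surroundings) is an assumed example; the patent does not prescribe specific formulas:

```python
def deformation_coefficient(d, target_radius, outer_radius, strength=0.3):
    """Piecewise deformation coefficient along the pixel-to-center direction.

    First mapping  (d <= target_radius): ramps linearly from 0 at the
        center up to `strength` at the target boundary.
    Second mapping (target_radius < d <= outer_radius): ramps linearly
        back down to 0 at the outer boundary, so pixels at the edge of
        the environment region are not displaced at all.
    """
    if d <= target_radius:
        return strength * d / target_radius
    if d <= outer_radius:
        return strength * (outer_radius - d) / (outer_radius - target_radius)
    return 0.0  # outside the to-be-processed area: no deformation

# Zero at the center, maximal at the target boundary, zero at the outer boundary.
print(deformation_coefficient(0.0, 10.0, 15.0))   # 0.0
print(deformation_coefficient(15.0, 10.0, 15.0))  # 0.0
```

Making the two mappings agree at the target boundary avoids a visible seam between the target and environment regions.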
In an alternative approach, the object processing rules include: translation type processing rules, rotation type processing rules, and compression type processing rules.
In an alternative approach, the processing object includes: a face region, a face contour, and/or facial feature parts; wherein the facial feature parts include at least one of: eyes, nose, eyebrows, mouth, and ears.
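As an assumed illustration of a translation-type rule (a rotation-type rule would instead rotate the pixel about the circle center, and a compression-type rule would scale along one axis), each pixel can be displaced along the line through the circle center by its deformation coefficient; an inward displacement slims the processed region:

```python
def translate_pixel(px, py, center, coeff):
    """Translation-type processing (illustrative): move the pixel toward
    the circle center by `coeff` times its distance to the center,
    with coeff in [0, 1) taken from the deformation-coefficient mapping."""
    cx, cy = center
    dx, dy = px - cx, py - cy
    # Scale the offset from the center; coeff == 0 leaves the pixel unchanged.
    return (cx + (1.0 - coeff) * dx, cy + (1.0 - coeff) * dy)

# A pixel 10 units right of the center, with coefficient 0.2,
# moves 2 units toward the center.
print(translate_pixel(110.0, 100.0, (100.0, 100.0), 0.2))  # (108.0, 100.0)
```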
In an optional manner, the program 410 may further be configured to cause the processor 402 to perform the following operations: when the processing object is a face region, detecting a face-center key point and a chin-center key point in the face region;
the target circle center is determined according to the face-center key point, and the target longitudinal and transverse axes are determined according to the distance between the face-center key point and the chin-center key point.
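A hedged sketch of this face-region variant (the key-point representation and names are assumptions): the face-center key point supplies the circle center, and the face-center-to-chin distance supplies the shared axis length:

```python
import math

def face_circle_from_keypoints(face_center_kp, chin_center_kp):
    """Derive the target circle for a face region: the face-center key
    point gives the circle center, and the face-center-to-chin distance
    gives the (equal) transverse/longitudinal axis length."""
    cx, cy = face_center_kp
    radius = math.hypot(chin_center_kp[0] - cx, chin_center_kp[1] - cy)
    return (cx, cy), radius

center, radius = face_circle_from_keypoints((100.0, 100.0), (100.0, 160.0))
print(center, radius)  # (100.0, 100.0) 60.0
```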
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in an image processing apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

Claims (23)

1. An image processing method comprising:
detecting a plurality of key points corresponding to a processing object in an image;
determining a region to be processed corresponding to the processing object in the image according to the plurality of key points, wherein the region shape, the region outline and the region range of the region to be processed are determined according to each key point of the processing object;
for each pixel point in the image, determining whether the pixel point belongs to the area to be processed; if yes, processing the pixel point according to a preset object processing rule; wherein the area to be processed further comprises: a target processing region and an environmental processing region; the step of determining, according to the plurality of key points, a region to be processed corresponding to the processing object included in the image specifically includes:
determining a target processing area corresponding to the processing object contained in the image according to the plurality of key points;
determining an environment processing area positioned at the periphery of the target processing area according to the target processing area;
and, the object processing rule further comprises: target processing rules and environment processing rules; and the step of processing the pixel point according to the preset object processing rule specifically comprises:
determining whether the pixel point belongs to the target processing area; if yes, processing the pixel point according to the target processing rule; if not, processing the pixel point according to the environment processing rule;
wherein the method further comprises:
predetermining a first mapping relation between the deformation coefficient of a pixel point and the distance from the pixel point to the target circle center for the case where that distance is not greater than the length of the target transverse axis, and determining the target processing rule according to the first mapping relation;
and predetermining a second mapping relation between the deformation coefficient of a pixel point and the distance from the pixel point to the target circle center for the case where that distance is greater than the length of the target transverse axis, and determining the environment processing rule according to the second mapping relation, wherein the deformation coefficient of a pixel point is its deformation coefficient along the direction from the pixel point to the target circle center.
2. The method of claim 1, wherein the area to be processed is an elliptical area to be processed, and the elliptical area to be processed is determined by an ellipse center, an ellipse transverse axis and an ellipse longitudinal axis.
3. The method according to claim 2, wherein the step of determining, for each pixel point in the image, whether the pixel point belongs to the region to be processed specifically includes:
for each pixel point in the image, determining whether the distance between the pixel point and the ellipse center is greater than the length of the ellipse transverse axis;
if not, determining that the pixel point belongs to the area to be processed.
4. The method of claim 3, wherein the target processing region is an elliptical target processing region and the environmental processing region is an elliptical ring-shaped environmental processing region located at a periphery of the elliptical target processing region;
the step of determining a target processing region corresponding to the processing object included in the image according to the plurality of key points specifically includes:
determining a target circle center and a target transverse axis and/or a target longitudinal axis passing through the target circle center according to the plurality of key points, and determining the elliptical target processing area according to the target circle center and the target transverse axis and/or the target longitudinal axis passing through the target circle center;
the step of determining an environmental processing region located at the periphery of the target processing region according to the target processing region specifically includes:
determining the target circle center as the ellipse circle center, determining the ellipse transverse axis and/or the ellipse longitudinal axis according to the target transverse axis and/or the target longitudinal axis, and determining the ellipse to-be-processed area according to the ellipse circle center, the ellipse transverse axis and the ellipse longitudinal axis; and determining an elliptical annular environment processing area positioned at the periphery of the elliptical target processing area according to the elliptical to-be-processed area and the elliptical target processing area.
5. The method of claim 4, wherein the elliptical target processing region is a circular target processing region and the elliptical ring-shaped environmental processing region is a ring-shaped environmental processing region; wherein the length of the target horizontal axis is the same as the length of the target longitudinal axis, and the length of the ellipse horizontal axis is the same as the length of the ellipse longitudinal axis.
6. The method according to claim 4 or 5, wherein the step of determining whether the pixel belongs to the target processing region specifically comprises:
determining whether the distance between the pixel point and the target circle center is greater than the length of the target transverse axis; if not, determining that the pixel point belongs to the target processing area.
7. The method of claim 6, wherein the length of the transverse axis of the ellipse is a first preset multiple of the length of the target transverse axis and/or the length of the longitudinal axis of the ellipse is a second preset multiple of the length of the target longitudinal axis; and the first preset multiple and/or the second preset multiple are/is not less than 1.
8. The method of claim 7, wherein the object processing rules comprise: translation type processing rules, rotation type processing rules, and compression type processing rules.
9. The method of claim 8, wherein the processing object comprises: a face region, a face contour, and/or facial feature parts; wherein the facial feature parts include at least one of: eyes, nose, eyebrows, mouth, and ears.
10. The method of claim 9, wherein when the processing object is a face region, the detecting a plurality of key points corresponding to the processing object in the image further comprises: detecting a face center key point and a chin center key point in the face area;
the target circle center is determined according to the face-center key point, and the target longitudinal axis is determined according to the distance between the face-center key point and the chin-center key point.
11. The method of claim 10, wherein the method is implemented by a graphics processor.
12. An image processing apparatus comprising:
the key point detection module is suitable for detecting a plurality of key points corresponding to a processing object in the image;
a to-be-processed region determining module, adapted to determine a to-be-processed region corresponding to the processing object in the image according to the plurality of key points;
the judging module is suitable for judging whether each pixel point in the image belongs to the area to be processed;
the processing module is suitable for processing the pixel point according to a preset object processing rule if the pixel point is judged to belong to the area to be processed, wherein the preset object processing rule comprises a translation type processing rule and a rotation type processing rule;
wherein the area to be processed further comprises: a target processing region and an environmental processing region; the to-be-processed region determining module is further adapted to:
determining a target processing area corresponding to the processing object contained in the image according to the plurality of key points;
determining an environment processing area positioned at the periphery of the target processing area according to the target processing area;
and, the object processing rule further comprises: target processing rules and environment processing rules; and the processing module is further adapted to:
determining whether the pixel point belongs to the target processing area; if yes, processing the pixel point according to the target processing rule; if not, processing the pixel point according to the environment processing rule;
wherein the apparatus further comprises:
the mapping relation determining module, adapted to predetermine a first mapping relation between the deformation coefficient of a pixel point and the distance from the pixel point to the target circle center for the case where that distance is not greater than the length of the target transverse axis, the target processing rule being determined according to the first mapping relation;
the mapping relation determining module is further adapted to: predetermine a second mapping relation between the deformation coefficient of a pixel point and the distance from the pixel point to the target circle center for the case where that distance is greater than the length of the target transverse axis, and determine the environment processing rule according to the second mapping relation, wherein the deformation coefficient of a pixel point is its deformation coefficient along the direction from the pixel point to the target circle center.
13. The apparatus of claim 12, wherein the area to be processed is an elliptical area to be processed, and the elliptical area to be processed is determined by an ellipse center, an ellipse transverse axis, and an ellipse longitudinal axis.
14. The apparatus of claim 13, wherein the determining module is further adapted to:
for each pixel point in the image, determining whether the distance between the pixel point and the ellipse center is greater than the length of the ellipse transverse axis;
if not, determining that the pixel point belongs to the area to be processed.
15. The apparatus of claim 14, wherein the target processing region is an elliptical target processing region and the environmental processing region is an elliptical ring-shaped environmental processing region located at a periphery of the elliptical target processing region;
the to-be-processed region determination module is further adapted to:
determining a target circle center and a target transverse axis and/or a target longitudinal axis passing through the target circle center according to the plurality of key points, and determining the elliptical target processing area according to the target circle center and the target transverse axis and/or the target longitudinal axis passing through the target circle center;
the to-be-processed region determination module is further adapted to:
determining the target circle center as the ellipse circle center, determining the ellipse transverse axis and/or the ellipse longitudinal axis according to the target transverse axis and/or the target longitudinal axis, and determining the ellipse to-be-processed area according to the ellipse circle center, the ellipse transverse axis and the ellipse longitudinal axis; and determining an elliptical annular environment processing area positioned at the periphery of the elliptical target processing area according to the elliptical to-be-processed area and the elliptical target processing area.
16. The apparatus of claim 15, wherein the elliptical target processing region is a circular target processing region and the elliptical annular environmental processing region is an annular environmental processing region; wherein the length of the target horizontal axis is the same as the length of the target longitudinal axis, and the length of the ellipse horizontal axis is the same as the length of the ellipse longitudinal axis.
17. The apparatus of claim 15 or 16, wherein the judging module is further adapted to:
determining whether the distance between the pixel point and the target circle center is greater than the length of the target transverse axis; if not, determining that the pixel point belongs to the target processing area.
18. The apparatus of claim 17, wherein the length of the transverse ellipse axis is a first preset multiple of the length of the target transverse axis and/or the length of the longitudinal ellipse axis is a second preset multiple of the length of the target longitudinal axis; and the first preset multiple and/or the second preset multiple are/is not less than 1.
19. The apparatus of claim 18, wherein the object processing rules comprise: translation type processing rules, rotation type processing rules, and compression type processing rules.
20. The apparatus of claim 19, wherein the processing object comprises: a face region, a face contour, and/or facial feature parts; wherein the facial feature parts include at least one of: eyes, nose, eyebrows, mouth, and ears.
21. The apparatus of claim 20, wherein the keypoint detection module is further adapted to:
when the processing object is a face region, detecting a face center key point and a chin center key point in the face region;
the target circle center is determined according to the face-center key point, and the target longitudinal axis is determined according to the distance between the face-center key point and the chin-center key point.
22. An electronic device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the image processing method according to any one of claims 1-11.
23. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the image processing method of any one of claims 1-11.
CN201810229288.8A 2018-03-20 2018-03-20 Image processing method and device and electronic equipment Active CN108399599B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810229288.8A CN108399599B (en) 2018-03-20 2018-03-20 Image processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN108399599A CN108399599A (en) 2018-08-14
CN108399599B true CN108399599B (en) 2021-11-26

Family

ID=63092649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810229288.8A Active CN108399599B (en) 2018-03-20 2018-03-20 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN108399599B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241921A (en) * 2018-09-17 2019-01-18 北京字节跳动网络技术有限公司 Method and apparatus for detecting face key point
US10785456B1 (en) 2019-09-25 2020-09-22 Haier Us Appliance Solutions, Inc. Methods for viewing and tracking stored items
CN111507896B (en) * 2020-04-27 2023-09-05 抖音视界有限公司 Image liquefaction processing method, device, equipment and storage medium
CN113781295B (en) * 2021-09-14 2024-02-27 网易(杭州)网络有限公司 Image processing method, device, equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104899905A (en) * 2015-06-08 2015-09-09 深圳市诺比邻科技有限公司 Face image processing method and apparatus
CN107395958A (en) * 2017-06-30 2017-11-24 北京金山安全软件有限公司 Image processing method and device, electronic equipment and storage medium
CN107578380A (en) * 2017-08-07 2018-01-12 北京金山安全软件有限公司 Image processing method and device, electronic equipment and storage medium
CN107730465A (en) * 2017-10-09 2018-02-23 武汉斗鱼网络科技有限公司 Face U.S. face method and device in a kind of image

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP2015186170A (en) * 2014-03-26 2015-10-22 ソニー株式会社 Image processing apparatus and image processing method
US9412176B2 (en) * 2014-05-06 2016-08-09 Nant Holdings Ip, Llc Image-based feature detection using edge vectors


Also Published As

Publication number Publication date
CN108399599A (en) 2018-08-14

Similar Documents

Publication Publication Date Title
CN108389155B (en) Image processing method and device and electronic equipment
CN108346130B (en) Image processing method and device and electronic equipment
CN108364254B (en) Image processing method and device and electronic equipment
CN108399599B (en) Image processing method and device and electronic equipment
CN108198141B (en) Image processing method and device for realizing face thinning special effect and computing equipment
CN108447023B (en) Image processing method and device and electronic equipment
CN110046546B (en) Adaptive sight tracking method, device and system and storage medium
CN109840883B (en) Method and device for training object recognition neural network and computing equipment
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN107959798B (en) Video data real-time processing method and device and computing equipment
CN108121982B (en) Method and device for acquiring facial single image
CN110047059B (en) Image processing method and device, electronic equipment and readable storage medium
CN112214773B (en) Image processing method and device based on privacy protection and electronic equipment
US20200380250A1 (en) Image processing method and apparatus, and computer storage medium
CN107564085B (en) Image warping processing method and device, computing equipment and computer storage medium
WO2017173578A1 (en) Image enhancement method and device
CN112102198A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111047619B (en) Face image processing method and device and readable storage medium
US11017557B2 (en) Detection method and device thereof
CN107767326B (en) Method and device for processing object transformation in image and computing equipment
CN108734712B (en) Background segmentation method and device and computer storage medium
CN109361850A (en) Image processing method, device, terminal device and storage medium
CN107977644B (en) Image data processing method and device based on image acquisition equipment and computing equipment
CN108109107B (en) Video data processing method and device and computing equipment
CN110782439B (en) Method and device for auxiliary detection of image annotation quality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant