CN113228045A - Image processing method, apparatus, removable platform, and storage medium


Info

Publication number
CN113228045A
CN113228045A (application CN202080007161.7A)
Authority
CN
China
Prior art keywords
determining
pixel point
processed
eye region
image
Legal status (assumption, not a legal conclusion): Pending
Application number
CN202080007161.7A
Other languages
Chinese (zh)
Inventor
席迎来
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN113228045A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

An image processing method includes: acquiring an image to be processed (S110); determining an eye region in the image to be processed, wherein a first length of the eye region extending in a first direction is not equal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction (S120); and performing preset processing on the eye region (S130). The method enables a specific region to be processed accurately. An apparatus, a movable platform, and a storage medium are also provided.

Description

Image processing method, apparatus, removable platform, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a movable platform, and a storage medium.
Background
With the development of Internet technology, video and image entertainment and content consumption have become increasingly popular, and the demand of photographers and content producers for image processing keeps growing.
Some scenes require processing a partial region of an image, such as an elliptical or rectangular region. For example, eye beautification is an important component of beauty functions. However, when the person in the image wears glasses or other facial accessories, objects such as a spectacle frame lie around the eyes; when the eye image is beautified, it is difficult to avoid processing these objects as well, so the beautified image is prone to abnormal or uncoordinated content and the user experience suffers.
Disclosure of Invention
Based on this, the present application provides an image processing method, an image processing device, a movable platform, and a storage medium, aiming to solve the technical problem that, when a partial region of an image is processed, content outside the region is affected and the processed image is prone to abnormal or uncoordinated content.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed, and determining key points of a face in the image to be processed;
determining an eye region in the image to be processed according to the key points of the face, wherein a first length of the eye region extending in a first direction is not equal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the eye region.
In a second aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed;
determining a region to be processed in the image to be processed, wherein a first length of the region to be processed extending in a first direction is not equal to a second length of the region to be processed extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the region to be processed.
In a third aspect, an embodiment of the present application provides an image processing apparatus, including one or more processors, working individually or collectively, to perform the steps of:
acquiring an image to be processed, and determining key points of a face in the image to be processed;
determining an eye region in the image to be processed according to the key points of the face, wherein a first length of the eye region extending in a first direction is not equal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the eye region.
In a fourth aspect, embodiments of the present application provide an image processing apparatus, including one or more processors, working individually or collectively, to perform the steps of:
acquiring an image to be processed;
determining a region to be processed in the image to be processed, wherein a first length of the region to be processed extending in a first direction is not equal to a second length of the region to be processed extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the region to be processed.
In a fifth aspect, an embodiment of the present application provides a movable platform, which can carry a shooting device, where the shooting device is used to obtain an image;
further comprising one or more processors, working individually or collectively, to perform the steps of:
acquiring an image to be processed, and determining key points of a face in the image to be processed;
determining an eye region in the image to be processed according to the key points of the face, wherein a first length of the eye region extending in a first direction is not equal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the eye region.
In a sixth aspect, embodiments of the present application provide a movable platform, which can carry a shooting device, where the shooting device is used to acquire an image;
further comprising one or more processors, working individually or collectively, to perform the steps of:
acquiring an image to be processed;
determining a region to be processed in the image to be processed, wherein a first length of the region to be processed extending in a first direction is not equal to a second length of the region to be processed extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the region to be processed.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, causes the processor to implement the above-mentioned method.
Embodiments of the present application provide an image processing method, an image processing device, a movable platform, and a storage medium. By determining, in the image, a region to be processed whose length and width are unequal, the region to be processed is delimited more accurately, and attachments such as glasses in the width direction of the region can be excluded; this prevents content outside the region from being affected when the region undergoes preset processing, so the processed image is less prone to abnormal or uncoordinated content.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure of the embodiments of the application.
Drawings
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an application scenario of the image processing method;
FIG. 3 is a schematic diagram illustrating the effect of the current image processing method on processing an eye region;
FIG. 4 is a schematic diagram of key points of a face in a face region;
FIG. 5 is a schematic diagram of determining eye regions from facial keypoints;
FIG. 6 is a schematic diagram illustrating the determination of the corresponding relationship between the second pixel point and the first pixel point;
FIG. 7 is a schematic illustration of determining a distortion intensity coefficient corresponding to a second pixel point;
fig. 8 is a schematic diagram illustrating an effect of processing an eye region by an image processing method according to an embodiment of the present application;
fig. 9 is a schematic flowchart of an image processing method according to another embodiment of the present application;
fig. 10 is a schematic block diagram of an image processing apparatus provided in an embodiment of the present application;
fig. 11 is a schematic block diagram of a movable platform provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. The image processing method can be applied to an image processing device for processing images and the like.
In some embodiments, the image processing apparatus comprises at least one of: camera, cell-phone, computer, server, movable platform.
The movable platform may include at least one of an unmanned aerial vehicle, a handheld gimbal, a gimbal vehicle, and the like. Further, the unmanned aerial vehicle may be a rotary-wing drone, such as a quad-rotor, hexa-rotor, or octa-rotor drone, or a fixed-wing drone. The computer may include at least one of a tablet computer, a notebook computer, a desktop computer, and the like.
In some embodiments, as shown in fig. 2, the movable platform 10 carries a shooting device 11, such as a camera. The movable platform 10 is also capable of communicative coupling with the terminal device 20. The terminal device 20 includes, for example, at least one of a mobile phone, a computer, a remote controller, and the like.
Illustratively, the shooting device 11 may capture an image and process the captured image according to the image processing method, and may transmit the processed image to the terminal device 20 through the movable platform 10, so that the terminal device 20 stores and/or displays the processed image.
Illustratively, the shooting device 11 may capture an image and transmit the captured image to the movable platform 10. The movable platform 10 processes the image according to the image processing method and transmits the processed image to the terminal device 20, so that the terminal device 20 stores and/or displays it.
Illustratively, the movable platform 10 may transmit the image captured by the shooting device 11 to the terminal device 20, and the terminal device 20 processes the image according to the image processing method. The terminal device 20 may store and/or display the processed image.
In other embodiments, the shooting device may capture an image and process the captured image according to the image processing method; the shooting device may also send the processed image to the terminal device so that the terminal device stores and/or displays it. Alternatively, the shooting device may capture an image and send the captured image to the terminal device, and the terminal device processes the image according to the image processing method and may also store and/or display the processed image.
Fig. 3 is a schematic diagram of a current image processing method applied to a region to be processed, such as an eye region; specifically, a distortion is applied to the eye region containing the eyes. In fig. 3, the left image is the image to be processed and the right image is the result of processing the eye region. When the face wears glasses, the spectacle frame lies around the eyes and is also deformed during the distortion, producing abnormal or uncoordinated content and a poor user experience.
In view of this finding, the inventors of the present application made improvements to solve the technical problem that content outside a region is affected when a partial region of an image is processed, so the processed image is prone to abnormal or uncoordinated content.
As shown in fig. 1, the image processing method according to the embodiment of the present application includes steps S110 to S130.
S110, acquiring an image to be processed, and determining key points of the face in the image to be processed.
Illustratively, the image to be processed includes a human face.
The image to be processed may be, for example, an image currently taken, or an image acquired from a local storage, or an image read from another device, or the like.
In some embodiments, the determining the key points of the face in the image to be processed comprises: determining a face region in the image to be processed; and determining face key points of the face region.
Illustratively, the face region in the image to be processed is determined by a face detection algorithm.
For example, if no face region is detected in the image to be processed, or the detected face is not a frontal face image, the user may be prompted that no face is detected in the current image to be processed, or prompted to shoot or import an image containing a frontal face.
Illustratively, a face keypoint (landmark) of the face region may be determined, as shown in fig. 4. For example, the positions of the face key points of the face are determined by a face key point detection method, for example, the positions of 68 face key points are determined.
Specifically, facial feature positions and facial contours can be determined from facial keypoints.
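As an illustrative sketch (not part of the claimed method), one common way to obtain such key points is a pretrained 68-point landmark detector, for example dlib's; the model file name below is an assumption and must be obtained separately.

```python
import cv2
import dlib

# Assumed setup: dlib's frontal face detector and its 68-point landmark predictor.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_face_keypoints(image_bgr):
    """Return 68 (x, y) face key points for the first detected face, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None  # caller may prompt the user that no face was detected
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```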
S120, determining an eye region in the image to be processed according to the face key point, wherein a first length of the eye region extending in a first direction is not equal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction.
For example, of the first direction and the second direction, the direction with the longer length may be referred to as the length direction of the eye region, and the direction with the shorter length as the width direction of the eye region.
Illustratively, eye regions with unequal lengths and widths, such as rectangles and ellipses, are determined.
In some embodiments, the determining the eye region in the image to be processed according to the face key point includes: determining key points corresponding to the eye region from the face key points; and determining the eye region according to the key points corresponding to the eye region.
As shown in fig. 4, the key points corresponding to the eye region include at least one of the facial key points numbered 37-42, 43-48.
Illustratively, the key points corresponding to the eye regions include eye corner key points of the left eye and/or eye corner key points of the right eye.
As shown in fig. 4, the corner key points of the left eye include face key points numbered 37 and 40; the corner key points for the right eye include the face key points numbered 43 and 46.
Illustratively, the determining the eye region in the image to be processed according to the face key point comprises: and determining the eye region of the left eye according to the eye corner key point of the left eye and/or determining the eye region of the right eye according to the eye corner key point of the right eye.
As shown in fig. 5, the eye region of the left eye is determined according to the corner key point of the left eye, and the eye region is an elliptical eye region.
Illustratively, as shown in fig. 5, a first length a of the eye region extending in the first direction is not equal to a second length b of the eye region extending in the second direction.
Illustratively, the first direction is determined by a straight line where at least two canthus key points are located in the image to be processed.
For example, as shown in fig. 5, the first direction, such as the direction from a1 to a2, is determined according to at least two corner key points of the left eye. Illustratively, the first direction of the left eye region is determined from the corner key point 40 of the left eye near the facial midline and the corner key point 37 of the left eye near the facial contour.
In some embodiments, the determining the eye region in the image to be processed according to the face key point in step S120 includes: determining a center of the eye region and a first length of the eye region extending in the first direction according to the face key point; determining the eye region from the center and the first length.
Illustratively, as shown in FIG. 5, the center o of the eye region may be determined based on the intermediate positions of corner key points 40 near the face's midline and corner key points 37 near the face's contour. The center o of the eye region is determined, for example, from the average of the locations of corner key points 40 and corner key points 37.
Illustratively, as shown in fig. 5, a first length a of the eye region extending in the first direction may be determined from a line segment between an eye corner key point 40 near the face midline and an eye corner key point 37 near the face contour. For example, the first length a is determined as the distance between the corner key point 40 and the corner key point 37, or the first length a is obtained by multiplying the distance between the corner key point 40 and the corner key point 37 by a factor greater than 1 or less than 1. As shown in fig. 5, the distance between the corner key point 40 and the corner key point 37 is w, and 2 × w may be determined as the first length a.
Specifically, an eye region with unequal length and width can be determined according to the center o of the eye region and the first length a of the eye region extending in the first direction.
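A minimal sketch of this step, assuming the 1-based numbering of fig. 4 (key points 37 and 40 become list indices 36 and 39) and the example factor of 2 and half-length second length given above:

```python
import math

def eye_region_from_corners(keypoints, outer_idx=36, inner_idx=39, factor=2.0):
    """Center o = midpoint of the two eye-corner key points;
    first length a = factor * w, where w is the corner distance;
    second length b chosen here as half of a (per the example above)."""
    (x1, y1), (x2, y2) = keypoints[outer_idx], keypoints[inner_idx]
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    w = math.hypot(x2 - x1, y2 - y1)
    a = factor * w
    b = 0.5 * a
    return center, a, b
```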
Illustratively, the determining the eye region according to the center of the eye region and the first length includes: determining a second length of the ocular region; determining the eye region according to the center of the eye region, the first length and the second length.
Specifically, the first direction is different from the second direction. For example, the first direction is substantially perpendicular to the second direction, where substantially perpendicular includes exactly perpendicular as well as deviations of less than 5 degrees from perpendicular.
Illustratively, the second length of the eye region may be determined from the first length. For example, the second length is less than the first length.
Illustratively, the second length is 0.3 to 0.8 times the first length, for example the second length is 0.5 times the first length.
Illustratively, the second length of the eye region may be determined from corner key points near the face midline and corner key points near the face contour. For example, the second length b of the eye region is determined based on the distance w between the corner key points 40 near the face midline and the corner key points 37 near the face contour. Illustratively, the second length b is 0.8 to 1.2 times the distance w, e.g., the second length b is equal to the distance w between the corner key point 40 and the corner key point 37.
In some embodiments, as shown in fig. 5, the eye region comprises an elliptical eye region, the first direction comprises a major axis direction of the ellipse, and the second direction comprises a minor axis direction of the ellipse. As shown in fig. 5, the long axis direction may be determined according to the corner key points, and the short axis direction perpendicular to the long axis direction may be determined according to the long axis direction. Illustratively, the length of the major axis of the elliptical eye region is a and the length of the minor axis is b.
Illustratively, the determining the eye region in the image to be processed according to the face key point comprises: determining the center, the length of the long axis, the length of the short axis and the end point of the long axis of the eye region according to the corner key points in the face key points; determining the focal length of the ellipse according to the length of the long axis and the length of the short axis; determining the focus of the ellipse according to the center, the focal length, the length of the long axis and the end point of the long axis; and determining an eye region in the image to be processed according to the focal point and the long axis length.
As shown in fig. 5, the center o of the eye region may be determined based on the middle position of the corner key points 40 near the face midline and the corner key points 37 near the face contour, and the major axis length a may be determined based on the distance w between the corner key points 40 near the face midline and the corner key points 37 near the face contour. Illustratively, the minor axis length b may also be determined from the major axis length a, e.g., the minor axis length b is half the major axis length a.
Illustratively, the long axis endpoint a1 and/or the long axis endpoint a2 may be determined from the center o and the long axis length a of the ocular region.
The focal length c of the ellipse can be determined according to the major axis length a and the minor axis length b of the ellipse (c = √(a² − b²)), and the positions of the two focal points F1 and F2 of the ellipse can be determined according to the center o, the focal length c, the major axis length a and the major axis end points.
Specifically, the eye region in the image to be processed can be determined according to the two focal points F1, F2 and the long axis length a. Illustratively, if the sum of the distances from a pixel point in the image to be processed to the two focal points F1 and F2 is less than or equal to 2 × a, the pixel point is located inside the eye region. Specifically, if the sum of the distances from the pixel point to the two focal points F1 and F2 is equal to 2 × a, the pixel point is located at the boundary of the eye region.
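This membership test follows directly from the two-focus definition of an ellipse. A sketch, taking a and b as the semi-axis lengths implied by the 2 × a criterion and axis_angle as an assumed tilt of the long axis:

```python
import math

def ellipse_foci(center, a, b, axis_angle):
    """The foci lie on the long axis at distance c = sqrt(a^2 - b^2) from o."""
    c = math.sqrt(a * a - b * b)
    dx, dy = c * math.cos(axis_angle), c * math.sin(axis_angle)
    return ((center[0] - dx, center[1] - dy),
            (center[0] + dx, center[1] + dy))

def inside_eye_region(point, f1, f2, a):
    """Inside (or on the boundary) iff distances to the foci sum to <= 2a."""
    d1 = math.hypot(point[0] - f1[0], point[1] - f1[1])
    d2 = math.hypot(point[0] - f2[0], point[1] - f2[1])
    return d1 + d2 <= 2.0 * a
```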
It is understood that the face key points determined in step S110 may have different distribution and positions from those shown in fig. 4, and the eye region in the image to be processed may be determined according to the actually determined face key points.
And S130, performing preset processing on the eye region.
Illustratively, the preset processing is performed on the eye regions of the left eye and the right eye respectively.
In some embodiments, the preset process comprises: at least one of deformation processing, contrast adjustment, brightness adjustment, saturation adjustment, color mapping, clipping processing, and filling processing.
In some embodiments, the performing the preset process on the eye region in step S130 includes: and carrying out deformation processing on the original image in the eye region to obtain a deformed image of the eye region.
Illustratively, for each pixel point in the image to be processed, it is determined whether the sum of the distances from the pixel point to the two focal points F1 and F2 is less than or equal to 2 × a, and if less than or equal to 2 × a, the pixel point is in the eye region.
Illustratively, the original image in the eye region is subjected to a deformation process so that the eyes in the resulting deformed image appear larger or smaller.
Determining an eye region whose length and width are unequal makes the eye region more accurate and can exclude accessories such as glasses in the width direction of the region, so that content outside the region is not affected when the eye region undergoes preset processing and the processed image is less prone to abnormal or uncoordinated content.
For example, the performing deformation processing on the original image in the eye region to obtain a deformed image of the eye region includes: determining the corresponding relationship of the positions of the second pixel points of the deformed image and the first pixel points of the original image; and determining a second pixel value of the second pixel point according to the first pixel value of the first pixel point based on the position corresponding relation.
Specifically, the deformed image is determined according to the correspondence between the positions of pixels in the eye region before and after the deformation processing and the pixel values of those pixels.
Illustratively, the determining a second pixel value of the second pixel point according to the first pixel value of the first pixel point based on the position correspondence includes: and determining a second pixel value of the second pixel point according to the first pixel value of the first pixel point in the original image and/or the first pixel values of other pixel points adjacent to the first pixel point in the original image.
For example, the red, green and blue components of each pixel of the original image in the eye region are stored in a buffer; a blank second image with the same boundary as the original image is created; and a second pixel value of the second pixel point corresponding to the first pixel point is determined according to the first pixel value of the first pixel point in the original image and/or the first pixel values of other pixel points adjacent to the first pixel point, and filled in at the position of the second pixel point in the second image.
For example, the red, green and blue components of the second pixel point corresponding to the first pixel point can be determined by linear interpolation of the red, green and blue components of the four pixel points above, below, to the left and to the right of the first pixel point; the interpolated components are combined into a new pixel, which is filled in at the position of the second pixel point in the second image.
For example, the position of a first pixel point corresponding to a second pixel point in a second image may be determined, and the second pixel value of the second pixel point may be determined according to the first pixel value of the first pixel point and/or the first pixel values of other pixel points adjacent to the first pixel point in the original image.
For example, the position in the second image of the second pixel point corresponding to a first pixel point in the first image may be determined, and the second pixel value of the second pixel point may be determined according to the first pixel value of the first pixel point and/or the first pixel values of other pixel points adjacent to the first pixel point in the original image.
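A sketch of the neighbor interpolation described above: for each second (output) pixel, the corresponding first (source) position is sampled by linearly interpolating its four surrounding pixels. NumPy is assumed for the image buffer; this is one way to realize the fill, not the only one.

```python
import numpy as np

def bilinear_sample(src, x, y):
    """Sample src (H x W x 3) at fractional (x, y) by linearly
    interpolating the four surrounding pixels; clamped at the border."""
    h, w = src.shape[:2]
    x0 = min(max(int(np.floor(x)), 0), w - 1)
    y0 = min(max(int(np.floor(y)), 0), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx = min(max(x - x0, 0.0), 1.0)
    fy = min(max(y - y0, 0.0), 1.0)
    top = (1 - fx) * src[y0, x0].astype(float) + fx * src[y0, x1]
    bot = (1 - fx) * src[y1, x0].astype(float) + fx * src[y1, x1]
    return (1 - fy) * top + fy * bot
```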
In some embodiments, the determining the position correspondence between the second pixel point and the first pixel point includes: and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the eye region.
For example, as shown in fig. 6, if the distance between the second pixel point P in the second image and the center o of the eye region is r and the distortion intensity coefficient is s, the position of the first pixel point P' corresponding to the second pixel point P can be determined according to s and r.
In some embodiments, the determining, according to the distortion intensity coefficient and the distance between the second pixel point and the center of the eye region, a first pixel point corresponding to the second pixel point includes: determining a distance between the second pixel point and a center of the eye region; determining a first ratio between the distance and a maximum radius of the eye region; and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the first ratio.
For example, when the first length a of the eye region extending in the first direction is greater than the second length b extending in the second direction, the maximum radius of the eye region may be determined according to the first length a; for example, the maximum radius rmax of the eye region is equal to the first length a.
Illustratively, the position (d′x, d′y) of the first pixel point P′ corresponding to the second pixel point P(dx, dy) can be determined according to the following equations:
d′x = dx × (1 − s × (1 − r/rmax))
d′y = dy × (1 − s × (1 − r/rmax))
wherein (dx, dy) are the coordinates of the second pixel point P relative to the center o of the second image, r is the distance between the second pixel point P and the center o, (d′x, d′y) are the coordinates of the first pixel point relative to the center o of the first image, s is the distortion intensity coefficient, and rmax is the maximum radius of the eye region, for example the first length of the eye region.
For example, when s is in the range [−1, 0), the effect of the deformation is shrinking, and the eyes in the second image appear smaller; when s is in the range (0, 1), as shown in fig. 6, the effect of the deformation is magnification, and the eyes in the second image appear larger.
Illustratively, the closer a pixel point is to the center o of the eye region, the greater its deformation; the farther a pixel point is from the center o, the smaller the deformation. For example, at the end points of the long axis of the elliptical eye region, pixel points are not deformed.
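A sketch of this backward mapping, as one consistent reading of the equations above (identity at r = rmax, maximum change of the scaling factor at the center); the linear falloff is an assumption:

```python
def source_position(dx, dy, s, r_max):
    """Map output coords (dx, dy), relative to center o, to source coords.
    s in (0, 1) magnifies, s in [-1, 0) shrinks; identity at r >= r_max."""
    r = (dx * dx + dy * dy) ** 0.5
    if r >= r_max:
        return dx, dy          # outside the region: unchanged
    scale = 1.0 - s * (1.0 - r / r_max)
    return dx * scale, dy * scale
```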
In other embodiments, the determining, according to the distortion intensity coefficient and the distance between the second pixel point and the center of the eye region, a first pixel point corresponding to the second pixel point includes: determining a distance between the second pixel point and a center of the eye region; determining the radius of the eye region corresponding to the second pixel point; determining a second ratio between the distance and the radius of the eye region corresponding to the second pixel point; and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the second ratio.
Specifically, as shown in fig. 7, the radius R of the eye region corresponding to the pixel point represents a straight line distance extending from the center o of the eye region to the boundary of the eye region through the pixel point.
Illustratively, the determining the radius of the eye region corresponding to the second pixel point includes: and determining the length of the center of the eye region extending to the edge of the eye region through the second pixel point as the radius of the eye region corresponding to the second pixel point.
As shown in fig. 7, the eye region radius R of the pixel point on the short axis of the elliptical eye region is equal to the short axis length b, the eye region radius R of the pixel point on the long axis is equal to the long axis length a, and the eye region radius R is not less than b and not greater than a.
Illustratively, the determining the radius of the eye region corresponding to the second pixel point includes: determining the extension direction of the radius of the eye region according to the second pixel points and the center of the eye region; and determining an included angle between the extending direction and the first direction, and determining the radius of the eye region corresponding to the second pixel point according to the included angle.
As shown in fig. 7, the included angle between the extending direction and the first direction, for example the long axis direction, is γ, and the radius R of the eye region corresponding to the second pixel point can be determined according to γ, the long axis length a and the short axis length b. Illustratively, it may be determined according to the following equation:
R = (a × b) / √((b × cos γ)² + (a × sin γ)²)
in some embodiments, an angle between the first direction and a lateral direction of the image to be processed is 0 to 45 degrees. As shown in fig. 5 and 7, since the face of a person in the image to be processed is inclined, the angle between the longitudinal direction of the eye region and the lateral direction of the image to be processed is β. By determining the angle θ between the extension direction of the second pixel point and the center of the eye region and the lateral direction, the angle γ between the extension direction and the first direction can be determined.
Illustratively, the position (d′x, d′y) of the first pixel point P′ corresponding to the second pixel point P(dx, dy) can be determined according to the following equations:
d′x = dx × (1 − s × (1 − r/R))
d′y = dy × (1 − s × (1 − r/R))
wherein R is the radius of the eye region corresponding to the second pixel point.
for example, when s is in the range of [ -1,0), the effect of the deformation is to shrink, as the eye of the second image appears smaller; when s is in the range of (0, 1), as shown in fig. 6, the effect of the distortion is magnification, as the eye of the second image appears larger.
Illustratively, when the radius R of the eye region of a pixel point is constant, the closer the pixel point is to the center o of the eye region, the greater the degree of deformation thereof, that is, the greater the deformation; the closer the pixel point is to the boundary of the eye region, the smaller the deformation degree is, that is, the smaller the deformation is, for example, at the boundary of the elliptical eye region, the pixel point is not deformed, so that in the image processed by the eye region, the smooth transition between the second image and other parts of the image appears more natural.
Illustratively, when the distance to the center o of the eye region is constant, the radius R of the eye region of the pixel point closer to the length direction of the eye region is larger, the deformation is larger, and the radius R of the eye region of the pixel point closer to the width direction of the eye region is smaller, and the deformation is smaller; for example, the deformation of the pixel points in the major axis direction of the elliptical eye region is large, and the deformation of the pixel points in the minor axis direction is small. For example, the large eye effect deforms more along the direction of the eye corner connecting line, and deforms less along the direction perpendicular to the eye corner connecting line.
In some embodiments, the determining, according to the distortion intensity coefficient and the distance between the second pixel point and the center of the eye region, a first pixel point corresponding to the second pixel point includes: determining a distortion intensity coefficient corresponding to the second pixel point; and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance between the second pixel point and the center of the eye region.
For example, the determining the distortion intensity coefficient corresponding to the second pixel point includes: determining the radius of the eye region corresponding to the second pixel point; and determining the distortion intensity coefficient corresponding to the second pixel point according to the radius of the eye region corresponding to the second pixel point and a preset intensity coefficient.
Illustratively, the distortion intensity coefficient corresponding to the second pixel point is positively correlated with the radius of the eye region corresponding to the second pixel point. When the distance from the center o of the eye region is constant, a pixel point closer to the length direction of the eye region has a larger eye region radius R, a larger distortion intensity coefficient, and a larger deformation; a pixel point closer to the width direction has a smaller radius R, a smaller distortion intensity coefficient, and a smaller deformation.
For example, among the distortion intensity coefficients corresponding to the second pixel points of the deformed image, the distortion intensity coefficient corresponding to a second pixel point in the first direction is the largest, and that corresponding to a second pixel point in the second direction is the smallest.
For example, the deformation of the pixel points in the major axis direction of the elliptical eye region is large, and the deformation of the pixel points in the minor axis direction is small. For example, the large-eye effect has larger deformation along the direction of the canthus connecting line and smaller deformation along the direction perpendicular to the canthus connecting line, thereby further avoiding or reducing the distortion of accessories such as the spectacle frame and the like.
For example, the determining the distortion intensity coefficient corresponding to the second pixel point according to the radius of the eye region corresponding to the second pixel point and a preset intensity coefficient includes: determining a third ratio between the radius of the eye region corresponding to the second pixel point and the maximum radius of the eye region; and determining a distortion intensity coefficient corresponding to the second pixel point according to the third ratio and a preset intensity coefficient.
As shown in fig. 7, the third ratio between the radius R of the eye region corresponding to the second pixel point and the maximum radius rmax of the eye region, for example the major axis length a of the elliptical eye region, is R ÷ a, and the distortion intensity coefficient s′ corresponding to the second pixel point can be determined according to the product of the third ratio R ÷ a and the preset intensity coefficient s.
For example, the determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance includes: determining a fourth ratio between the distance and the radius of the eye region corresponding to the second pixel point; and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the fourth ratio.
Illustratively, the position (d′x, d′y) of the first pixel point P′ corresponding to the second pixel point P(dx, dy) can be determined according to the following equations:
d′x = dx × (1 − s′ × (1 − r/R))
d′y = dy × (1 − s′ × (1 − r/R))
wherein r is the distance between the second pixel point P and the center o, R is the radius of the eye region corresponding to the second pixel point, and s′ is the distortion intensity coefficient corresponding to the second pixel point. Illustratively, the closer a pixel point is to the center o of the eye region, the greater its deformation; the farther a pixel point is from the center o, the smaller the deformation. For example, at the end points of the long axis of the elliptical eye region, pixel points are not deformed.
Illustratively, when the distance to the center o of the eye region is constant, a pixel point closer to the length direction of the eye region has a larger eye region radius R, a larger distortion intensity coefficient, and a larger deformation; a pixel point closer to the width direction has a smaller radius R, a smaller distortion intensity coefficient, and a smaller deformation. For example, pixel points in the major axis direction of the elliptical eye region deform more, and pixel points in the minor axis direction deform less; the large-eye effect deforms more along the direction of the eye-corner line and less perpendicular to it. Moreover, when the eye region radius R of a pixel point is constant, the closer the pixel point is to the center o of the eye region, the larger the deformation; the closer it is to the boundary of the eye region, the smaller the deformation. For example, at the boundary of the elliptical eye region, pixel points are not deformed.
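Putting the pieces together, a hedged end-to-end sketch of this variant (s′ = s × R ÷ a), reusing the bilinear_sample and ellipse_radius helpers sketched earlier; the loop bounds are illustrative choices:

```python
def deform_eye_region(src, center, a, b, beta, s):
    """Backward-map every output pixel inside the elliptical eye region;
    pixels on or outside the boundary are left untouched."""
    out = src.copy()
    cx, cy = center
    h, w = src.shape[:2]
    for py in range(max(0, int(cy - a)), min(h, int(cy + a) + 1)):
        for px in range(max(0, int(cx - a)), min(w, int(cx + a) + 1)):
            R = ellipse_radius(a, b, beta, cx, cy, px, py)
            dx, dy = px - cx, py - cy
            r = (dx * dx + dy * dy) ** 0.5
            if r >= R:
                continue                    # outside the ellipse: unchanged
            s_prime = s * R / a             # intensity grows with R
            scale = 1.0 - s_prime * (1.0 - r / R)
            out[py, px] = bilinear_sample(src, cx + dx * scale, cy + dy * scale)
    return out
```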
By processing pixel points to different degrees according to their positions, the processed image looks more natural and smoother; the processing intensity at the boundary of the eye region with unequal length and width can be weakest, for example leaving the image at the boundary undeformed, so that the processed eye region image transitions smoothly into the surrounding image.
Fig. 8 is a schematic diagram of the image processing method according to the embodiment of the present application applied to a region to be processed, such as an eye region; specifically, a distortion is applied to the eye region containing the eyes. In fig. 8, the left image is the image to be processed and the right image is the result of processing the eye region. Content outside the eye region, such as the glasses, is prevented from being affected by the preset processing, and the processed image is less prone to abnormal or uncoordinated content.
According to the image processing method provided by the embodiment of the present application, determining an eye region whose length and width are unequal makes the eye region more accurate and can exclude accessories such as glasses in the width direction of the region, so that content outside the region is not affected when the eye region undergoes preset processing and the processed image is less prone to abnormal or uncoordinated content. Even if the spectacle frame is very close to the eye socket, its deformation can be reduced to a very small, inconspicuous degree, improving the user experience.
It can be understood that the image processing method of the embodiment of the present application may be applied to any image containing a region to be processed whose length and width are unequal; for example, the whole face, arms, legs, or the whole human body may serve as the region to be processed. Of course, the region to be processed is not limited to the human body or its parts and may be another object.
Referring to fig. 9 in conjunction with the foregoing embodiments, fig. 9 is a schematic flowchart of an image processing method according to another embodiment of the present application.
The image processing method can be applied to image processing equipment and is used for processing images and the like.
In some embodiments, the image processing apparatus comprises at least one of: camera, cell-phone, computer, server, movable platform.
The movable platform may include at least one of an unmanned aerial vehicle, a handheld gimbal, a gimbal vehicle, and the like. Further, the unmanned aerial vehicle may be a rotary-wing drone, such as a quad-rotor, hexa-rotor, or octa-rotor drone, or a fixed-wing drone. The computer may include at least one of a tablet computer, a notebook computer, a desktop computer, and the like.
In some embodiments, the movable platform carries a shooting device, such as a camera. The movable platform can also be communicatively connected to an image processing device, such as a mobile phone, a computer, or a remote controller. The image processing device can acquire the image captured by the shooting device mounted on the movable platform and process the image.
As shown in fig. 9, the image processing method of the present embodiment includes steps S210 to S240.
And S210, acquiring an image to be processed.
S220, determining a region to be processed in the image to be processed, wherein a first length of the region to be processed extending in a first direction is not equal to a second length of the region to be processed extending in a second direction, and the first direction is different from the second direction.
In some embodiments, the determining a region to be processed in the image to be processed includes: displaying the image to be processed, and determining the region to be processed according to a selection operation of the user on the image to be processed. For example, the region where an arm is located is determined as the region to be processed through the user's selection operation.
In some embodiments, the determining a region to be processed in the image to be processed includes: determining a target object in the image to be processed, and determining the region to be processed according to the region where the target object is located.
For example, a target object such as a human body, eyes, a face, or legs is detected in the image to be processed, and the region to be processed is then determined according to the region where the target object is located; for example, an elliptical or rectangular region containing a face, arms, legs, or the whole human body is determined as the region to be processed. Of course, the target object is not limited to the human body or its parts and may be another object, as illustrated in the sketch below.
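As an illustrative sketch (the detector itself is not prescribed here), a detected bounding box can be turned into an elliptical region to be processed whose longer side defines the first direction:

```python
import math

def region_from_bbox(x, y, w, h):
    """Inscribe an elliptical region to be processed in a bounding box."""
    center = (x + w / 2.0, y + h / 2.0)
    a, b = max(w, h) / 2.0, min(w, h) / 2.0   # first and second lengths
    beta = 0.0 if w >= h else math.pi / 2     # tilt of the first direction
    return center, a, b, beta
```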
Illustratively, the first direction is substantially perpendicular to the second direction.
Illustratively, the area to be processed includes an elliptical area to be processed, the first direction includes a major axis direction of the elliptical shape, and the second direction includes a minor axis direction of the elliptical shape.
And S230, performing preset processing on the region to be processed.
In some embodiments, the performing the preset treatment on the region to be treated includes: and carrying out deformation processing on the original image in the region to be processed to obtain a deformation image of the region to be processed.
For example, the performing deformation processing on the original image in the region to be processed to obtain a deformed image of the region to be processed includes: determining the corresponding relationship of the positions of the second pixel points of the deformed image and the first pixel points of the original image; and determining a second pixel value of the second pixel point according to the first pixel value of the first pixel point based on the position corresponding relation.
For example, the determining the position corresponding relationship between the second pixel point and the first pixel point includes: and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the region to be processed.
Exemplarily, the determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the region to be processed includes: determining the distance between the second pixel point and the center of the region to be processed; determining a first ratio between the distance and the maximum radius of the region to be processed; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the first ratio.
Exemplarily, the determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the region to be processed includes: determining the distance between the second pixel point and the center of the region to be processed; determining the radius of the region to be processed corresponding to the second pixel point; determining a second ratio between the distance and the radius of the region to be processed corresponding to the second pixel point; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the second ratio.
Exemplarily, the determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the region to be processed includes: determining a distortion intensity coefficient corresponding to the second pixel point; and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance.
For example, the determining the distortion intensity coefficient corresponding to the second pixel point includes: determining the radius of the region to be processed corresponding to the second pixel point; and determining the distortion intensity coefficient corresponding to the second pixel point according to the radius of the region to be processed corresponding to the second pixel point and a preset intensity coefficient.
For example, the determining the radius of the region to be processed corresponding to the second pixel point includes: determining the length extending from the center of the region to be processed through the second pixel point to the edge of the region to be processed as the radius of the region to be processed corresponding to the second pixel point.
For example, the determining the radius of the region to be processed corresponding to the second pixel point includes: determining the extending direction of the radius of the region to be processed according to the second pixel point and the center of the region to be processed; and determining an included angle between the extending direction and the first direction, and determining the radius of the region to be processed corresponding to the second pixel point according to the included angle.
For example, the determining the distortion intensity coefficient corresponding to the second pixel point according to the radius of the region to be processed corresponding to the second pixel point and the preset intensity coefficient includes: determining a third ratio between the radius of the region to be processed corresponding to the second pixel point and the maximum radius of the region to be processed; and determining the distortion intensity coefficient corresponding to the second pixel point according to the third ratio and the preset intensity coefficient.
In some embodiments, the distortion intensity coefficient corresponding to the second pixel point is positively correlated with the radius of the region to be processed corresponding to the second pixel point.
For example, among the distortion intensity coefficients corresponding to the second pixel points of the deformed image, the distortion intensity coefficient corresponding to a second pixel point in the first direction is the largest, and that corresponding to a second pixel point in the second direction is the smallest.
For example, the determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance includes: determining a fourth ratio between the distance and the radius of the region to be processed corresponding to the second pixel point; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the fourth ratio.
In some embodiments, the determining, based on the position correspondence, a second pixel value of the second pixel point according to the first pixel value of the first pixel point includes: and determining a second pixel value of the second pixel point according to the first pixel value of the first pixel point in the original image and/or the first pixel values of other pixel points adjacent to the first pixel point in the original image.
According to the image processing method provided by the embodiment of the present application, determining a region to be processed whose length and width are unequal makes the processing of the region more accurate and can exclude attachments such as glasses in the width direction of the region, so that content outside the region is not affected when the region undergoes preset processing and the processed image is less prone to abnormal or uncoordinated content.
Referring to fig. 10 in conjunction with the above embodiments, fig. 10 is a schematic block diagram of an image processing apparatus 600 according to an embodiment of the present application. The image processing apparatus 600 comprises one or more processors 601, working individually or collectively, for performing the steps of the image processing method described previously.
The image processing apparatus 600 may further include a memory 602.
Illustratively, the processor 601 and the memory 602 are coupled by a bus 603, such as an I2C (Inter-Integrated Circuit) bus.
Specifically, the processor 601 may be a Microcontroller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the memory 602 may be a Flash memory chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB disk, or a removable hard disk.
The processor 601 is configured to run a computer program stored in the memory 602, and when executing the computer program, implement the foregoing image processing method.
Illustratively, the image processing apparatus includes at least one of: a camera, a mobile phone, a computer, a server, or a movable platform.
Illustratively, the processor 601 is configured to run a computer program stored in the memory 602 and to implement the following steps when executing the computer program:
acquiring an image to be processed, and determining key points of a face in the image to be processed;
determining an eye region in the image to be processed according to the key points of the face, wherein a first length of the eye region extending in a first direction is not equal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the eye region.
Illustratively, the processor 601 is configured to run a computer program stored in the memory 602 and to implement the following steps when executing the computer program:
acquiring an image to be processed;
determining a region to be processed in the image to be processed, wherein a first length of the region to be processed extending in a first direction is not equal to a second length of the region to be processed extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the region to be processed.
The specific principle and implementation manner of the image processing apparatus provided in the embodiments of the present application are similar to those of the image processing method in the foregoing embodiments, and are not described herein again.
Referring to fig. 11, fig. 11 is a schematic block diagram of a movable platform 700 according to an embodiment of the present application.
The movable platform 700 can carry an imaging device 30, and the imaging device 30 is used to acquire images.
The movable platform 700 includes one or more processors 701, working individually or collectively, to perform the steps of the image processing method described previously.
The movable platform 700 may also include a memory 702.
Illustratively, the processor 701 and the memory 702 are connected by a bus 703, such as an I2C (Inter-Integrated Circuit) bus.
Specifically, the processor 701 may be a Microcontroller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the memory 702 may be a Flash memory chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB disk, or a removable hard disk.
The processor 701 is configured to run a computer program stored in the memory 702, and when executing the computer program, implement the image processing method.
Illustratively, the movable platform 700 includes at least one of: an unmanned aerial vehicle, a handheld gimbal, or a gimbal vehicle.
Illustratively, the processor 701 is configured to run a computer program stored in the memory 702 and to implement the following steps when executing the computer program:
acquiring an image to be processed, and determining key points of a face in the image to be processed;
determining an eye region in the image to be processed according to the key points of the face, wherein a first length of the eye region extending in a first direction is not equal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the eye region.
Illustratively, the processor 701 is configured to run a computer program stored in the memory 702 and to implement the following steps when executing the computer program:
acquiring an image to be processed;
determining a region to be processed in the image to be processed, wherein a first length of the region to be processed extending in a first direction is not equal to a second length of the region to be processed extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the region to be processed.
The specific principle and implementation manner of the movable platform provided in the embodiment of the present application are similar to those of the image processing method in the foregoing embodiment, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, cause the processor to implement the steps of the image processing method provided in the foregoing embodiments.
The computer-readable storage medium may be an internal storage unit of the image processing apparatus or the movable platform according to any of the foregoing embodiments, for example, a hard disk or a memory of the image processing apparatus or the movable platform. The computer-readable storage medium may also be an external storage device of the image processing apparatus or the movable platform, such as a plug-in hard disk, a SmartMedia Card (SMC), a Secure Digital (SD) card, or a flash memory card provided on the image processing apparatus or the movable platform.
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
It should also be understood that the term "and/or" as used in this application and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
While the application has been described with reference to specific embodiments, its scope is not limited thereto, and those skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed herein. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (53)

1. An image processing method, comprising:
acquiring an image to be processed, and determining key points of a face in the image to be processed;
determining an eye region in the image to be processed according to the key points of the face, wherein a first length of the eye region extending in a first direction is not equal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the eye region.
2. The method of claim 1, wherein the determining face key points in the image to be processed comprises:
determining a face region in the image to be processed;
and determining face key points of the face region.
3. The method of claim 1, wherein the determining the eye region in the image to be processed according to the face key points comprises:
determining key points corresponding to the eye region from the face key points;
and determining the eye region according to the key points corresponding to the eye region.
4. The method of claim 3, wherein the key points corresponding to the eye region comprise eye corner key points of the left eye and/or eye corner key points of the right eye.
5. The method of claim 4, wherein the determining the eye region in the image to be processed according to the face key points comprises:
and determining the eye region of the left eye according to the eye corner key point of the left eye and/or determining the eye region of the right eye according to the eye corner key point of the right eye.
6. The method according to any of claims 1-5, wherein an angle between the first direction and a lateral direction of the image to be processed is 0 to 45 degrees.
7. The method of claim 6, wherein the first direction is determined by a straight line on which at least two eye corner key points are located in the image to be processed.
8. The method of claim 6, wherein the first direction is substantially perpendicular to the second direction.
9. The method according to any one of claims 1-5, wherein the determining the eye region in the image to be processed according to the face key points comprises:
determining a center of the eye region and a first length of the eye region extending in the first direction according to the face key points;
determining the eye region from the center and the first length.
10. The method of claim 9, wherein the determining a center of the eye region and a first length of the eye region extending in the first direction according to the face key points comprises:
determining the center of the eye region according to the middle position between the eye corner key point close to the face midline and the eye corner key point close to the face contour;
and determining the first length of the eye region extending in the first direction according to a line segment between the eye corner key point close to the face midline and the eye corner key point close to the face contour.
11. The method of claim 9, wherein determining the eye region from the center of the eye region and the first length comprises:
determining a second length of the eye region according to the first length, or determining the second length of the eye region according to an eye corner key point close to the face midline and an eye corner key point close to the face contour;
determining the eye region according to the center of the eye region, the first length and the second length.
12. The method of any one of claims 1-11, wherein the eye region comprises an elliptical eye region, the first direction comprises a major axis direction of the ellipse, and the second direction comprises a minor axis direction of the ellipse.
13. The method of claim 12, wherein the determining the eye region in the image to be processed according to the face key points comprises:
determining the center, the length of the long axis, the length of the short axis and the end point of the long axis of the eye region according to the eye corner key points among the face key points;
determining the focal length of the ellipse according to the length of the long axis and the length of the short axis;
determining the focus of the ellipse according to the center, the focal length, the length of the long axis and the end point of the long axis;
and determining an eye region in the image to be processed according to the focal point and the long axis length.
14. The method according to any one of claims 1-13, wherein the performing preset processing on the eye region comprises:
and carrying out deformation processing on the original image in the eye region to obtain a deformed image of the eye region.
15. The method according to claim 14, wherein the performing deformation processing on the original image in the eye region to obtain a deformed image of the eye region comprises:
determining a position correspondence between second pixel points of the deformed image and first pixel points of the original image;
and determining a second pixel value of the second pixel point according to the first pixel value of the first pixel point based on the position corresponding relation.
16. The method according to claim 15, wherein the determining the position correspondence between the second pixel point and the first pixel point comprises:
and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the eye region.
17. The method of claim 16, wherein determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the eye region comprises:
determining a distance between the second pixel point and a center of the eye region;
determining a first ratio between the distance and a maximum radius of the eye region;
and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the first ratio.
18. The method of claim 16, wherein determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the eye region comprises:
determining a distance between the second pixel point and a center of the eye region;
determining the radius of the eye region corresponding to the second pixel point;
determining a second ratio between the distance and the radius of the eye region;
and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the second ratio.
19. The method of claim 16, wherein determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the eye region comprises:
determining a distortion intensity coefficient corresponding to the second pixel point;
and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance.
20. The method of claim 19, wherein determining the distortion intensity coefficient corresponding to the second pixel point comprises:
determining the radius of the eye region corresponding to the second pixel point;
and determining the distortion intensity coefficient corresponding to the second pixel point according to the radius of the eye region corresponding to the second pixel point and a preset intensity coefficient.
21. The method of claim 18 or 20, wherein said determining the radius of the eye region corresponding to the second pixel point comprises:
and determining the length from the center of the eye region, through the second pixel point, to the edge of the eye region as the radius of the eye region corresponding to the second pixel point.
22. The method of claim 18 or 20, wherein said determining the radius of the eye region corresponding to the second pixel point comprises:
determining the extending direction of the radius of the eye region according to the second pixel point and the center of the eye region;
and determining an included angle between the extending direction and the first direction, and determining the radius of the eye region corresponding to the second pixel point according to the included angle.
23. The method of claim 20, wherein determining the distortion intensity coefficient corresponding to the second pixel point according to the radius of the eye region corresponding to the second pixel point and a preset intensity coefficient comprises:
determining a third ratio between the radius of the eye region corresponding to the second pixel point and the maximum radius of the eye region;
and determining a distortion intensity coefficient corresponding to the second pixel point according to the third ratio and a preset intensity coefficient.
24. The method of claim 20, wherein the distortion intensity coefficient corresponding to the second pixel point is positively correlated with the radius of the eye region corresponding to the second pixel point.
25. The method of claim 24, wherein, among the distortion intensity coefficients corresponding to the second pixel points of the deformed image, the distortion intensity coefficient corresponding to the second pixel point in the first direction is the largest, and the distortion intensity coefficient corresponding to the second pixel point in the second direction is the smallest.
26. The method of claim 19, wherein determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance comprises:
determining a fourth ratio between the distance and the radius of the eye region corresponding to the second pixel point;
and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the fourth ratio.
27. The method of claim 15, wherein determining the second pixel value of the second pixel point from the first pixel value of the first pixel point based on the positional correspondence comprises:
and determining a second pixel value of the second pixel point according to the first pixel value of the first pixel point in the original image and/or the first pixel values of other pixel points adjacent to the first pixel point in the original image.
28. An image processing method, comprising:
acquiring an image to be processed;
determining a region to be processed in the image to be processed, wherein a first length of the region to be processed extending in a first direction is not equal to a second length of the region to be processed extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the region to be processed.
29. The method of claim 28, wherein the first direction is substantially perpendicular to the second direction.
30. The method of claim 29, wherein the region to be processed comprises an elliptical region to be processed, the first direction comprises a major axis direction of the ellipse, and the second direction comprises a minor axis direction of the ellipse.
31. The method according to any one of claims 28 to 30, wherein the performing preset processing on the region to be processed comprises:
and carrying out deformation processing on the original image in the region to be processed to obtain a deformed image of the region to be processed.
32. The method according to claim 31, wherein the performing deformation processing on the original image in the region to be processed to obtain a deformed image of the region to be processed comprises:
determining a position correspondence between second pixel points of the deformed image and first pixel points of the original image;
and determining a second pixel value of the second pixel point according to the first pixel value of the first pixel point based on the position corresponding relation.
33. The method of claim 32, wherein the determining the position correspondence between the second pixel point and the first pixel point comprises:
and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the region to be processed.
34. The method of claim 33, wherein determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the region to be processed comprises:
determining the distance between the second pixel point and the center of the region to be processed;
determining a first ratio between the distance and a maximum radius of the region to be processed;
and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the first ratio.
35. The method of claim 33, wherein determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the region to be processed comprises:
determining the distance between the second pixel point and the center of the region to be processed;
determining the radius of the region to be processed corresponding to the second pixel point;
determining a second ratio between the distance and the radius of the region to be processed;
and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the second ratio.
36. The method of claim 33, wherein determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the region to be processed comprises:
determining a distortion intensity coefficient corresponding to the second pixel point;
and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance.
37. The method of claim 36, wherein determining the distortion intensity coefficient corresponding to the second pixel point comprises:
determining the radius of the region to be processed corresponding to the second pixel point;
and determining the distortion intensity coefficient corresponding to the second pixel point according to the radius of the region to be processed corresponding to the second pixel point and a preset intensity coefficient.
38. The method according to claim 35 or 37, wherein the determining the radius of the region to be processed corresponding to the second pixel point comprises:
and determining the length from the center of the region to be processed, through the second pixel point, to the edge of the region to be processed as the radius of the region to be processed corresponding to the second pixel point.
39. The method according to claim 35 or 37, wherein the determining the radius of the region to be processed corresponding to the second pixel point comprises:
determining the extending direction of the radius of the region to be processed according to the second pixel point and the center of the region to be processed;
and determining an included angle between the extending direction and the first direction, and determining the radius of the region to be processed corresponding to the second pixel point according to the included angle.
40. The method of claim 37, wherein the determining the distortion intensity coefficient corresponding to the second pixel point according to the radius of the region to be processed corresponding to the second pixel point and a preset intensity coefficient comprises:
determining a third ratio between the radius of the region to be processed corresponding to the second pixel point and the maximum radius of the region to be processed;
and determining a distortion intensity coefficient corresponding to the second pixel point according to the third ratio and a preset intensity coefficient.
41. The method of claim 37, wherein the distortion intensity coefficient corresponding to the second pixel point is positively correlated with the radius of the region to be processed corresponding to the second pixel point.
42. The method of claim 41, wherein, among the distortion intensity coefficients corresponding to the second pixel points of the deformed image, the distortion intensity coefficient corresponding to the second pixel point in the first direction is the largest, and the distortion intensity coefficient corresponding to the second pixel point in the second direction is the smallest.
43. The method of claim 36, wherein said determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance comprises:
determining a fourth ratio between the distance and the radius of the region to be processed corresponding to the second pixel point;
and determining a first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the fourth ratio.
44. The method of claim 32, wherein determining the second pixel value of the second pixel point from the first pixel value of the first pixel point based on the positional correspondence comprises:
and determining a second pixel value of the second pixel point according to the first pixel value of the first pixel point in the original image and/or the first pixel values of other pixel points adjacent to the first pixel point in the original image.
45. An image processing apparatus comprising one or more processors, working individually or collectively, to perform the steps of:
acquiring an image to be processed, and determining key points of a face in the image to be processed;
determining an eye region in the image to be processed according to the key points of the face, wherein a first length of the eye region extending in a first direction is not equal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the eye region.
46. The image processing apparatus of claim 45, wherein the image processing apparatus comprises at least one of: a camera, a mobile phone, a computer, a server, or a movable platform.
47. An image processing apparatus comprising one or more processors, working individually or collectively, to perform the steps of:
acquiring an image to be processed;
determining a region to be processed in the image to be processed, wherein a first length of the region to be processed extending in a first direction is not equal to a second length of the region to be processed extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the region to be processed.
48. The image processing apparatus of claim 47, wherein the image processing apparatus comprises at least one of: a camera, a mobile phone, a computer, a server, or a movable platform.
49. A movable platform capable of carrying a camera for capturing images;
further comprising one or more processors, working individually or collectively, to perform the steps of:
acquiring an image to be processed, and determining key points of a face in the image to be processed;
determining an eye region in the image to be processed according to the key points of the face, wherein a first length of the eye region extending in a first direction is not equal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the eye region.
50. The movable platform of claim 49, wherein the movable platform comprises at least one of: an unmanned aerial vehicle, a handheld gimbal, or a gimbal vehicle.
51. A movable platform capable of carrying a camera for capturing images;
further comprising one or more processors, working individually or collectively, to perform the steps of:
acquiring an image to be processed;
determining a region to be processed in the image to be processed, wherein a first length of the region to be processed extending in a first direction is not equal to a second length of the region to be processed extending in a second direction, and the first direction is different from the second direction;
and performing preset processing on the region to be processed.
52. The movable platform of claim 51, wherein the movable platform comprises at least one of: an unmanned aerial vehicle, a handheld gimbal, or a gimbal vehicle.
53. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement:
the image processing method according to any one of claims 1 to 27; and/or
the image processing method according to any one of claims 28 to 44.
CN202080007161.7A, priority date 2020-05-18, filing date 2020-05-18: Image processing method, apparatus, removable platform, and storage medium. Status: Pending. Publication: CN113228045A (en).

Applications Claiming Priority (1)

PCT/CN2020/090916, priority date 2020-05-18, filing date 2020-05-18, published as WO2021232209A1: Image processing method, and device, movable platform and storage medium

Publications (1)

CN113228045A, published 2021-08-06

Family

ID=77086016

Country Status (2)

CN (1): CN113228045A (en)
WO (1): WO2021232209A1 (en)


Also Published As

WO2021232209A1, published 2021-11-25


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination