WO2021232209A1 - Image processing method, device, movable platform and storage medium (图像处理方法、设备、可移动平台和存储介质)


Info

Publication number
WO2021232209A1
PCT/CN2020/090916
Authority
WO
WIPO (PCT)
Prior art keywords: processed; area; pixel; pixel point; eye
Prior art date
Application number
PCT/CN2020/090916
Other languages
English (en)
French (fr)
Inventor
席迎来
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN202080007161.7A (published as CN113228045A)
Priority to PCT/CN2020/090916 (published as WO2021232209A1)
Publication of WO2021232209A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition

Definitions

  • This application relates to the field of image processing technology, and in particular to an image processing method, device, movable platform, and storage medium.
  • In some cases, certain areas in an image need to be processed, such as elliptical areas and rectangular areas.
  • For example, the beautification of the eyes is an important part of a beauty function.
  • When beautifying the eyes in an image, it is hard to avoid processing spectacle frames and other objects at the same time, so the resulting beautified image is prone to abnormal or inconsistent content, and the user experience is poor.
  • To this end, this application provides an image processing method, device, movable platform, and storage medium, aiming to solve the technical problem that processing some areas of an image affects content outside those areas, making the processed image prone to abnormal or inconsistent content.
  • In one aspect, an embodiment of the present application provides an image processing method, including:
  • the first length of the area to be processed in a first direction is not equal to the second length of the area to be processed in a second direction, and the first direction is different from the second direction;
  • In another aspect, an embodiment of the present application provides an image processing device, including one or more processors, working individually or collectively, configured to perform the following steps:
  • the first length of the area to be processed in a first direction is not equal to the second length of the area to be processed in a second direction, and the first direction is different from the second direction;
  • In another aspect, an embodiment of the present application provides a movable platform capable of carrying a photographing device, the photographing device being used to obtain images; the movable platform also includes one or more processors, working individually or collectively, configured to perform the following steps:
  • the first length of the area to be processed in a first direction is not equal to the second length of the area to be processed in a second direction, and the first direction is different from the second direction;
  • an embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the above-mentioned method.
  • In summary, the embodiments of the present application provide an image processing method, device, movable platform, and storage medium. By determining an area to be processed whose length and width are unequal, the area is delimited more accurately, and accessories such as glasses in the width direction of the area are excluded. Thus, when the area to be processed is given preset processing, content outside the area is not affected, and the processed image is not prone to abnormal or inconsistent content.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of an application scenario of the image processing method;
  • FIG. 3 is a schematic diagram of the effect of a current image processing method on the eye area;
  • FIG. 4 is a schematic diagram of the face key points in the face area;
  • FIG. 5 is a schematic diagram of determining the eye area according to the face key points;
  • FIG. 6 is a schematic diagram of determining the correspondence between the second pixel point and the first pixel point;
  • FIG. 7 is a schematic diagram of determining the distortion intensity coefficient corresponding to the second pixel point;
  • FIG. 8 is a schematic diagram of the effect of processing the eye area by the image processing method of the embodiment of the present application;
  • FIG. 9 is a schematic flowchart of an image processing method according to another embodiment of the present application;
  • FIG. 10 is a schematic block diagram of an image processing device provided by an embodiment of the present application;
  • FIG. 11 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • the image processing method can be applied to an image processing device for processing images and related tasks.
  • the image processing device includes at least one of the following: a camera, a mobile phone, a computer, a server, and a movable platform.
  • the movable platform may include at least one of an unmanned aerial vehicle, a handheld gimbal, and a gimbal-equipped cart.
  • the unmanned aerial vehicle can be a rotary-wing drone, such as a quadrotor, hexarotor, or octorotor drone, or it can be a fixed-wing drone.
  • the computer may include at least one of a tablet computer, a notebook computer, a desktop computer, and the like.
  • the movable platform 10 is equipped with a photographing device 11 such as a camera.
  • the movable platform 10 can also be communicatively connected with the terminal device 20.
  • the terminal device 20 includes, for example, at least one of a mobile phone, a computer, and a remote control.
  • the photographing device 11 may capture images and process them according to the image processing method; it may also send the processed images to the terminal device 20 through the movable platform 10, so that the terminal device 20 can store and/or display the processed images.
  • the photographing device 11 may photograph an image, and send the photographed image to the movable platform 10.
  • the movable platform 10 processes the image according to the image processing method, and sends the processed image to the terminal device 20 so that the terminal device 20 can store and/or display the processed image.
  • the movable platform 10 sends the image taken by the photographing device 11 to the terminal device 20, and the terminal device 20 processes the image according to the image processing method.
  • the terminal device 20 may store and/or display the processed image.
  • Alternatively, the photographing device may capture images and process them according to the image processing method; the photographing device may also send the processed images to the terminal device so that the terminal device can store and/or display the processed images.
  • the photographing device may photograph an image and send the photographed image to a terminal device.
  • the terminal device may process the image according to the image processing method, and may also store and/or display the processed image.
  • FIG. 3 is a schematic diagram of processing an area to be processed in an image, such as an eye area; specifically, the eye area containing the eye is deformed.
  • The image on the left of FIG. 3 is the image to be processed, and the image on the right is the image obtained after processing the eye area.
  • The inventor of the present application has made improvements to solve the problem that processing some areas of an image affects content outside those areas, making the processed image prone to abnormal or inconsistent content.
  • the image processing method of the embodiment of the present application includes step S110 to step S130.
  • S110 Acquire an image to be processed, and determine key points of the face in the image to be processed.
  • the image to be processed includes a human face.
  • the image to be processed may be, for example, a currently captured image, or an image obtained from local storage, or an image read from another device.
  • The determining of the face key points in the image to be processed includes: determining the face area in the image to be processed; and determining the face key points in the face area.
  • the face area in the image to be processed is determined by a face detection algorithm.
  • If no face is detected, the user may be prompted that no face was detected in the current image to be processed, or prompted to capture or import an image that includes a frontal face.
  • the facial landmarks of the human face area can be determined, as shown in FIG. 4.
  • the location of the face key points of the face is determined by the face key point detection method, for example, the locations of 68 face key points are determined.
  • the facial features and facial contours can be determined according to key points of the face.
  • S120: Determine an eye area in the image to be processed according to the face key points, where a first length of the eye area extending in a first direction is not equal to a second length of the eye area extending in a second direction, and the first direction is different from the second direction.
  • Of the first direction and the second direction, the direction with the longer length may be referred to as the length direction of the eye area, and the direction with the shorter length as the width direction of the eye area.
  • The determining of the eye area in the image to be processed according to the face key points includes: determining the key points corresponding to the eye area from the face key points; and determining the eye area according to the key points corresponding to the eye area.
  • the key points corresponding to the eye area include at least one of the face key points numbered 37-42 and 43-48.
  • the key points corresponding to the eye area include the key points of the corner of the left eye and/or the key points of the corner of the right eye.
  • the key points of the left eye corners include face key points numbered 37 and 40; the key points of the right eye corners include face key points numbered 43 and 46.
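As an illustration of the key-point numbering above (a sketch, not from the patent: it assumes a 68-point landmark layout such as dlib's, which is 0-indexed, while the patent numbers points from 1):

```python
# Hypothetical helper: pick the eye-corner key points out of a 68-point
# face-landmark array. The patent numbers points from 1 (left eye 37-42,
# right eye 43-48), so a 0-indexed array shifts each index down by one.
def eye_corners(landmarks):
    """landmarks: sequence of 68 (x, y) points, 0-indexed."""
    left = (landmarks[36], landmarks[39])    # patent key points 37 and 40
    right = (landmarks[42], landmarks[45])   # patent key points 43 and 46
    return left, right
```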
  • The determining of the eye area in the image to be processed according to the face key points includes: determining the eye area of the left eye according to the key points of the left eye corners, and/or determining the eye area of the right eye according to the key points of the right eye corners.
  • For example, the eye area of the left eye is determined according to the key points of the left eye corners, and the eye area is an elliptical eye area.
  • the first length a of the eye region extending in the first direction is not equal to the second length b of the eye region extending in the second direction.
  • the first direction is determined by a straight line where at least two key points of the corners of the eye are located in the image to be processed.
  • For example, the first direction is determined according to at least two key points of the left eye corners, such as the direction along which A1 and A2 lie.
  • For example, the first direction of the left eye's eye area is determined according to eye-corner key point 40 of the left eye near the center line of the face and eye-corner key point 37 of the left eye near the contour of the face.
  • Determining the eye area in the image to be processed according to the face key points in step S120 includes: determining, according to the face key points, the center of the eye area and the first length of the eye area extending in the first direction; and determining the eye area according to the center and the first length.
  • The center o of the eye area may be determined according to the middle position between eye-corner key point 40 near the center line of the face and eye-corner key point 37 near the contour of the face.
  • For example, the center o of the eye area is determined as the average of the positions of eye-corner key point 40 and eye-corner key point 37.
  • The length of the eye area extending in the first direction may be determined according to the line segment between eye-corner key point 40 and eye-corner key point 37.
  • For example, the distance between eye-corner key point 40 and eye-corner key point 37 is determined as the first length a, or that distance is multiplied by a coefficient greater than or less than 1 to obtain the first length a.
  • For example, if the distance between eye-corner key point 40 and eye-corner key point 37 is w, then 2 × w can be determined as the first length a.
  • an eye area with unequal length and width can be determined.
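The center and first-length computation above can be sketched as follows (the midpoint and the 2 × w coefficient follow the examples in the text; everything else is an assumption for illustration):

```python
import math

def eye_center_and_first_length(corner_a, corner_b, coeff=2.0):
    """corner_a, corner_b: the two eye-corner key points, e.g. points 40 and 37."""
    (x1, y1), (x2, y2) = corner_a, corner_b
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)  # middle position of the corners
    w = math.hypot(x1 - x2, y1 - y2)             # eye-corner distance w
    return center, coeff * w                     # first length a = 2 * w
```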
  • The determining of the eye area according to the center of the eye area and the first length includes: determining the second length of the eye area; and determining the eye area according to the center, the first length, and the second length.
  • the first direction is different from the second direction.
  • For example, the first direction is substantially perpendicular to the second direction, where "substantially perpendicular" includes exactly perpendicular as well as deviations from perpendicular of less than 5 degrees.
  • the second length of the eye region may be determined according to the first length.
  • the second length is less than the first length.
  • the second length is 0.3 times to 0.8 times the first length, for example, the second length is 0.5 times the first length.
  • In other embodiments, the second length of the eye area may be determined according to the eye-corner key point near the midline of the face and the eye-corner key point near the contour of the face.
  • For example, the second length b of the eye area is determined according to the distance w between eye-corner key point 40 near the center line of the face and eye-corner key point 37 near the contour of the face.
  • For example, the second length b is 0.8 to 1.2 times the distance w; for instance, the second length b is equal to the distance w between eye-corner key point 40 and eye-corner key point 37.
  • For example, the eye area includes an elliptical eye area, the first direction includes the major-axis direction of the ellipse, and the second direction includes the minor-axis direction of the ellipse.
  • As shown in FIG. 5, the major-axis direction can be determined according to the eye-corner key points, and the minor-axis direction, perpendicular to the major-axis direction, can be determined from it.
  • The major-axis length of the elliptical eye region is a, and the minor-axis length is b.
  • The determining of the eye area in the image to be processed according to the face key points includes: determining the center, major-axis length, minor-axis length, and a major-axis end point of the eye area according to the eye-corner key points among the face key points; determining the focal distance of the ellipse according to the major-axis length and the minor-axis length; determining the focal points of the ellipse according to the center, the focal distance, the major-axis length, and the major-axis end point; and determining the eye area in the image to be processed according to the focal points and the major-axis length.
  • The center o of the eye area can be determined according to the middle position between eye-corner key point 40 near the center line of the face and eye-corner key point 37 near the contour of the face.
  • For example, the major-axis length a is determined according to the distance w between eye-corner key point 40 and eye-corner key point 37.
  • The minor-axis length b can also be determined according to the major-axis length a; for example, b is half of a.
  • The major-axis end point A1 and/or the major-axis end point A2 may be determined according to the center o of the eye region and the major-axis length a.
  • The focal distance c of the ellipse can be determined according to the major-axis length a and the minor-axis length b of the ellipse.
  • According to the center o, the focal distance c, the major-axis length a, and the major-axis end points of the elliptical eye area, the positions of the two focal points F1 and F2 of the eye area can be determined.
  • the eye area in the image to be processed can be determined according to the two focal points F1 and F2 and the length a of the long axis.
  • If the sum of the distances from a pixel point to the two focal points is less than 2 × a, the pixel point is located inside the eye area; if the sum equals 2 × a, the pixel point is located on the boundary of the eye area.
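A minimal sketch of the focal-point construction and membership test above (assuming, as the 2 × a test implies, that a and b play the role of semi-axes, so the focal distance is c = √(a² − b²)):

```python
import math

def ellipse_foci(center, a, b, axis_dir):
    """axis_dir: unit vector along the major axis (from the eye-corner line)."""
    c = math.sqrt(a * a - b * b)        # focal distance from the center
    ox, oy = center
    ux, uy = axis_dir
    return (ox - c * ux, oy - c * uy), (ox + c * ux, oy + c * uy)

def in_eye_area(p, f1, f2, a):
    # Inside (or on) the ellipse when the sum of distances to the foci <= 2 * a.
    return math.dist(p, f1) + math.dist(p, f2) <= 2 * a
```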
  • The face key points determined in step S110 may also have a distribution and positions different from the face key points shown in FIG. 4, and the eye area in the image to be processed may be determined according to the face key points actually determined.
  • preset processing is performed on the eye regions of the left eye and the right eye respectively.
  • the preset processing includes: at least one of deformation processing, contrast adjustment, brightness adjustment, saturation adjustment, color mapping, cropping processing, and filling processing.
  • the performing the preset processing on the eye area in step S130 includes: performing deformation processing on the original image in the eye area to obtain a deformed image of the eye area.
  • For each pixel in the image to be processed, it is determined whether the sum of the distances between the pixel and the two focal points F1 and F2 is less than or equal to 2 × a; if so, the pixel is in the eye area.
  • the original image in the eye area is deformed, so that the eyes in the deformed image obtained look larger or smaller.
  • By determining an eye area with unequal length and width, the eye area can be delimited more accurately and accessories such as glasses in the width direction of the eye area can be excluded, so that content outside the area is not affected when the eye area is given preset processing, and the processed image is not prone to abnormal or inconsistent content.
  • The performing of deformation processing on the original image in the eye area to obtain the deformed image of the eye area includes: determining the positional correspondence between the second pixel points of the deformed image and the first pixel points of the original image; and, based on the positional correspondence, determining the second pixel value of a second pixel point according to the first pixel value of the corresponding first pixel point.
  • That is, the deformed image is determined according to the correspondence between the pixels of the eye area before and after the deformation processing and the pixel values of the eye-area pixels before the deformation processing.
  • The determining of the second pixel value of the second pixel point according to the first pixel value of the first pixel point based on the positional correspondence includes: determining the second pixel value of the second pixel point according to the first pixel value of the first pixel point in the original image and/or the first pixel values of other pixel points adjacent to the first pixel point in the original image.
  • For example, each pixel of the original image in the eye area is stored in a buffer, e.g., the red, green, and blue components of each pixel; a blank second image consistent with the boundary of the original image is created; the second pixel value of the second pixel corresponding to a first pixel is determined according to the first pixel value of the first pixel and/or the first pixel values of other pixels adjacent to the first pixel in the original image, and that second pixel value is filled in at the position of the second pixel in the second image.
  • the red, green, and blue components of the second pixel corresponding to the first pixel can be determined by linear interpolation of the red, green, and blue components of the four pixels above, below, left, and right of the first pixel.
  • the red, green, and blue components of the second pixel are synthesized into new pixels and filled in the position of the second pixel in the second image.
  • In some embodiments, the position of the first pixel corresponding to a second pixel in the second image may be determined, and the second pixel value of the second pixel is determined based on the first pixel value of that first pixel and/or the first pixel values of other pixels adjacent to it in the original image.
  • In other embodiments, the position of the second pixel corresponding to a first pixel in the first image may be determined, and the second pixel value of the second pixel is likewise determined based on the first pixel value of the first pixel and/or the first pixel values of adjacent pixels in the original image.
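The backward-mapping step above can be sketched as follows; bilinear interpolation over the surrounding 2 × 2 block is used here as one concrete form of the neighbor interpolation the text describes, not necessarily the patent's exact scheme:

```python
def sample_bilinear(img, x, y):
    """img: 2D list of (r, g, b) tuples indexed as img[row][col];
    (x, y) is a possibly non-integer source position."""
    h, w = len(img), len(img[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0

    def lerp(p, q, t):
        # Componentwise linear interpolation between two colors.
        return tuple(pc * (1 - t) + qc * t for pc, qc in zip(p, q))

    top = lerp(img[y0][x0], img[y0][x1], fx)
    bottom = lerp(img[y1][x0], img[y1][x1], fx)
    return lerp(top, bottom, fy)
```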
  • The determining of the positional correspondence between the second pixel point and the first pixel point includes: determining the first pixel point corresponding to the second pixel point according to a distortion intensity coefficient and the distance between the second pixel point and the center of the eye region.
  • For example, if the distance between the second pixel point P in the second image and the center o of the eye area is r, and the distortion intensity coefficient is s, the position of the first pixel point P′ corresponding to P can be determined according to s and r.
  • The determining of the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the eye region includes: determining the distance between the second pixel point and the center of the eye area; determining a first ratio between the distance and the maximum radius of the eye area; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the first ratio.
  • The maximum radius of the eye region may be determined according to the first length a; for example, the maximum radius r_max of the eye area is equal to the first length a.
  • The position (d′x, d′y) of the first pixel P′ corresponding to the second pixel P(dx, dy) can be determined according to the formula, where:
  • (dx, dy) is the coordinate of the second pixel point P relative to the center o of the second image;
  • r is the distance between the second pixel point P and the center o;
  • (d′x, d′y) is the coordinate of the first pixel point relative to the center o of the first image;
  • s is the distortion intensity coefficient;
  • r_max is the maximum radius of the eye area, for example, the first length of the eye area.
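The formula itself appears only as an image in the source and is not reproduced here. One local-scaling form consistent with the surrounding description (no displacement at r = r_max, enlargement for s in (0, 1], reduction for negative s) is the following sketch; the exact published formula may differ:

```python
import math

def warp_source_position(dx, dy, s, r_max):
    """Map a destination offset (dx, dy) from the center o to an assumed
    source offset (d'x, d'y). Scaling factor: 1 - s * (1 - r / r_max)."""
    r = math.hypot(dx, dy)
    if r >= r_max:
        return dx, dy                    # at or beyond r_max: no deformation
    k = 1.0 - s * (1.0 - r / r_max)      # k < 1 samples nearer the center (enlarge)
    return dx * k, dy * k
```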
  • When the value of s is negative, the effect of the deformation is reduction, e.g., the eyes of the second image appear smaller; when the value of s is in the range (0, 1], as shown in FIG. 6, the effect of the deformation is enlargement, e.g., the eyes of the second image appear larger.
  • The closer a pixel point is to the center o of the eye region, the greater its degree of deformation; the farther a pixel point is from the center o, the smaller its deformation. For example, at the end of the major axis of the elliptical eye area, the pixel undergoes no deformation.
  • In other embodiments, the determining of the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance includes: determining the distance between the second pixel point and the center of the eye area; determining the eye area radius corresponding to the second pixel point; determining a second ratio between the distance and the eye area radius corresponding to the second pixel point; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the second ratio.
  • the eye area radius R corresponding to a pixel point represents a linear distance extending from the center o of the eye area through the pixel point to the boundary of the eye area.
  • The determining of the eye area radius corresponding to the second pixel point includes: determining the length from the center of the eye area, through the second pixel point, to the edge of the eye area as the eye area radius corresponding to the second pixel point.
  • The eye area radius R of a pixel on the minor axis of the elliptical eye area is equal to the minor-axis length b, and the eye area radius R of a pixel on the major axis is equal to the major-axis length a.
  • For other pixels, the eye area radius R is not less than b and not greater than a.
  • The determining of the eye area radius corresponding to the second pixel point includes: determining the extension direction of the eye area radius according to the second pixel point and the center of the eye area; determining the included angle between the extension direction and the first direction; and determining the eye area radius corresponding to the second pixel point according to the included angle.
  • For example, if the included angle between the extension direction and the first direction (e.g., the major-axis direction) is α, the eye area radius R corresponding to the second pixel point can be determined according to α, the major-axis length a, and the minor-axis length b.
  • For example, the eye area radius R can be determined according to the following formula:
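The referenced formula is again shown only as an image in the source. The standard polar radius of an ellipse measured from its center matches the description (R = a at α = 0, R = b at α = 90°), so a plausible reconstruction is:

```python
import math

def eye_area_radius(a, b, alpha):
    """Distance from the ellipse center to its boundary along a direction at
    angle alpha to the major axis (standard center-based ellipse polar form)."""
    return (a * b) / math.sqrt((b * math.cos(alpha)) ** 2
                               + (a * math.sin(alpha)) ** 2)
```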
  • the angle between the first direction and the lateral direction of the image to be processed is 0 to 45 degrees.
  • the angle between the length direction of the eye area and the lateral direction of the image to be processed is ⁇ .
  • Here, the position (d′x, d′y) of the first pixel P′ corresponding to the second pixel P(dx, dy) can be determined according to the formula:
  • When the value of s is negative, the effect of the deformation is reduction, e.g., the eyes of the second image appear smaller; when the value of s is in the range (0, 1], as shown in FIG. 6, the effect of the deformation is enlargement, e.g., the eyes of the second image appear larger.
  • The closer a pixel point is to the center o of the eye area, the greater its deformation; the closer a pixel point is to the boundary of the eye area, the smaller its deformation.
  • In this way, the transition between the second image and the other parts of the image is smooth and looks more natural.
  • A pixel point closer to the length direction of the eye area has a larger eye area radius R and therefore a larger deformation, while a pixel point closer to the width direction of the eye area has a smaller eye area radius R and therefore a smaller deformation; for example, pixels along the major-axis direction of the elliptical eye region are deformed more, and pixels along the minor-axis direction are deformed less.
  • In this way, the big-eye effect deforms more along the line connecting the eye corners and less along the direction perpendicular to that line.
  • In other embodiments, the determining of the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance includes: determining the distortion intensity coefficient corresponding to the second pixel point; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance between the second pixel point and the center of the eye region.
  • The determining of the distortion intensity coefficient corresponding to the second pixel point includes: determining the eye area radius corresponding to the second pixel point; and determining the distortion intensity coefficient corresponding to the second pixel point according to that eye area radius and a preset intensity coefficient.
  • the distortion intensity coefficient corresponding to the second pixel point is positively correlated with the radius of the eye region corresponding to the second pixel point.
  • At a constant distance from the center o of the eye area, a pixel point closer to the length direction of the eye area has a larger eye area radius R, a larger distortion intensity coefficient, and a larger deformation; a pixel point closer to the width direction of the eye area has a smaller eye area radius R, a smaller distortion intensity coefficient, and a smaller deformation.
  • The distortion intensity coefficient corresponding to a second pixel point in the first direction is the largest, and the distortion intensity coefficient corresponding to a second pixel point in the second direction is the smallest.
  • the distortion of the pixels in the long axis direction is larger, and the distortion of the pixels in the short axis direction is smaller.
  • The big-eye effect has a larger deformation along the line connecting the eye corners and a smaller deformation perpendicular to that line, which can further avoid or reduce distortion of accessories such as spectacle frames.
  • The determining of the distortion intensity coefficient corresponding to the second pixel point according to the eye area radius corresponding to the second pixel point and the preset intensity coefficient includes: determining a third ratio between the eye area radius corresponding to the second pixel point and the maximum eye area radius; and determining the distortion intensity coefficient corresponding to the second pixel point according to the third ratio and the preset intensity coefficient.
  • the third ratio between the eye area radius R corresponding to the second pixel point and the maximum eye area radius r_max, for example the major axis a of the elliptical eye area, is R ÷ a; the distortion intensity coefficient s′ corresponding to the second pixel point can be determined according to the product of the third ratio R ÷ a and the preset intensity coefficient s.
  • the determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance includes: determining a fourth ratio between the distance and the radius of the eye region corresponding to the second pixel point; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the fourth ratio.
  • the position (d′x, d′y) of the first pixel P′ corresponding to the second pixel P(dx, dy) can be determined according to the formula, in which:
  • r is the distance between the second pixel point P and the center o
  • R is the radius of the eye area corresponding to the second pixel point
  • s′ is the distortion intensity coefficient corresponding to the second pixel point.
  • a pixel closer to the center o of the eye region is deformed to a greater degree, that is, the deformation is larger; a pixel farther from the center o of the eye region is deformed to a lesser degree, that is, the deformation is smaller; for example, at the major-axis endpoints of the elliptical eye area, pixels are not deformed.
  • the distortion of the pixels in the long axis direction is larger, and the distortion of the pixels in the short axis direction is smaller.
  • the big-eye effect has greater deformation along the line of the corner of the eye, and less deformation along the perpendicular direction of the line of the corner of the eye.
  • the pixel point closer to the center o of the eye area has a greater deformation; the pixel point closer to the boundary of the eye area has a smaller deformation. For example, at the boundary of the oval eye area, the pixels are not deformed.
  • the processed image is made more natural and smoother, and the processing intensity on the boundary of the eye area with unequal length and width can be made the weakest, for example no deformation is performed, so that the processed eye area image transitions smoothly with the image outside it.
  • FIG. 8 is a schematic diagram of processing an area to be processed in an image, such as an eye area, by the image processing method of an embodiment of the present application, specifically, the eye area including the eye is deformed.
  • the image on the left in Figure 8 is the image to be processed, and the image on the right is the image obtained after processing the eye area. It can be seen that when preset processing is performed on the eye area, content outside the eye area, such as glasses, is prevented from being affected, and the processed image is not prone to abnormal or inconsistent content.
  • the image processing method provided by the embodiments of this application determines an eye area with unequal length and width, which makes the eye area more accurate and can exclude accessories such as glasses in the width direction of the eye area, so that when the preset processing is performed, content outside the area is not affected, and the processed image is not prone to abnormal or inconsistent content. Even if the spectacle frame is very close to the eye socket, its deformation can be reduced to a very small, imperceptible degree, which improves the user experience.
  • the image processing method of the embodiments of the present application can be used to process the eye area of a face, for example the eye areas of the faces of people, cats, dogs, etc., and can also be applied to processing images that include a to-be-processed area with unequal length and width; for example, the entire face, arms, legs, or the entire human body can be processed as the area to be processed.
  • the area to be processed is not limited to the human body or parts of the human body, and may also be other objects.
  • FIG. 9 is a schematic flowchart of an image processing method according to another embodiment of the present application.
  • the image processing method can be applied to an image processing device for processing images and other processes.
  • the image processing device includes at least one of the following: a camera, a mobile phone, a computer, a server, and a movable platform.
  • the movable platform may include at least one of an unmanned aerial vehicle, a handheld gimbal, and a gimbal vehicle.
  • the unmanned aerial vehicle can be a rotary-wing drone, such as a four-rotor drone, a hexa-rotor drone, an eight-rotor drone, or a fixed-wing drone.
  • the computer may include at least one of a tablet computer, a notebook computer, a desktop computer, and the like.
  • the movable platform is equipped with a photographing device, such as a camera.
  • the movable platform can also communicate with image processing equipment, such as mobile phones, computers, and remote controllers.
  • the image processing equipment can acquire the image taken by the camera mounted on the movable platform, and process the image.
  • the image processing method of this embodiment includes step S210 to step S240.
  • S220 Determine an area to be processed in the image to be processed, where the first length of the area to be processed in the first direction is not equal to the second length of the area to be processed in the second direction.
  • the first direction is different from the second direction.
  • the determining the area to be processed in the image to be processed includes: displaying the image to be processed, and determining the area to be processed according to a user's circle selection operation of the image to be processed. For example, the user's circle selection operation determines that the area where the arm is located is the area to be processed.
  • the determining the area to be processed in the image to be processed includes: determining a target object in the image to be processed, and determining the area to be processed according to the area where the target object is located.
  • target objects, such as a human body, eyes, a face, and legs, are detected in the image to be processed, and then the area to be processed is determined according to the area where the target object is located; for example, the elliptical or rectangular area where a target object such as a face, an arm, a leg, or the entire human body is located is determined as the area to be processed.
  • the target object is not limited to the human body or parts of the human body, and may also be other objects.
  • the first direction is substantially perpendicular to the second direction.
  • the area to be processed includes an elliptical area to be processed, the first direction includes a major axis direction of the ellipse, and the second direction includes a minor axis direction of the ellipse.
  • the performing preset processing on the area to be processed includes: performing deformation processing on an original image in the area to be processed to obtain a deformed image of the area to be processed.
  • the performing deformation processing on the original image in the area to be processed to obtain the deformed image of the area to be processed includes: determining the second pixel of the deformed image and the first pixel of the original image The position correspondence relationship of the pixel points; based on the position correspondence relationship, the second pixel value of the second pixel point is determined according to the first pixel value of the first pixel point.
  • the determining the positional correspondence between the second pixel point and the first pixel point includes: determining the first pixel point corresponding to the second pixel point according to a distortion intensity coefficient and the distance between the second pixel point and the center of the area to be processed.
  • the determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the area to be processed includes: determining the distance between the second pixel point and the center of the area to be processed; determining a first ratio between the distance and the maximum radius of the area to be processed; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the first ratio.
  • the determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the area to be processed includes: determining the distance between the second pixel point and the center of the area to be processed; determining the radius of the area to be processed corresponding to the second pixel point; determining a second ratio between the distance and the radius of the area to be processed; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the second ratio.
  • the determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient and the distance between the second pixel point and the center of the area to be processed includes: determining the distortion intensity coefficient corresponding to the second pixel point; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance.
  • the determining the distortion intensity coefficient corresponding to the second pixel includes: determining the radius of the area to be processed corresponding to the second pixel; and determining the distortion intensity coefficient corresponding to the second pixel according to the radius of the area to be processed corresponding to the second pixel and the preset intensity coefficient.
  • the determining the radius of the area to be processed corresponding to the second pixel point includes: determining that the length from the center of the area to be processed extending through the second pixel point to the edge of the area to be processed is the radius of the area to be processed corresponding to the second pixel point.
  • the determining the radius of the area to be processed corresponding to the second pixel includes: determining the extension direction of the radius of the area to be processed according to the second pixel and the center of the area to be processed; and determining the included angle between the extension direction and the first direction, and determining the radius of the area to be processed corresponding to the second pixel according to the included angle.
  • the determining the distortion intensity coefficient corresponding to the second pixel point according to the radius of the area to be processed corresponding to the second pixel point and the preset intensity coefficient includes: determining a third ratio between the radius of the area to be processed corresponding to the second pixel point and the maximum radius of the area to be processed; and determining the distortion intensity coefficient corresponding to the second pixel point according to the third ratio and a preset intensity coefficient.
  • the distortion intensity coefficient corresponding to the second pixel point is positively correlated with the radius of the area to be processed corresponding to the second pixel point.
  • among the distortion intensity coefficients corresponding to the second pixels of the deformed image, the distortion intensity coefficient corresponding to a second pixel point in the first direction is the largest, and the distortion intensity coefficient corresponding to a second pixel point in the second direction is the smallest.
  • the determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the distance includes: determining a fourth ratio between the distance and the radius of the area to be processed corresponding to the second pixel point; and determining the first pixel point corresponding to the second pixel point according to the distortion intensity coefficient corresponding to the second pixel point and the fourth ratio.
  • the determining, based on the position correspondence, the second pixel value of the second pixel according to the first pixel value of the first pixel includes: determining the second pixel value of the second pixel according to the first pixel value of the first pixel in the original image and/or the first pixel values of other pixels adjacent to the first pixel in the original image.
  • the image processing method provided by the embodiments of the present application determines an area to be processed with unequal length and width in the image, so that the processing of the area to be processed is more accurate, and accessories such as glasses in the width direction of the area to be processed can be excluded, so that when preset processing is performed on the area, content outside the area is not affected, and the processed image is not prone to abnormal or inconsistent content.
  • FIG. 10 is a schematic block diagram of an image processing device 600 according to an embodiment of the present application.
  • the image processing device 600 includes one or more processors 601, which work individually or together, for executing the steps of the aforementioned image processing method.
  • the image processing device 600 may further include a memory 602.
  • the processor 601 and the memory 602 are connected by a bus 603, and the bus 603 is, for example, an I2C (Inter-integrated Circuit) bus.
  • the processor 601 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 602 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
  • the processor 601 is configured to run a computer program stored in the memory 602, and implement the aforementioned image processing method when the computer program is executed.
  • the image processing equipment includes at least one of the following: a camera, a mobile phone, a computer, a server, and a movable platform.
  • the processor 601 is configured to run a computer program stored in the memory 602, and implement the following steps when the computer program is executed:
  • the processor 601 is configured to run a computer program stored in the memory 602, and implement the following steps when the computer program is executed:
  • the first length of the area to be processed in the first direction is not equal to the second length of the area to be processed in the second direction, the first The direction is different from the second direction;
  • FIG. 11 is a schematic block diagram of a movable platform 700 provided by an embodiment of the present application.
  • the movable platform 700 can be equipped with a photographing device 30, and the photographing device 30 is used to acquire images.
  • the movable platform 700 includes one or more processors 701, working individually or together, for executing the steps of the aforementioned image processing method.
  • the movable platform 700 may also include a memory 702.
  • the processor 701 and the memory 702 are connected by a bus 703, and the bus 703 is, for example, an I2C (Inter-integrated Circuit) bus.
  • the processor 701 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 702 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
  • the processor 701 is configured to run a computer program stored in the memory 702, and implement the aforementioned image processing method when the computer program is executed.
  • the movable platform 700 includes at least one of the following: an unmanned aerial vehicle, a handheld gimbal, and a gimbal vehicle.
  • the processor 701 is configured to run a computer program stored in the memory 702, and implement the following steps when the computer program is executed:
  • the processor 701 is configured to run a computer program stored in the memory 702, and implement the following steps when the computer program is executed:
  • the first length of the area to be processed in the first direction is not equal to the second length of the area to be processed in the second direction, the first The direction is different from the second direction;
  • the embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, the computer program includes program instructions, when the computer program is executed by a processor, the processor realizes The steps of the image processing method provided in the above embodiment.
  • the computer-readable storage medium may be an internal storage unit of the image processing device or the movable platform described in any of the foregoing embodiments, for example, the hard disk or memory of the image processing device or the movable platform.
  • the computer-readable storage medium may also be an external storage device of the image processing device or the movable platform, for example, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the image processing device or the movable platform.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

An image processing method, comprising: acquiring an image to be processed (S110); determining an eye area in the image to be processed, where a first length of the eye area extending in a first direction is not equal to a second length of the eye area extending in a second direction, and the first direction is different from the second direction (S120); and performing preset processing on the eye area (S130). The present application can accurately process a specific area. A device, a movable platform, and a storage medium are also provided.

Description

Image Processing Method, Device, Movable Platform and Storage Medium
Technical Field
This application relates to the field of image processing technology, and in particular to an image processing method, a device, a movable platform, and a storage medium.
Background
With the development of Internet technology, the consumption of video and image entertainment and content has become increasingly popular, and the demand for image processing from image photographers, producers, and material users is also growing.
In some scenarios, partial areas of an image need to be processed, for example elliptical areas or rectangular areas in the image. For example, eye beautification is an important part of beauty functions. However, when the face wears glasses or decorations, there are frames and other objects around the eyes; when the eye image is beautified, it is difficult to avoid processing the frame at the same time, so the resulting beautified image is prone to abnormal or inconsistent content, and the user experience is not good enough.
Summary of the Invention
Based on this, the present application provides an image processing method, a device, a movable platform, and a storage medium, aiming to solve the technical problems that processing a partial area of an image affects content outside the area, and that the processed image is prone to abnormal or inconsistent content.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed, and determining facial key points in the image to be processed;
determining an eye area in the image to be processed according to the facial key points, where a first length of the eye area extending in a first direction is not equal to a second length of the eye area extending in a second direction, and the first direction is different from the second direction;
performing preset processing on the eye area.
In a second aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed;
determining an area to be processed in the image to be processed, where a first length of the area to be processed extending in a first direction is not equal to a second length of the area to be processed extending in a second direction, and the first direction is different from the second direction;
performing preset processing on the area to be processed.
In a third aspect, an embodiment of the present application provides an image processing device, including one or more processors, working individually or together, configured to perform the following steps:
acquiring an image to be processed, and determining facial key points in the image to be processed;
determining an eye area in the image to be processed according to the facial key points, where a first length of the eye area extending in a first direction is not equal to a second length of the eye area extending in a second direction, and the first direction is different from the second direction;
performing preset processing on the eye area.
In a fourth aspect, an embodiment of the present application provides an image processing device, including one or more processors, working individually or together, configured to perform the following steps:
acquiring an image to be processed;
determining an area to be processed in the image to be processed, where a first length of the area to be processed extending in a first direction is not equal to a second length of the area to be processed extending in a second direction, and the first direction is different from the second direction;
performing preset processing on the area to be processed.
In a fifth aspect, an embodiment of the present application provides a movable platform capable of carrying a photographing device, the photographing device being used to acquire images;
the movable platform further includes one or more processors, working individually or together, configured to perform the following steps:
acquiring an image to be processed, and determining facial key points in the image to be processed;
determining an eye area in the image to be processed according to the facial key points, where a first length of the eye area extending in a first direction is not equal to a second length of the eye area extending in a second direction, and the first direction is different from the second direction;
performing preset processing on the eye area.
In a sixth aspect, an embodiment of the present application provides a movable platform capable of carrying a photographing device, the photographing device being used to acquire images;
the movable platform further includes one or more processors, working individually or together, configured to perform the following steps:
acquiring an image to be processed;
determining an area to be processed in the image to be processed, where a first length of the area to be processed extending in a first direction is not equal to a second length of the area to be processed extending in a second direction, and the first direction is different from the second direction;
performing preset processing on the area to be processed.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to implement the above method.
The embodiments of the present application provide an image processing method, a device, a movable platform, and a storage medium. By determining an area to be processed with unequal length and width in an image, the processing of the area to be processed is more accurate, and accessories such as glasses in the width direction of the area can be excluded, so that when preset processing is performed on the area, content outside the area is not affected, and the processed image is not prone to abnormal or inconsistent content.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the disclosure of the embodiments of the present application.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an application scenario of the image processing method;
FIG. 3 is a schematic diagram of the effect of processing the eye area by a current image processing method;
FIG. 4 is a schematic diagram of facial key points in a face area;
FIG. 5 is a schematic diagram of determining the eye area according to facial key points;
FIG. 6 is a schematic diagram of determining the correspondence between a second pixel and a first pixel;
FIG. 7 is a schematic diagram of determining the distortion intensity coefficient corresponding to a second pixel;
FIG. 8 is a schematic diagram of the effect of processing the eye area by the image processing method of an embodiment of the present application;
FIG. 9 is a schematic flowchart of an image processing method provided by another embodiment of the present application;
FIG. 10 is a schematic block diagram of an image processing device provided by an embodiment of the present application;
FIG. 11 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are part of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
The flowcharts shown in the drawings are only illustrations; they do not necessarily include all contents and operations/steps, nor must they be executed in the order described. For example, some operations/steps can be decomposed, combined, or partially merged, so the actual execution order may change according to the actual situation.
Some embodiments of the present application are described in detail below with reference to the drawings. In the absence of conflict, the following embodiments and the features in the embodiments can be combined with each other.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application. The image processing method can be applied to an image processing device for processing images and other processes.
In some embodiments, the image processing device includes at least one of the following: a camera, a mobile phone, a computer, a server, and a movable platform.
The movable platform may include at least one of an unmanned aerial vehicle, a handheld gimbal, a gimbal vehicle, and the like. Further, the unmanned aerial vehicle may be a rotary-wing drone, such as a quadrotor, hexarotor, or octorotor drone, or a fixed-wing drone. The computer may include at least one of a tablet computer, a notebook computer, a desktop computer, and the like.
In some embodiments, as shown in FIG. 2, the movable platform 10 carries a photographing device 11, such as a camera. The movable platform 10 can also be communicatively connected to a terminal device 20. The terminal device 20 includes, for example, at least one of a mobile phone, a computer, a remote controller, and the like.
Exemplarily, the photographing device 11 may capture an image, process the captured image according to the image processing method, and send the processed image to the terminal device 20 through the movable platform 10, so that the terminal device 20 stores and/or displays the processed image.
Exemplarily, the photographing device 11 may capture an image and send the captured image to the movable platform 10. The movable platform 10 processes the image according to the image processing method and sends the processed image to the terminal device 20, so that the terminal device 20 stores and/or displays the processed image.
Exemplarily, the movable platform 10 sends the image captured by the photographing device 11 to the terminal device 20, and the terminal device 20 processes the image according to the image processing method. The terminal device 20 may store and/or display the processed image.
In some other embodiments, the photographing device may capture an image and process the captured image according to the image processing method; the photographing device may also send the processed image to a terminal device, so that the terminal device stores and/or displays the processed image. Alternatively, the photographing device may capture an image and send the captured image to the terminal device, and the terminal device may process the image according to the image processing method, and may also store and/or display the processed image.
FIG. 3 is a schematic diagram of processing an area to be processed in an image, such as an eye area; specifically, the eye area containing the eye is distorted and deformed. The image on the left of FIG. 3 is the image to be processed, and the image on the right is the image obtained after processing the eye area. It can be seen that when the face wears glasses, there is a frame around the eyes, and the frame is also distorted and deformed during the warping, resulting in abnormal or inconsistent content and a poor user experience.
In view of this finding, the inventor of the present application made improvements to the unmanned aerial vehicle to solve the technical problems that processing a partial area of an image affects content outside the area, and that the processed image is prone to abnormal or inconsistent content.
As shown in FIG. 1, the image processing method of the embodiment of the present application includes steps S110 to S130.
S110: Acquire an image to be processed, and determine facial key points in the image to be processed.
Exemplarily, the image to be processed includes a human face.
The image to be processed may be, for example, a currently captured image, an image obtained from local storage, or an image read from another device.
In some embodiments, the determining the facial key points in the image to be processed includes: determining a face area in the image to be processed; and determining facial key points of the face area.
Exemplarily, the face area in the image to be processed is determined by a face detection algorithm.
Exemplarily, if no face area is detected in the image to be processed, or the detected face is not a frontal face image, the user may be prompted that no face is detected in the current image to be processed, or prompted to capture or import an image that includes a frontal face.
Exemplarily, facial key points (landmarks) of the face area can be determined, as shown in FIG. 4. For example, the positions of the facial key points, e.g. of 68 facial key points, are determined by a facial key point detection method.
Specifically, the positions of the facial features and the facial contour can be determined according to the facial key points.
S120: Determine an eye area in the image to be processed according to the facial key points, where a first length of the eye area extending in a first direction is not equal to a second length of the eye area extending in a second direction, and the first direction is different from the second direction.
For example, of the first direction and the second direction, the direction with the longer corresponding length may be called the length direction of the eye area, and the direction with the shorter corresponding length may be called the width direction of the eye area.
Exemplarily, an eye area with unequal length and width, such as a rectangular or elliptical eye area, is determined.
In some embodiments, the determining the eye area in the image to be processed according to the facial key points includes: determining key points corresponding to the eye area from the facial key points; and determining the eye area according to the key points corresponding to the eye area.
As shown in FIG. 4, the key points corresponding to the eye area include at least one of the facial key points numbered 37-42 and 43-48.
Exemplarily, the key points corresponding to the eye area include the eye-corner key points of the left eye and/or the eye-corner key points of the right eye.
As shown in FIG. 4, the eye-corner key points of the left eye include the facial key points numbered 37 and 40; the eye-corner key points of the right eye include the facial key points numbered 43 and 46.
Exemplarily, the determining the eye area in the image to be processed according to the facial key points includes: determining the eye area of the left eye according to the eye-corner key points of the left eye and/or determining the eye area of the right eye according to the eye-corner key points of the right eye.
As shown in FIG. 5, the eye area of the left eye is determined according to the eye-corner key points of the left eye, and this eye area is an elliptical eye area.
Exemplarily, as shown in FIG. 5, the first length a of the eye area extending in the first direction is not equal to the second length b of the eye area extending in the second direction.
Exemplarily, the first direction is determined by the straight line on which at least two eye-corner key points in the image to be processed lie.
For example, as shown in FIG. 5, the first direction is determined according to at least two eye-corner key points of the left eye, such as the direction in which A1 and A2 lie. Exemplarily, the first direction of the left-eye area is determined according to the eye-corner key point 40 of the left eye close to the facial midline and the eye-corner key point 37 of the left eye close to the facial contour.
In some embodiments, the determining the eye area in the image to be processed according to the facial key points in step S120 includes: determining the center of the eye area and the first length of the eye area extending in the first direction according to the facial key points; and determining the eye area according to the center and the first length.
Exemplarily, as shown in FIG. 5, the center o of the eye area can be determined according to the middle position between the eye-corner key point 40 close to the facial midline and the eye-corner key point 37 close to the facial contour, for example according to the average of the positions of eye-corner key point 40 and eye-corner key point 37.
Exemplarily, as shown in FIG. 5, the first length a of the eye area extending in the first direction can be determined according to the line segment between eye-corner key point 40, close to the facial midline, and eye-corner key point 37, close to the facial contour. For example, the distance between eye-corner key point 40 and eye-corner key point 37 is determined as the first length a, or that distance is multiplied by a coefficient greater than 1 or less than 1 to obtain the first length a. As shown in FIG. 5, the distance between eye-corner key point 40 and eye-corner key point 37 is w, and 2×w can be determined as the first length a.
Specifically, an eye area with unequal length and width can be determined according to the center o of the eye area and the first length a of the eye area extending in the first direction.
Exemplarily, the determining the eye area according to the center of the eye area and the first length includes: determining a second length of the eye area; and determining the eye area according to the center of the eye area, the first length, and the second length.
Specifically, the first direction is different from the second direction. For example, the first direction is substantially perpendicular to the second direction, where substantially perpendicular includes absolutely perpendicular and a deviation from absolutely perpendicular of less than 5 degrees.
Exemplarily, the second length of the eye area may be determined according to the first length. For example, the second length is smaller than the first length.
Exemplarily, the second length is 0.3 to 0.8 times the first length; for example, the second length is 0.5 times the first length.
Exemplarily, the second length of the eye area may be determined according to the eye-corner key point close to the facial midline and the eye-corner key point close to the facial contour. For example, the second length b of the eye area is determined according to the distance w between eye-corner key point 40 and eye-corner key point 37. Exemplarily, the second length b is 0.8 to 1.2 times the distance w; for example, the second length b equals the distance w between eye-corner key point 40 and eye-corner key point 37.
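The geometry above can be sketched in a few lines. This is a minimal illustration, assuming the example multipliers from the text (first length a = 2·w, second length b = 1·w); the function name and the choice of which corner is passed first are illustrative, not from the patent.

```python
import math

def eye_region_geometry(corner_inner, corner_outer):
    """Derive an elliptical eye region from two eye-corner key points
    (e.g. key points 40 and 37 of the left eye).

    Returns the center o, first length a, second length b, and the tilt
    beta of the first direction relative to the image's horizontal axis.
    The multipliers a = 2*w and b = 1*w follow the text's examples; other
    coefficients are equally allowed."""
    (x1, y1), (x2, y2) = corner_inner, corner_outer
    # Center o: average of the two eye-corner key-point positions.
    ox, oy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # w: distance between the two eye corners.
    w = math.hypot(x2 - x1, y2 - y1)
    a = 2.0 * w   # first length, along the corner-to-corner line
    b = 1.0 * w   # second length, perpendicular to it
    # beta: angle of the first direction vs. the image horizontal.
    beta = math.atan2(y2 - y1, x2 - x1)
    return (ox, oy), a, b, beta
```

For two corners at (0, 0) and (2, 0), this yields center (1, 0), a = 4, b = 2, and beta = 0.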
In some embodiments, as shown in FIG. 5, the eye area includes an elliptical eye area; the first direction includes the major-axis direction of the ellipse, and the second direction includes the minor-axis direction of the ellipse. As shown in FIG. 5, the major-axis direction can be determined according to the eye-corner key points, and the minor-axis direction, perpendicular to the major-axis direction, can be determined from the major-axis direction. Exemplarily, the major axis of the elliptical eye area has length a, and the minor axis has length b.
Exemplarily, the determining the eye area in the image to be processed according to the facial key points includes: determining the center, major-axis length, minor-axis length, and major-axis endpoints of the eye area according to the eye-corner key points among the facial key points; determining the focal distance of the ellipse according to the major-axis length and the minor-axis length; determining the foci of the ellipse according to the center, the focal distance, the major-axis length, and the major-axis endpoints; and determining the eye area in the image to be processed according to the foci and the major-axis length.
As shown in FIG. 5, the center o of the eye area can be determined according to the middle position between eye-corner key point 40, close to the facial midline, and eye-corner key point 37, close to the facial contour, and the major-axis length a can be determined according to the distance w between them. Exemplarily, the minor-axis length b can also be determined according to the major-axis length a; for example, the minor-axis length b is half of the major-axis length a.
Exemplarily, the major-axis endpoint A1 and/or the major-axis endpoint A2 can be determined according to the center o of the eye area and the major-axis length a.
The focal distance c of the ellipse can be determined according to the major-axis length a and the minor-axis length b of the ellipse, and the positions of the two foci F1 and F2 of the elliptical eye area can be determined according to the center o, the focal distance c, the major-axis length a, and the major-axis endpoints.
Specifically, the eye area in the image to be processed can be determined according to the two foci F1, F2 and the major-axis length a. Exemplarily, if the sum of the distances from a pixel in the image to be processed to the two foci F1 and F2 is less than or equal to 2×a, the pixel lies inside the eye area. Specifically, if the sum of the distances from the pixel to the two foci F1 and F2 equals 2×a, the pixel lies on the boundary of the eye area.
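The focal membership test can be sketched directly. This is a hedged illustration that treats a and b as the semi-axis lengths (so the defining sum is 2·a, matching the condition in the text) and computes the focal distance as c = sqrt(a² − b²); the helper names are illustrative.

```python
import math

def foci(center, a, b, beta):
    """Place the two foci F1, F2 of an ellipse with semi-axes a, b whose
    major axis is tilted by angle beta from the image horizontal.
    Focal distance: c = sqrt(a^2 - b^2)."""
    c = math.sqrt(a * a - b * b)
    dx, dy = c * math.cos(beta), c * math.sin(beta)
    return ((center[0] - dx, center[1] - dy),
            (center[0] + dx, center[1] + dy))

def inside_ellipse(px, py, f1, f2, a):
    """True if pixel (px, py) is inside or on the elliptical region:
    the sum of its distances to the two foci is <= 2*a."""
    d1 = math.hypot(px - f1[0], py - f1[1])
    d2 = math.hypot(px - f2[0], py - f2[1])
    return d1 + d2 <= 2.0 * a
```

For an axis-aligned ellipse centered at the origin with a = 2, b = 1, a point on the major axis such as (1, 0) passes the test, while (0, 1.5) outside the ellipse fails it.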
It can be understood that the facial key points determined in step S110 may also have a distribution and positions different from those of the facial key points shown in FIG. 4, and the eye area in the image to be processed may be determined according to the actually determined facial key points.
S130: Perform preset processing on the eye area.
Exemplarily, preset processing is performed on the eye areas of the left eye and the right eye respectively.
In some embodiments, the preset processing includes at least one of: deformation processing, contrast adjustment, brightness adjustment, saturation adjustment, color mapping, cropping processing, and filling processing.
In some embodiments, the performing preset processing on the eye area in step S130 includes: performing deformation processing on the original image in the eye area to obtain a deformed image of the eye area.
Exemplarily, for each pixel in the image to be processed, it is determined whether the sum of the distances from the pixel to the two foci F1 and F2 is less than or equal to 2×a; if it is, the pixel is in the eye area.
Exemplarily, the original image in the eye area is deformed so that the eyes in the resulting deformed image look bigger or smaller.
By determining an eye area with unequal length and width, the eye area is more accurate, and accessories such as glasses in the width direction of the eye area can be excluded, so that when preset processing is performed on the eye area, content outside the area is not affected, and the processed image is not prone to abnormal or inconsistent content.
Exemplarily, the performing deformation processing on the original image in the eye area to obtain the deformed image of the eye area includes: determining the position correspondence between a second pixel of the deformed image and a first pixel of the original image; and determining, based on the position correspondence, a second pixel value of the second pixel according to a first pixel value of the first pixel.
Specifically, the deformed image after deformation processing is determined according to the correspondence between the eye-area pixels before and after deformation and the pixel values of the eye-area pixels before deformation.
Exemplarily, the determining, based on the position correspondence, the second pixel value of the second pixel according to the first pixel value of the first pixel includes: determining the second pixel value of the second pixel according to the first pixel value of the first pixel in the original image and/or the first pixel values of other pixels adjacent to the first pixel in the original image.
For example, the red, green, and blue components of each pixel of the original image in the eye area are stored in a buffer; a blank second image with the same boundary as the original image is created; the second pixel value of the second pixel corresponding to the first pixel is determined according to the first pixel value of the first pixel in the original image and/or the first pixel values of other pixels adjacent to the first pixel in the original image, and this second pixel value is filled into the position of the second pixel in the second image.
For example, the red, green, and blue components of the second pixel corresponding to a first pixel can be determined by linearly interpolating the red, green, and blue components of the four pixels above, below, to the left of, and to the right of the first pixel; the red, green, and blue components of the second pixel are combined into a new pixel and filled into the position of the second pixel in the second image.
Exemplarily, the position of the first pixel corresponding to a second pixel in the second image can be determined, and the second pixel value of the second pixel is determined according to the first pixel value of the first pixel and/or the first pixel values of other pixels adjacent to the first pixel in the original image.
Exemplarily, the position of the second pixel corresponding to a first pixel in the first image can be determined, and the second pixel value of the second pixel is determined according to the first pixel value of the first pixel and/or the first pixel values of other pixels adjacent to the first pixel in the original image.
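Since the mapped source position is generally fractional, the pixel value has to be interpolated from neighboring pixels. The sketch below uses standard bilinear interpolation over the four surrounding pixels, which is one plausible reading of the neighbor-based linear interpolation described above; the image layout (a 2-D list of (r, g, b) tuples) is an assumption for illustration.

```python
def bilinear_sample(img, x, y):
    """Sample `img` (a 2-D list of (r, g, b) tuples, indexed img[row][col])
    at a fractional position (x, y) = (col, row) by linearly interpolating
    the four surrounding pixels."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)   # clamp at the right/bottom edge
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0             # fractional offsets
    out = []
    for ch in range(3):                 # red, green, blue components
        top = img[y0][x0][ch] * (1 - fx) + img[y0][x1][ch] * fx
        bot = img[y1][x0][ch] * (1 - fx) + img[y1][x1][ch] * fx
        out.append(top * (1 - fy) + bot * fy)
    return tuple(out)
```

Sampling halfway between a black pixel (0, 0, 0) and a red pixel (100, 0, 0) yields (50.0, 0.0, 0.0), i.e. the linear blend of the neighbors.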
In some embodiments, the determining the position correspondence between the second pixel and the first pixel includes: determining the first pixel corresponding to the second pixel according to a distortion intensity coefficient and the distance between the second pixel and the center of the eye area.
Exemplarily, as shown in FIG. 6, if the distance between the second pixel P in the second image and the center o of the eye area is r, and the distortion intensity coefficient is s, the position of the first pixel P′ corresponding to the second pixel P can be determined according to s and r.
In some embodiments, the determining the first pixel corresponding to the second pixel according to the distortion intensity coefficient and the distance between the second pixel and the center of the eye area includes: determining the distance between the second pixel and the center of the eye area; determining a first ratio between the distance and the maximum radius of the eye area; and determining the first pixel corresponding to the second pixel according to the distortion intensity coefficient and the first ratio.
Exemplarily, when the first length a of the eye area extending in the first direction is greater than the second length b of the eye area extending in the second direction, the maximum radius of the eye area may be determined according to the first length a; for example, the maximum radius r_max of the eye area equals the first length a.
Exemplarily, the position (d′x, d′y) of the first pixel P′ corresponding to the second pixel P(dx, dy) can be determined according to the following formulas:
[The mapping formulas are published as equation images (PCTCN2020090916-appb-000001 and PCTCN2020090916-appb-000002) and are not reproduced here.]
where (dx, dy) are the coordinates of the second pixel P relative to the center o of the second image, r is the distance between the second pixel P and the center o, (d′x, d′y) are the coordinates of the first pixel relative to the center o of the first image, s is the distortion intensity coefficient, and r_max is the maximum radius of the eye area, for example the length of the eye area.
Exemplarily, when s is in the range [-1, 0), the deformation effect is shrinking, i.e. the eyes in the second image appear smaller; when s is in the range (0, 1], as shown in FIG. 6, the deformation effect is enlargement, i.e. the eyes in the second image appear bigger.
Exemplarily, a pixel closer to the center o of the eye area is deformed to a greater degree, that is, the deformation is larger; a pixel farther from the center o of the eye area is deformed to a lesser degree, that is, the deformation is smaller; for example, at the major-axis endpoints of the elliptical eye area, pixels are not deformed.
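The exact mapping formulas are published only as images in the original document. As a hedged sketch, the code below uses one common local-scaling form, d' = d * (1 - s * (1 - r/r_max)), chosen only because it reproduces the behavior stated here: no displacement at r = r_max, enlargement for s in (0, 1], shrinking for s in [-1, 0). It is an assumption, not the patent's exact formula.

```python
import math

def warp_isotropic(dx, dy, s, r_max):
    """Map a second-pixel offset (dx, dy) from the region center o to the
    first-pixel offset (d'x, d'y) it samples from.

    ASSUMED form: d' = d * (1 - s * (1 - r / r_max)). At r = r_max the
    scale is 1 (no deformation); near the center, s in (0, 1] samples
    closer to o (enlarging effect), s in [-1, 0) samples farther away
    (shrinking effect)."""
    r = math.hypot(dx, dy)
    if r >= r_max:
        return dx, dy  # at or beyond the boundary: unchanged
    scale = 1.0 - s * (1.0 - r / r_max)
    return dx * scale, dy * scale
```

For s = 0.5 and r_max = 10, the offset (3, 4) at r = 5 is pulled in to (2.25, 3.0), while an offset exactly at the boundary is left unchanged.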
In some other embodiments, the determining the first pixel corresponding to the second pixel according to the distortion intensity coefficient and the distance between the second pixel and the center of the eye area includes: determining the distance between the second pixel and the center of the eye area; determining the eye-area radius corresponding to the second pixel; determining a second ratio between the distance and the eye-area radius corresponding to the second pixel; and determining the first pixel corresponding to the second pixel according to the distortion intensity coefficient and the second ratio.
Specifically, as shown in FIG. 7, the eye-area radius R corresponding to a pixel represents the straight-line distance from the center o of the eye area through the pixel to the boundary of the eye area.
Exemplarily, the determining the eye-area radius corresponding to the second pixel includes: determining that the length from the center of the eye area extending through the second pixel to the edge of the eye area is the eye-area radius corresponding to the second pixel.
As shown in FIG. 7, for a pixel on the minor axis of the elliptical eye area, the eye-area radius R equals the minor-axis length b; for a pixel on the major axis, the eye-area radius R equals the major-axis length a; the eye-area radius R is not less than b and not greater than a.
Exemplarily, the determining the eye-area radius corresponding to the second pixel includes: determining the extension direction of the eye-area radius according to the second pixel and the center of the eye area; and determining the included angle between the extension direction and the first direction, and determining the eye-area radius corresponding to the second pixel according to the included angle.
As shown in FIG. 7, if the included angle between the extension direction and the first direction, e.g. the major-axis direction, is γ, the eye-area radius R corresponding to the second pixel can be determined according to γ, the major-axis length a, and the minor-axis length b. Exemplarily, it can be determined according to the following formula:
R = a·b / √((b·cos γ)² + (a·sin γ)²)
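The angle-dependent radius is the standard center-to-boundary distance of an ellipse: it equals a along the major axis (γ = 0) and b along the minor axis (γ = 90°), matching the description of FIG. 7. A minimal sketch:

```python
import math

def eye_region_radius(a, b, gamma):
    """Distance from the ellipse center to the boundary, at angle `gamma`
    to the major axis: R = a*b / sqrt((b*cos g)^2 + (a*sin g)^2).
    R(0) == a (major axis) and R(pi/2) == b (minor axis)."""
    return (a * b) / math.sqrt((b * math.cos(gamma)) ** 2 +
                               (a * math.sin(gamma)) ** 2)

# For a tilted face, gamma can be obtained from the direction angle theta
# of the pixel relative to the image horizontal and the tilt beta of the
# eye area's length direction: gamma = theta - beta.
```

With a = 2 and b = 1, the radius is 2 along the major axis and 1 along the minor axis.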
In some embodiments, the angle between the first direction and the lateral direction of the image to be processed is 0 to 45 degrees. As shown in FIG. 5 and FIG. 7, since the face in the image to be processed is tilted, the angle between the length direction of the eye area and the lateral direction of the image to be processed is β. By determining the angle θ between the lateral direction and the extension direction from the center of the eye area through the second pixel, the included angle γ between the extension direction and the first direction can be determined.
Exemplarily, the position (d′x, d′y) of the first pixel P′ corresponding to the second pixel P(dx, dy) can be determined according to the following formulas:
[The mapping formulas are published as equation images (PCTCN2020090916-appb-000004 and PCTCN2020090916-appb-000005) and are not reproduced here.]
Exemplarily, when s is in the range [-1, 0), the deformation effect is shrinking, i.e. the eyes in the second image appear smaller; when s is in the range (0, 1], as shown in FIG. 6, the deformation effect is enlargement, i.e. the eyes in the second image appear bigger.
Exemplarily, when the eye-area radius R of a pixel is constant, a pixel closer to the center o of the eye area is deformed to a greater degree, that is, the deformation is larger; a pixel closer to the boundary of the eye area is deformed to a lesser degree, that is, the deformation is smaller; for example, at the boundary of the elliptical eye area, pixels are not deformed. Therefore, in the processed image, the second image transitions smoothly into the rest of the image and looks more natural.
Exemplarily, when the distance from the center o of the eye area is constant, a pixel closer to the length direction of the eye area has a larger eye-area radius R and a larger deformation, while a pixel closer to the width direction of the eye area has a smaller eye-area radius R and a smaller deformation. For example, pixels in the major-axis direction of the elliptical eye area deform more, and pixels in the minor-axis direction deform less. For example, the big-eye effect deforms more along the line connecting the eye corners and less along the direction perpendicular to that line.
In some embodiments, the determining the first pixel corresponding to the second pixel according to the distortion intensity coefficient and the distance between the second pixel and the center of the eye area includes: determining the distortion intensity coefficient corresponding to the second pixel; and determining the first pixel corresponding to the second pixel according to the distortion intensity coefficient corresponding to the second pixel and the distance between the second pixel and the center of the eye area.
Exemplarily, the determining the distortion intensity coefficient corresponding to the second pixel includes: determining the eye-area radius corresponding to the second pixel; and determining the distortion intensity coefficient corresponding to the second pixel according to the eye-area radius corresponding to the second pixel and a preset intensity coefficient.
Exemplarily, the distortion intensity coefficient corresponding to the second pixel is positively correlated with the eye-area radius corresponding to the second pixel. When the distance from the center o of the eye area is constant, a pixel closer to the length direction of the eye area has a larger eye-area radius R, a larger distortion intensity coefficient, and a larger deformation; a pixel closer to the width direction of the eye area has a smaller eye-area radius R, a smaller distortion intensity coefficient, and a smaller deformation.
Exemplarily, among the distortion intensity coefficients corresponding to the second pixels of the deformed image, the distortion intensity coefficient corresponding to a second pixel in the first direction is the largest, and the distortion intensity coefficient corresponding to a second pixel in the second direction is the smallest.
For example, pixels in the major-axis direction of the elliptical eye area deform more, and pixels in the minor-axis direction deform less. For example, the big-eye effect deforms more along the line connecting the eye corners and less along the direction perpendicular to that line, which can further avoid or reduce the distortion of accessories such as spectacle frames.
Exemplarily, the determining the distortion intensity coefficient corresponding to the second pixel according to the eye-area radius corresponding to the second pixel and the preset intensity coefficient includes: determining a third ratio between the eye-area radius corresponding to the second pixel and the maximum radius of the eye area; and determining the distortion intensity coefficient corresponding to the second pixel according to the third ratio and the preset intensity coefficient.
As shown in FIG. 7, the third ratio between the eye-area radius R corresponding to the second pixel and the maximum radius r_max of the eye area, e.g. the major axis a of the elliptical eye area, is R÷a; the distortion intensity coefficient s′ corresponding to the second pixel can be determined according to the product of the third ratio R÷a and the preset intensity coefficient s.
Exemplarily, the determining the first pixel corresponding to the second pixel according to the distortion intensity coefficient corresponding to the second pixel and the distance includes: determining a fourth ratio between the distance and the eye-area radius corresponding to the second pixel; and determining the first pixel corresponding to the second pixel according to the distortion intensity coefficient corresponding to the second pixel and the fourth ratio.
Exemplarily, the position (d′x, d′y) of the first pixel P′ corresponding to the second pixel P(dx, dy) can be determined according to the following formulas:
[The mapping formulas are published as equation images (PCTCN2020090916-appb-000006 and PCTCN2020090916-appb-000007) and are not reproduced here.]
where r is the distance between the second pixel P and the center o, R is the eye-area radius corresponding to the second pixel, and s′ is the distortion intensity coefficient corresponding to the second pixel. Exemplarily, a pixel closer to the center o of the eye area is deformed to a greater degree, that is, the deformation is larger; a pixel farther from the center o of the eye area is deformed to a lesser degree, that is, the deformation is smaller; for example, at the major-axis endpoints of the elliptical eye area, pixels are not deformed.
Exemplarily, when the distance from the center o of the eye area is constant, a pixel closer to the length direction of the eye area has a larger eye-area radius R, a larger distortion intensity coefficient, and a larger deformation; a pixel closer to the width direction of the eye area has a smaller eye-area radius R, a smaller distortion intensity coefficient, and a smaller deformation. For example, pixels in the major-axis direction of the elliptical eye area deform more, and pixels in the minor-axis direction deform less; the big-eye effect deforms more along the line connecting the eye corners and less along the direction perpendicular to that line. And when the eye-area radius R of a pixel is constant, a pixel closer to the center o of the eye area deforms more, and a pixel closer to the boundary of the eye area deforms less; for example, at the boundary of the elliptical eye area, pixels are not deformed.
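Putting the pieces together: the per-pixel radius R(γ), the per-pixel coefficient s′ = s·R/a (the third ratio times the preset coefficient), and a local-scaling mapping normalized by R yield the anisotropic warp described here. The mapping form below is an assumption (the patent's exact formulas are published as images), and the helper names are illustrative.

```python
import math

def warp_anisotropic(dx, dy, s, a, b, beta=0.0):
    """Anisotropic eye-warp sketch. (dx, dy) is the second pixel's offset
    from the eye-region center o; a, b are the major/minor axis lengths;
    beta is the tilt of the major axis vs. the image horizontal; s is the
    preset intensity coefficient.

    ASSUMED mapping form: d' = d * (1 - s' * (1 - r / R)), where R is the
    eye-area radius through the pixel and s' = s * R / a (third ratio
    times preset coefficient)."""
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)      # direction vs. image horizontal
    gamma = theta - beta            # direction vs. major axis
    R = (a * b) / math.sqrt((b * math.cos(gamma)) ** 2 +
                            (a * math.sin(gamma)) ** 2)
    if r >= R or r == 0.0:
        return dx, dy               # boundary, outside, or center: unchanged
    s_prime = s * R / a             # larger R -> stronger distortion
    scale = 1.0 - s_prime * (1.0 - r / R)
    return dx * scale, dy * scale
```

With a = 4, b = 2, beta = 0 and s = 0.5, a pixel halfway out along the major axis at (2, 0) is pulled in to (1.5, 0.0), while a pixel on the boundary in the minor-axis direction at (0, 2) is left unchanged, so the deformed region blends smoothly into the surrounding image.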
By processing each pixel to a different degree according to its position, the processed image is more natural and smoother, and the processing intensity on the boundary of the eye area with unequal length and width can be made the weakest, for example no deformation is performed, so that the processed eye-area image transitions smoothly into the image outside it.
FIG. 8 is a schematic diagram of processing an area to be processed in an image, such as an eye area, by the image processing method of an embodiment of the present application; specifically, the eye area containing the eye is distorted and deformed. The image on the left of FIG. 8 is the image to be processed, and the image on the right is the image obtained after processing the eye area. It can be seen that when preset processing is performed on the eye area, content outside the eye area, such as glasses, is prevented from being affected, and the processed image is not prone to abnormal or inconsistent content.
With the image processing method provided by the embodiments of the present application, determining an eye area with unequal length and width makes the eye area more accurate and allows accessories such as glasses in the width direction of the eye area to be excluded, so that when preset processing is performed on the eye area, content outside the area is not affected, and the processed image is not prone to abnormal or inconsistent content. Even if the spectacle frame is very close to the eye socket, its deformation can be reduced to a very small, imperceptible degree, which improves the user experience.
It can be understood that the image processing method of the embodiments of the present application can be used not only to process the eye area of a face, for example the eye areas of the faces of people, cats, dogs, etc., but also to process images that include a to-be-processed area with unequal length and width; for example, the entire face, arms, legs, or the entire human body can be taken as the area to be processed. Of course, the area to be processed is not limited to the human body or parts of the human body, and may also be other objects.
Referring to FIG. 9 in conjunction with the foregoing embodiments, FIG. 9 is a schematic flowchart of an image processing method provided by another embodiment of the present application.
The image processing method can be applied to an image processing device for processing images and other processes.
In some embodiments, the image processing device includes at least one of the following: a camera, a mobile phone, a computer, a server, and a movable platform.
The movable platform may include at least one of an unmanned aerial vehicle, a handheld gimbal, a gimbal vehicle, and the like. Further, the unmanned aerial vehicle may be a rotary-wing drone, such as a quadrotor, hexarotor, or octorotor drone, or a fixed-wing drone. The computer may include at least one of a tablet computer, a notebook computer, a desktop computer, and the like.
In some embodiments, the movable platform carries a photographing device, such as a camera. The movable platform can also be communicatively connected to an image processing device, such as a mobile phone, a computer, or a remote controller. The image processing device can acquire the image captured by the photographing device carried by the movable platform and process the image.
As shown in FIG. 9, the image processing method of this embodiment includes steps S210 to S240.
S210: Acquire an image to be processed.
S220: Determine an area to be processed in the image to be processed, where a first length of the area to be processed extending in a first direction is not equal to a second length of the area to be processed extending in a second direction, and the first direction is different from the second direction.
In some embodiments, the determining the area to be processed in the image to be processed includes: displaying the image to be processed, and determining the area to be processed according to a user's circle-selection operation on the image to be processed. For example, the user's circle-selection operation determines that the area where an arm is located is the area to be processed.
In some embodiments, the determining the area to be processed in the image to be processed includes: determining a target object in the image to be processed, and determining the area to be processed according to the area where the target object is located.
Exemplarily, target objects such as a human body, eyes, a face, or legs are detected in the image to be processed, and the area to be processed is then determined according to the area where the target object is located; for example, the elliptical or rectangular area where a target object such as a face, an arm, a leg, or the entire human body is located is determined as the area to be processed. Of course, the target object is not limited to the human body or parts of the human body, and may also be other objects.
Exemplarily, the first direction is substantially perpendicular to the second direction.
Exemplarily, the area to be processed includes an elliptical area to be processed, the first direction includes the major-axis direction of the ellipse, and the second direction includes the minor-axis direction of the ellipse.
S230: Perform preset processing on the area to be processed.
在一些实施方式中,所述对所述待处理区域进行预设处理,包括:对所述待处理区域内的原始图像进行形变处理,得到所述待处理区域的形变图像。
示例性的,所述对所述待处理区域内的原始图像进行形变处理,得到所述待处理区域的形变图像,包括:确定所述形变图像的第二像素点和所述原始图像的第一像素点的位置对应关系;基于所述位置对应关系,根据所述第一像素点的第一像素值确定所述第二像素点的第二像素值。
示例性的,所述确定所述第二像素点和所述第一像素点的位置对应关系,包括:根据扭曲强度系数、所述第二像素点与所述待处理区域的中心之间的距离,确定所述第二像素点对应的第一像素点。
示例性的,所述根据扭曲强度系数、所述第二像素点与所述待处理区域的中心之间的距离,确定所述第二像素点对应的第一像素点,包括:确定所述第二像素点与所述待处理区域的中心之间的距离;确定所述距离与所述待处理区域的最大半径之间的第一比值;根据所述扭曲强度系数和所述第一比值确定所述第二像素点对应的第一像素点。
Exemplarily, determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the distance between the second pixel and the center of the to-be-processed region includes: determining the distance between the second pixel and the center of the to-be-processed region; determining the to-be-processed region radius corresponding to the second pixel; determining a second ratio between the distance and the to-be-processed region radius; and determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the second ratio.
Exemplarily, determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the distance between the second pixel and the center of the to-be-processed region includes: determining the distortion strength coefficient corresponding to the second pixel; and determining the first pixel corresponding to the second pixel according to the distortion strength coefficient corresponding to the second pixel and the distance.
Exemplarily, determining the distortion strength coefficient corresponding to the second pixel includes: determining the to-be-processed region radius corresponding to the second pixel; and determining the distortion strength coefficient corresponding to the second pixel according to the to-be-processed region radius corresponding to the second pixel and a preset strength coefficient.
Exemplarily, determining the to-be-processed region radius corresponding to the second pixel includes: determining, as the to-be-processed region radius corresponding to the second pixel, the length from the center of the to-be-processed region, extended through the second pixel, to the edge of the to-be-processed region.
Exemplarily, determining the to-be-processed region radius corresponding to the second pixel includes: determining the extension direction of the to-be-processed region radius according to the second pixel and the center of the to-be-processed region; and determining the angle between the extension direction and the first direction, and determining the to-be-processed region radius corresponding to the second pixel according to the angle.
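Determining the region radius from the angle between the extension direction and the first direction matches the standard polar form of an ellipse radius. A sketch assuming the to-be-processed region is an ellipse whose first (major) semi-axis is `a` and second (minor) semi-axis is `b`; the function name is illustrative:

```python
import math

def region_radius(angle, a, b):
    """Radius of an elliptical region along the direction making `angle`
    with the first (major-axis) direction: r = a*b / sqrt((b*cos)^2 + (a*sin)^2).
    Equals a along the first direction and b along the second."""
    return a * b / math.hypot(b * math.cos(angle), a * math.sin(angle))
```

The radius varies continuously from `a` (angle 0) down to `b` (angle 90 degrees), which is what makes a direction-dependent distortion strength possible.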
Exemplarily, determining the distortion strength coefficient corresponding to the second pixel according to the to-be-processed region radius corresponding to the second pixel and the preset strength coefficient includes: determining a third ratio between the to-be-processed region radius corresponding to the second pixel and the maximum radius of the to-be-processed region; and determining the distortion strength coefficient corresponding to the second pixel according to the third ratio and the preset strength coefficient.
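The third ratio can be applied to the preset strength in many ways; a linear scaling is the simplest positively correlated choice and is assumed here for illustration (the function name is hypothetical):

```python
def pixel_strength(pixel_radius, max_radius, preset_strength):
    """Distortion strength for one second pixel: the preset strength scaled
    by the third ratio (this pixel's region radius over the maximum radius).
    With this choice the strength is largest along the first direction
    (radius == max_radius) and smallest along the second direction."""
    return preset_strength * (pixel_radius / max_radius)
```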
In some implementations, the distortion strength coefficient corresponding to the second pixel is positively correlated with the to-be-processed region radius corresponding to the second pixel.
Exemplarily, among the distortion strength coefficients corresponding to the second pixels of the deformed image, the distortion strength coefficient corresponding to second pixels in the first direction is the largest, and the distortion strength coefficient corresponding to second pixels in the second direction is the smallest.
Exemplarily, determining the first pixel corresponding to the second pixel according to the distortion strength coefficient corresponding to the second pixel and the distance includes: determining a fourth ratio between the distance and the to-be-processed region radius corresponding to the second pixel; and determining the first pixel corresponding to the second pixel according to the distortion strength coefficient corresponding to the second pixel and the fourth ratio.
In some implementations, determining, based on the positional correspondence, the second pixel value of the second pixel according to the first pixel value of the first pixel includes: determining the second pixel value of the second pixel according to the first pixel value of the first pixel in the original image and/or the first pixel values of other pixels adjacent to the first pixel in the original image.
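Using "the first pixel and/or its neighbouring pixels" typically means interpolation when the mapped source position is fractional. Bilinear interpolation is one common realisation, sketched here under the assumption of a grayscale image stored as nested lists:

```python
def bilinear(img, x, y):
    """Sample image `img` (list of rows) at fractional position (x, y)
    by blending the four neighbouring pixels, weighted by proximity."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Compared with nearest-neighbour copying, blending adjacent first pixels avoids jagged edges in the deformed image.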
In the image processing method provided by the embodiments of this application, a to-be-processed region whose length and width are unequal is determined in the image, making the processing of the region more accurate: appendages such as eyeglasses in the width direction of the to-be-processed region can be excluded, so that when preset processing is applied to the to-be-processed region, content outside the region is protected from being affected, and the processed image is unlikely to contain abnormal or incongruous content.
Referring to FIG. 10 in conjunction with the above embodiments, FIG. 10 is a schematic block diagram of an image processing device 600 provided by an embodiment of this application. The image processing device 600 includes one or more processors 601, working individually or jointly, configured to perform the steps of the foregoing image processing method.
The image processing device 600 may further include a memory 602.
Exemplarily, the processor 601 and the memory 602 are connected by a bus 603, for example an I2C (Inter-Integrated Circuit) bus.
Specifically, the processor 601 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
Specifically, the memory 602 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
The processor 601 is configured to run a computer program stored in the memory 602 and implement the foregoing image processing method when executing the computer program.
Exemplarily, the image processing device includes at least one of the following: a camera, a mobile phone, a computer, a server, or a movable platform.
Exemplarily, the processor 601 is configured to run a computer program stored in the memory 602 and implement the following steps when executing the computer program:
obtaining a to-be-processed image, and determining face key points in the to-be-processed image;
determining an eye region in the to-be-processed image according to the face key points, where a first length of the eye region extending in a first direction is unequal to a second length of the eye region extending in a second direction, the first direction being different from the second direction;
applying preset processing to the eye region.
Exemplarily, the processor 601 is configured to run a computer program stored in the memory 602 and implement the following steps when executing the computer program:
obtaining a to-be-processed image;
determining a to-be-processed region in the to-be-processed image, where a first length of the to-be-processed region extending in a first direction is unequal to a second length of the to-be-processed region extending in a second direction, the first direction being different from the second direction;
applying preset processing to the to-be-processed region.
The specific principles and implementations of the image processing device provided by the embodiments of this application are similar to those of the image processing method of the foregoing embodiments, and are not repeated here.
Referring to FIG. 11, FIG. 11 is a schematic block diagram of a movable platform 700 provided by an embodiment of this application.
The movable platform 700 can carry a photographing apparatus 30, and the photographing apparatus 30 is configured to acquire images.
The movable platform 700 includes one or more processors 701, working individually or jointly, configured to perform the steps of the foregoing image processing method.
The movable platform 700 may further include a memory 702.
Exemplarily, the processor 701 and the memory 702 are connected by a bus 703, for example an I2C (Inter-Integrated Circuit) bus.
Specifically, the processor 701 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
Specifically, the memory 702 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
The processor 701 is configured to run a computer program stored in the memory 702 and implement the foregoing image processing method when executing the computer program.
Exemplarily, the movable platform 700 includes at least one of the following: an unmanned aerial vehicle, a handheld gimbal, or a gimbal vehicle.
Exemplarily, the processor 701 is configured to run a computer program stored in the memory 702 and implement the following steps when executing the computer program:
obtaining a to-be-processed image, and determining face key points in the to-be-processed image;
determining an eye region in the to-be-processed image according to the face key points, where a first length of the eye region extending in a first direction is unequal to a second length of the eye region extending in a second direction, the first direction being different from the second direction;
applying preset processing to the eye region.
Exemplarily, the processor 701 is configured to run a computer program stored in the memory 702 and implement the following steps when executing the computer program:
obtaining a to-be-processed image;
determining a to-be-processed region in the to-be-processed image, where a first length of the to-be-processed region extending in a first direction is unequal to a second length of the to-be-processed region extending in a second direction, the first direction being different from the second direction;
applying preset processing to the to-be-processed region.
The specific principles and implementations of the movable platform provided by the embodiments of this application are similar to those of the image processing method of the foregoing embodiments, and are not repeated here.
An embodiment of this application further provides a computer-readable storage medium storing a computer program; the computer program includes program instructions, and when the computer program is executed by a processor, the processor is caused to implement the steps of the image processing method provided by the above embodiments.
The computer-readable storage medium may be an internal storage unit of the image processing device or movable platform of any of the foregoing embodiments, for example a hard disk or memory of the image processing device or movable platform. The computer-readable storage medium may also be an external storage device of the image processing device or movable platform, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the image processing device or movable platform.
It should be understood that the terms used in this application are for the purpose of describing particular embodiments only and are not intended to limit this application.
It should also be understood that the term "and/or" used in this application and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
The above are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed in this application, and these modifications or substitutions shall all fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (53)

  1. An image processing method, comprising:
    obtaining a to-be-processed image, and determining face key points in the to-be-processed image;
    determining an eye region in the to-be-processed image according to the face key points, wherein a first length of the eye region extending in a first direction is unequal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction; and
    applying preset processing to the eye region.
  2. The method according to claim 1, wherein determining the face key points in the to-be-processed image comprises:
    determining a face area in the to-be-processed image; and
    determining the face key points of the face area.
  3. The method according to claim 1, wherein determining the eye region in the to-be-processed image according to the face key points comprises:
    determining, from the face key points, key points corresponding to the eye region; and
    determining the eye region according to the key points corresponding to the eye region.
  4. The method according to claim 3, wherein the key points corresponding to the eye region comprise eye-corner key points of a left eye and/or eye-corner key points of a right eye.
  5. The method according to claim 4, wherein determining the eye region in the to-be-processed image according to the face key points comprises:
    determining an eye region of the left eye according to the eye-corner key points of the left eye and/or determining an eye region of the right eye according to the eye-corner key points of the right eye.
  6. The method according to any one of claims 1-5, wherein an angle between the first direction and a transverse direction of the to-be-processed image is 0 to 45 degrees.
  7. The method according to claim 6, wherein the first direction is determined by a straight line on which at least two eye-corner key points in the to-be-processed image lie.
  8. The method according to claim 6, wherein the first direction is substantially perpendicular to the second direction.
  9. The method according to any one of claims 1-5, wherein determining the eye region in the to-be-processed image according to the face key points comprises:
    determining, according to the face key points, a center of the eye region and the first length of the eye region extending in the first direction; and
    determining the eye region according to the center and the first length.
  10. The method according to claim 9, wherein determining, according to the face key points, the center of the eye region and the first length of the eye region extending in the first direction comprises:
    determining the center of the eye region according to a midpoint between an eye-corner key point close to a facial midline and an eye-corner key point close to a facial contour; and
    determining the first length of the eye region extending in the first direction according to a line segment between the eye-corner key point close to the facial midline and the eye-corner key point close to the facial contour.
  11. The method according to claim 9, wherein determining the eye region according to the center of the eye region and the first length comprises:
    determining a second length of the eye region according to the first length, or determining the second length of the eye region according to the eye-corner key point close to the facial midline and the eye-corner key point close to the facial contour; and
    determining the eye region according to the center of the eye region, the first length, and the second length.
  12. The method according to any one of claims 1-11, wherein the eye region comprises an elliptical eye region, the first direction comprises a major-axis direction of the ellipse, and the second direction comprises a minor-axis direction of the ellipse.
  13. The method according to claim 12, wherein determining the eye region in the to-be-processed image according to the face key points comprises:
    determining a center, a major-axis length, a minor-axis length, and major-axis endpoints of the eye region according to eye-corner key points among the face key points;
    determining a focal distance of the ellipse according to the major-axis length and the minor-axis length;
    determining foci of the ellipse according to the center, the focal distance, the major-axis length, and the major-axis endpoints; and
    determining the eye region in the to-be-processed image according to the foci and the major-axis length.
  14. The method according to any one of claims 1-13, wherein applying the preset processing to the eye region comprises:
    applying deformation processing to an original image within the eye region to obtain a deformed image of the eye region.
  15. The method according to claim 14, wherein applying the deformation processing to the original image within the eye region to obtain the deformed image of the eye region comprises:
    determining a positional correspondence between a second pixel of the deformed image and a first pixel of the original image; and
    determining, based on the positional correspondence, a second pixel value of the second pixel according to a first pixel value of the first pixel.
  16. The method according to claim 15, wherein determining the positional correspondence between the second pixel and the first pixel comprises:
    determining the first pixel corresponding to the second pixel according to a distortion strength coefficient and a distance between the second pixel and a center of the eye region.
  17. The method according to claim 16, wherein determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the distance between the second pixel and the center of the eye region comprises:
    determining the distance between the second pixel and the center of the eye region;
    determining a first ratio between the distance and a maximum radius of the eye region; and
    determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the first ratio.
  18. The method according to claim 16, wherein determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the distance between the second pixel and the center of the eye region comprises:
    determining the distance between the second pixel and the center of the eye region;
    determining an eye region radius corresponding to the second pixel;
    determining a second ratio between the distance and the eye region radius; and
    determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the second ratio.
  19. The method according to claim 16, wherein determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the distance between the second pixel and the center of the eye region comprises:
    determining a distortion strength coefficient corresponding to the second pixel; and
    determining the first pixel corresponding to the second pixel according to the distortion strength coefficient corresponding to the second pixel and the distance.
  20. The method according to claim 19, wherein determining the distortion strength coefficient corresponding to the second pixel comprises:
    determining an eye region radius corresponding to the second pixel; and
    determining the distortion strength coefficient corresponding to the second pixel according to the eye region radius corresponding to the second pixel and a preset strength coefficient.
  21. The method according to claim 18 or 20, wherein determining the eye region radius corresponding to the second pixel comprises:
    determining, as the eye region radius corresponding to the second pixel, a length from the center of the eye region, extended through the second pixel, to an edge of the eye region.
  22. The method according to claim 18 or 20, wherein determining the eye region radius corresponding to the second pixel comprises:
    determining an extension direction of the eye region radius according to the second pixel and the center of the eye region; and
    determining an angle between the extension direction and the first direction, and determining the eye region radius corresponding to the second pixel according to the angle.
  23. The method according to claim 20, wherein determining the distortion strength coefficient corresponding to the second pixel according to the eye region radius corresponding to the second pixel and the preset strength coefficient comprises:
    determining a third ratio between the eye region radius corresponding to the second pixel and a maximum radius of the eye region; and
    determining the distortion strength coefficient corresponding to the second pixel according to the third ratio and the preset strength coefficient.
  24. The method according to claim 20, wherein the distortion strength coefficient corresponding to the second pixel is positively correlated with the eye region radius corresponding to the second pixel.
  25. The method according to claim 24, wherein among the distortion strength coefficients corresponding to the second pixels of the deformed image, the distortion strength coefficient corresponding to second pixels in the first direction is the largest, and the distortion strength coefficient corresponding to second pixels in the second direction is the smallest.
  26. The method according to claim 19, wherein determining the first pixel corresponding to the second pixel according to the distortion strength coefficient corresponding to the second pixel and the distance comprises:
    determining a fourth ratio between the distance and the eye region radius corresponding to the second pixel; and
    determining the first pixel corresponding to the second pixel according to the distortion strength coefficient corresponding to the second pixel and the fourth ratio.
  27. The method according to claim 15, wherein determining, based on the positional correspondence, the second pixel value of the second pixel according to the first pixel value of the first pixel comprises:
    determining the second pixel value of the second pixel according to the first pixel value of the first pixel in the original image and/or first pixel values of other pixels adjacent to the first pixel in the original image.
  28. An image processing method, comprising:
    obtaining a to-be-processed image;
    determining a to-be-processed region in the to-be-processed image, wherein a first length of the to-be-processed region extending in a first direction is unequal to a second length of the to-be-processed region extending in a second direction, and the first direction is different from the second direction; and
    applying preset processing to the to-be-processed region.
  29. The method according to claim 28, wherein the first direction is substantially perpendicular to the second direction.
  30. The method according to claim 29, wherein the to-be-processed region comprises an elliptical to-be-processed region, the first direction comprises a major-axis direction of the ellipse, and the second direction comprises a minor-axis direction of the ellipse.
  31. The method according to any one of claims 28-30, wherein applying the preset processing to the to-be-processed region comprises:
    applying deformation processing to an original image within the to-be-processed region to obtain a deformed image of the to-be-processed region.
  32. The method according to claim 31, wherein applying the deformation processing to the original image within the to-be-processed region to obtain the deformed image of the to-be-processed region comprises:
    determining a positional correspondence between a second pixel of the deformed image and a first pixel of the original image; and
    determining, based on the positional correspondence, a second pixel value of the second pixel according to a first pixel value of the first pixel.
  33. The method according to claim 32, wherein determining the positional correspondence between the second pixel and the first pixel comprises:
    determining the first pixel corresponding to the second pixel according to a distortion strength coefficient and a distance between the second pixel and a center of the to-be-processed region.
  34. The method according to claim 33, wherein determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the distance between the second pixel and the center of the to-be-processed region comprises:
    determining the distance between the second pixel and the center of the to-be-processed region;
    determining a first ratio between the distance and a maximum radius of the to-be-processed region; and
    determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the first ratio.
  35. The method according to claim 33, wherein determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the distance between the second pixel and the center of the to-be-processed region comprises:
    determining the distance between the second pixel and the center of the to-be-processed region;
    determining a to-be-processed region radius corresponding to the second pixel;
    determining a second ratio between the distance and the to-be-processed region radius; and
    determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the second ratio.
  36. The method according to claim 33, wherein determining the first pixel corresponding to the second pixel according to the distortion strength coefficient and the distance between the second pixel and the center of the to-be-processed region comprises:
    determining a distortion strength coefficient corresponding to the second pixel; and
    determining the first pixel corresponding to the second pixel according to the distortion strength coefficient corresponding to the second pixel and the distance.
  37. The method according to claim 36, wherein determining the distortion strength coefficient corresponding to the second pixel comprises:
    determining a to-be-processed region radius corresponding to the second pixel; and
    determining the distortion strength coefficient corresponding to the second pixel according to the to-be-processed region radius corresponding to the second pixel and a preset strength coefficient.
  38. The method according to claim 35 or 37, wherein determining the to-be-processed region radius corresponding to the second pixel comprises:
    determining, as the to-be-processed region radius corresponding to the second pixel, a length from the center of the to-be-processed region, extended through the second pixel, to an edge of the to-be-processed region.
  39. The method according to claim 35 or 37, wherein determining the to-be-processed region radius corresponding to the second pixel comprises:
    determining an extension direction of the to-be-processed region radius according to the second pixel and the center of the to-be-processed region; and
    determining an angle between the extension direction and the first direction, and determining the to-be-processed region radius corresponding to the second pixel according to the angle.
  40. The method according to claim 37, wherein determining the distortion strength coefficient corresponding to the second pixel according to the to-be-processed region radius corresponding to the second pixel and the preset strength coefficient comprises:
    determining a third ratio between the to-be-processed region radius corresponding to the second pixel and a maximum radius of the to-be-processed region; and
    determining the distortion strength coefficient corresponding to the second pixel according to the third ratio and the preset strength coefficient.
  41. The method according to claim 37, wherein the distortion strength coefficient corresponding to the second pixel is positively correlated with the to-be-processed region radius corresponding to the second pixel.
  42. The method according to claim 41, wherein among the distortion strength coefficients corresponding to the second pixels of the deformed image, the distortion strength coefficient corresponding to second pixels in the first direction is the largest, and the distortion strength coefficient corresponding to second pixels in the second direction is the smallest.
  43. The method according to claim 36, wherein determining the first pixel corresponding to the second pixel according to the distortion strength coefficient corresponding to the second pixel and the distance comprises:
    determining a fourth ratio between the distance and the to-be-processed region radius corresponding to the second pixel; and
    determining the first pixel corresponding to the second pixel according to the distortion strength coefficient corresponding to the second pixel and the fourth ratio.
  44. The method according to claim 32, wherein determining, based on the positional correspondence, the second pixel value of the second pixel according to the first pixel value of the first pixel comprises:
    determining the second pixel value of the second pixel according to the first pixel value of the first pixel in the original image and/or first pixel values of other pixels adjacent to the first pixel in the original image.
  45. An image processing device, comprising one or more processors, working individually or jointly, configured to perform the following steps:
    obtaining a to-be-processed image, and determining face key points in the to-be-processed image;
    determining an eye region in the to-be-processed image according to the face key points, wherein a first length of the eye region extending in a first direction is unequal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction; and
    applying preset processing to the eye region.
  46. The image processing device according to claim 45, wherein the image processing device comprises at least one of the following: a camera, a mobile phone, a computer, a server, or a movable platform.
  47. An image processing device, comprising one or more processors, working individually or jointly, configured to perform the following steps:
    obtaining a to-be-processed image;
    determining a to-be-processed region in the to-be-processed image, wherein a first length of the to-be-processed region extending in a first direction is unequal to a second length of the to-be-processed region extending in a second direction, and the first direction is different from the second direction; and
    applying preset processing to the to-be-processed region.
  48. The image processing device according to claim 47, wherein the image processing device comprises at least one of the following: a camera, a mobile phone, a computer, a server, or a movable platform.
  49. A movable platform, capable of carrying a photographing apparatus, the photographing apparatus being configured to acquire images;
    the movable platform further comprising one or more processors, working individually or jointly, configured to perform the following steps:
    obtaining a to-be-processed image, and determining face key points in the to-be-processed image;
    determining an eye region in the to-be-processed image according to the face key points, wherein a first length of the eye region extending in a first direction is unequal to a second length of the eye region extending in a second direction, and the first direction is different from the second direction; and
    applying preset processing to the eye region.
  50. The movable platform according to claim 49, wherein the movable platform comprises at least one of the following: an unmanned aerial vehicle, a handheld gimbal, or a gimbal vehicle.
  51. A movable platform, capable of carrying a photographing apparatus, the photographing apparatus being configured to acquire images;
    the movable platform further comprising one or more processors, working individually or jointly, configured to perform the following steps:
    obtaining a to-be-processed image;
    determining a to-be-processed region in the to-be-processed image, wherein a first length of the to-be-processed region extending in a first direction is unequal to a second length of the to-be-processed region extending in a second direction, and the first direction is different from the second direction; and
    applying preset processing to the to-be-processed region.
  52. The movable platform according to claim 51, wherein the movable platform comprises at least one of the following: an unmanned aerial vehicle, a handheld gimbal, or a gimbal vehicle.
  53. A computer-readable storage medium, storing a computer program, wherein, when the computer program is executed by a processor, the processor is caused to implement:
    the image processing method according to any one of claims 1-27; and/or
    the image processing method according to any one of claims 28-44.
PCT/CN2020/090916 2020-05-18 2020-05-18 Image processing method, device, movable platform and storage medium WO2021232209A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080007161.7A 2020-05-18 2020-05-18 Image processing method, device, movable platform and storage medium
PCT/CN2020/090916 2020-05-18 2020-05-18 Image processing method, device, movable platform and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/090916 WO2021232209A1 (zh) 2020-05-18 2020-05-18 Image processing method, device, movable platform and storage medium

Publications (1)

Publication Number Publication Date
WO2021232209A1 true WO2021232209A1 (zh) 2021-11-25

Family

ID=77086016

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090916 WO2021232209A1 (zh) 2020-05-18 2020-05-18 Image processing method, device, movable platform and storage medium

Country Status (2)

Country Link
CN (1) CN113228045A (zh)
WO (1) WO2021232209A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160342831A1 (en) * 2015-05-21 2016-11-24 Futurewei Technologies, Inc. Apparatus and method for neck and shoulder landmark detection
CN107862673A (zh) * 2017-10-31 2018-03-30 北京小米移动软件有限公司 图像处理方法及装置
CN108090450A (zh) * 2017-12-20 2018-05-29 深圳和而泰数据资源与云技术有限公司 人脸识别方法和装置
CN108665498A (zh) * 2018-05-15 2018-10-16 北京市商汤科技开发有限公司 图像处理方法、装置、电子设备和存储介质
CN110942043A (zh) * 2019-12-02 2020-03-31 深圳市迅雷网络技术有限公司 一种瞳孔图像处理方法及相关装置
CN111144348A (zh) * 2019-12-30 2020-05-12 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及存储介质


Also Published As

Publication number Publication date
CN113228045A (zh) 2021-08-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application — Ref document number: 20936567; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase — Ref country code: DE
122 Ep: pct application non-entry in european phase — Ref document number: 20936567; Country of ref document: EP; Kind code of ref document: A1