CN110475067B - Image processing method and device, electronic equipment and computer readable storage medium


Info

Publication number
CN110475067B
CN110475067B
Authority
CN
China
Prior art keywords
image
face
variation
face area
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910790212.7A
Other languages
Chinese (zh)
Other versions
CN110475067A (en)
Inventor
贾玉虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910790212.7A priority Critical patent/CN110475067B/en
Publication of CN110475067A publication Critical patent/CN110475067A/en
Application granted granted Critical
Publication of CN110475067B publication Critical patent/CN110475067B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681 Motion detection
    • H04N 23/6811 Motion detection based on the image signal

Abstract

The application relates to an image processing method and device, an electronic device and a computer-readable storage medium. The method comprises the following steps: acquiring a first image and a second image, wherein the second image is a backward frame image of the first image; acquiring a first face area in the first image and a second face area in the second image; determining the variation of the face area between the first face area and the second face area; and when the variation of the face area is smaller than a first preset variation, correcting the second face area to obtain a third image, and displaying the third image. By adopting the method, the definition of the image can be improved.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of image processing technology, anti-shake technology has emerged. Anti-shake technology is used to avoid picture blur caused by factors such as hand shake during photographing and to improve imaging definition. However, because conventional image processing considers only hand shake when performing anti-shake, the resulting image definition is not high.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, electronic equipment and a computer readable storage medium, which can improve the definition of an image.
An image processing method comprising:
acquiring a first image and a second image, wherein the second image is a backward frame image of the first image;
acquiring a first face area in the first image and a second face area in the second image;
determining a face region variation between the first face region and the second face region;
and when the variation of the face area is smaller than a first preset variation, correcting the second face area to obtain a third image, and displaying the third image.
An image processing apparatus comprising:
the image acquisition module is used for acquiring a first image and a second image, wherein the second image is a backward frame image of the first image;
the face region acquisition module is used for acquiring a first face region in the first image and a second face region in the second image;
the variable quantity determining module is used for determining the variable quantity of the face area between the first face area and the second face area;
and the correction module is used for correcting the second face area according to the face area variation to obtain a third image and displaying the third image when the face area variation is smaller than a first preset variation.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring a first image and a second image, wherein the second image is a backward frame image of the first image;
acquiring a first face area in the first image and a second face area in the second image;
determining a face region variation between the first face region and the second face region;
and when the variation of the face area is smaller than a first preset variation, correcting the second face area to obtain a third image, and displaying the third image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a first image and a second image, wherein the second image is a backward frame image of the first image;
acquiring a first face area in the first image and a second face area in the second image;
determining a face region variation between the first face region and the second face region;
and when the variation of the face area is smaller than a first preset variation, correcting the second face area to obtain a third image, and displaying the third image.
According to the image processing method and device, the electronic device and the computer-readable storage medium, the first image and the second image are collected, the corresponding face areas are acquired, and the variation of the face area is determined; when the variation of the face area is smaller than the first preset variation, the second face area is corrected to obtain the third image, and the third image is displayed. In this way, when the face area shakes, the face area can be kept relatively static over a short period of time, thereby improving the definition of the image or video.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of an image processing method in one embodiment;
FIG. 2 is a schematic diagram of an image processing circuit in one embodiment;
FIG. 3 is a flow diagram of a method of image processing in one embodiment;
FIG. 4 is a schematic illustration of an image in one embodiment;
FIG. 5 is a diagram illustrating correction of a face region according to an embodiment;
FIG. 6 is a flowchart illustrating a process of performing a face correction process on a second face region according to an embodiment;
FIG. 7 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
fig. 8 is a schematic diagram of an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first image may be referred to as a second image, and similarly, a second image may be referred to as a first image, without departing from the scope of the present application. The first image and the second image are both images, but they are not the same image.
Fig. 1 is a schematic diagram of an application environment of an image processing method in an embodiment. As shown in fig. 1, the application environment includes an electronic device 110. The electronic device 110 includes a front facing camera 112. The number of the front cameras 112 is not limited, and may be one, two, three, or the like without being limited thereto. The front facing camera 112 may be used to capture a first image and a second image. The electronic device 110 may particularly, but not exclusively, be a personal computer, a laptop, a smartphone, a tablet and a portable wearable device.
The embodiment of the application also provides the electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 2 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 2, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 2, the image processing circuit includes an ISP processor 240 and control logic 250. The image data captured by the imaging device 210 is first processed by the ISP processor 240, and the ISP processor 240 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 210. The imaging device 210 may include a camera having one or more lenses 212 and an image sensor 214. The image sensor 214 may include an array of color filters (e.g., Bayer filters), and the image sensor 214 may acquire light intensity and wavelength information captured with each imaging pixel of the image sensor 214 and provide a set of raw image data that may be processed by the ISP processor 240. The sensor 220 (e.g., a gyroscope) may provide acquired image processing parameters (e.g., anti-shake parameters) to the ISP processor 240 based on the sensor 220 interface type. The sensor 220 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, the image sensor 214 may also send raw image data to the sensor 220, the sensor 220 may provide the raw image data to the ISP processor 240 based on the sensor 220 interface type, or the sensor 220 may store the raw image data in the image memory 230.
The ISP processor 240 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 240 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 240 may also receive image data from the image memory 230. For example, the sensor 220 interface sends raw image data to the image memory 230, and the raw image data in the image memory 230 is then provided to the ISP processor 240 for processing. The image Memory 230 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from image sensor 214 interface or from sensor 220 interface or from image memory 230, ISP processor 240 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 230 for additional processing before being displayed. ISP processor 240 receives processed data from image memory 230 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 240 may be output to display 270 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 240 may also be sent to the image memory 230, and the display 270 may read image data from the image memory 230. In one embodiment, image memory 230 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 240 may be transmitted to an encoder/decoder 260 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 270 device. The encoder/decoder 260 may be implemented by a CPU or GPU or coprocessor.
The statistics determined by ISP processor 240 may be sent to control logic 250 unit. For example, the statistical data may include image sensor 214 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 212 shading correction, and the like. Control logic 250 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of imaging device 210 and ISP processor 240 based on the received statistical data. For example, the control parameters of the imaging device 210 may include sensor 220 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 212 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 212 shading correction parameters.
With the image processing technology in fig. 2, the image processing method is implemented by the following steps: acquiring a first image and a second image, wherein the second image is a backward frame image of the first image; acquiring a first face area in the first image and a second face area in the second image; determining the variation of the face area between the first face area and the second face area; and when the variation of the face area is smaller than the first preset variation, correcting the second face area to obtain a third image, and displaying the third image.
FIG. 3 is a flow diagram of a method of image processing in one embodiment. The image processing method in the present embodiment is described by taking the electronic device 110 in fig. 1 as an example. As shown in fig. 3, the image processing method is applied to a mobile electronic device including a front camera, and includes steps 302 to 308.
Step 302, a first image and a second image are acquired, wherein the second image is a backward frame image of the first image.
The first image and the second image may be preview images or images captured during video shooting. The second image may be the next frame image adjacent to the first image, or may be a backward frame image that is not adjacent to the first image. For example, if the first image is the first frame image, the second image may be the third frame image, and so on; this is not limited here.
Specifically, the electronic equipment acquires a first image and a second image through a front camera.
Step 304, a first face region in the first image and a second face region in the second image are obtained.
Wherein the first image includes a first background region and a first face region. FIG. 4 is a schematic diagram of an image in one embodiment. Here, 410 is a face area, and 420 is a background area. The face region includes a face. Other body parts of the person may also be included in the face region. The image processing method in the embodiment of the application processes the face area and the background area separately.
Specifically, the electronic device detects whether the first image and the second image contain human faces through the feature points. When the first image and the second image both contain human faces, a first human face area in the first image and a second human face area in the second image are obtained.
Step 306, determining the variation of the face area between the first face area and the second face area.
The face area variation may specifically be at least one of a face movement position variation, a face rotation variation angle, and a face area size variation. The variation of the face area may be a distance or an angle.
Specifically, the electronic device determines the variation of the face area between the first face area and the second face area according to the change in position of the feature points in the face area. For example, if the face feature points shift to the left without being deformed, the face is determined to have moved to the left; if the area corresponding to the face feature points becomes smaller, the face is determined to have moved away from the camera of the electronic device.
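As a purely illustrative sketch (not part of the original disclosure), the following Python snippet shows one way the face area variation could be estimated from detected face bounding boxes; the FaceBox type and the pixel-based measures are assumptions rather than the patent's own definitions.

```python
from dataclasses import dataclass

@dataclass
class FaceBox:
    x: float  # left edge in pixels
    y: float  # top edge in pixels
    w: float  # width in pixels
    h: float  # height in pixels

    @property
    def center(self):
        return (self.x + self.w / 2.0, self.y + self.h / 2.0)

    @property
    def area(self):
        return self.w * self.h

def face_region_variation(first: FaceBox, second: FaceBox):
    """Return (dx, dy, area_ratio) between two detected face regions.

    dx/dy approximate the face movement position variation; area_ratio > 1
    suggests the face moved toward the camera, < 1 away from it.
    """
    (cx1, cy1), (cx2, cy2) = first.center, second.center
    dx, dy = cx2 - cx1, cy2 - cy1
    area_ratio = second.area / first.area
    return dx, dy, area_ratio
```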
And 308, when the variation of the face area is smaller than the first preset variation, correcting the second face area to obtain a third image, and displaying the third image.
The first preset variation is configured according to the type of the variation. When the face area variation is a face area position variation, the first preset variation is a first face area position variation. When the face area variation is a face rotation variation angle, the first preset variation is a first face rotation variation angle. The correction processing refers to performing tilt correction on the image, and the correction processing can keep the portrait area in a relatively static state.
Specifically, when the electronic device detects that the variation of the face area is smaller than a first preset variation, the electronic device corrects the second face area to obtain a third image, and the third image is displayed. Namely, the electronic equipment replaces the first image and the second image acquired by the original camera with the third image.
In this embodiment, when the variation of the face area is smaller than the first preset variation, the electronic device may correct the second face area according to a preset correction processing amplitude to obtain a third image. For example, the preset correction processing amplitude may be a processing amplitude corresponding to one third, one half or two thirds of the first preset variation, but is not limited thereto. The third image is the image obtained after the electronic device processes the second face area in the second image.
In this embodiment, when the variation of the face area is smaller than the first preset variation, the electronic device may correct the second face area according to the variation of the face area to obtain a third image. For example, if the face region variation is 2, the correction processing amplitude is also 2, so that the position of the face region in the third image is the same as the position of the face region in the first image.
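A minimal sketch of such a variation-matched correction is given below, assuming OpenCV-style image arrays; the mask-based paste and the helper names are illustrative assumptions, not the patented implementation.

```python
import cv2
import numpy as np

def correct_face_region(second_img, face_mask, dx, dy):
    """Shift only the masked face region of `second_img` back by (-dx, -dy)
    so its position matches the first image; the background is left as-is.
    `face_mask` is a uint8 mask that is 255 inside the second face region."""
    h, w = second_img.shape[:2]
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])           # inverse translation
    shifted = cv2.warpAffine(second_img, M, (w, h))      # move the whole frame
    shifted_mask = cv2.warpAffine(face_mask, M, (w, h))  # move the mask too
    third = second_img.copy()
    third[shifted_mask > 0] = shifted[shifted_mask > 0]  # paste face back
    return third
```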
In this embodiment, the correcting the second face region to obtain a third image includes: correcting the second face area and performing electronic anti-shake processing to obtain the third image. Electronic anti-shake processing increases the photosensitive parameters of a CCD (Charge-coupled Device) while speeding up the shutter, analyzes the image obtained on the CCD, and then compensates for the shake using the edge images. Electronic anti-shake uses digital circuitry to process the picture to produce an anti-shake effect. When the anti-shake circuit works, the captured picture is only about 90% of the actual picture; the digital circuit then makes a rough judgment of the direction in which the camera shakes and uses the remaining portion of the picture to compensate for the shake. Electronic anti-shake is used when the electronic device shakes.
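The crop-and-compensate idea described above can be sketched as follows; the roughly-90% crop margin and the clamping are assumptions used only to illustrate the principle.

```python
def electronic_anti_shake(frame, dx, dy, margin_ratio=0.05):
    """Crop roughly 90% of the frame and slide the crop window opposite to
    the estimated shake (dx, dy), using the reserved border for compensation."""
    h, w = frame.shape[:2]
    mx, my = int(w * margin_ratio), int(h * margin_ratio)
    # Clamp the compensation so the crop window stays inside the frame.
    ox = int(max(-mx, min(mx, -dx)))
    oy = int(max(-my, min(my, -dy)))
    return frame[my + oy: h - my + oy, mx + ox: w - mx + ox]
```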
In this embodiment, the electronic device may further perform a face beautifying process on the face region.
The image processing method in this embodiment acquires a first image and a second image, acquires the corresponding face regions, and determines the variation of the face region. When the variation of the face region is smaller than the first preset variation, it indicates that the face region only shakes slightly, so the second face region is corrected to obtain a third image and the third image is displayed. In this way, the face region can be kept relatively static, thereby improving the definition of the image or video.
In one embodiment, when the variation of the face area is smaller than a first preset variation, performing a rectification process on the second face area according to the variation of the face area to obtain a third image, including:
and when the variation of the face area is smaller than the first preset variation and larger than the second preset variation, performing first face correction processing on the second face area to obtain a third image.
And when the variation of the face area is smaller than a second preset variation, performing second face correction processing on the second face area to obtain a third image, wherein the correction amplitude of the first face correction processing is larger than that of the second face correction processing.
The first preset variation is larger than the second preset variation. The correction amplitude refers to the degree of face correction of the second face region. For example, the correction amplitude may be a correction angle, a correction distance, or the like, but is not limited thereto.
Specifically, fig. 5 is a schematic diagram of correcting a face region in one embodiment, in which (a) is the first image, (b) is the second image, and (c) is the third image. As can be seen, the offset angle of the face region in image (c) after the rectification processing is smaller than that in image (b), and the offset angle in image (c) is less than or equal to that in image (a).
And when the variation of the face area is smaller than the first preset variation and larger than the second preset variation, determining that the variation of the second face area is larger, and performing first face correction processing on the second face area to obtain a third image. And when the variation of the face area is smaller than a second preset variation, determining that the variation of the second face area is smaller, and performing second face correction processing on the second face area to obtain a third image.
In this embodiment, when the variation of the face area is smaller than the first preset variation and larger than the second preset variation, the moving direction of the face area is obtained, and the first correction processing is performed on the second face area in the direction opposite to the moving direction, so as to obtain the third image.
And when the variation of the face area is smaller than a second preset variation, acquiring the moving direction of the face area, and performing second correction processing on the second face area in the direction opposite to the moving direction to obtain a third image.
In the image processing method in this embodiment, when the variation of the face area is smaller than the first preset variation and larger than the second preset variation, the first face correction processing is performed on the second face area, and when the variation of the face area is smaller than the second preset variation, the second face correction processing is performed on the second face area, so that corresponding processing can be performed on different variations of the face area, and the accuracy of image processing and the definition of an image or a video are improved.
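One possible way to select between the first and second face correction processing is sketched below; the specific amplitude fractions are arbitrary placeholders, since the disclosure does not fix them.

```python
def choose_correction(variation, first_preset, second_preset):
    """Pick a correction amplitude from the measured face-region variation.

    first_preset > second_preset. A larger (but still small) shake gets the
    stronger first face correction; a tiny shake gets the milder second one;
    variation at or above first_preset is treated as intentional motion."""
    assert first_preset > second_preset
    if variation >= first_preset:
        return 0.0                   # e.g. turning the head: no correction
    if variation > second_preset:
        return 0.75 * variation      # first (stronger) face correction
    return 0.5 * variation           # second (milder) face correction
```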
In one embodiment, when the variation of the face area is smaller than a first preset variation, performing a rectification process on the second face area to obtain a third image includes: when the variation of the face area is smaller than a first preset variation, acquiring the moving direction of the face area; and according to the variation of the face area, correcting the second face area in the direction opposite to the moving direction to obtain a third image.
Specifically, when the electronic device detects that the face feature point is shifted to the left, the moving direction of the face region is determined to be to the left. And the electronic equipment corrects the second face area rightwards according to the face area variation to obtain a third image. When the electronic equipment detects that the face area is reduced, the moving direction of the face area is determined to be far away from the camera, and the second face area is corrected in the direction close to the camera according to the variation of the face area to obtain an increased face area, so that a third image is obtained.
In the image processing method in this embodiment, when the variation of the face area is smaller than a first preset variation, the moving direction of the face area is obtained; and according to the variation of the face area, the second face area is corrected in the direction opposite to the moving direction to obtain a third image, so that the shaking amplitude of the face area can be reduced, and the definition of the image or the video can be improved.
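A hedged sketch of deriving an opposite-direction correction (including the enlarge-when-shrunk case) from the measured variation might look like this; the square-root area-to-scale conversion is an assumption.

```python
def opposite_correction(dx, dy, area_ratio):
    """Derive a correction opposite to the observed movement: translate back
    by (-dx, -dy) and scale so a face that shrank (moved away from the
    camera) is enlarged back toward its original size."""
    scale_back = (1.0 / area_ratio) ** 0.5   # area ratio -> linear scale
    return -dx, -dy, scale_back
```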
In one embodiment, the second image includes a second face region and a second background region, and the correcting the second face region to obtain a third image includes: correcting the second face area according to the face area variation; and when a blank area is detected in the background area of the corrected image, filling the blank area to obtain the third image.
Specifically, the electronic device converts the variation of the face area into a correction amplitude for the second face area and corrects the second face area according to the correction amplitude. Since the corrected image may contain a blank area, when the electronic device detects that a blank area exists in the background area of the corrected image, the blank area is filled using an image filling algorithm to obtain a third image. For example, the image filling algorithm may be a flood fill algorithm, a Gaussian filtering algorithm, or the like, but is not limited thereto. Alternatively, the electronic device fills the blank area after the correction processing with the pixels of the corresponding background area in the second image to obtain the third image.
In the image processing method in this embodiment, the second face area is corrected according to the amount of change in the face area, and when it is detected that a blank area exists in the background area of the corrected image, the blank area is filled to obtain the third image, so that the obtained third image has no blank area and is more vivid, the shake of the face area is reduced, and the definition of the image or the video is improved.
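The blank-area filling step could, for example, be sketched with OpenCV as below; copying background pixels from the second image and Telea inpainting are stand-ins for the disclosure's generic image filling algorithm, not the claimed method itself.

```python
import cv2
import numpy as np

def fill_blank_area(corrected_img, original_second_img, blank_mask):
    """Fill holes left by the rectification. First copy the matching
    background pixels from the original second image; then run an
    inpainting pass over the same mask as an extra smoothing step.
    `blank_mask` is a uint8 mask that is 255 where the hole is."""
    filled = corrected_img.copy()
    filled[blank_mask > 0] = original_second_img[blank_mask > 0]
    return cv2.inpaint(filled, blank_mask, 3, cv2.INPAINT_TELEA)
```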
In an embodiment, fig. 6 is a flowchart illustrating a correction process performed on a second face region in an embodiment. And performing correction processing on the second face area, wherein the correction processing comprises the following steps:
step 602, obtaining depth information corresponding to the first image.
In particular, the image processing method in the embodiment of the present application can be applied to a mobile electronic device including a depth camera. The electronic equipment acquires depth information corresponding to the first image through the depth camera.
And step 604, determining the target distance between the human face and the camera according to the depth information.
And 606, acquiring a target correction amplitude according to the target distance, wherein the target correction amplitude is in negative correlation with the target distance.
The larger the distance between the face and the camera, the smaller the jitter displayed on the screen of the electronic device for the same physical jitter of the face, and therefore the smaller the required correction amplitude.
Specifically, the corresponding relationship between the target distance and the target correction amplitude may be stored in the electronic device in advance through calibration or the like. The electronic device can search for the corresponding target correction amplitude according to the target distance. Or the electronic equipment adjusts the variation of the correction amplitude according to the variation of the target distance to obtain the target correction amplitude.
And step 608, performing correction processing on the second face area according to the target correction amplitude.
Specifically, the electronic device corrects the second face region according to the target correction amplitude and the corresponding correction direction. By default, the face area is corrected so that the symmetry axis of the face is perpendicular to the width of the current camera interface.
In this embodiment, the electronic device may perform correction processing on the second face region according to the target correction amplitude and the face region variation. For example, if the face area changes to shift 0.2 degrees to the left and the target distance increases by 0.02 cm, the electronic device shifts the second face area by 0.1 degrees to the right according to the target correction amplitude and the face area change amount.
The image processing method in the embodiment obtains depth information corresponding to a first image, determines a target distance between a face and a camera according to the depth information, obtains a target correction amplitude according to the target distance, wherein the target correction amplitude is in negative correlation with the target distance, corrects a second face region according to the target correction amplitude, can be applied to electronic equipment with multiple cameras, and adjusts the target correction amplitude according to the distance between the face and the camera, so that an obtained third image is more vivid, the jitter of the face region is reduced, and the definition of the image or the video is improved.
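An illustrative mapping from the depth-derived target distance to a negatively correlated target correction amplitude is sketched below; the reference distance and the inverse-proportional form are assumptions.

```python
def target_correction_amplitude(target_distance_m, base_amplitude=1.0):
    """Map the face-to-camera distance (from depth data) to a correction
    amplitude that decreases as the distance grows (negative correlation).
    The constants are illustrative only."""
    reference_distance_m = 0.5  # assumed calibration point
    return base_amplitude * reference_distance_m / max(target_distance_m, 1e-3)
```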
In one embodiment, determining a target distance between a face and a camera from the depth information comprises: when it is detected that the first image contains at least two face regions, determining the distance between each of the at least two face regions and the camera according to the depth information; acquiring the minimum distance value among the distances between the face regions and the camera; and taking the minimum distance value as the target distance, and taking the face area corresponding to the minimum distance value as the second face area.
Specifically, the electronic device detects a face region in the first image. And when the first image is detected to contain at least two face areas, determining the distance between each face area of the at least two face areas and the camera according to the depth information. And the electronic equipment acquires the minimum distance value and takes the minimum distance value as a target distance. The electronic equipment also takes the face area corresponding to the minimum distance value as a second face area. The electronic equipment only corrects the second face area, and the rest face areas are used as background areas and are not processed.
In the image processing method in this embodiment, when it is detected that the first image includes at least two face regions, the distance between each of the at least two face regions and the camera is determined according to the depth information; the minimum distance value among the distances between the face regions and the camera is acquired; the minimum distance value is taken as the target distance, and the face area corresponding to the minimum distance value is taken as the second face area. The face area corresponding to the minimum distance value is thus treated as the key face area, so that only the face area closest to the camera is processed, which improves the image processing efficiency.
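A sketch of selecting the key (closest) face from the depth information follows; it reuses the hypothetical FaceBox type from the earlier sketch and uses the median depth inside each box as one reasonable distance estimate.

```python
import numpy as np

def select_key_face(face_boxes, depth_map):
    """Return the index of the face closest to the camera and its distance;
    that face is treated as the second face region, the rest as background."""
    def face_distance(box):
        region = depth_map[int(box.y):int(box.y + box.h),
                           int(box.x):int(box.x + box.w)]
        return float(np.median(region))
    distances = [face_distance(b) for b in face_boxes]
    idx = int(np.argmin(distances))
    return idx, distances[idx]
```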
In one embodiment, the performing the correction process on the second face region includes: acquiring the area of a face region in a first image; acquiring a target correction amplitude according to the area of the face region, wherein the target correction amplitude is positively correlated with the area of the face region; and carrying out correction processing on the second face area according to the target correction amplitude.
The area of the face region may be calculated from the number of pixels occupied by the face region. The larger the area of the face region, the smaller the distance between the face and the camera. The smaller the distance, the larger the jitter displayed on the screen of the electronic device for the same physical jitter of the face, and therefore the larger the required correction amplitude. Accordingly, the larger the area of the face region, the larger the target correction amplitude.
Specifically, the electronic device obtains the area of the face region in the first image according to the number of pixels occupied by the face region. The electronic equipment can obtain the corresponding target correction amplitude according to the area of the face region. Or the electronic equipment can determine the proportion of the area of the face region according to the area of the face region and the area of the image, and acquire the target correction amplitude according to the proportion. The larger the area ratio of the human face area is, the larger the target correction amplitude is.
The image processing method in the embodiment acquires the area of a face region in a first image; acquiring a target correction amplitude according to the area of the face region, wherein the target correction amplitude is positively correlated with the area of the face region; the second face region is corrected according to the target correction amplitude, the target correction amplitude can be adjusted according to the face region area, the obtained third image is more vivid, the shaking of the face region is reduced, and the definition of the image or the video is improved.
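The positively correlated, area-based target correction amplitude could be sketched as follows; the saturation point at a quarter of the frame is an arbitrary illustrative choice.

```python
def amplitude_from_face_area(face_area_px, image_area_px, max_amplitude=1.0):
    """Scale the correction amplitude with the fraction of the frame the face
    occupies: a bigger face (closer to the camera) gets a larger correction."""
    ratio = face_area_px / float(image_area_px)
    return max_amplitude * min(1.0, ratio / 0.25)  # saturate at 25% of frame
```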
In one embodiment, the image processing method further comprises: acquiring depth information corresponding to the first image; obtaining a target distance between the face and the camera according to the depth information; and adjusting the preset variable quantity according to the target distance between the human face and the camera to obtain a first preset variable quantity, wherein the preset variable quantity is in negative correlation with the distance between the human face and the camera.
Specifically, the electronic device acquires depth information corresponding to the first image and obtains the target distance between the face and the camera according to the depth information. The farther the distance, the smaller the change displayed on the display screen of the electronic device for the same movement amplitude of the user's face area. The purpose of the correction is to reduce slight shaking of the face region, not to correct normal behavior of the face region such as turning or lowering the head. Therefore, the larger the target distance between the face and the camera, the smaller the first preset variation; the smaller the target distance between the face and the camera, the larger the first preset variation. When the variation of the face area is smaller than the first preset variation, the second face area is corrected.
The image processing method in the embodiment acquires depth information corresponding to a first image; the target distance between the face and the camera is obtained according to the depth information, the preset variable quantity is adjusted according to the target distance between the face and the camera, the first preset variable quantity is obtained, the first preset variable quantity can be adjusted according to the depth information, the obtained third image is more vivid, the shaking of the face area is reduced, and the definition of the image or the video is improved.
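One way to make the first preset variation shrink with distance is a clamped interpolation, sketched below; the distance range and pixel thresholds are assumptions, not values from the disclosure.

```python
def first_preset_variation(target_distance_m,
                           near_threshold_px=12.0, far_threshold_px=4.0):
    """Interpolate the preset variation between a near value and a far value
    so the threshold shrinks as the face-to-camera distance grows
    (negative correlation)."""
    near_m, far_m = 0.3, 1.5
    t = min(1.0, max(0.0, (target_distance_m - near_m) / (far_m - near_m)))
    return near_threshold_px + t * (far_threshold_px - near_threshold_px)
```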
In one embodiment, the image processing method in the embodiment of the present application may be applied to a case where the face region is shaken. For example, the image processing method in the embodiment of the present application may be applied to a live video scene, a video recording scene, or an image preview scene. In a live video scene, a user may not always keep a completely static state, and there are some small actions such as leg shaking and the like, which causes a face area to shake slightly. Such slight jitter can be reduced or eliminated by the image processing method in the embodiment of the present application.
In one embodiment, an image processing method includes:
step (a1) of acquiring a first image and a second image, wherein the second image is a backward frame image of the first image.
And (a2) acquiring a first face region in the first image and a second face region in the second image.
And (a3) determining the variation of the face area between the first face area and the second face area.
And (a4) acquiring depth information corresponding to the first image.
And (a5) obtaining the target distance between the human face and the camera according to the depth information.
And (a6) adjusting a preset variation according to the target distance between the human face and the camera to obtain a first preset variation, wherein the preset variation is negatively related to the distance between the human face and the camera.
And (a7), when the variation of the face area is smaller than a first preset variation, acquiring depth information corresponding to the first image.
And (a8), when the first image is detected to contain at least two face regions, determining the distance between each face region of the at least two face regions and the camera according to the depth information.
And (a9) acquiring the minimum distance value in the distances between each face area and the camera.
And (a10) taking the minimum distance as the target distance and taking the face area corresponding to the minimum distance as a second face area.
And (a11) acquiring a target correction amplitude according to the target distance, wherein the target correction amplitude is in negative correlation with the target distance.
And (a12) performing correction processing on the second face area according to the target correction amplitude.
The image processing method in this embodiment acquires the first image and the second image, acquires the corresponding face regions, determines the variation of the face region, and adjusts the first preset variation according to the depth information. When the variation of the face region is smaller than the first preset variation, it indicates that the face region only shakes slightly, so the second face region is corrected to obtain a third image and the third image is displayed, thereby keeping the face region relatively static and improving the definition of the image or video.
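Tying the hypothetical helpers from the earlier sketches together, steps (a1) to (a12) might be wired up roughly as follows; `detect_faces` is an assumed face detector returning FaceBox instances in a stable order across frames, and none of this code is part of the original disclosure.

```python
import numpy as np

def mask_from_box(img, box):
    """Build a uint8 mask that is 255 inside the face bounding box."""
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    mask[int(box.y):int(box.y + box.h), int(box.x):int(box.x + box.w)] = 255
    return mask

def process_frame_pair(first_img, second_img, depth_map, detect_faces):
    """End-to-end sketch of steps (a1)-(a12) using the helpers above."""
    first_faces = detect_faces(first_img)
    second_faces = detect_faces(second_img)
    if not first_faces or not second_faces:
        return second_img                                # nothing to stabilize

    # (a8)-(a10): the face closest to the camera is the key face region.
    idx, target_distance = select_key_face(first_faces, depth_map)
    dx, dy, _ = face_region_variation(first_faces[idx], second_faces[idx])
    variation = (dx ** 2 + dy ** 2) ** 0.5

    # (a6)-(a7): depth-adjusted threshold; larger motion is left untouched.
    if variation >= first_preset_variation(target_distance):
        return second_img                                # intentional motion

    # (a11)-(a12): distance-dependent amplitude, then rectify the face region.
    amplitude = min(1.0, target_correction_amplitude(target_distance))
    face_mask = mask_from_box(second_img, second_faces[idx])
    return correct_face_region(second_img, face_mask,
                               amplitude * dx, amplitude * dy)
```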
It should be understood that although the various steps in the flowcharts of fig. 3 and 6 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 3 and 6 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time but may be performed at different times, and the order of performing these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, fig. 7 is a block diagram of an image processing apparatus of an embodiment. As shown in fig. 7, an image processing apparatus includes an image acquisition module 702, a face region acquisition module 704, a variation determination module 706, and a rectification module 708, wherein:
an image acquiring module 702 is configured to acquire a first image and a second image, where the second image is a backward frame image of the first image.
A face region obtaining module 704, configured to obtain a first face region in the first image and a second face region in the second image.
A variation determining module 706, configured to determine a variation of the face area between the first face area and the second face area.
The correcting module 708 is configured to, when the variation of the face area is smaller than the first preset variation, correct the second face area according to the variation of the face area to obtain a third image, and display the third image.
The image processing device in this embodiment acquires the first image and the second image, acquires the corresponding face regions, and determines the variation of the face region. When the variation of the face region is smaller than the first preset variation, it indicates that the face region only shakes slightly, so the second face region is corrected to obtain a third image and the third image is displayed, thereby keeping the face region relatively static and improving the definition of the image or video.
In an embodiment, the rectification module 708 is configured to perform a first face rectification process on the second face region to obtain a third image when the variation of the face region is smaller than a first preset variation and larger than a second preset variation; and when the variation of the face area is smaller than a second preset variation, performing second face correction processing on the second face area to obtain a third image, wherein the correction amplitude of the first face correction processing is larger than that of the second face correction processing.
The image processing apparatus in this embodiment, when the variation of the face area is smaller than the first preset variation and larger than the second preset variation, performs the first face correction processing on the second face area, and when the variation of the face area is smaller than the second preset variation, performs the second face correction processing on the second face area, and can perform corresponding processing on the different variations of the face area, thereby improving the accuracy of image processing and the definition of images or videos.
In one embodiment, the rectification module 708 is configured to obtain a moving direction of the face region when the variation of the face region is smaller than a first preset variation; and according to the variation of the face area, correcting the second face area in the direction opposite to the moving direction to obtain a third image.
In the image processing apparatus in this embodiment, when the variation of the face area is smaller than a first preset variation, the moving direction of the face area is obtained; and according to the variation of the face area, the second face area is corrected in the direction opposite to the moving direction to obtain a third image, so that the shaking amplitude of the face area can be reduced, and the definition of the image or the video can be improved.
In one embodiment, the rectification module 708 is configured to perform rectification processing on the second face region according to the face region variation; and when the blank area exists in the background area of the image after the correction processing is detected, filling the blank area to obtain a third image.
The image processing apparatus in this embodiment corrects the second face region according to the amount of change in the face region, and when it is detected that a blank region exists in the background region of the corrected image, fills the blank region to obtain the third image, so that the obtained third image has no blank region and is more vivid, thereby reducing the jitter of the face region and improving the definition of the image or the video.
In one embodiment, the image processing apparatus further comprises an adjustment module. The adjusting module is used for acquiring depth information corresponding to the first image; determining a target distance between the face and the camera according to the depth information; and acquiring a target correction amplitude according to the target distance, wherein the target correction amplitude is in negative correlation with the target distance. The rectification module 708 is configured to perform rectification processing on the second face region according to the target rectification amplitude.
The image processing device in this embodiment obtains depth information corresponding to the first image, determines a target distance between the face and the camera according to the depth information, obtains a target correction amplitude according to the target distance, where the target correction amplitude is negative-related to the target distance, corrects the second face region according to the target correction amplitude, and can be applied to electronic equipment with multiple cameras, and adjust the target correction amplitude according to the distance between the face and the camera, so that the obtained third image is more vivid, the jitter of the face region is reduced, and the definition of the image or the video is improved.
In one embodiment, the adjusting module is configured to determine, when it is detected that the first frame image includes at least two face regions, a distance between each of the at least two face regions and the camera according to the depth information; acquiring the minimum distance value in the distance between each face area and the camera; and taking the minimum distance value as a target distance, and taking a face area corresponding to the minimum distance value as a second face area.
In the image processing apparatus in this embodiment, when it is detected that the first frame image includes at least two face regions, a distance between each of the at least two face regions and the camera is determined according to the depth information; acquiring the minimum distance value in the distance between each face area and the camera; the minimum distance is used as a target distance, the face area corresponding to the minimum distance is used as a second face area, the face area corresponding to the minimum distance is used as a key face area, only the face area with the shortest distance can be processed, and the image processing efficiency is improved.
In one embodiment, the adjusting module is used for acquiring the area of a face region in the first image; and acquiring a target correction amplitude according to the area of the face region, wherein the target correction amplitude is positively correlated with the area of the face region. The rectification module 708 is configured to perform rectification processing on the second face region according to the target rectification amplitude.
The image processing device in the embodiment acquires the area of a face region in a first image; acquiring a target correction amplitude according to the area of the face region, wherein the target correction amplitude is positively correlated with the area of the face region; the second face region is corrected according to the target correction amplitude, the target correction amplitude can be adjusted according to the face region area, the obtained third image is more vivid, the shaking of the face region is reduced, and the definition of the image or the video is improved.
In one embodiment, the adjusting module is configured to obtain depth information corresponding to the first image; obtaining a target distance between the face and the camera according to the depth information; and adjusting the preset variable quantity according to the target distance between the human face and the camera to obtain a first preset variable quantity, wherein the preset variable quantity is in negative correlation with the distance between the human face and the camera.
The image processing device in this embodiment acquires depth information corresponding to the first image; the target distance between the face and the camera is obtained according to the depth information, the preset variable quantity is adjusted according to the target distance between the face and the camera, the first preset variable quantity is obtained, the first preset variable quantity can be adjusted according to the depth information, the obtained third image is more vivid, the shaking of the face area is reduced, and the definition of the image or the video is improved.
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not described herein again. The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Fig. 8 is a schematic diagram of an internal structure of an electronic device in one embodiment. As shown in fig. 8, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities and supports the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method provided in the foregoing embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
The implementation of each module in the image processing apparatus provided in the embodiment of the present application may be in the form of a computer program. The computer program may be run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the terminal or the server. Which when executed by a processor, performs the steps of the method described in the embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform an image processing method.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above-mentioned embodiments express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. An image processing method, comprising:
during live broadcasting, a first image and a second image are collected through a camera, wherein the second image is a backward frame image of the first image, and the camera is a front camera;
acquiring a first face area in the first image and a second face area in the second image; the second image comprises the second face area and a second background area;
determining a face region variation between the first face region and the second face region; the human face area variation comprises at least one of human face moving position variation, human face rotation variation angle and human face area size variation;
acquiring depth information corresponding to the first image, obtaining a target distance between a face and a camera according to the depth information, and adjusting a preset variation according to the target distance between the face and the camera to obtain a first preset variation, wherein the preset variation is negatively related to the target distance between the face and the camera;
when the variation of the face area is smaller than the first preset variation, performing correction processing on the second face area, and when a blank area exists in a background area of the corrected image, filling the blank area to obtain a third image and displaying the third image; the rectification processing comprises inclination rectification processing, and the inclination rectification processing is used for keeping a face area in an image in a relatively static state in a live broadcasting process.
2. The method according to claim 1, wherein when the variation of the face area is smaller than a first preset variation, performing a rectification process on the second face area to obtain a third image comprises:
when the variation of the face area is smaller than a first preset variation and larger than a second preset variation, performing first face correction processing on the second face area to obtain a third image;
and when the variation of the face area is smaller than the second preset variation, performing second face correction processing on the second face area to obtain a third image, wherein the correction amplitude of the first face correction processing is larger than that of the second face correction processing.
3. The method according to claim 1, wherein when the variation of the face area is smaller than a first preset variation, performing the correction processing on the second face area to obtain a third image comprises:
when the variation of the face area is smaller than a first preset variation, acquiring the moving direction of the face area;
and according to the face area variation, correcting the second face area in the direction opposite to the moving direction to obtain a third image.
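As a sketch of the "opposite direction" correction in claim 3, the face's measured displacement between the two frames is negated and applied as a translation; the use of cv2.warpAffine and the (dx, dy) convention (second-frame position minus first-frame position) are assumptions.

```python
import cv2
import numpy as np

def counter_translate(second_image, face_motion_xy):
    """Shift the frame opposite to the face's measured motion (dx, dy)."""
    dx, dy = face_motion_xy
    h, w = second_image.shape[:2]
    # Moving the content by (-dx, -dy) cancels the face's apparent drift.
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])
    return cv2.warpAffine(second_image, m, (w, h))
```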
4. The method according to claim 1, wherein performing the correction processing on the second face area to obtain a third image comprises:
and correcting the second face area according to the face area variation.
5. The method of claim 1, wherein performing the correction processing on the second face area comprises:
acquiring a target correction amplitude according to the target distance, wherein the target correction amplitude is negatively correlated with the target distance;
and correcting the second face area according to the target correction amplitude.
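One way to realize the negative correlation in claim 5 is to scale the correction down as the subject moves away from the camera; the reference distance, the saturation for close faces, and the helper name are assumptions.

```python
def amplitude_from_distance(target_distance_m, max_amplitude=1.0,
                            reference_distance_m=0.4):
    """Correction amplitude shrinks as the face-to-camera distance grows."""
    if target_distance_m <= 0:
        return max_amplitude
    return min(max_amplitude,
               max_amplitude * reference_distance_m / target_distance_m)
```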
6. The method of claim 1, wherein obtaining the target distance between the face and the camera according to the depth information comprises:
when it is detected that the first image contains at least two face areas, determining a distance between each of the at least two face areas and the camera according to the depth information;
acquiring a minimum distance value among the distances between the face areas and the camera;
and taking the minimum distance value as the target distance, and taking the face area corresponding to the minimum distance value as the second face area.
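A sketch of the nearest-face selection in claim 6, assuming a per-pixel depth map aligned with the image, (x, y, w, h) face boxes, and a median over each box as the per-face distance estimate; all of these choices are assumptions.

```python
import numpy as np

def pick_nearest_face(face_boxes, depth_map):
    """Return (target_distance, face_box) for the face closest to the camera."""
    best_box, best_dist = None, float("inf")
    for (x, y, w, h) in face_boxes:
        patch = depth_map[y:y + h, x:x + w]
        dist = float(np.median(patch))      # robust per-face distance estimate
        if dist < best_dist:
            best_dist, best_box = dist, (x, y, w, h)
    return best_dist, best_box
```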
7. The method of claim 1, wherein performing the correction processing on the second face area comprises:
acquiring the size of the face area in the first image;
acquiring a target correction amplitude according to the size of the face area, wherein the target correction amplitude is positively correlated with the size of the face area;
and correcting the second face area according to the target correction amplitude.
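A sketch of the positive correlation in claim 7: a face that covers more of the frame gets a stronger correction. The normalization by frame size and the saturation factor are assumptions.

```python
def amplitude_from_face_size(face_box, frame_shape, max_amplitude=1.0):
    """Correction amplitude grows with the fraction of the frame the face covers."""
    x, y, w, h = face_box
    frame_h, frame_w = frame_shape[:2]
    coverage = (w * h) / float(frame_w * frame_h)    # 0.0 .. 1.0
    return max_amplitude * min(1.0, 4.0 * coverage)  # saturate once the face is large (assumed scaling)
```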
8. The method according to any one of claims 1 to 7, wherein performing the correction processing on the second face area to obtain a third image comprises:
correcting the second face area and performing electronic anti-shake processing to obtain a third image; the electronic anti-shake processing is an anti-shake method in which the photosensitivity of a charge-coupled device (CCD) is increased, the shutter speed is raised, the image obtained on the CCD is analyzed, and an edge image is then used for compensation.
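The sensor-side steps recited above (raising CCD sensitivity, speeding up the shutter) are capture settings, but the edge-image compensation can be sketched in software as a crop window that slides inside a reserved border to cancel estimated jitter; the margin size and the jitter estimate passed in are assumptions, and this is only a loose analogue of the recited method.

```python
import numpy as np

def eis_crop(frame, jitter_xy, margin=32):
    """Crop a stabilised window; the reserved margin plays the role of the edge image."""
    dx, dy = jitter_xy
    h, w = frame.shape[:2]
    # Clamp the compensation to the reserved margin.
    ox = int(np.clip(-dx, -margin, margin))
    oy = int(np.clip(-dy, -margin, margin))
    x0, y0 = margin + ox, margin + oy
    return frame[y0:h - 2 * margin + y0, x0:w - 2 * margin + x0]
```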
9. An image processing apparatus, comprising:
an image acquisition module, configured to acquire a first image and a second image through a camera during live broadcasting, wherein the second image is a backward frame image of the first image, and the camera is a front camera;
a face area acquisition module, configured to acquire a first face area in the first image and a second face area in the second image, wherein the second image comprises the second face area and a second background area;
a variation determining module, configured to determine a variation of the face area between the first face area and the second face area, wherein the variation of the face area comprises at least one of a face movement position variation, a face rotation angle variation, and a face area size variation;
an adjusting module, configured to acquire depth information corresponding to the first image, obtain a target distance between a face and the camera according to the depth information, and adjust a preset variation according to the target distance to obtain a first preset variation, wherein the preset variation is negatively correlated with the target distance between the face and the camera;
a correction module, configured to perform correction processing on the second face area when the variation of the face area is smaller than the first preset variation, fill a blank area when the blank area is detected in a background area of the corrected image to obtain a third image, and display the third image, wherein the correction processing comprises tilt correction processing, and the tilt correction processing is used for keeping the face area in the image relatively static during live broadcasting.
10. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the image processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
CN201910790212.7A 2019-08-26 2019-08-26 Image processing method and device, electronic equipment and computer readable storage medium Active CN110475067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910790212.7A CN110475067B (en) 2019-08-26 2019-08-26 Image processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910790212.7A CN110475067B (en) 2019-08-26 2019-08-26 Image processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110475067A CN110475067A (en) 2019-11-19
CN110475067B true CN110475067B (en) 2022-01-18

Family

ID=68512551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910790212.7A Active CN110475067B (en) 2019-08-26 2019-08-26 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110475067B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127345B (en) * 2019-12-06 2024-02-02 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN111105370B (en) * 2019-12-09 2023-10-20 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
CN113132612B (en) * 2019-12-31 2022-08-09 华为技术有限公司 Image stabilization processing method, terminal shooting method, medium and system
CN112381740B (en) * 2020-11-24 2024-02-06 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112565604A (en) * 2020-11-30 2021-03-26 维沃移动通信有限公司 Video recording method and device and electronic equipment
CN113570518B (en) * 2021-07-22 2023-11-14 上海明略人工智能(集团)有限公司 Image correction method, system, computer equipment and storage medium
CN113610864B (en) * 2021-07-23 2024-04-09 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1870715A (en) * 2005-05-26 2006-11-29 三洋电机株式会社 Means for correcting hand shake
CN104994367A (en) * 2015-06-30 2015-10-21 华为技术有限公司 Image correcting method and camera
CN106488133A (en) * 2016-11-17 2017-03-08 维沃移动通信有限公司 A kind of detection method of Moving Objects and mobile terminal
CN110049246A (en) * 2019-04-22 2019-07-23 联想(北京)有限公司 Video anti-fluttering method, device and the electronic equipment of electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102714697B (en) * 2010-11-11 2016-06-22 松下电器(美国)知识产权公司 Image processing apparatus, image processing method and program
WO2019127512A1 (en) * 2017-12-29 2019-07-04 深圳市大疆创新科技有限公司 Image processing method for photography device, photography device and movable platform

Also Published As

Publication number Publication date
CN110475067A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN110475067B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110166695B (en) Camera anti-shake method and device, electronic equipment and computer readable storage medium
CN110166697B (en) Camera anti-shake method and device, electronic equipment and computer readable storage medium
CN111246089B (en) Jitter compensation method and apparatus, electronic device, computer-readable storage medium
CN108322669B (en) Image acquisition method and apparatus, imaging apparatus, and readable storage medium
CN110536057B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110473159B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110290323B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109068058B (en) Shooting control method and device in super night scene mode and electronic equipment
CN110278360B (en) Image processing method and device, electronic equipment and computer readable storage medium
US11431915B2 (en) Image acquisition method, electronic device, and non-transitory computer readable storage medium
CN108198152B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112087580B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN109672819B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107509044B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN108322650B (en) Video shooting method and device, electronic equipment and computer readable storage medium
CN110177212B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110636216B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110049237B (en) Camera anti-shake method and device, electronic equipment and computer storage medium
CN110213498B (en) Image generation method and device, electronic equipment and computer readable storage medium
CN110866486B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN107959841B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN111246100B (en) Anti-shake parameter calibration method and device and electronic equipment
CN110177223B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2020029679A1 (en) Control method and apparatus, imaging device, electronic device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant