CN111080542A - Image processing method, image processing apparatus, electronic device, and storage medium - Google Patents

Image processing method, image processing apparatus, electronic device, and storage medium

Info

Publication number
CN111080542A
Authority
CN
China
Prior art keywords
image
determining
shot image
face
distortion
Prior art date
Legal status
Pending
Application number
CN201911252815.8A
Other languages
Chinese (zh)
Inventor
贾玉虎
Current Assignee
Shanghai Jinsheng Communication Technology Co ltd
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Shanghai Jinsheng Communication Technology Co ltd
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jinsheng Communication Technology Co Ltd and Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911252815.8A
Publication of CN111080542A
Legal status: Pending


Classifications

    • G06T5/80
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Abstract

The application provides an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method includes: acquiring a shot image; identifying a subject person in the shot image; if a subject person is identified, determining the distortion degree of the subject person; and determining, according to the distortion degree of the subject person, whether to perform distortion removal on the portrait area in the shot image. In this way, distortion removal is applied only to a subject person with a large distortion degree, so the original state of the shot image is preserved to the greatest extent, the computation required for image processing is reduced, and the efficiency of distortion removal is improved.

Description

Image processing method, image processing apparatus, electronic device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, with advances in smart-terminal manufacturing, camera modules are provided on smart terminals for users to take photos, and it is common to install a wide-angle camera. A wide-angle camera has a larger field of view (FOV) than a conventional-lens camera, but the wide-angle lens also has larger distortion, and the edges of the image are severely distorted.
In the related art, to compensate for the distortion of images captured by a wide-angle camera, distortion correction must be performed on the image. At present this correction is applied to the entire shot image, which makes the processing inefficient.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
An embodiment of a first aspect of the present application provides an image processing method, including:
acquiring a shot image;
identifying a subject person in the captured image;
if the subject person is identified, determining the distortion degree of the subject person;
and determining whether to carry out distortion removal processing on the portrait area in the shot image according to the distortion degree of the main person.
As a first possible case of the embodiment of the present application, after identifying the subject person in the captured image, the method further includes:
if no subject person is identified, querying the number of portrait areas in the shot image;
if the number of portrait areas is greater than or equal to a first threshold, counting the proportion of frontal faces among the faces presented in the portrait areas;
if the proportion of frontal faces is greater than a second threshold, determining to perform distortion removal on the portrait areas in the shot image;
and if the proportion of frontal faces is less than or equal to the second threshold, determining that the portrait areas in the shot image do not need distortion removal.
As a second possible case of the embodiment of the present application, after querying the number of the portrait areas in the captured image, the method further includes:
and if the number of the portrait areas is less than the first threshold, determining that the portrait areas in the shot image do not need to be subjected to distortion removal processing.
As a third possible case of the embodiment of the present application, the identifying a subject person in the captured image includes:
carrying out face recognition on the shot image;
determining, for each face, the face size, the face rotation angle, and the face sharpness;
and taking a face whose face size, face rotation angle, and face sharpness meet set conditions as the subject person in the shot image.
As a fourth possible case of the embodiment of the present application, the determining the distortion degree of the main character includes:
determining the view angle FOV of the subject person according to the position of the subject person in the shot image;
if the FOV is smaller than a preset angle threshold, determining that the subject person is not distorted;
if the FOV is greater than or equal to the angle threshold, predicting a real contour from the imaged contour of the subject person;
and determining the distortion degree of the main person according to the difference degree between the imaging contour and the real contour.
As a fifth possible case of the embodiment of the present application, after determining to perform the distortion removal processing on the portrait area in the captured image, the method further includes:
identifying a straight line segment in the shot image;
and carrying out distortion removal on the portrait area in the shot image according to the straight line segment in the shot image so as to keep the straight line segment in the same form before and after distortion removal.
As a sixth possible case of the embodiment of the present application, the identifying a straight line segment in the captured image includes:
determining a plurality of edge points from each pixel point according to the gradient value of each pixel point in the shot image and the pixel values of adjacent pixel points;
fitting the plurality of edge points to obtain a plurality of initial straight line segments; each initial straight line segment is obtained by fitting edge points with similar gradient directions;
and combining the plurality of initial straight line segments to obtain a straight line segment in the shot image.
As a seventh possible case of the embodiment of the present application, the fitting the plurality of edge points to obtain a plurality of initial straight-line segments includes:
determining a plurality of sets according to edge points with similar gradient directions in the plurality of edge points; wherein, the gradient directions of the edge points in the same set are similar;
and fitting the edge points in the corresponding set to each set to obtain an initial straight line segment.
As an eighth possible case of the embodiment of the present application, the determining, according to edge points with similar gradient directions in the plurality of edge points, a plurality of sets includes:
determining an initial reference point from the edge points that are not added to either set;
inquiring edge points which are adjacent to the reference point and have a gradient direction difference value smaller than an angle threshold value with the reference point;
adding the inquired edge points and the reference points to the same set;
if the gradient direction dispersion of the edge points in the same set is less than or equal to a set dispersion, taking the queried edge points as updated reference points, and repeating the steps of querying edge points adjacent to the reference point whose gradient direction differs from that of the reference point by less than the angle threshold and adding the queried edge points and the reference point to the corresponding set, until the gradient direction dispersion of the edge points in the corresponding set is greater than the set dispersion.
According to the image processing method of the embodiments of the present application, a shot image is acquired, a subject person in the shot image is identified, the distortion degree of the subject person is determined if one is identified, and whether to perform distortion removal on the portrait area in the shot image is determined according to that distortion degree. In this way, distortion removal is applied only to a subject person with a large distortion degree, so the original state of the shot image is preserved to the greatest extent, the computation required for image processing is reduced, and the efficiency of distortion removal is improved.
An embodiment of a second aspect of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring a shot image;
an identification module for identifying a subject person in the captured image;
the determining module is used for determining the distortion degree of the main character if the main character is identified;
and the processing module is used for determining whether to carry out distortion removal processing on the portrait area in the shot image according to the distortion degree of the main person.
The image processing apparatus of the embodiments of the present application acquires a shot image, identifies a subject person in it, determines the distortion degree of the subject person if one is identified, and determines whether to perform distortion removal on the portrait area in the shot image according to that distortion degree. In this way, distortion removal is applied only to a subject person with a large distortion degree, so the original state of the shot image is preserved to the greatest extent, the computation required for image processing is reduced, and the efficiency of distortion removal is improved.
An embodiment of a third aspect of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the image processing method described in the foregoing embodiment is implemented.
A fourth aspect of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the image processing method as described in the above embodiments.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a first image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a second image processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a third image processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a fourth image processing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a fifth image processing method according to an embodiment of the present application;
fig. 6 is an exemplary diagram of an image processing method according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a sixth image processing method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an electronic device according to one embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
In the related art, when a captured image is subjected to a distortion removal process, the entire captured image is usually corrected, which results in a large amount of calculation in the entire distortion removal process.
To solve the technical problems in the related art, the present application provides an image processing method: acquiring a shot image, identifying a subject person in it, determining the distortion degree of the subject person if one is identified, and determining whether to perform distortion removal on the portrait area according to that distortion degree. In this way, only the distorted subject person in the shot image is de-distorted, so the original state of the shot image is preserved to the greatest extent, the computation required for image processing is reduced, and the efficiency of distortion removal is improved.
An image processing method, an apparatus, an electronic device, and a storage medium according to embodiments of the present application are described below with reference to the drawings.
Fig. 1 is a schematic flowchart of a first image processing method according to an embodiment of the present application.
The embodiments of the present application are described by taking as an example the case where the image processing method is configured in an image processing apparatus. The image processing apparatus can be applied to any electronic device, so that the electronic device can perform image processing functions.
The electronic device may be a personal computer (PC), a cloud device, a mobile device, and the like; the mobile device may be any hardware device with an operating system, such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or an in-vehicle device.
As shown in fig. 1, the image processing method includes the steps of:
step 101, acquiring a shot image.
In the embodiment of the application, the shot image can be acquired through the image sensor arranged on the electronic equipment.
As one possible scenario, the electronic device may include a visible light image sensor, and the captured image may be captured based on the visible light image sensor in the electronic device. In particular, the visible light image sensor may include a visible light camera that may capture visible light reflected by an imaging subject for imaging.
As another possible situation, the electronic device may further include a structured-light image sensor, and the captured image may be acquired based on it. Optionally, the structured-light image sensor may include a laser lamp and a laser camera. The laser lamp can be modulated by pulse-width modulation (PWM) to emit structured light; the structured light illuminates the imaging object, and the laser camera captures the structured light reflected by the object, forming a structured-light image of the imaging object.
It should be noted that the image sensor disposed in the electronic device is not limited to the visible light sensor and the structured light sensor, but may also be other types of image sensors, such as a depth sensor, and the like, which is not limited in this application.
In step 102, a subject person in the captured image is identified.
In the embodiment of the application, after the shot image is acquired, the main person in the shot image is further identified.
As one possible implementation, the captured image may be input into a subject person recognition model that has been trained to recognize a subject person in the captured image.
As another possible implementation manner, face recognition may be performed on the captured image, and then the face size, the face rotation angle, and the face definition degree may be determined for each recognized face. And then, the human face with the human face size, the human face rotation angle and the human face definition degree meeting the set conditions is taken as a main person in the shot image.
Specifically, a face recognition model based on a Convolutional Neural Network (CNN) may be used to perform face recognition on the captured image, so as to recognize each face in the captured image, and determine a face size and a face position of each face. The face recognition model is obtained by training a large number of training sample images.
In the embodiments of the present application, face recognition is performed on the shot image; after the face size and face position of each face are recognized, face keypoint detection is performed on each face to determine its keypoints and hence the positions of its key regions. Face keypoint detection, also called face keypoint localization or face alignment, refers to locating the key regions of a face, including the eyebrows, eyes, nose, mouth, and facial contour, given a face image. For example, 106 face keypoints may be detected for each face to obtain the facial features and contour.
After the keypoints of each face are determined, the face rotation angle of each face can be calculated from them. The face rotation angle refers to the face orientation, such as whether the face is frontal or in profile.
As a possible implementation manner, the rotation parameters (roll, pitch, yaw) of the faces can be calculated according to the key points of each face, so as to determine the rotation angle of each face.
In the embodiments of the present application, when determining the sharpness of each face, the faces identified in the shot image are first cropped to obtain individual face images. Each face image is then denoised with Gaussian blur and converted to a grayscale image; each grayscale image is filtered with the Laplacian operator, and the number of pixels at each gray level is counted to obtain a gray histogram. Each gray histogram is normalized and mapped into the range 0 to 255, and the mean of the pixel values of the mapped grayscale image is computed. If the mean is greater than a set threshold, the face in the face image is determined to be sharp; if the mean is less than or equal to the set threshold, the face is determined to be blurred.
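As an illustrative sketch only (not part of the original disclosure), this pipeline can be approximated in Python with OpenCV; the min-max normalization standing in for the histogram mapping and the threshold value are assumptions:

```python
import cv2
import numpy as np

def is_face_sharp(face_bgr, mean_threshold=40.0):
    # Denoise with Gaussian blur, then convert to grayscale, as described above.
    gray = cv2.cvtColor(cv2.GaussianBlur(face_bgr, (5, 5), 0), cv2.COLOR_BGR2GRAY)
    # Laplacian filtering: sharp faces give strong responses, blurred ones weak.
    response = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    # Normalized mapping of the responses into the 0..255 range (an assumed
    # simplification of the gray-histogram mapping described in the text).
    mapped = cv2.normalize(response, None, 0, 255, cv2.NORM_MINMAX)
    # Mean above the set threshold -> the face is considered sharp.
    return float(mapped.mean()) > mean_threshold
```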
In the embodiment of the application, after the face size, the face rotation angle and the face definition degree of each face are determined, the face with the face size, the face rotation angle and the face definition degree meeting set conditions is used as a main figure in a shot image.
For example, a face smaller than a set size is likely far from the lens and can be treated as image background; a face whose rotation angle exceeds a set angle threshold is likely a side face and can also be treated as background; and a blurred face whose sharpness is below a set sharpness threshold can likewise be treated as background.
In step 103, if the main person is identified, the distortion degree of the main person is determined.
It can be understood that when an image is captured by an image sensor disposed in an electronic device, the captured image inevitably has distortion due to performance errors of the image sensor, such as variation of focal length of a camera, optical distortion of a lens, and the like, perspective errors during imaging, and the like.
In the embodiment of the present application, after the subject person is identified from the captured image, it is necessary to further determine the distortion degree of the subject person to determine whether to further perform the distortion removal processing on the subject person according to the distortion degree of the subject person.
As a possible implementation, the offset angle of each subject person relative to the optical axis of the lens may be determined from the person's position in the captured image, and whether the subject person is distorted may be judged from that offset angle. If the offset angle is less than the angle threshold, for example 60°, the subject person is determined to be undistorted. If the offset angle is greater than or equal to the angle threshold, the real contour is further predicted from the imaged contour of the subject person, and the degree of difference between the two contours is compared. If the difference between the imaged contour and the real contour is large, the distortion of the subject person can be judged to be serious; if the difference is small, or essentially zero, the subject person can be determined to be undistorted.
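A minimal sketch of this decision, assuming the contours are given as matched (N, 2) point arrays; the displacement metric and both threshold values are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def distortion_degree(imaged, real):
    # Mean point displacement between matched contours, normalized by the
    # real contour's extent so the degree is scale-independent.
    scale = max(float(np.ptp(real, axis=0).max()), 1e-9)
    return float(np.linalg.norm(imaged - real, axis=1).mean()) / scale

def needs_undistortion(offset_angle_deg, imaged, real,
                       angle_threshold=60.0, degree_threshold=0.05):
    if offset_angle_deg < angle_threshold:   # near the optical axis
        return False                         # treated as undistorted
    return distortion_degree(imaged, real) > degree_threshold
```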
Step 104: determining whether to perform distortion removal on the portrait area in the shot image according to the distortion degree of the subject person.
In the embodiment of the present application, after determining the distortion degree of each main person in the captured image, whether to perform the distortion removal processing on the person image area in the captured image may be determined according to the distortion degree of the main person.
As a possible implementation, the determined distortion degree of each subject person may be compared with a set distortion-degree threshold. If the distortion degree of the subject person is greater than the threshold, distortion removal is performed on the portrait area corresponding to that person in the shot image; if it is less than the threshold, the corresponding portrait area does not need distortion removal.
in the embodiment of the application, when a portrait area in a shot image is subjected to distortion removal processing, the portrait area may be firstly divided into a face area and a body area, and then the portrait area is corrected and calculated according to a preset initial projection grid, a first corrected size value corresponding to the face area and a second corrected size value corresponding to the body area are obtained, further, a target corrected size value meeting a preset condition is determined in the first corrected size value and the second corrected size value, so that the portrait area in the image is corrected according to the target corrected size value, and the shot image after distortion removal is obtained.
As one possible implementation for obtaining the first corrected size value for the face region and the second corrected size value for the body region, an original mesh in the face region may be constructed from the coordinates of its pixels, and the preset initial projection grid used for correction calculation, yielding a first transformed mesh for the face region and a second transformed mesh for the body region. The size ratio of the first transformed mesh to the original mesh gives the first corrected size value, and the size ratio of the second transformed mesh to the original mesh gives the second corrected size value.
As another possible implementation, the depth value of each pixel in the portrait area may be obtained, and the pixel coordinates and depth value of each pixel input into the initial projection grid to obtain the mapped pixel coordinates of each pixel. The pixel difference between each pixel's mapped coordinates and its original coordinates is then calculated; the mean of the pixel differences over the face region gives the first corrected size value, and the mean over the body region gives the second corrected size value.
According to the image processing method of the embodiments of the present application, a shot image is acquired, a subject person in it is identified, the distortion degree of the subject person is determined if one is identified, and whether to de-distort the portrait area in the shot image is decided according to that distortion degree. In this way, only the distorted subject person in the shot image is de-distorted, so the original state of the shot image is preserved to the greatest extent, the computation required for image processing is reduced, and the efficiency of distortion removal is improved.
On the basis of the above-described embodiment, after the subject person in the captured image is identified in step 102, whether or not to perform the distortion removal processing on the portrait area in the captured image may be determined by the proportion of the front face in the portrait area. The above process is described in detail with reference to fig. 2, and fig. 2 is a flowchart illustrating a second image processing method according to an embodiment of the present application.
As shown in fig. 2, the image processing method may include the steps of:
step 201, acquiring a shot image.
In step 202, a subject person in the captured image is identified.
In the embodiment of the present application, the implementation processes of step 201 and step 202 may refer to the implementation processes of step 101 and step 102 in the foregoing embodiment, and are not described herein again.
In step 203, if no subject person is identified, the number of portrait areas in the shot image is queried.
In the embodiments of the present application, when identifying a subject person in a captured image, it may happen that none is identified. In this case, the number of portrait areas in the captured image is queried.
Step 204, determining whether the number of the portrait areas is greater than or equal to a first threshold.
In the embodiment of the application, whether the number of the portrait areas is larger than or equal to the first threshold value is judged to determine whether to perform distortion removal processing on the portrait in the shot image.
In step 205, if the number of the portrait areas is smaller than the first threshold, it is determined that the portrait areas in the captured image do not need to be subjected to the distortion removal processing.
In a possible case where the number of the portrait areas in the captured image is small and the number of the portrait areas is smaller than the first threshold, it is possible to determine that the captured image is a landscape photograph and that the human portrait areas in the captured image do not need to be subjected to the distortion removal processing.
It is understood that, when the electronic device captures an image, the captured object is a natural landscape, but a human image area exists in the captured image, in which case, the human image area in the captured image does not need to be subjected to the distortion removal processing.
Step 206: if the number of portrait areas is greater than or equal to the first threshold, counting the proportion of frontal faces among the faces presented in the portrait areas.
The frontal-face proportion refers to the proportion of frontal faces among the faces presented in the portrait areas. For example, if 15 faces are presented in the portrait areas and 13 of them are frontal, the frontal-face proportion is 13/15.
In a possible case, the number of portrait areas in the captured image is large, greater than or equal to the first threshold; the captured image may then be a group photo of multiple persons, or an image in which a crowd was captured incidentally at the time of shooting.
Therefore, the proportion of frontal faces among the presented faces needs to be counted, so that the shooting scene can be judged from it and a decision made on whether to de-distort the portrait areas in the shot image.
Step 207: determining whether the frontal-face proportion is greater than the second threshold.
In the embodiments of the present application, after the frontal-face proportion among the faces presented in the portrait areas is counted, it is compared with the set second threshold, and whether to de-distort the portrait areas in the shooting scene is decided from the comparison result.
Step 208: if the frontal-face proportion is greater than the second threshold, determining to perform distortion removal on the portrait areas in the shot image.
In a possible scenario, the counted frontal-face proportion is greater than the second threshold; in this case the shooting scene can be judged to be a group photo of multiple persons, so the portrait areas in the shot image need distortion removal.
Step 209: if the frontal-face proportion is less than or equal to the second threshold, determining that the portrait areas in the shot image do not need distortion removal.
In another possible scenario, the counted frontal-face proportion is less than or equal to the second threshold; in this case the shooting scene can be judged to be an incidental capture (for example, a crowd that happens to be in the frame), and the portrait areas in the shot image do not need distortion removal.
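The decision flow of this embodiment can be summarized in a short sketch; the threshold values below are placeholders, since the patent does not fix them:

```python
def should_undistort_without_subject(num_portraits, num_faces, num_frontal,
                                     first_threshold=3, second_threshold=0.5):
    if num_portraits < first_threshold:   # few portrait areas: likely a landscape
        return False
    if num_faces == 0:
        return False
    frontal_ratio = num_frontal / num_faces   # e.g. 13 frontal of 15 faces -> 13/15
    return frontal_ratio > second_threshold   # group photo: de-distort portraits
```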
According to the image processing method of the embodiments of the present application, a shot image is acquired and a subject person in it is identified. If no subject person is identified, the number of portrait areas in the shot image is queried; if that number is greater than or equal to the first threshold, the frontal-face proportion among the presented faces is counted; if the proportion is greater than the second threshold, distortion removal is performed on the portrait areas, and otherwise it is not needed. In this way, when no subject person is identified, whether to de-distort the portrait areas is decided by whether the frontal-face proportion exceeds the second threshold, which effectively avoids correcting scenes such as landscapes that do not need it and preserves the original state of the image to the greatest extent.
On the basis of the above-described embodiment, when determining the degree of distortion of the subject person in step 103, the degree of distortion of the subject person may also be determined according to the size of the view angle FOV of the subject person. The above process is described in detail with reference to fig. 3, and fig. 3 is a flowchart illustrating an image processing method according to a third embodiment of the present application.
As shown in fig. 3, the step 103 may further include the following steps:
step 301, determining the view angle FOV of the subject person according to the position of the subject person in the captured image.
The view angle FOV of the subject person is a deviation angle of the subject person with respect to the optical axis of the camera.
In the embodiment of the application, after a main person in a shot image is identified, the FOV of the main person is determined according to the position of the main person in the shot image.
As a possible implementation, after face recognition is performed on the shot image with the face recognition model to locate the subject person, a three-dimensional model mapping the subject person onto the imaging plane is established, and the offset angle of the subject person relative to the optical axis of the camera is calculated from the coordinates (x, y) of the person's position.
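As an illustrative approximation (the patent builds a three-dimensional mapping model; the flat pinhole model here is an assumption), the offset angle can be computed from the pixel position and the focal length in pixels:

```python
import math

def offset_angle_deg(x, y, cx, cy, focal_px):
    # Radial distance of the point from the principal point (cx, cy).
    r = math.hypot(x - cx, y - cy)
    # Angle between the ray through the pixel and the optical axis.
    return math.degrees(math.atan2(r, focal_px))
```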
Step 302, determine whether the FOV is less than a predetermined angle threshold.
The angle threshold is a preset angle value, and for example, the angle threshold may be set to 60 °.
In the embodiment of the present application, after determining the FOV of each subject person according to the position of each subject person in the captured image obtained by identifying the captured image, it is determined whether the FOV of the subject person is smaller than the angle threshold.
Step 303, if it is determined that the FOV is smaller than the angle threshold, it is determined that the subject person is not distorted.
In one possible scenario, if the FOV of the subject person is determined to be less than the angular threshold, then the subject person is determined to be undistorted without having to perform a de-distortion process on the subject person.
In step 304, if it is determined that the FOV is greater than or equal to the angular threshold, the true contour is predicted from the imaged contour of the subject person.
In a possible case, if it is determined that the FOV of the subject person is equal to or greater than the angle threshold, it is necessary to further determine the distortion degree of the subject person to determine whether to perform the distortion removal processing on the image area corresponding to the subject person according to the distortion degree of the subject person.
In the embodiment of the present application, it is determined that the FOV of the subject person is greater than or equal to the angle threshold, and the true contour is predicted from the imaged contour of the subject person. Specifically, after the imaging contour of each subject figure is obtained by detecting key points of a shot image, the real contour of the subject figure is calculated according to a conformal projection method.
Step 305: determining the distortion degree of the subject person according to the degree of difference between the imaged contour and the real contour.
In the embodiment of the present application, after the real contour of the corresponding subject person is predicted from the imaged contour of the subject person, the distortion degree of the subject person is determined according to the degree of difference between the imaged contour and the real contour.
In a possible case where the degree of difference between the imaged contour and the true contour of the subject person is large, it can be determined that the degree of distortion of the subject person is large. For example, the overall size of the subject person varies greatly, or the degree of shift of the face key points is large.
In another possible case, the degree of difference between the imaged contour and the true contour of the subject person is small, and it can be determined that the degree of distortion of the subject person is small.
According to the image processing method of the embodiments of the present application, the FOV of the subject person is determined from the person's position in the shot image; if the FOV is smaller than the angle threshold, the subject person is determined to be undistorted; if the FOV is greater than or equal to the angle threshold, the real contour is predicted from the imaged contour of the subject person, and the distortion degree is determined from the degree of difference between the two contours. Thus, by comparing the FOV with the angle threshold and, when needed, comparing the imaged contour with the real contour, whether to de-distort the portrait area corresponding to the subject person is decided by its distortion degree, and image-processing efficiency is improved by reducing processing of the background and the number of portrait areas processed.
On the basis of the above embodiments, when the portrait area in the captured image is to be subjected to distortion removal, straight line segments in the captured image can be identified, and the distortion removal can be constrained by these segments so that each straight segment keeps the same form before and after the processing, improving the accuracy of the correction. The above process is described in detail with reference to fig. 4, and fig. 4 is a flowchart illustrating a fourth image processing method according to an embodiment of the present application.
As shown in fig. 4, the image processing method may further include the steps of:
in step 401, a straight line segment in the captured image is identified.
In the embodiment of the application, after the shot image is acquired, the straight line segment in the shot image can be further identified and acquired.
As one possible implementation, the Hough transform may be used to identify straight line segments in the captured image. The Hough transform is one of the basic methods for detecting geometric shapes in image processing; it is widely used and has many improved variants. It is mainly used to separate geometric shapes sharing certain features (for example, straight lines or circles) from the image, and its most basic use is detecting straight line segments.
It should be noted that, in the present application, the method for identifying the straight line segment in the captured image is not limited, and other straight line detection methods may be used.
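For illustration, a minimal detection sketch using OpenCV's probabilistic Hough transform; all parameter values are assumptions, not taken from the disclosure:

```python
import cv2
import numpy as np

def detect_line_segments(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)            # binary edge map
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                               minLineLength=40, maxLineGap=5)
    # Each row is one segment as (x1, y1, x2, y2).
    return np.empty((0, 4), int) if segments is None else segments.reshape(-1, 4)
```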
Step 402: de-distorting the portrait area in the shot image according to the straight line segments in it, so that each straight line segment keeps the same form before and after distortion removal.
In the embodiment of the application, after the straight line segment in the shot image is identified, the portrait area in the shot image can be subjected to distortion removal according to the straight line segment in the shot image, so that the straight line segment is kept in the same form before and after distortion removal.
It will be appreciated that, ideally, the projection of a straight line in three-dimensional space onto the image plane is still a straight line; in practice, however, the projection may be curved because of imperfections in the image sensor and lens. Therefore, the portrait area in the captured image is de-distorted with reference to the straight line segments, so that straight lines in three-dimensional space remain straight segments of consistent form after projection onto the plane.
According to the image processing method of the embodiments of the present application, straight line segments in the shot image are identified, and the portrait area is de-distorted according to them so that each segment keeps the same form before and after distortion removal. By constraining the distortion removal of the portrait area with the straight line segments in the captured image, the segments are guaranteed the same form before and after the processing, and the original state of the captured image is preserved to the maximum extent.
As a possible implementation manner, in step 401, a plurality of edge points may be determined from each pixel point according to the gradient value and the pixel value of each pixel point in the captured image, the plurality of edge points are fitted to obtain each initial straight-line segment, and the plurality of initial straight-line segments are combined to obtain a straight-line segment in the captured image. The above process is described in detail with reference to fig. 5, and fig. 5 is a flowchart illustrating a fifth image processing method according to an embodiment of the present application.
As shown in fig. 5, the step 401 may further include the following steps:
step 501, determining a plurality of edge points from each pixel point according to the gradient value of each pixel point in the shot image and the pixel values of adjacent pixel points.
In the embodiments of the present application, the gradient value of each pixel in the captured image includes a gradient magnitude and a gradient direction. At image edges the gradient is large; in smooth regions the gray value changes little and the gradient is small. In image processing, the modulus of the gradient is referred to simply as the gradient, and the image formed by the gradients is called a gradient image. The image gradient is essentially the difference between adjacent pixels; the gradient direction at a point is found by computing the gradient angle between the point and each of its 8 neighbors and taking the direction of the maximum gradient angle. The 8 neighbors of a point are the points above, below, left, right, upper-left, upper-right, lower-left, and lower-right of it.
The gradient, gradient angle, and gradient direction of a pixel are described below with reference to fig. 6, using the Sobel operator. The Sobel operator is one of the most important operators in pixel-level edge detection and plays a significant role in machine learning, digital media, computer vision, and related fields. Technically, it is a discrete first-order difference operator used to approximate the first-order gradient of the image intensity function; applying it at any point of the image produces the corresponding gradient vector.
As shown in fig. 6, for pixel point A, the horizontal and vertical derivatives Gx and Gy are first computed with the Sobel operator, and the gradient angle of pixel A is then obtained as θ = arctan(Gy / Gx). The gradient direction is the direction in which the gray level in the image increases, and the gradient angle along the gradient direction is larger than in flat areas. As shown in fig. 6, the gradient direction of pixel A is the direction toward whichever of its 8 neighbors gives the largest gradient angle.
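A sketch of this computation with OpenCV's Sobel operator; arctan(Gy/Gx) is evaluated with atan2 so the signs of both derivatives are respected:

```python
import cv2
import numpy as np

def gradients(gray):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal derivative Gx
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical derivative Gy
    magnitude = np.hypot(gx, gy)                     # gradient magnitude
    angle = np.degrees(np.arctan2(gy, gx))           # gradient angle theta
    return magnitude, angle
```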
In the embodiment of the application, after the shot image is acquired, edge detection can be performed on the shot image to determine a plurality of edge points. The algorithms for edge detection are mainly based on the first and second derivatives of the image intensity, but the derivatives are usually sensitive to noise, so the detected image is first filtered to remove the noise in the captured image. The common filtering method is gaussian filtering, that is, a set of normalized gaussian kernels is generated by using a discretized gaussian function, and then each point of the image gray matrix is subjected to weighted summation based on the gaussian kernels. The gaussian kernel radius when gaussian filtering the detection image may be adjusted according to the size of the detection image, for example, the gaussian kernel radius may be set to 5.
Gaussian filtering is a linear smoothing filter suitable for removing Gaussian noise and is widely used in the denoising stage of image processing. It convolves the image pixel by pixel with a Gaussian kernel to obtain the value of each pixel: in the convolution, the values of the surrounding pixels, weighted by their distance, determine the value at the center of the kernel. Concretely, each pixel of the image is scanned with a template (convolution kernel, or mask) of size (2N+1) x (2N+1), and the weighted average gray value of the pixels in the neighborhood covered by the template replaces the value of the pixel at the template's center.
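For illustration, the template operation above corresponds directly to a Gaussian blur; radius 5 gives the (2N+1)-sized kernel mentioned in the text:

```python
import cv2

def denoise(image, radius=5):
    k = 2 * radius + 1                      # template size 2*N + 1
    # sigma = 0 lets OpenCV derive the standard deviation from the kernel size.
    return cv2.GaussianBlur(image, (k, k), 0)
```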
Thus, by Gaussian-filtering the captured image, image noise is prevented from disturbing the gradient direction of each pixel and hence from degrading line-segment detection, which improves the detection accuracy of straight line segments.
Edge detection methods in the embodiments of the present application include, but are not limited to, the Canny and Prewitt edge detectors.
As a possible implementation, after the gradient value of each pixel and the pixel values of its neighbors are determined in the shot image, the gradient value of each pixel is compared with a first gradient threshold. In a possible case, if the gradient value of a pixel is greater than the first gradient threshold, the first adjacent pixel next to it in the gradient direction is queried; if the difference between the pixel value of the pixel and that of the first adjacent pixel is greater than a second gradient threshold, the pixel is determined to be an edge point.
As an example, taking the first adjacent pixels to be the pixels in the 8-neighborhood: for each pixel of the image whose gradient value is greater than the first gradient threshold, the difference between its gradient value and that of its 8-neighborhood pixel in the gradient direction is computed, and if that difference is greater than the second gradient threshold, the pixel is determined to be an edge point.
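A sketch of this two-threshold edge test, assuming the gradient maps from the Sobel step above; stepping one pixel along the quantized gradient direction is an assumed reading of "the first adjacent pixel in the gradient direction":

```python
import numpy as np

def find_edge_points(magnitude, angle, first_threshold, second_threshold):
    h, w = magnitude.shape
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if magnitude[y, x] <= first_threshold:
                continue
            theta = np.deg2rad(angle[y, x])
            dx = int(np.rint(np.cos(theta)))     # quantize the gradient direction
            dy = int(np.rint(np.sin(theta)))     # to one of the 8 neighbors
            if magnitude[y, x] - magnitude[y + dy, x + dx] > second_threshold:
                points.append((x, y))
    return points
```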
It should be noted that when edge points are determined from the pixels of the shot image, some noise points may be mistakenly determined as edge points; the edge points therefore need further screening to remove noise, which helps improve the accuracy of line-segment detection.
In the embodiments of the present application, after the edge points are determined, for each edge point the second adjacent pixel next to it in the gradient direction is queried. If the difference between the gradient values of the edge point and the second adjacent pixel is greater than a third gradient threshold, the edge point is kept; if it is less than or equal to the third gradient threshold, the edge point is screened out. Screening the edge points in this way removes noise points from the image and helps improve the recognition rate of the line detection method.
Step 502, fitting a plurality of edge points to obtain a plurality of initial straight line segments; wherein each initial straight line segment is fitted to edge points with similar gradient directions.
In the embodiments of the present application, after a plurality of edge points are determined from the pixels according to each pixel's gradient value and the pixel values of its neighbors, the edge points, being discrete, must be fitted to obtain a plurality of initial straight line segments.
It should be noted that each initial straight line segment is fitted from edge points with similar gradient directions. Specifically, after the edge points are determined from each pixel's gradient value and the pixel values of its neighbors, edge points with similar gradient directions are grouped together, so the edge points are divided into a plurality of sets, where the edge points in the same set have similar gradient directions. For each set, the edge points in it are fitted to obtain one initial straight line segment.
Step 503: combining the plurality of initial straight line segments to obtain the straight line segments in the shot image.
In the embodiment of the application, due to the influence of noise in the image, there may be a case where an edge line segment in the captured image is cut off, resulting in discontinuity of the image edge. Therefore, a plurality of initial straight line segments obtained by fitting a plurality of edge points need to be combined to obtain a target straight line segment in the captured image.
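A greedy merging sketch: two segments are joined when their directions agree and their nearest endpoints are close. The criteria and tolerances are assumptions (wrap-around of the angle near 0/180 degrees is ignored for brevity):

```python
import math

def merge_segments(segments, angle_tol_deg=5.0, gap_tol=10.0):
    def direction(s):
        x1, y1, x2, y2 = s
        return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

    merged = [tuple(s) for s in segments]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                a, b = merged[i], merged[j]
                if abs(direction(a) - direction(b)) > angle_tol_deg:
                    continue
                ends = [(a[0], a[1]), (a[2], a[3]), (b[0], b[1]), (b[2], b[3])]
                gap = min(math.dist(p, q) for p in ends[:2] for q in ends[2:])
                if gap > gap_tol:
                    continue
                # Replace the pair by the segment spanning their farthest endpoints.
                p, q = max(((p, q) for p in ends for q in ends),
                           key=lambda pq: math.dist(*pq))
                merged[i] = (*p, *q)
                del merged[j]
                changed = True
                break
            if changed:
                break
    return merged
```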
According to the image processing method of the embodiments of the present application, a plurality of edge points are determined from the pixels of the shot image according to each pixel's gradient value and the pixel values of its neighbors, and the edge points are fitted to obtain a plurality of initial straight line segments, each fitted from edge points with similar gradient directions; the initial segments are then combined to obtain the target straight line segments. Because the pixels do not need to be processed repeatedly, straight line segments can be detected quickly, which improves the speed of line-segment detection in the image.
On the basis of the foregoing embodiment, in the step 502, when fitting the plurality of edge points to obtain a plurality of initial straight-line segments, a plurality of sets may also be determined according to edge points with similar gradient directions in the plurality of edge points, and then, for each set, fitting the edge points in the corresponding set to obtain one initial straight-line segment. The above process is described in detail with reference to fig. 7, and fig. 7 is a flowchart illustrating a sixth image processing method according to an embodiment of the present application.
As shown in fig. 7, the step 502 may further include the following steps:
Step 601: determining a plurality of sets according to edge points with similar gradient directions among the plurality of edge points, where the edge points in the same set have similar gradient directions.
In the embodiment of the application, after a plurality of edge points are determined from each pixel point according to the gradient value of each pixel point and the pixel value of an adjacent pixel point in a shot image, the edge points with similar edge point gradient directions are divided into the same set to obtain a plurality of sets.
As a possible implementation, among the edge points an initial reference point is determined from those not yet added to any set; the edge points adjacent to the reference point whose gradient direction differs from that of the reference point by less than an angle threshold are queried; and the queried edge points are added to the same set as the reference point.
In the embodiment of the application, after the plurality of edge points are determined from the pixel points, the edge points may be sorted according to their gradient values, and among the edge points that have not been added to any set, the edge point with the maximum gradient may be used as the initial reference point. The edge points adjacent to the reference point may be the edge points in the 8-neighborhood of the reference point, i.e., the 8 points above, below, to the left of, to the right of, above-left of, above-right of, below-left of, and below-right of the reference point.
For example, the difference between the gradient direction of the reference point and that of each edge point within its 8-neighborhood may be calculated. Assuming that the gradient direction differences between the reference point and the edge points above it and above-left of it are smaller than the angle threshold, the edge points above and above-left may be added to the same set together with the reference point.
In the embodiment of the application, if the dispersion degree of the gradient directions of the edge points in the same set is smaller than or equal to a set dispersion degree, a queried edge point is used as the updated reference point, and the steps of querying edge points that are adjacent to the reference point and whose gradient direction difference from the reference point is smaller than the angle threshold, and of adding the queried edge points and the reference point to the corresponding set, are repeated until the dispersion degree of the gradient directions of the edge points in the corresponding set is greater than the set dispersion degree.
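The region-growing procedure described above can be sketched as follows; the angle threshold and dispersion limit are placeholder values, the seed is taken arbitrarily rather than by maximum gradient (which would require the gradient magnitudes), and the dispersion is measured with a plain standard deviation that ignores angle wrap-around, so this is a simplification rather than the claimed procedure.

import numpy as np

def group_by_gradient_direction(points, angle_threshold=np.deg2rad(22.5),
                                max_dispersion=np.deg2rad(15.0)):
    # points: dict mapping (x, y) -> gradient direction in radians.
    unassigned = set(points)
    sets = []
    while unassigned:
        seed = unassigned.pop()       # initial reference point
        region = [seed]
        frontier = [seed]             # points usable as reference points
        while frontier:
            ref = frontier.pop()
            for dx in (-1, 0, 1):     # scan the 8-neighborhood of the reference
                for dy in (-1, 0, 1):
                    nb = (ref[0] + dx, ref[1] + dy)
                    if nb not in unassigned:
                        continue
                    diff = abs(points[nb] - points[ref]) % (2 * np.pi)
                    diff = min(diff, 2 * np.pi - diff)
                    if diff < angle_threshold:
                        unassigned.remove(nb)
                        region.append(nb)
                        # The queried point becomes an updated reference
                        # only while the directions in the set stay tight.
                        if np.std([points[p] for p in region]) <= max_dispersion:
                            frontier.append(nb)
        sets.append(region)
    return sets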
Step 602, for each set, fitting the edge points in the corresponding set to obtain an initial straight line segment.
In the embodiment of the application, after the plurality of sets are determined according to the edge points with similar gradient directions, the edge points in each set are fitted to obtain one initial straight line segment.
In the embodiment of the present application, fitting the plurality of edge points in each set means connecting the edge points in the set with one straight line segment, thereby obtaining an initial straight line segment.
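For illustration, one standard way to realize this fitting is a total-least-squares line through the set, clipped to the extreme projections of the points; the patent only requires that the points of a set be connected into a single segment, so the choice of fit is an assumption.

import numpy as np

def fit_segment(edge_points):
    # edge_points: iterable of (x, y) coordinates belonging to one set.
    pts = np.asarray(list(edge_points), dtype=np.float64)
    centroid = pts.mean(axis=0)
    # The principal direction of the point cloud gives the segment direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    proj = (pts - centroid) @ direction
    p1 = centroid + proj.min() * direction
    p2 = centroid + proj.max() * direction
    return tuple(p1), tuple(p2)  # the two endpoints of the fitted segment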
According to the image processing method, a plurality of sets are determined according to edge points with similar gradient directions among the plurality of edge points, and for each set, the edge points in that set are fitted to obtain one initial straight line segment. In this way, the discrete edge points are combined into the initial straight line segment of their corresponding set.
In order to implement the above embodiments, the present application also provides an image processing apparatus.
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 8, the image processing apparatus 700 may include: an acquisition module 710, an identification module 720, a determination module 730, and a processing module 740.
The acquiring module 710 is configured to acquire a captured image.
The identification module 720 is configured to identify the subject person in the captured image.
The determining module 730 is configured to determine the distortion degree of the subject person if the subject person is identified.
The processing module 740 is configured to determine whether to perform distortion removal processing on the portrait area in the captured image according to the distortion degree of the subject person.
As a possible case, the image processing apparatus 700 may further include:
The query module is configured to query the number of the portrait areas in the captured image if the subject person is not identified.
The counting module is configured to count the proportion of the front face among the faces presented in the portrait areas if the number of the portrait areas is greater than or equal to a first threshold.
The determining module 730 is further configured to determine to perform distortion removal processing on the portrait area in the captured image if the proportion of the front face is greater than the second threshold.
The determining module 730 is further configured to determine that the human image region in the captured image does not need to be subjected to the distortion removal processing if the proportion of the front face is smaller than or equal to the second threshold.
As another possible scenario, the determining module 730 may further be configured to:
if the number of the portrait areas is smaller than the first threshold, determining that the portrait areas in the captured image do not need to be subjected to distortion removal processing.
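Taken together, the query, counting and determining modules implement a small decision rule; a minimal sketch is given below, where the face representation, the field name is_frontal and the two threshold values are placeholders, since the patent leaves them to the implementation.

def should_undistort_without_subject(faces, first_threshold=3,
                                     second_threshold=0.5):
    # faces: list of dicts describing the faces in the portrait areas,
    # each carrying an assumed boolean flag 'is_frontal'.
    if len(faces) < first_threshold:   # too few portrait areas
        return False
    frontal = sum(1 for f in faces if f["is_frontal"])
    return frontal / len(faces) > second_threshold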
As another possible scenario, the identifying module 720 may further be configured to:
carrying out face recognition on the captured image; determining, for each face, the face size, the face rotation angle, and the face definition degree; and taking a face whose face size, face rotation angle, and face definition degree meet the set conditions as the subject person in the captured image.
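A minimal sketch of this selection follows; the field names (size, rotation, sharpness) and the threshold values are assumptions, the patent only names the three criteria and leaves the set conditions open. Taking the largest qualifying face as the subject is one plausible tie-breaking rule, not the claimed one.

def pick_subject_person(faces, min_size=100, max_rotation=30.0,
                        min_sharpness=0.4):
    # faces: list of dicts with assumed fields 'size' (pixels),
    # 'rotation' (degrees) and 'sharpness' (0..1 score).
    candidates = [f for f in faces
                  if f["size"] >= min_size
                  and abs(f["rotation"]) <= max_rotation
                  and f["sharpness"] >= min_sharpness]
    return max(candidates, key=lambda f: f["size"], default=None)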
As another possible scenario, the determining module 730 may further be configured to:
determining the view angle FOV of the subject person according to the position of the subject person in the captured image;
if the FOV is smaller than a preset angle threshold, determining that the subject person is not distorted;
if the FOV is greater than or equal to the angle threshold, predicting the real contour according to the imaging contour of the subject person;
and determining the distortion degree of the subject person according to the degree of difference between the imaging contour and the real contour.
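For illustration, the field angle of the subject can be recovered from its position under a pinhole model, as sketched below; the pinhole assumption, the diagonal-FOV convention and the 30-degree threshold are all placeholders, and the patent's subsequent contour comparison is not reproduced here.

import numpy as np

def subject_field_angle(face_center, image_size, diagonal_fov_deg):
    w, h = image_size
    cx, cy = w / 2.0, h / 2.0
    # Focal length in pixels recovered from the diagonal field of view.
    f = np.hypot(cx, cy) / np.tan(np.deg2rad(diagonal_fov_deg) / 2.0)
    r = np.hypot(face_center[0] - cx, face_center[1] - cy)
    return float(np.rad2deg(np.arctan(r / f)))

# A subject near the image corner sits at a large field angle and is a
# candidate for de-distortion; one near the center is left untouched.
needs_contour_check = subject_field_angle((3500.0, 200.0),
                                          (4000, 3000), 84.0) >= 30.0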
As another possible case, the image processing apparatus 700 may further include:
and the line segment identification module is used for identifying a straight line segment in the shot image.
And the distortion removing module is used for removing distortion of the portrait area in the shot image according to the straight line segment in the shot image so as to keep the straight line segment in the same form before and after distortion removal.
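One simple way to verify that de-distortion kept a background segment straight is to measure the residual of its re-detected edge points to their best-fit line before and after the warp; the metric below is an illustrative check, not the correction itself.

import numpy as np

def straightness_residual(edge_points):
    # Ratio of the minor to the major singular value of the centred
    # point cloud: 0 for perfectly collinear points.
    pts = np.asarray(list(edge_points), dtype=np.float64)
    if len(pts) < 3:
        return 0.0
    s = np.linalg.svd(pts - pts.mean(axis=0), compute_uv=False)
    return float(s[1] / max(s[0], 1e-9))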
As another possible case, the line segment identification module may further include:
a determining unit, configured to determine a plurality of edge points from the pixel points according to the gradient value of each pixel point in the captured image and the pixel values of adjacent pixel points;
a fitting unit, configured to fit the plurality of edge points to obtain a plurality of initial straight line segments, wherein each initial straight line segment is fitted from edge points with similar gradient directions; and
a merging unit, configured to merge the plurality of initial straight line segments to obtain the straight line segments in the captured image.
As another possible scenario, the fitting unit may be further configured to:
determining a plurality of sets according to edge points with similar gradient directions among the plurality of edge points, wherein the gradient directions of the edge points in the same set are similar; and for each set, fitting the edge points in the corresponding set to obtain one initial straight line segment.
As another possible scenario, the fitting unit may be further configured to:
determining an initial reference point from the edge points that have not been added to any set;
querying edge points that are adjacent to the reference point and whose gradient direction difference from the reference point is smaller than an angle threshold;
adding the queried edge points and the reference point to the same set;
and if the dispersion degree of the gradient directions of the edge points in the same set is smaller than or equal to a set dispersion degree, taking a queried edge point as the updated reference point, and repeating the steps of querying edge points that are adjacent to the reference point and whose gradient direction difference from the reference point is smaller than the angle threshold and of adding the queried edge points and the reference point to the corresponding set, until the dispersion degree of the gradient directions of the edge points in the corresponding set is greater than the set dispersion degree.
It should be noted that the foregoing explanation of the embodiment of the image processing method is also applicable to the image processing apparatus of this embodiment, and is not repeated here.
The image processing apparatus according to the embodiment of the application acquires a captured image, identifies the subject person in the captured image, determines the distortion degree of the subject person if the subject person is identified, and determines whether to perform distortion removal processing on the portrait area in the captured image according to the distortion degree of the subject person. In this way, only a subject person with a large distortion degree in the captured image is subjected to the distortion removal processing, so that the original state of the captured image is preserved to the maximum extent, the amount of computation for image processing is reduced, and the efficiency of the distortion removal processing of the captured image is improved.
In order to implement the foregoing embodiments, the present application further provides an electronic device. Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic device 110 includes a memory 111, a processor 112, and a computer program stored in the memory 111 and executable on the processor 112; when the processor executes the program, the image processing method described in the foregoing embodiments is implemented.
In order to implement the above embodiments, the present application also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method as described in the above embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. An image processing method, characterized in that the method comprises:
acquiring a shot image;
identifying a subject person in the shot image;
if the subject person is identified, determining the distortion degree of the subject person;
and determining whether to carry out distortion removal processing on the portrait area in the shot image according to the distortion degree of the subject person.
2. The image processing method according to claim 1, further comprising, after the identifying of the subject person in the shot image:
if the subject person is not identified, querying the number of portrait areas in the shot image;
if the number of the portrait areas is greater than or equal to a first threshold, counting the proportion of the front face among the faces presented in the portrait areas;
if the proportion of the front face is greater than a second threshold, determining to perform distortion removal processing on the portrait areas in the shot image;
and if the proportion of the front face is smaller than or equal to the second threshold, determining that the portrait areas in the shot image do not need to be subjected to distortion removal processing.
3. The image processing method according to claim 2, further comprising, after the querying of the number of the portrait areas in the shot image:
and if the number of the portrait areas is less than the first threshold, determining that the portrait areas in the shot image do not need to be subjected to distortion removal processing.
4. The image processing method according to any one of claims 1 to 3, wherein the identifying a subject person in the captured image includes:
carrying out face recognition on the shot image;
respectively determining the face size, the face rotation angle and the face definition degree for each face;
and taking the human face with the human face size, the human face rotation angle and the human face definition degree meeting set conditions as the subject person in the shot image.
5. The image processing method according to any one of claims 1 to 3, wherein the determining the degree of distortion of the subject person includes:
determining the view angle FOV of the subject person according to the position of the subject person in the shot image;
if the FOV is smaller than a preset angle threshold, determining that the subject person is not distorted;
if the FOV is greater than or equal to the angle threshold, predicting a real contour according to the imaging contour of the subject person;
and determining the distortion degree of the subject person according to the degree of difference between the imaging contour and the real contour.
6. The image processing method according to any one of claims 1 to 3, wherein after determining to perform the distortion removal processing on the portrait area in the shot image, the method further comprises:
identifying a straight line segment in the shot image;
and carrying out distortion removal on the portrait area in the shot image according to the straight line segment in the shot image so as to keep the straight line segment in the same form before and after distortion removal.
7. The image processing method according to claim 6, wherein the identifying of the straight line segment in the shot image comprises:
determining a plurality of edge points from each pixel point according to the gradient value of each pixel point in the shot image and the pixel values of adjacent pixel points;
fitting the plurality of edge points to obtain a plurality of initial straight line segments; each initial straight line segment is obtained by fitting edge points with similar gradient directions;
and combining the plurality of initial straight line segments to obtain a straight line segment in the shot image.
8. The image processing method of claim 7, wherein said fitting said plurality of edge points to obtain a plurality of initial straight line segments comprises:
determining a plurality of sets according to edge points with similar gradient directions in the plurality of edge points; wherein, the gradient directions of the edge points in the same set are similar;
and for each set, fitting the edge points in the corresponding set to obtain one initial straight line segment.
9. The image processing method according to claim 8, wherein determining a plurality of sets according to edge points with similar gradient directions in the plurality of edge points comprises:
determining an initial reference point from the edge points that have not been added to any set;
querying edge points that are adjacent to the reference point and whose gradient direction difference from the reference point is smaller than an angle threshold;
adding the queried edge points and the reference point to the same set;
and if the dispersion degree of the gradient directions of the edge points in the same set is smaller than or equal to a set dispersion degree, taking a queried edge point as an updated reference point, and repeating the steps of querying edge points that are adjacent to the reference point and whose gradient direction difference from the reference point is smaller than the angle threshold and adding the queried edge points and the reference point to the corresponding set, until the dispersion degree of the gradient directions of the edge points in the corresponding set is greater than the set dispersion degree.
10. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a shot image;
an identification module, configured to identify a subject person in the shot image;
a determining module, configured to determine the distortion degree of the subject person if the subject person is identified; and
a processing module, configured to determine whether to perform distortion removal processing on a portrait area in the shot image according to the distortion degree of the subject person.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image processing method according to any one of claims 1 to 9 when executing the program.
12. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the image processing method according to any one of claims 1 to 9.
CN201911252815.8A 2019-12-09 2019-12-09 Image processing method, image processing apparatus, electronic device, and storage medium Pending CN111080542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911252815.8A CN111080542A (en) 2019-12-09 2019-12-09 Image processing method, image processing apparatus, electronic device, and storage medium


Publications (1)

Publication Number Publication Date
CN111080542A (en) 2020-04-28

Family

ID=70313416


Country Status (1)

Country Link
CN (1) CN111080542A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007249802A (en) * 2006-03-17 2007-09-27 Noritsu Koki Co Ltd Image processor and image processing method
CN102693418A (en) * 2012-05-17 2012-09-26 上海中原电子技术工程有限公司 Multi-pose face identification method and system
JP2013153404A (en) * 2011-12-28 2013-08-08 Ricoh Co Ltd Image processing apparatus, and image processing method
CN104700388A (en) * 2013-12-09 2015-06-10 富士通株式会社 Method and device for extracting distorted lines from images
CN105005972A (en) * 2015-06-30 2015-10-28 广东欧珀移动通信有限公司 Shooting distance based distortion correction method and mobile terminal
JP2016058882A (en) * 2014-09-09 2016-04-21 カシオ計算機株式会社 Image correction device, image correction method, and program
CN105554403A (en) * 2016-02-29 2016-05-04 广东欧珀移动通信有限公司 Control method, control device and electronic device
CN106127778A (en) * 2016-06-27 2016-11-16 安徽慧视金瞳科技有限公司 A kind of line detection method for projecting interactive system
CN107423737A (en) * 2017-05-03 2017-12-01 武汉东智科技股份有限公司 The video quality diagnosing method that foreign matter blocks
CN107644228A (en) * 2017-09-21 2018-01-30 联想(北京)有限公司 Image processing method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Yu Xinrui et al., "Sub-pixel Position Detection of Straight Line Features in Digital Images", vol. 30, no. 2, pages 138-141 *
Tian Qichuan, "Principles and Algorithms of Iris Recognition", National Defense Industry Press, pages 13-16 *
Zheng Yi et al., "Nonlinear Distortion Correction of Calibration Images Using Straight Line Features", Chinese Journal of Scientific Instrument, vol. 28, no. 6, 30 June 2007, pages 1129-1133 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112637482A (en) * 2020-12-08 2021-04-09 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN112637482B (en) * 2020-12-08 2022-05-17 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN113077396A (en) * 2021-03-29 2021-07-06 Oppo广东移动通信有限公司 Straight line segment detection method and device, computer readable medium and electronic equipment
CN113222862A (en) * 2021-06-04 2021-08-06 黑芝麻智能科技(上海)有限公司 Image distortion correction method, device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination