CN114627014A - Image processing method, image processing apparatus, storage medium, and electronic device - Google Patents

Info

Publication number
CN114627014A
CN114627014A (application CN202210248289.3A)
Authority
CN
China
Prior art keywords: region, body region, face, area, image
Prior art date
Legal status
Pending
Application number
CN202210248289.3A
Other languages
Chinese (zh)
Inventor
祁亚芸
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority application: CN202210248289.3A
Publication: CN114627014A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, relating to the technical field of image and video processing. The image processing method includes: detecting a face region and a body region in an image to be processed; determining correction information of the body region according to the positional relationship between the body region and the face region; and, when performing distortion correction on the image to be processed, processing the body region based on the correction information of the body region. This improves the imaging quality of the body part in the image and enhances the visual effect of the image.

Description

Image processing method, image processing apparatus, storage medium, and electronic device
Technical Field
The present disclosure relates to the field of image and video processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device.
Background
The camera imaging process can introduce distortion, causing objects in the image to deviate from their actual shapes. Correction of such image distortion is therefore required.
In portrait processing, most attention is paid to distortion of the face, which receives corresponding correction. However, if proper correction is not also applied to the body, the body may remain distorted, affecting the visual effect of the image.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those skilled in the art.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, so as to improve distortion of a body part in an image at least to some extent.
According to a first aspect of the present disclosure, there is provided an image processing method including: detecting a face region and a body region in an image to be processed; determining correction information of the body area according to the position relation between the body area and the face area; and when distortion correction is carried out on the image to be processed, processing the body area based on the correction information of the body area.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising: an image detection module configured to detect a face region and a body region in an image to be processed; a correction information determination module configured to determine correction information of the body region according to the positional relationship between the body region and the face region; and a region processing module configured to process the body region based on the correction information of the body region when distortion correction is performed on the image to be processed.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image processing method of the first aspect described above and possible implementations thereof via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
according to the positional relationship between the body region and the face region in the image to be processed, appropriate correction information is determined for the body region. When distortion correction is then performed on the image to be processed, the body region is processed based on this correction information, so that the processed body region matches the distortion correction effect of the face region. This improves the imaging quality of the body part, makes the body appear more natural, and enhances the visual effect of the image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 shows a schematic diagram of a system architecture in the present exemplary embodiment;
fig. 2 shows a flowchart of an image processing method in the present exemplary embodiment;
fig. 3 is a diagram illustrating detection of a face region and a body region in an image to be processed in the present exemplary embodiment;
fig. 4 shows a flow chart for determining correction information for a body region in the present exemplary embodiment;
fig. 5A to 5C are diagrams illustrating determination of correction parameters of a body region from correction parameters of a face region in the present exemplary embodiment;
fig. 6 shows a schematic flowchart of an image processing method in the present exemplary embodiment;
fig. 7A and 7B illustrate an image in which only a human face is corrected and a target image after being processed by the present exemplary embodiment;
fig. 8 shows a schematic configuration diagram of an image processing apparatus in the present exemplary embodiment;
Fig. 9 shows a schematic structural diagram of an electronic device in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The inventors found that the face generally occupies a large proportion of a portrait (particularly a self-portrait), while the body occupies a relatively small proportion; the body is therefore usually ignored, and portrait distortion correction mainly targets the face. However, when the body occupies a large proportion of the portrait, the body also suffers significant distortion due to perspective projection and similar effects, and a corrected face makes the body distortion even more conspicuous, greatly affecting imaging quality.
In view of one or more of the above problems, exemplary embodiments of the present disclosure provide an image processing method. The system architecture and application scenario of the operating environment of the exemplary embodiment are described below with reference to fig. 1.
Fig. 1 shows a schematic diagram of a system architecture. The system architecture 100 may include a terminal 110 and a server 120. The terminal 110 may be an electronic device such as a smartphone, tablet computer, drone, or smart wearable device. The server 120 generally refers to a background system providing image-processing-related services in the present exemplary embodiment, and may be a single server or a cluster of multiple servers. The terminal 110 and the server 120 may establish a connection through a wired or wireless communication link for data interaction.
In one embodiment, the image processing method in the present exemplary embodiment may be performed by the terminal 110. For example, the user captures a portrait using the terminal 110 to obtain an image to be processed, or the user selects an image (e.g., an image in an album) stored in the terminal 110 as the image to be processed and obtains a processed target image by performing an image processing method.
In one embodiment, the image processing method in the present exemplary embodiment may be performed by the server 120. For example, the terminal 110 may upload an image to be processed to the server 120, and the server 120 may obtain a processed target image by executing an image processing method, and may also return the target image to the terminal 110.
As can be seen from the above, the execution subject of the image processing method in the present exemplary embodiment may be either the terminal 110 or the server 120, which is not limited by the present disclosure.
The following describes an image processing method in the present exemplary embodiment with reference to fig. 2, where fig. 2 shows an exemplary flow of the image processing method, and may include:
step S210, detecting a face region and a body region in an image to be processed;
step S220, determining correction information of the body area according to the position relation between the body area and the face area;
in step S230, when the distortion correction is performed on the image to be processed, the body region is processed based on the correction information of the body region.
Based on this method, appropriate correction information is determined for the body region according to the positional relationship between the body region and the face region in the image to be processed. When distortion correction is then performed on the image to be processed, the body region is processed based on this correction information, so that the processed body region matches the distortion correction effect of the face region. This improves the imaging quality of the body part, makes the body appear more natural, and enhances the visual effect of the image.
Each step in fig. 2 is explained in detail below.
Referring to fig. 2, in step S210, a face region and a body region in an image to be processed are detected.
The image to be processed is an image requiring distortion correction, and may be an image containing a person. The face region may contain a complete face, an incomplete face (e.g., when the face is partially occluded in the image to be processed), a side face, and so on; the body region may likewise contain a complete body, an incomplete body (e.g., only the upper body appears in the image to be processed), a side view of the body, and so on.
The present disclosure does not limit the specific manner of detecting the face region and the body region. For example, object detection may be performed on the image to be processed to detect the face region and the body region; alternatively, semantic segmentation may be applied to the image to be processed to separate the pixels belonging to the face from those belonging to the body, thereby obtaining the face region and the body region.
In one embodiment, a face in the image to be processed may first be detected to obtain the face region, and a body may then be searched for along the neck direction of the face region to obtain the body region. The neck direction is the direction from the face toward the neck, for example the direction toward the mouth or the chin side, which is usually also the direction in which the body lies.
In one embodiment, a whole person in the image to be processed may be detected, the detected person region containing both the face and the body, and the person region may then be divided into a face region and a body region.
Fig. 3 shows a schematic diagram of detecting a face region and a body region in an image to be processed. The image contains the heads and upper bodies of two persons; face regions f1 and f2 and body regions b1 and b2 are obtained by detection. As fig. 3 shows, the face and body regions can be detected and represented as rectangular boxes. In one embodiment, after a rectangular box for a face or body region is obtained by object detection or similar means, the box may be expanded: for example, keeping the position of its center point unchanged, its width and height are enlarged by a preset expansion ratio (e.g., 1.1 times). The expanded box then contains image content around the face or body, which increases the robustness of the image processing method and makes the processed face or body blend more naturally with the surrounding image content. A face or body region may also be represented in other shapes; for example, it can be represented by a mask over the image to be processed, in which pixels of the face or body region take the value 1 and all other pixels take the value 0, allowing a region of arbitrary shape to be represented.
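The box-expansion step described above can be sketched as follows. This is an illustrative sketch, not part of the disclosure: the function name `expand_box` and the `(x, y, w, h)` box convention are assumptions.

```python
def expand_box(box, ratio=1.1):
    """Enlarge a rectangular region box while keeping its center fixed.

    box is assumed to be (x, y, w, h) with (x, y) the top-left corner;
    the 1.1 default matches the example expansion ratio in the text.
    """
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2   # center point stays unchanged
    nw, nh = w * ratio, h * ratio   # width and height scaled by the ratio
    return (cx - nw / 2, cy - nh / 2, nw, nh)
```

For example, expanding a 10x10 box at the origin yields an 11x11 box centered at the same point (5, 5).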
In one embodiment, face regions and body regions may be matched: a face region and a body region having a matching relationship are the face and body of the same person. For example, in fig. 3, face region f1 matches body region b1, and face region f2 matches body region b2. The present disclosure does not limit the specific matching method. For example, for each face region, the neck direction may be determined, and each body region may then be tested against the following condition: the body region lies in the neck direction of the face region and is adjacent to it ("adjacent" may mean that the two regions overlap, or that the distance between them is less than a distance threshold). If the condition is satisfied, the face region and the body region are determined to match. If multiple body regions satisfy the condition for a face region, the line connecting the center point of the face region with the center point of each candidate body region may be computed, and the body region whose center line makes the smallest angle with the neck direction is selected as the match.
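The angle-based tie-breaking rule above can be sketched as follows, assuming region centers and the neck direction are given as 2-D vectors; the function and parameter names are hypothetical.

```python
import math

def match_body(face_center, neck_dir, body_centers):
    """Pick the body whose center line from the face makes the smallest
    angle with the neck direction; returns that body's index."""
    best, best_angle = None, math.pi
    fx, fy = face_center
    ndx, ndy = neck_dir
    nd_norm = math.hypot(ndx, ndy)
    for i, (bx, by) in enumerate(body_centers):
        vx, vy = bx - fx, by - fy        # center line: face -> body
        v_norm = math.hypot(vx, vy)
        if v_norm == 0 or nd_norm == 0:  # degenerate geometry, skip
            continue
        cos_a = (vx * ndx + vy * ndy) / (v_norm * nd_norm)
        angle = math.acos(max(-1.0, min(1.0, cos_a)))
        if angle < best_angle:
            best, best_angle = i, angle
    return best
```

With the neck pointing straight down from a face at the origin, a body almost directly below is chosen over one far off to the side.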
With continued reference to fig. 2, in step S220, correction information of the body region is determined according to the positional relationship between the body region and the face region.
The correction information of a body region is used for processing that body region and may include: whether correction processing or protection processing is applied to the body region, and/or the correction parameters of the body region. Correction processing corrects distortion by means of stereographic projection or another conformal projection. If the distortion of a region is not severe, correction processing may have the opposite effect, making the region deviate from its actual shape and appear distorted; in that case protection processing may be adopted. Protection processing means reducing the influence of image correction on a region, for example preserving the region's shape as it appears in the original image, or reducing the strength of the distortion correction. A correction parameter may indicate the strength of the distortion correction, such as the degree of stretching or deflection applied during correction.
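For reference, the conformal (stereographic) mapping mentioned above can be sketched for the radially symmetric case. The focal-length parameter `f` and the function name are assumptions for illustration; the disclosure does not prescribe this exact formula.

```python
import math

def stereographic_radius(r_persp, f):
    """Map a perspective-projection radial distance r = f*tan(theta)
    to its stereographic counterpart r' = 2*f*tan(theta/2).

    The stereographic mapping is conformal and compresses the strong
    stretching that perspective projection produces near the edges."""
    theta = math.atan2(r_persp, f)   # field angle for this radius
    return 2 * f * math.tan(theta / 2)
```

At the image center the mapping is the identity; toward the edge (e.g., a 45-degree field angle) the stereographic radius is noticeably smaller than the perspective one, which is what relieves edge stretching.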
In the present exemplary embodiment, correction information adapted to the face region may be determined for the body region according to the positional relationship between the body region and the face region.
In an embodiment, referring to fig. 4, the determining the correction information of the body region according to the position relationship between the body region and the face region may include the following steps S410 and S420:
step S410, determining the number of face areas adjacent to a body area according to the position relationship between the body area and the face area;
in step S420, correction information of the body region is determined according to the number of face regions adjacent to the body region.
A face region being adjacent to a body region may include: the intersection of the face region and the body region is non-empty, i.e., the two regions overlap (including the case where they are tangent); or the distance between the face region and the body region is less than a distance threshold. The distance between the two regions may be the distance between their center points, or the shortest distance between them. The shortest distance may be determined as follows: compute the distance between every pixel (usually every edge pixel) of the face region and every pixel (usually every edge pixel) of the body region, and take the minimum of these distances as the shortest distance. Alternatively, the face region (or the body region) may be translated in some direction until it is tangent to the other region; the smallest such translation distance over all directions is the shortest distance between the two regions. The distance threshold is a generally small distance value used to judge whether the face region is very close to the body region, and may be determined empirically or experimentally. If the distance between a face region and a body region is below the threshold, the two can be regarded as adjacent even if they are not actually connected.
It should be understood that either condition above (non-empty intersection, or distance below the threshold) may be used alone to judge whether a face region and a body region are adjacent, or the two conditions may be combined in an "or" or an "and" relationship.
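For axis-aligned rectangular regions, both adjacency conditions can be tested cheaply. The following sketch assumes `(x1, y1, x2, y2)` boxes and an illustrative threshold value; neither is specified in the text.

```python
def rect_distance(a, b):
    """Shortest distance between two axis-aligned boxes (x1, y1, x2, y2);
    0 when the boxes overlap or are tangent."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def adjacent(face_box, body_box, dist_thresh=20.0):
    """'Or' combination of the text's two conditions: overlapping boxes
    have distance 0, and distance below the threshold also counts as
    adjacent. dist_thresh is an assumed example value in pixels."""
    return rect_distance(face_box, body_box) < dist_thresh
```

Overlapping boxes report distance 0; boxes separated by a small horizontal gap report exactly that gap.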
Based on the above conditions, the number of adjacent face regions can be determined for each body region. For example, in fig. 3, body region b1 is adjacent to face regions f1 and f2, while body region b2 is adjacent only to face region f2. Corresponding correction information is then determined according to the number of face regions adjacent to the body region. For example, the more adjacent face regions there are, the greater the visual impact of their correction, so more conservative processing may be adopted for the body region, such as protection processing or smaller correction parameters.
In this way, correction information adapted to each body region can be determined according to its adjacency to face regions, which facilitates appropriate subsequent correction of the body region and improves the visual harmony between body and face.
In one embodiment, the determining the correction information of the body region according to the number of the face regions adjacent to the body region may include:
in response to the number of face regions adjacent to the body region reaching a number threshold, determining the correction information of the body region as: applying protection processing to the body region;
in response to the number of face regions adjacent to the body region not reaching the number threshold, determining the correction information of the body region as: applying correction processing to the body region.
The number threshold measures whether the number of face regions adjacent to a body region is large enough to warrant protection processing for that body region; it may be determined empirically or experimentally, and may, for example, be 2. If the number of adjacent face regions reaches the threshold, protection processing is adopted for the body region; otherwise, correction processing is adopted. This prevents a body region adjacent to multiple face regions from being excessively stretched, deflected, or deformed during image distortion correction, improving image quality.
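The threshold rule in this embodiment reduces to a one-line decision; the mode strings below are illustrative labels, not terms from the disclosure.

```python
def body_correction_mode(num_adjacent_faces, number_threshold=2):
    """Protection processing when the adjacent-face count reaches the
    threshold (2 is the example value in the text), correction
    processing otherwise."""
    if num_adjacent_faces >= number_threshold:
        return "protect"
    return "correct"
```

With the default threshold, a body touching two faces is protected while a body touching one face is corrected.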
In an embodiment, the determining the correction information of the body region according to the position relationship between the body region and the face region may include:
and acquiring correction information of a face region adjacent to the body region, and determining the correction information of the body region according to the correction information of the face region.
The correction information of the face region is used for processing the face region, and may include: whether the face region is subjected to correction processing or protection processing, and/or correction parameters of the face region. The present disclosure does not limit how the correction information of the face region is specifically determined. In one embodiment, the correction information of the face region may include: and correction parameters for global correction of the image to be processed. For example, if the image to be processed is globally distortion-corrected by conformal projection, the correction parameters of the face region in the image to be processed may be calculated based on an algorithm of conformal projection. In one embodiment, the correction information may also be determined for each face region, for example, distortion information of the face region, which may include a distortion type, a distortion degree, and the like, may be obtained first, and then the corresponding correction information may be determined according to the distortion information of the face region.
In one embodiment, if the image to be processed exhibits radial distortion, which is more severe closer to the image edge, a protection region of a certain extent may be defined at the center of the image. For example, the protection region may be a rectangle, circle, or other shape covering a certain proportion (e.g., 50% or 60%) of the image area; alternatively, a radius may be computed from the image size, and a circle of that radius centered at the image's center point may serve as the protection region. Protection processing is adopted for face regions inside the protection region (a face region is considered inside if more than a preset proportion of it lies within the region), and correction processing is adopted for face regions outside it.
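The circular central protection region described above might be tested as follows. The 0.5 radius ratio and the center-point test (instead of the preset-proportion overlap test in the text) are simplifying assumptions.

```python
import math

def in_protection_region(face_center, image_size, radius_ratio=0.5):
    """True if the face center lies inside a circle centered at the
    image center, with radius derived from the image size."""
    w, h = image_size
    radius = radius_ratio * min(w, h) / 2   # assumed radius formula
    dx = face_center[0] - w / 2
    dy = face_center[1] - h / 2
    return math.hypot(dx, dy) <= radius
```

A face at the center of a 100x100 image falls inside the region and is protected; a face in a corner falls outside and is corrected.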
After the correction information of the face region is acquired, for each body region, the correction information of the body region may be determined from the correction information of the face regions adjacent thereto.
In one embodiment, the correction information of the adjacent face region may be used as the correction information of the body region.
In one embodiment, if a body region is adjacent to multiple face regions, the correction information of those face regions may be fused to obtain the correction information of the body region. For example, the correction parameters of the face regions may be averaged, or weighted-averaged, to obtain the correction parameters of the body region. The weights may be computed from the size (e.g., area) of each face region, or from the distance (e.g., center-point distance) between each face region and the body region.
In one embodiment, a body region adjacent to multiple face regions may include both face regions determined to receive correction processing and face regions determined to receive protection processing. To avoid applying undue stretching to the body region, its correction information may be determined as: apply protection processing to the body region.
In one embodiment, correction information of a face region matching a body region may be obtained, and the correction information of the body region may be determined according to the correction information of the face region. That is, the same or adapted correction information can be used for the face and body of the same person.
Determining the correction information of the body region from that of the adjacent face region further ensures that the body region uses the same or adapted correction information as the face region, so that the body and the face in the subsequently processed target image look visually harmonious.
In one embodiment, the correction information may include a correction parameter, such as a stretch ratio, a deflection angle, and the like. Therefore, the correction parameters of the face area adjacent to the body area can be acquired, and the correction parameters of the body area are determined according to the correction parameters of the face area.
Fig. 5A to 5C illustrate determining the correction parameters of a body region from the correction parameters of face regions. Referring to fig. 5A, suppose the image to be processed exhibits barrel distortion; a set of reference points, such as grid points uniformly distributed over a grid, is determined in the image. When the image is corrected globally, different positions may be stretched to different degrees; generally, regions closer to the corners are stretched more in order to reduce the barrel distortion. The positions of the grid points after global correction can therefore be calculated, as shown in fig. 5B. Note that the globally corrected image is merely the result of applying global correction to the image to be processed; it is not the target image that the present exemplary embodiment aims to produce. In fact, the present exemplary embodiment need not generate a globally corrected image at all: the correction parameters of the face regions can be computed by the global-correction algorithm alone. In fig. 5B, for example, only the positions of the grid points need to be calculated. The stretching parameter of a face region may be represented by the grid-point spacing in the x and y directions within that region; in fig. 5B, the stretching parameter of face region f1 is smaller overall than that of face region f2. Body region b1 is adjacent to face regions f1 and f2, so its correction parameters can be calculated from theirs. As can be seen from fig. 5B, because body region b1 lies in the lower-left corner of the image, global correction would stretch it considerably, potentially deforming it excessively. In the present exemplary embodiment, the stretching parameters of face regions f1 and f2 may instead be weighted to obtain a stretching parameter for body region b1 that is smaller than the global-correction stretch, and the grid-point positions within b1 are then recalculated, as shown in fig. 5C. Compared with fig. 5B, the grid points within body region b1 in fig. 5C are only slightly shifted from their original positions in the image to be processed; that is, the stretching of b1 is reduced, which resolves the excessive deformation of body region b1 during distortion correction and improves its imaging quality.
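One way to read the grid-point adjustment of fig. 5C is as blending each body-region grid point between its original position and its globally corrected position. The scalar blending factor `alpha`, standing in for the fused, weaker stretching parameter, is an interpretive assumption rather than the disclosure's exact formulation.

```python
def blend_grid_points(original, corrected, alpha):
    """Move each grid point only a fraction alpha of the way from its
    position in the image to be processed toward its position after
    global correction, weakening the stretch applied to the body."""
    return [(ox + alpha * (cx - ox), oy + alpha * (cy - oy))
            for (ox, oy), (cx, cy) in zip(original, corrected)]
```

With alpha = 0 the body keeps its original shape (full protection); with alpha = 1 it receives the full global correction; intermediate values give the reduced stretching shown in fig. 5C.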
In an embodiment, the determining the correction information of the body region according to the position relationship between the body region and the face region may include:
and determining correction information of the body area according to the distortion information of the body area, the size of the body area and the position relation between the body area and the face area.
The distortion information of the body region may include the distortion type, the distortion degree, and the like of the body region, and may be determined from camera parameters, shooting parameters, and the like. The size of the body region may include the width, height, area, and the like of the body region. Compared with step S220 above, the present exemplary embodiment additionally uses the distortion information and the size of the body region when determining the correction information of the body region.
In one embodiment, the greater the distortion degree and the larger the size of the body region, the stronger the correction required. Therefore, basic correction information of the body region, such as a basic correction parameter of the body region, may first be determined according to the positional relationship between the body region and the face region. The basic correction parameter may then be adjusted according to the distortion degree and the size of the body region, so that the final correction parameter is positively correlated with both, thereby obtaining the final correction parameter.
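As a minimal sketch, the positive correlation can be implemented by scaling the basic correction parameter with factors derived from the distortion degree and the region area. The reference values and the clamping to [0, 1] are illustrative assumptions, not values given by the disclosure.

```python
def adjust_correction(base_param, distortion_degree, region_area,
                      distortion_ref=1.0, area_ref=100_000.0):
    """Scale a basic correction parameter so that the final correction
    strength grows with both the distortion degree and the size of the
    body region."""
    distortion_factor = min(1.0, distortion_degree / distortion_ref)
    size_factor = min(1.0, region_area / area_ref)
    # Both factors lie in [0, 1]; the final parameter never exceeds the
    # basic parameter derived from the adjacent face regions.
    return base_param * distortion_factor * size_factor
```

A fully distorted, full-size region keeps the basic parameter unchanged, while a mildly distorted or small region receives a proportionally weaker correction.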
In one embodiment, the determining the correction information of the body region according to the distortion information of the body region, the size of the body region, and the position relationship between the body region and the face region may include:
and determining correction information of the body area according to the position relation between the body area and the face area in response to the fact that the distortion information of the body area and the size of the body area meet the precondition.
The precondition may include, but is not limited to: the body region is distorted to a degree greater than a distortion degree threshold and/or the size of the body region is greater than a size threshold. Corresponding distortion degree thresholds may be set for different distortion types, and corresponding one or more size thresholds may also be set for one or more of the width, height, and area of the body region. The distortion level threshold and the size threshold may be determined empirically or in practice, and are not limited by this disclosure.
The precondition makes it possible to judge whether the body region is severely distorted and whether the body region is large. When either or both of these conditions are satisfied, the body region has a relatively large influence on the visual effect of the image to be processed, and the correction information of the body region can be determined according to the positional relationship between the body region and the face region, so that the body region is processed finely.
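A minimal check of the precondition might look as follows. The specific threshold values, and whether the distortion and size conditions are combined with "and" or "or", are placeholders, since the disclosure leaves them to empirical tuning.

```python
def meets_precondition(distortion_degree, width, height, area,
                       distortion_threshold=0.3,
                       width_threshold=200,
                       height_threshold=200,
                       area_threshold=40_000,
                       require_both=False):
    """Return True if the body region is distorted enough and/or large
    enough that fine-grained correction is worthwhile."""
    severely_distorted = distortion_degree > distortion_threshold
    large = (width > width_threshold
             or height > height_threshold
             or area > area_threshold)
    if require_both:
        return severely_distorted and large
    return severely_distorted or large
```

When the precondition fails, the body region can simply be protected, skipping the finer per-region computation.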
In one embodiment, the image processing method may further include the steps of:
in response to the distortion information of the body region and the size of the body region not satisfying the precondition, determining correction information of the body region as: a protective treatment is applied to the body region.
That is, if the distortion information and the size of the body region do not satisfy the precondition, the body region is considered to have only a small influence on the visual effect of the image to be processed. It can therefore be determined directly that the body region is subjected to protection processing during distortion correction of the image to be processed, so that steps S220 and S230 need not be performed, which improves processing efficiency.
With continued reference to fig. 2, in step S230, in distortion correction of the image to be processed, the body region is processed based on the correction information of the body region.
For example, if the correction information of the body region is to apply correction processing to the body region, the correction may be performed based on the calculated correction parameters of the body region, or the body region may be corrected based on the global correction. If the correction information of the body region is to apply protection processing to the body region, the pixel positions of the body region may be kept unchanged, so that the shape of the body region in the processed image remains unchanged.
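In a sketch, the two outcomes differ only in which grid positions are used for the body region: protection keeps the original positions, while correction moves them toward the computed ones. The optional strength parameter, allowing a correction weaker than the full one, is an assumption for illustration.

```python
import numpy as np

def process_body_region(src_points, corrected_points, correction_info):
    """Pick the final grid positions for the body region.

    src_points: grid positions in the image to be processed.
    corrected_points: grid positions computed from the body region's
        correction parameters (or from the global correction).
    correction_info: {'action': 'protect'} keeps the original positions so
        the body's shape is unchanged; {'action': 'correct', 'strength': s}
        blends toward the corrected positions (s = 1.0 applies them fully).
    """
    src = np.asarray(src_points, dtype=float)
    if correction_info["action"] == "protect":
        return src  # pixel positions unchanged -> shape preserved
    dst = np.asarray(corrected_points, dtype=float)
    strength = correction_info.get("strength", 1.0)
    return src + strength * (dst - src)
```

The resulting grid can then drive an ordinary mesh-warp resampling of the image region.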
In addition, when distortion correction is performed on the image to be processed, corresponding processing may be performed on the face region, such as processing based on the correction information of the face region, or performing correction processing on the face region based on global correction, and the like.
Fig. 6 shows a schematic flow of an image processing method in the present exemplary embodiment, which may include:
step S601, acquiring an image to be processed;
step S602, detecting a face region and a body region in an image to be processed, and acquiring information of the face region and the body region, wherein the information can include masks, extended rectangular frames, height, width, area, distortion degree and the like of each region;
step S603, determining whether the distortion degree, the height (or width), and the area of the body region are all greater than their corresponding thresholds, namely a distortion degree threshold, a height threshold (or width threshold), and an area threshold; if yes, performing step S604, and if no, performing step S608;
step S604, determining the number of face regions adjacent to the body region, if the number is 0, performing step S605, if the number is 1, performing step S606, and if the number is greater than or equal to 2, performing step S607;
step S605, determining to apply correction processing to the body region;
step S606, determining correction information of the body area according to the correction information of the adjacent face area;
step S607, determining whether the adjacent face regions include both a protected face and a corrected face, if yes, performing step S608, and if not, performing step S606;
step S608, determining to apply protection processing to the body region;
step S609, in the distortion correction process of the image to be processed, corresponding processing is carried out on the body area;
step S610, after the distortion correction and the processing of each face region and body region are completed, outputting the target image, thereby completing the image processing flow.
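The branching in steps S603 to S608 can be summarized in a short Python sketch. The data layout, threshold values, and return values are illustrative assumptions.

```python
def body_correction_decision(body, adjacent_faces,
                             distortion_threshold=0.3,
                             height_threshold=200,
                             area_threshold=40_000):
    """Decide how a body region is treated during distortion correction.

    body: dict with 'distortion', 'height', 'area'.
    adjacent_faces: one dict per adjacent face region, with a boolean
        'protected' flag and, for corrected faces, correction parameters
        under 'correction'.
    Returns ('protect', None) or ('correct', parameters).
    """
    # S603: distortion degree, height, and area must all exceed thresholds.
    if not (body["distortion"] > distortion_threshold
            and body["height"] > height_threshold
            and body["area"] > area_threshold):
        return ("protect", None)                    # S608

    n = len(adjacent_faces)                         # S604
    if n == 0:
        return ("correct", "global")                # S605
    if n >= 2 and (any(f["protected"] for f in adjacent_faces)
                   and any(not f["protected"] for f in adjacent_faces)):
        return ("protect", None)                    # S607 -> S608
    # S606: derive the body's correction from the adjacent face region(s).
    params = [f["correction"] for f in adjacent_faces if not f["protected"]]
    if not params:
        return ("protect", None)  # assumed: all adjacent faces protected
    return ("correct", params)
```

The sketch returns a decision only; the actual warping of the region (step S609) is performed separately during distortion correction.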
Fig. 7A shows a schematic diagram of an image in which only the human faces are corrected, and Fig. 7B shows a schematic diagram of the target image obtained by the processing of the present exemplary embodiment. By comparison, in Fig. 7A the bodies of the leftmost and rightmost persons are overly wide; in particular, the body of the leftmost person does not match the size of the face, making the figure appear discordant and distorted. In Fig. 7B, the body regions have been processed so that the shape and size of each person are appropriate and match the faces, giving the whole image a harmonious and natural visual appearance.
Exemplary embodiments of the present disclosure also provide an image processing apparatus. Referring to fig. 8, the image processing apparatus 800 may include:
an image detection module 810 configured to detect a face region and a body region in an image to be processed;
a correction information determination module 820 configured to determine correction information of the body region according to the positional relationship between the body region and the face region;
a region processing module 830 configured to process the body region based on the correction information of the body region when distortion correction is performed on the image to be processed.
In an embodiment, the determining the correction information of the body region according to the positional relationship between the body region and the face region includes:
determining the number of face regions adjacent to the body region according to the position relationship between the body region and the face region;
and determining correction information of the body area according to the number of the human face areas adjacent to the body area.
In an embodiment, the determining the correction information of the body region according to the number of the face regions adjacent to the body region includes:
in response to the number of face regions adjacent to the body region reaching a number threshold, determining correction information for the body region as: applying a protective treatment to the body region;
in response to the number of face regions adjacent to the body region not reaching the number threshold, determining correction information for the body region as: a correction process is applied to the body region.
In an embodiment, the determining the correction information of the body region according to the positional relationship between the body region and the face region includes:
and acquiring correction information of a face area adjacent to the body area, and determining the correction information of the body area according to the correction information of the face area.
In an embodiment, the determining the correction information of the body region according to the positional relationship between the body region and the face region includes:
and determining correction information of the body area according to the distortion information of the body area, the size of the body area and the position relation between the body area and the face area.
In one embodiment, the determining correction information of the body region according to the distortion information of the body region, the size of the body region, and the positional relationship between the body region and the face region includes:
and determining correction information of the body area according to the position relation between the body area and the face area in response to the fact that the distortion information of the body area and the size of the body area meet the precondition.
In one embodiment, the correction information determination module 820 is further configured to:
in response to the distortion information of the body region and the size of the body region not satisfying the precondition, determining correction information of the body region as: a protective treatment is applied to the body region.
The specific details of each part in the above device have been described in detail in the method part embodiments, and details that are not disclosed may be referred to in the method part embodiments, and thus are not described again.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product, including program code for causing an electronic device to perform the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned "exemplary method" section of this specification, when the program product is run on the electronic device. In an alternative embodiment, the program product may be embodied as a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Exemplary embodiments of the present disclosure also provide an electronic device. The electronic device may be the terminal 10 or the server 120 described above. In general, the electronic device may include a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the above-mentioned image processing method via execution of the executable instructions.
The following takes the mobile terminal 900 in Fig. 9 as an example to exemplarily describe the configuration of the electronic device. It will be appreciated by those skilled in the art that, apart from the components specifically intended for mobile use, the configuration of Fig. 9 can also be applied to devices of a fixed type.
As shown in fig. 9, the mobile terminal 900 may specifically include: the mobile communication device comprises a processor 901, a memory 902, a bus 903, a mobile communication module 904, an antenna 1, a wireless communication module 905, an antenna 2, a display screen 906, a camera module 907, an audio module 908, a power supply module 909 and a sensor module 910.
Processor 901 may include one or more processing units, such as: the Processor 901 may include an AP (Application Processor), a modem Processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband Processor, and/or an NPU (Neural-Network Processing Unit), etc. The image processing method in the present exemplary embodiment may be performed by an AP, a GPU, or a DSP, and furthermore, the neural network related processing may be performed by an NPU, for example, the NPU may load a neural network for object detection and execute related algorithm instructions.
An encoder may encode (i.e., compress) an image or video to reduce the data size for storage or transmission. A decoder may decode (i.e., decompress) the encoded data of an image or video to recover the image or video data. The mobile terminal 900 may support one or more encoders and decoders, for example for image formats such as JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), and BMP (Bitmap), and for video formats such as MPEG (Moving Picture Experts Group) 1, MPEG2, H.263, H.264, and HEVC (High Efficiency Video Coding).
The processor 901 may be connected to the memory 902 or other components via the bus 903.
The memory 902 may be used to store computer-executable program code, which includes instructions. The processor 901 executes various functional applications of the mobile terminal 900 and data processing by executing instructions stored in the memory 902. The memory 902 may also store application data, such as files for storing images, videos, and the like.
The communication function of the mobile terminal 900 may be implemented by the mobile communication module 904, the antenna 1, the wireless communication module 905, the antenna 2, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 904 may provide a mobile communication solution of 3G, 4G, 5G, etc. applied to the mobile terminal 900. The wireless communication module 905 may provide wireless communication solutions for wireless local area network, bluetooth, near field communication, etc. applied to the mobile terminal 900.
The display screen 906 is used to implement display functions, such as displaying a user interface, images, videos, and the like. The camera module 907 is used to implement a photographing function, such as photographing an image, video, and the like. The audio module 908 is used for implementing audio functions, such as playing audio, capturing voice, and the like. The power module 909 is used to implement power management functions such as charging batteries, powering devices, monitoring battery status, etc. The sensor module 910 may include one or more sensors for implementing corresponding sensing functions.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, according to exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the following claims.

Claims (10)

1. An image processing method, comprising:
detecting a face region and a body region in an image to be processed;
determining correction information of the body area according to the position relation between the body area and the face area;
and when distortion correction is carried out on the image to be processed, processing the body area based on the correction information of the body area.
2. The method according to claim 1, wherein the determining the correction information of the body region according to the position relationship between the body region and the face region comprises:
determining the number of the face regions adjacent to the body region according to the position relationship between the body region and the face region;
and determining correction information of the body area according to the number of the face areas adjacent to the body area.
3. The method of claim 2, wherein determining the correction information of the body region according to the number of the face regions adjacent to the body region comprises:
in response to the number of face regions adjacent to the body region reaching a number threshold, determining correction information for the body region as: applying a protective treatment to the body region;
in response to the number of face regions adjacent to the body region not reaching the number threshold, determining correction information for the body region as: applying a correction process to the body region.
4. The method according to claim 1, wherein the determining the correction information of the body region according to the position relationship between the body region and the face region comprises:
and acquiring correction information of the face region adjacent to the body region, and determining the correction information of the body region according to the correction information of the face region.
5. The method according to claim 1, wherein the determining the correction information of the body region according to the position relationship between the body region and the face region comprises:
and determining correction information of the body area according to the distortion information of the body area, the size of the body area and the position relation between the body area and the face area.
6. The method according to claim 5, wherein the determining correction information of the body region according to the distortion information of the body region, the size of the body region, and the positional relationship between the body region and the face region comprises:
and determining correction information of the body area according to the position relation between the body area and the face area in response to the fact that the distortion information of the body area and the size of the body area meet a precondition.
7. The method of claim 6, further comprising:
in response to the distortion information of the body region and the size of the body region not satisfying a precondition, determining correction information of the body region as: applying a protective treatment to the body region.
8. An image processing apparatus characterized by comprising:
the image detection module is configured to detect a human face region and a body region in the image to be processed;
a correction information determination module configured to determine correction information of the body region according to a position relationship between the body region and the face region;
a region processing module configured to process the body region based on the correction information of the body region when performing distortion correction on the image to be processed.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 7 via execution of the executable instructions.
CN202210248289.3A 2022-03-14 2022-03-14 Image processing method, image processing apparatus, storage medium, and electronic device Pending CN114627014A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210248289.3A CN114627014A (en) 2022-03-14 2022-03-14 Image processing method, image processing apparatus, storage medium, and electronic device


Publications (1)

Publication Number Publication Date
CN114627014A true CN114627014A (en) 2022-06-14

Family

ID=81902605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210248289.3A Pending CN114627014A (en) 2022-03-14 2022-03-14 Image processing method, image processing apparatus, storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN114627014A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457644A (en) * 2022-11-10 2022-12-09 成都智元汇信息技术股份有限公司 Method and device for obtaining image recognition of target based on extended space mapping

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016122947A (en) * 2014-12-25 2016-07-07 キヤノン株式会社 Image processing apparatus
CN110853073A (en) * 2018-07-25 2020-02-28 北京三星通信技术研究有限公司 Method, device, equipment and system for determining attention point and information processing method
CN111080542A (en) * 2019-12-09 2020-04-28 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN111105366A (en) * 2019-12-09 2020-05-05 Oppo广东移动通信有限公司 Image processing method and device, terminal device and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SIMON C. TREMBLAY et al.: "From filters to fillers: an active inference approach to body image distortion in the selfie era", AI & Society, 12 July 2020 (2020-07-12), page 33 *
YANG Bo: "Research on Perspective Distortion Correction Methods for Wide-Angle Images", China Master's Theses Full-Text Database, Information Science and Technology, 15 March 2017 (2017-03-15), pages 138-5064 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination