CN115205131A - Distorted image correction method and device, computer readable medium and electronic equipment - Google Patents

Distorted image correction method and device, computer readable medium and electronic equipment

Info

Publication number
CN115205131A
CN115205131A
Authority
CN
China
Prior art keywords
area
image
face
distorted image
portrait
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110400359.8A
Other languages
Chinese (zh)
Inventor
沈成南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110400359.8A
Publication of CN115205131A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/44 Morphing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides a distorted image correction method and device, a computer-readable medium, and electronic equipment, and relates to the field of image processing. The method comprises the following steps: acquiring a distorted image to be corrected; dividing the distorted image into regions and determining a first image region and a second image region in the distorted image, where the first image region contains the image content that needs correction; and performing protection processing on the second image region before performing correction processing on the first image region, so that the second image region does not deform while the first image region is corrected, thereby completing correction of the distorted image. The method and device can prevent the image regions adjacent to the corrected part from being abnormally stretched when a distorted image, in particular one containing a human face, is corrected, improving correction accuracy and avoiding abnormal stretching in the corrected image.

Description

Distorted image correction method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a distorted image correction method, a distorted image correction apparatus, a computer-readable medium, and an electronic device.
Background
As living standards continue to improve, photographing and photography technologies receive ever more attention. In currently captured images, however, face distortion increases with the field angle of the lens: the lens center exhibits no distortion, while distortion at the edges is significant.
At present, in the related art, when a distorted image is corrected, especially one containing portrait content, either the image region adjacent to a face is abnormally stretched while the face is being corrected, or distorted portrait content is missed altogether, so that correction accuracy is low and the correction effect is poor.
Disclosure of Invention
The present disclosure aims to provide a distorted image correction method, a distorted image correction apparatus, a computer-readable medium, and an electronic device, so as to avoid, at least to a certain extent, the problems of low correction accuracy and poor correction effect of image content caused by abnormal stretching of an adjacent image region when a portion to be corrected of a distorted image is corrected in the related art.
According to a first aspect of the present disclosure, there is provided a distorted image correction method including:
acquiring a distorted image to be corrected;
dividing the distorted image into areas, and determining a first image area and a second image area in the distorted image; wherein the first image area comprises image content requiring correction;
and performing protection processing on the second image area and correction processing on the first image area so as to avoid deformation of the second image area when the first image area is corrected, and finishing correction of the distorted image.
According to a second aspect of the present disclosure, there is provided a distorted image correction apparatus comprising:
the distorted image acquisition module is used for acquiring a distorted image to be corrected;
the image area dividing module is used for dividing the distorted image into areas and determining a first image area and a second image area in the distorted image; wherein the first image area comprises image content requiring correction;
and the distorted image correction module is used for protecting the second image area and correcting the first image area so as to avoid deformation of the second image area when the first image area is corrected and finish correction of the distorted image.
According to a third aspect of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the above-mentioned method.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus, comprising:
a processor; and
a memory for storing one or more programs that, when executed by the processor, cause the processor to implement the method described above.
In the distorted image correction method provided by an embodiment of the present disclosure, the acquired distorted image is divided into regions; a first image region that needs correction and a second image region that does not are determined; and during correction the second image region is protected before the first image region is corrected, so that the second image region does not deform while the first image region is being corrected, finally completing correction of the distorted image. On one hand, dividing the acquired distorted image into regions correctly distinguishes the first image regions that need correction from the second image regions that do not, so that every first image region can be corrected, omissions are avoided, and correction accuracy of the distorted image improves. On the other hand, protecting the second image region before correcting the first image region effectively prevents the second image region from deforming during correction, which improves the correction effect and avoids stretching deformation in the corrected image that would look unnatural to the human eye.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which embodiments of the present disclosure may be applied;
FIG. 2 shows a schematic diagram of an electronic device to which embodiments of the present disclosure may be applied;
FIG. 3 schematically illustrates a flow chart of a method of distorted image correction in an exemplary embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart for region partitioning of a distorted image in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart for determining a target face region in an exemplary embodiment of the disclosure;
FIG. 6 schematically illustrates another flow chart for region partitioning of a distorted image in an exemplary embodiment of the disclosure;
FIG. 7 schematically illustrates a flow chart for determining a first image region in an exemplary embodiment of the present disclosure;
fig. 8 schematically shows a composition diagram of a distorted image correction apparatus in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 is a schematic diagram illustrating a system architecture of an exemplary application environment to which a distorted image correction method and apparatus according to an embodiment of the present disclosure can be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few. The terminal devices 101, 102, 103 may be various electronic devices having an image processing function, including but not limited to desktop computers, portable computers, smart phones, tablet computers, and the like. It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The distorted image correction method provided by the embodiment of the present disclosure is generally executed by the terminal devices 101, 102, 103, and accordingly, the distorted image correction apparatus is generally provided in the terminal devices 101, 102, 103. However, it is easily understood by those skilled in the art that the distorted image correction method provided in the embodiment of the present disclosure may also be executed by the server 105, and accordingly, the distorted image correction apparatus may also be disposed in the server 105, which is not particularly limited in the exemplary embodiment. For example, in an exemplary embodiment, the user uploads the acquired distorted image to be corrected to the server 105 through the terminal devices 101, 102, and 103; the server completes correction of the distorted image by the distorted image correction method provided by the embodiment of the present disclosure, and then transmits the corrected image back to the terminal devices 101, 102, and 103, and so on.
An exemplary embodiment of the present disclosure provides an electronic device for implementing a distorted image correction method, which may be the terminal device 101, 102, 103 or the server 105 in fig. 1. The electronic device includes at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the distorted image correction method via execution of the executable instructions.
The following takes the mobile terminal 200 in Fig. 2 as an example to illustrate the configuration of the electronic device. It will be appreciated by those skilled in the art that the configuration of Fig. 2 can also be applied to fixed devices, apart from the components intended specifically for mobile use. In other embodiments, the mobile terminal 200 may include more or fewer components than shown, may combine or split some components, or may arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of the two. The interfacing relationship between the components is only schematically illustrated and does not constitute a structural limitation of the mobile terminal 200; in other embodiments, the mobile terminal 200 may also adopt interfaces different from those in Fig. 2, or a combination of multiple interfaces.
As shown in fig. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display 290, a camera module 291, an indicator 292, a motor 293, a button 294, and a Subscriber Identity Module (SIM) card interface 295. The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, and the like.
Processor 210 may include one or more processing units, such as: the Processor 210 may include an Application Processor (AP), a modem Processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
The NPU is a Neural-Network (NN) computing processor. By drawing on the structure of biological neural networks, for example the signal-transfer mode between neurons of the human brain, it processes input information quickly and can also learn continuously by itself. The NPU enables intelligent-recognition applications on the mobile terminal 200, such as image recognition, face recognition, speech recognition, and text understanding.
A memory is provided in the processor 210. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, and execution is controlled by processor 210.
The charge management module 240 is configured to receive a charging input from a charger. The power management module 241 is used for connecting the battery 242, the charging management module 240 and the processor 210. The power management module 241 receives the input of the battery 242 and/or the charging management module 240, and supplies power to the processor 210, the internal memory 221, the display screen 290, the camera module 291, the wireless communication module 260, and the like.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like. Wherein, the antenna 1 and the antenna 2 are used for transmitting and receiving electromagnetic wave signals; the mobile communication module 250 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the mobile terminal 200; the modem processor may include a modulator and a demodulator; the Wireless communication module 260 may provide a solution for Wireless communication including a Wireless Local Area Network (WLAN) (e.g., a Wireless Fidelity (Wi-Fi) network), bluetooth (BT), and the like, applied to the mobile terminal 200. In some embodiments, antenna 1 of the mobile terminal 200 is coupled to the mobile communication module 250 and antenna 2 is coupled to the wireless communication module 260, such that the mobile terminal 200 may communicate with networks and other devices via wireless communication techniques.
The mobile terminal 200 implements a display function through the GPU, the display screen 290, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 290 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
The mobile terminal 200 may implement a photographing function through the ISP, the camera module 291, the video codec, the GPU, the display screen 290, the application processor, and the like. The ISP is used for processing data fed back by the camera module 291; the camera module 291 is used for capturing still images or videos; the digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals; the video codec is used to compress or decompress digital video, and the mobile terminal 200 may also support one or more video codecs.
The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the mobile terminal 200. The external memory card communicates with the processor 210 through the external memory interface 222 to implement a data storage function. For example, files such as music, video, etc. are saved in the external memory card.
Internal memory 221 may be used to store computer-executable program code, which includes instructions. The internal memory 221 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the mobile terminal 200, and the like. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk Storage device, a Flash memory device, a Universal Flash Storage (UFS), and the like. The processor 210 executes various functional applications of the mobile terminal 200 and data processing by executing instructions stored in the internal memory 221 and/or instructions stored in a memory provided in the processor.
The mobile terminal 200 may implement an audio function through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the earphone interface 274, the application processor, and the like. Such as music playing, recording, etc.
The depth sensor 2801 is used to acquire depth information of a scene. In some embodiments, a depth sensor may be provided to the camera module 291.
The pressure sensor 2802 is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 2802 may be disposed on the display screen 290. Pressure sensor 2802 can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like.
The gyro sensor 2803 may be used to determine a motion gesture of the mobile terminal 200. In some embodiments, the angular velocity of the mobile terminal 200 about three axes (i.e., x, y, and z axes) may be determined by the gyroscope sensor 2803. The gyro sensor 2803 can be used to photograph anti-shake, navigation, body-feel game scenes, and the like.
In addition, other functional sensors, such as an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., may be provided in the sensor module 280 according to actual needs.
Other devices for providing auxiliary functions may also be included in mobile terminal 200. For example, the keys 294 include a power-on key, a volume key, and the like, and a user can generate key signal inputs related to user settings and function control of the mobile terminal 200 through key inputs. Further examples include indicator 292, motor 293, SIM card interface 295, etc.
Face distortion increases with the field angle of the lens: the lens center is free of distortion, distortion within a 60-degree field angle is negligible, and edge distortion is significant, particularly for faces at the edge of an ultra-wide-angle lens. This is perspective distortion, and its cause is the perspective projection in the lens imaging process.
In the related art, to address face distortion caused by perspective projection, perspective distortion is usually corrected with stereographic projection. Stereographic projection is conformal and can restore the true appearance of a face well, but it does not keep straight lines straight, whereas perspective projection does. Face distortion correction methods therefore typically combine the advantages of perspective projection and conformal projection: first, face detection and face segmentation are performed on the image; the face regions that need correction are then corrected with conformal projection; the remaining portrait regions are protected to avoid being deformed as if they were background; and the background information is maintained while the transition between portrait and background is smoothed. However, in this approach, face correction corrects not only the face itself but also the body part of the edge portrait, so that the proportions of face and body remain consistent in the result, while ignoring that the body part of the portrait adjacent to the edge portrait is abnormally stretched. Moreover, regions that contain only portrait segmentation information but no face detection information are protected if they intersect the image boundary and are large, ignoring that edge portraits are precisely those with obvious distortion; this leads to missed corrections, so the corrected image still exhibits deformation that looks unnatural to the human eye, and the correction effect of distorted images is poor.
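The trade-off between the two projections can be seen from their radial mappings. A minimal sketch assuming an ideal lens of focal length f; the function names are illustrative and not taken from the patent:

```python
import math

def perspective_radius(theta, f=1.0):
    """Radial sensor distance of a ray at angle theta (radians) from the
    optical axis under perspective projection: r = f * tan(theta).
    Keeps straight lines straight, but stretches faces near the edge."""
    return f * math.tan(theta)

def stereographic_radius(theta, f=1.0):
    """Radial sensor distance under stereographic (conformal) projection:
    r = 2f * tan(theta / 2). Preserves local shape, so faces keep their
    proportions, but straight lines may bend."""
    return 2.0 * f * math.tan(theta / 2.0)
```

Near the axis the two agree to first order (tan θ ≈ θ and 2 tan(θ/2) ≈ θ), and the gap grows with the field angle, which is consistent with the observation above that center distortion is negligible while edge faces need re-projection.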
Based on one or more of the above problems, the distorted image correction method according to the exemplary embodiment of the present disclosure will be specifically described below, taking a terminal device as an example.
Fig. 3 shows a flow of a distorted image correction method in the present exemplary embodiment, including the following steps S310 to S330:
in step S310, a distorted image to be corrected is acquired.
In an exemplary embodiment, a distorted image is an image whose edges are stretched and deformed by the shooting lens. For example, it may be a distorted image shot by a wide-angle lens or a distorted panoramic image shot by a panoramic lens; it may even be an ordinary image shot by an ordinary lens that exhibits distortion, which is not particularly limited in this exemplary embodiment.
In step S320, performing region division on the distorted image, and determining a first image region and a second image region in the distorted image; wherein the first image area comprises image content that needs to be corrected.
In an exemplary embodiment, the area division refers to a process of dividing an image area requiring correction from an image area not requiring correction in the distorted image. The first image area refers to an image area corresponding to image content that needs to be corrected in a distorted image, for example, the first image area may be a face portion of the distorted image or a foreground object portion of the distorted image, which is not particularly limited in this exemplary embodiment. The second image region refers to an image region corresponding to image content that does not need to be corrected in a distorted image or an image region adjacent to the first image region, for example, the second image region may be a body part corresponding to a distorted face part in an image, or may be an undistorted image region in the image, which is not particularly limited in this exemplary embodiment.
In step S330, performing protection processing on the second image region and performing correction processing on the first image region to avoid deformation of the second image region when the first image region is corrected, thereby completing correction of the distorted image.
In an exemplary embodiment, the protection processing refers to a processing procedure of performing modification locking on the content in the image area, and when the distortion correction algorithm corrects the distorted image, the image area subjected to the protection processing is not subjected to modification correction. After the second image area in the distorted image is subjected to protection processing, the first image area in the distorted image is subjected to correction processing, the whole distorted image is corrected, the second image area is effectively prevented from deforming when the first image area is corrected, and the visual effect of the corrected distorted image is ensured.
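Step S330 can be sketched in NumPy, assuming the correction is expressed as a backward-mapping flow field. The function name and the zero-displacement locking rule are illustrative choices, not the patent's disclosed algorithm:

```python
import numpy as np

def protected_warp(image, flow_x, flow_y, protect_mask):
    """Apply a correction flow (backward mapping: each output pixel pulls
    from (x + flow_x, y + flow_y)) while locking the protected region.

    Inside protect_mask (the second image region) the flow is forced to
    zero, so those pixels are copied through unchanged and cannot deform
    while the first image region is being corrected."""
    fx = np.where(protect_mask, 0.0, flow_x)
    fy = np.where(protect_mask, 0.0, flow_y)
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs + fx).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys + fy).astype(int), 0, h - 1)
    return image[src_y, src_x]
```

A production version would interpolate rather than round to the nearest source pixel, and would smooth the flow near the mask boundary so the transition between the corrected and protected regions stays seamless, as the related art's smoothing step suggests.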
The following further describes steps S310 to S330.
In an exemplary embodiment, image correction can be realized after body protection processing is performed on the portrait regions of the distorted image that carry face detection information. First, the distorted image may be divided to obtain the first image region and the second image region through the following steps, as shown in fig. 4:
step S410, portrait detection is carried out on the distorted image, and a target portrait area is determined;
step S420, if a target face region exists in the target portrait region, using the target face region as the first image region, and using the body region corresponding to the target face region in the target portrait region as the second image region.
For example, the distorted image may be subjected to portrait detection through a pre-trained portrait segmentation model (such as a deep-learning-based portrait detection model) or a portrait segmentation algorithm (such as a segmentation algorithm based on edge detection) to determine a plurality of portrait regions in the distorted image, and the target portrait regions needing distortion correction are then screened from them. The target face region refers to an image region or face frame obtained by performing face detection on the distorted image or on the target portrait region; for example, face detection may be performed through a pre-trained face region-of-interest detection model or a face key-point detection model to determine the target face regions corresponding to the target portrait regions.
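The patent does not disclose a concrete segmentation model. Assuming one has already produced a binary person mask, the individual portrait regions can be recovered as its connected components. A pure-Python sketch using 4-connected BFS labeling (in practice a library routine or an instance-segmentation network would be used instead):

```python
import numpy as np
from collections import deque

def portrait_regions(person_mask):
    """Label the 4-connected components of a binary person-segmentation
    mask and return one bounding box (x0, y0, x1, y1) per portrait region."""
    h, w = person_mask.shape
    labels = np.zeros((h, w), dtype=int)
    boxes = []
    current = 0
    for sy in range(h):
        for sx in range(w):
            if not person_mask[sy, sx] or labels[sy, sx]:
                continue
            current += 1                      # start a new region
            labels[sy, sx] = current
            queue = deque([(sy, sx)])
            x0 = x1 = sx
            y0 = y1 = sy
            while queue:                      # flood-fill the region
                y, x = queue.popleft()
                x0, x1 = min(x0, x), max(x1, x)
                y0, y1 = min(y0, y), max(y1, y)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, mx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= mx < w
                            and person_mask[ny, mx] and not labels[ny, mx]):
                        labels[ny, mx] = current
                        queue.append((ny, mx))
            boxes.append((x0, y0, x1, y1))
    return boxes
```

Overlapping portraits fall into one component here, which matches the connected-region notion used in the screening conditions below: a group of superimposed persons yields a single bounding box.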
In an exemplary embodiment, the distorted image may be segmented into a plurality of portrait areas, and a target portrait area is then selected from them according to the following conditions:
the plurality of portrait areas form a connected region; and
the width of the connected region is greater than or equal to a preset width threshold;
at least two sides of the boundary of the connected region intersect adjacent boundaries of the distorted image.
For example, a distorted image may include portrait areas A, B, C, D, and E. Suppose that areas A and B are independent while areas C, D, and E are connected: A and B each correspond to a complete portrait contour whose image content is a single person, whereas C, D, and E overlap and represent several related persons superimposed together. The connected region formed by C, D, and E may then be used as a target portrait area. A related correction algorithm can correct a single portrait area well without abnormally stretching the body. When face correction is applied to several connected or overlapping portrait areas, however, the body parts are hard to identify, so the image regions adjacent to a face are easily stretched abnormally during correction. The connected portrait areas therefore need to be screened out for body protection, which improves the correction effect.
The width threshold refers to a preset value used for screening the connected area. For example, if the width of the distorted image is 10 cm, the width threshold may be 3 cm; of course, the width threshold may also be 1 cm or the like. The specific width threshold may be customized according to the width of the distorted image or the actual situation, which is not particularly limited in this example embodiment. If the width of the connected area is greater than or equal to the preset width threshold, the image content corresponding to the connected area occupies a large proportion of the distorted image, and body protection processing is required to improve the correction effect. If the width of the connected area is smaller than the preset width threshold, the image content corresponding to the connected area occupies a small proportion of the distorted image; in this case, even if a body part is abnormally stretched, it does not produce a large visual impact, so body protection processing is not required, which reduces the computation load of the system and improves the efficiency of the image correction processing.
That at least two sides of the boundary of the connected area intersect adjacent boundaries of the distorted image means that the connected area is located at a corner of the distorted image. In this case, the distortion of the image content in the connected area is relatively large, and the stretching amplitude produced when the faces are corrected is also large, so protection processing needs to be performed on the body parts.
When the target portrait area is screened, the target portrait area may satisfy any one of the above three conditions, or may satisfy any two of the above three conditions, or may satisfy all of the above three conditions, which is not particularly limited in this exemplary embodiment.
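The three screening conditions above can be sketched in a minimal Python illustration that represents each portrait area by its bounding box. The `Box` type, the overlap-based notion of connectivity, and all names and thresholds are assumptions introduced for illustration only; the disclosure does not specify a concrete representation.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x0: int  # hypothetical bounding box of one portrait area
    y0: int
    x1: int
    y1: int

def overlaps(a: Box, b: Box) -> bool:
    # Two portrait-area boxes intersect, i.e. contribute to a connected area.
    return a.x0 < b.x1 and b.x0 < a.x1 and a.y0 < b.y1 and b.y0 < a.y1

def merged_box(boxes):
    # Bounding box of the whole connected area.
    return Box(min(b.x0 for b in boxes), min(b.y0 for b in boxes),
               max(b.x1 for b in boxes), max(b.y1 for b in boxes))

def is_target_portrait_area(boxes, img_w, img_h, width_thresh):
    """Check the three screening conditions for the target portrait area."""
    # Condition 1: a connected area is formed among the portrait areas
    # (approximated here as: every box overlaps at least one other box).
    connected = len(boxes) > 1 and all(
        any(overlaps(a, b) for b in boxes if b is not a) for a in boxes)
    m = merged_box(boxes)
    # Condition 2: the width of the connected area reaches the threshold.
    wide_enough = (m.x1 - m.x0) >= width_thresh
    # Condition 3: at least two adjacent sides of the connected area's
    # boundary touch the image boundary, i.e. the area sits in a corner.
    corner = (m.x0 <= 0 or m.x1 >= img_w) and (m.y0 <= 0 or m.y1 >= img_h)
    return connected and wide_enough and corner
```

Under these assumptions, two overlapping boxes anchored at the top-left corner of a 200 x 120 image pass all three conditions with a 100-pixel width threshold, while a single isolated box fails condition 1.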
In an exemplary embodiment, whether a target face area exists in the target portrait area may be determined through the steps in fig. 5. As shown in fig. 5, the method specifically includes:
step S510, carrying out face detection on the target portrait area, and determining a face area;
step S520, taking a face area whose distance from the boundary of the distorted image is smaller than a preset distance threshold and whose area is larger than or equal to a preset first area threshold as a first face area;
step S530, regarding a face region having the smallest distance from the boundary of the distorted image in the first face region as a second face region;
step S540, regarding a face region, which has the smallest distance from the second face region and has a corresponding body region area greater than or equal to a preset second area threshold value, in the first face region as a third face region;
step S550, using the second face area and the third face area as the target face area.
The distance threshold is a value used for screening face areas close to the boundary of the distorted image; for example, the distance threshold may be 5 mm or 1 cm, and may be customized according to the size of the distorted image or the actual application scene, which is not particularly limited in this example embodiment. The boundary of the distorted image may be, for example, the upper, lower, left, or right boundary of the distorted image. The first area threshold is a value used for screening large-size face areas close to the boundary of the distorted image; for example, the first area threshold may be 1 square centimeter or 25 square millimeters, and may be customized according to the size of the distorted image or the actual application scene, which is not particularly limited in this example embodiment.
The first face area is an image area corresponding to a large-size face close to the edge of the distorted image. Because the distortion amplitude of such a face is large, correcting it produces large abnormal stretching in the areas adjacent to the face; therefore, body protection needs to be performed for a portrait area containing a large-size face close to the edge of the distorted image, so as to improve the correction effect.
Further, the face area in the first face areas having the smallest distance from the boundary of the distorted image is taken as the second face area. For example, if there are first face areas whose distances from the boundary of the distorted image are 1 cm, 2 cm, and 3 cm respectively, the face area whose distance from the boundary is 1 cm is taken as the second face area. Based on the screened second face area, a third face area is further screened from the remaining first face areas: among the first face areas other than the second face area, the face area that has the smallest distance from the second face area and whose corresponding body area is larger than or equal to a preset second area threshold is taken as the third face area.
Finally, the second face area and the third face area that meet the conditions may be used as the target face areas, and the target portrait area containing the target face areas is divided into a first image area and a second image area, where the first image area corresponds to the target face areas in the target portrait area, and the second image area corresponds to the body areas in the target portrait area other than the target face areas. When the distorted image is corrected, the second image area, i.e., the body areas in the target portrait area, is protected, and the first image area, i.e., the target face areas in the target portrait area, is corrected. In this way, the distorted image is corrected while unreasonable abnormal stretching is avoided, which improves the correction accuracy and effectively improves the correction effect on the distorted image.
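Steps S510 to S550 can be sketched as a small Python routine. The face representation (a dict holding a bounding box and a precomputed body-area value), the use of the nearest image border for the face-to-boundary distance, and the centre-to-centre distance between faces are all illustrative assumptions; the text does not fix these definitions.

```python
def screen_target_faces(faces, img_size, dist_thresh, area1_thresh, area2_thresh):
    """Return the target face areas (second and, if any, third face area)."""
    W, H = img_size

    def border_dist(box):
        x0, y0, x1, y1 = box
        return min(x0, y0, W - x1, H - y1)  # distance to the nearest image boundary

    def area(box):
        x0, y0, x1, y1 = box
        return (x1 - x0) * (y1 - y0)

    # S520: large faces close to the image boundary form the first face areas.
    first = [f for f in faces
             if border_dist(f["box"]) < dist_thresh and area(f["box"]) >= area1_thresh]
    if not first:
        return []

    # S530: the first face area nearest the boundary is the second face area.
    second = min(first, key=lambda f: border_dist(f["box"]))
    targets = [second]

    # S540: among the remaining first face areas, the one nearest the second
    # face area whose body area passes the second threshold is the third.
    rest = [f for f in first if f is not second and f["body_area"] >= area2_thresh]
    if rest:
        def center_dist(a, b):
            ax = (a["box"][0] + a["box"][2]) / 2; ay = (a["box"][1] + a["box"][3]) / 2
            bx = (b["box"][0] + b["box"][2]) / 2; by = (b["box"][1] + b["box"][3]) / 2
            return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        targets.append(min(rest, key=lambda f: center_dist(f, second)))
    return targets
```

A face far from every boundary, or one below the first area threshold, is filtered out at step S520 and can never become a target face area, matching the screening order described above.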
In an exemplary embodiment, portrait areas in the distorted image that contain no face detection information may also be divided into regions, to avoid the problem of missed corrections. The division of the distorted image into the first image area and the second image area may be implemented through the steps in fig. 6. As shown in fig. 6, the method specifically includes:
step S610, carrying out portrait detection and face detection on the distorted image, and determining non-face portrait areas;
step S620, taking a non-face portrait area, in the non-face portrait areas, that intersects two adjacent boundaries of the distorted image or intersects one boundary of the distorted image as the first image area;
step S630, taking a non-face portrait area, in the non-face portrait areas, that intersects at least three boundaries of the distorted image or intersects two non-adjacent boundaries of the distorted image as the second image area.
For example, the non-face portrait area may be a portrait area whose image content shows a person with their back to the camera, or may be a portrait area containing only body parts, which is not particularly limited in this example embodiment. Containing no face detection information may also be simply understood as a portrait area being detected without any face frame or face key point information being output for it.
A non-face portrait area that intersects two adjacent boundaries of the distorted image or one boundary of the distorted image may be used as the first image area to be corrected. For example, a non-face portrait area intersecting the upper boundary (or the lower boundary) and the left boundary (or the right boundary) of the distorted image may be used as the first image area, and a non-face portrait area intersecting only the upper boundary (or the lower boundary, the left boundary, or the right boundary) of the distorted image may also be used as the first image area, which is not particularly limited in this example embodiment.
A non-face portrait area that intersects at least three boundaries of the distorted image or two non-adjacent boundaries of the distorted image may be used as the protection area, i.e., the second image area. For example, a non-face portrait area intersecting the upper, left, and lower boundaries of the distorted image (or at least three adjacent boundaries such as the left, lower, and right boundaries, or all four boundaries of the distorted image) may be used as the second image area, and a non-face portrait area intersecting the upper and lower boundaries (or the left and right boundaries) of the distorted image may also be used as the second image area, which is not particularly limited in this example embodiment.
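The boundary-intersection rules of steps S620 and S630 can be sketched as a single classifier over the non-face portrait area's bounding rectangle. The rectangle representation and the return labels are illustrative assumptions.

```python
def classify_no_face_region(region_box, img_w, img_h):
    """Label a non-face portrait area as 'correct' (first image region),
    'protect' (second image region), or 'other', based on which image
    boundaries its bounding rectangle touches."""
    x0, y0, x1, y1 = region_box
    top, left = y0 <= 0, x0 <= 0
    bottom, right = y1 >= img_h, x1 >= img_w
    touched = sum([top, left, bottom, right])
    opposite = (top and bottom) or (left and right)
    if touched >= 3 or opposite:
        return "protect"   # spans the frame: keep as the protection area
    if touched in (1, 2):
        return "correct"   # one boundary, or two adjacent boundaries
    return "other"         # interior region: handled by the field-angle rule
```

A region touching two adjacent boundaries falls through the first test (not three boundaries, not opposite ones) and is labelled for correction, matching step S620.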
Of course, a non-face portrait area that does not intersect the boundary of the distorted image and is within the target field angle range may also be used as the second image area, i.e., the protection area. When judging whether a non-face portrait area is within the target field angle range, it may be judged whether the radial distance between each pixel point in the non-face portrait area and the center point of the distorted image is smaller than a preset radial distance threshold. If the radial distance between each pixel point in the non-face portrait area and the center point of the distorted image is smaller than the preset radial distance threshold, the non-face portrait area is considered to be within the target field angle range; otherwise, it is considered not to be within the target field angle range.
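The radial-distance judgment just described can be sketched directly; the pixel-list representation of a region and the function name are assumptions for illustration.

```python
import math

def within_target_fov(region_pixels, img_w, img_h, radial_thresh):
    """A non-face portrait area counts as inside the target field angle range
    when every one of its pixels lies within radial_thresh of the image
    center point."""
    cx, cy = img_w / 2, img_h / 2
    return all(math.hypot(x - cx, y - cy) < radial_thresh
               for x, y in region_pixels)
```

A single pixel near an image corner is enough to push the whole region outside the target field angle range, since the rule requires every pixel to satisfy the threshold.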
In an exemplary embodiment, after the non-face portrait area intersecting two adjacent boundaries of the distorted image or intersecting one boundary of the distorted image is taken as the first image area, the first image area may be further processed through the steps in fig. 7 to improve the correction effect. As shown in fig. 7, the method may specifically include:
step S710, calculating the ratio of the maximum width of a target area in the non-face portrait area to the height of the non-face portrait area;
step S720, if the ratio is greater than or equal to a preset ratio threshold, taking the whole non-face portrait area as the first image area;
step S730, if the ratio is smaller than the preset ratio threshold, taking the target area as the first image area.
For example, the target area may be an image area corresponding to the upper half of the non-face portrait area (for example, a dividing line may be determined transversely in the non-face portrait area, and the portion above the dividing line taken as the upper half), or may be an image area whose average width is smaller than that of the other areas in the non-face portrait area, which is not particularly limited in this example embodiment.
If non-face portrait areas to be corrected exist, the following judgment is made for each such area: if the obtained ratio is greater than or equal to the preset ratio threshold, the whole non-face portrait area is marked as the portrait area to be corrected, i.e., the first image area; otherwise, the non-face portrait area is cropped from top to bottom, with the cropped length equal to the maximum width corresponding to the target area, and the cropped target area is marked as the portrait area to be corrected, i.e., the first image area.
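The ratio judgment of steps S710 to S730 can be sketched as follows, assuming the target area's maximum width and the region's height have already been measured; the dict layout and all names are illustrative assumptions.

```python
def select_first_image_region(region, ratio_thresh):
    """region: dict with 'height', 'target_max_width', and 'full_box'
    (x0, y0, x1, y1) for the whole non-face portrait area."""
    ratio = region["target_max_width"] / region["height"]
    if ratio >= ratio_thresh:
        # Wide relative to its height: correct the whole non-face area.
        return region["full_box"]
    # Narrow: crop from the top by the target area's maximum width and
    # correct only the cropped part.
    x0, y0, x1, _ = region["full_box"]
    return (x0, y0, x1, y0 + region["target_max_width"])
```

With, say, a 0.5 ratio threshold, a region whose target area is 60 wide against a 100-pixel height is corrected whole, while one whose target area is only 30 wide has just its top 30 rows marked for correction.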
In summary, in this exemplary embodiment, the acquired distorted image is divided into regions to determine a first image area that needs correction and a second image area that does not. During correction, the second image area is first subjected to protection processing, and correction processing is then performed on the first image area, so that the second image area is not deformed while the first image area is corrected, thereby completing the correction of the distorted image. On the one hand, dividing the acquired distorted image into regions correctly distinguishes the first image area that needs correction from the second image area that does not, so that all first image areas needing correction can be corrected, missed corrections are avoided, and the correction accuracy for the distorted image is improved. On the other hand, protecting the second image area that does not need correction before correcting the first image area effectively prevents the second image area from deforming while the first image area is corrected, which effectively improves the correction effect and avoids abnormal stretching deformation in the corrected image that does not conform to human vision.
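The overall protect-then-correct flow can be sketched at toy scale, with the image represented as a pixel dict and an arbitrary per-pixel correction function standing in for the actual distortion-correction warp; every name and the pinned-pixel protection scheme are illustrative assumptions, not the disclosed implementation.

```python
def correct_distorted_image(image, first_regions, second_regions, correct_fn):
    """Protect-then-correct pipeline: pixels inside the second image regions
    are pinned so the correction cannot move or change them; the correction
    function is then applied only to unprotected pixels of the first regions."""
    protected = set()
    for (x0, y0, x1, y1) in second_regions:
        for y in range(y0, y1):
            for x in range(x0, x1):
                protected.add((x, y))
    out = dict(image)  # image modeled as {(x, y): value}
    for (x0, y0, x1, y1) in first_regions:
        for y in range(y0, y1):
            for x in range(x0, x1):
                if (x, y) not in protected:  # protection takes precedence
                    out[(x, y)] = correct_fn(x, y, image)
    return out
```

Where a first and a second image region overlap, the protected pixels win, which is exactly the behavior the summary describes: the second image area stays undeformed while the first is corrected.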
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the disclosure and are not intended to be limiting. It will be readily appreciated that the processes illustrated in the above figures are not intended to indicate or limit the temporal order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, referring to fig. 8, the distorted image correction apparatus 800 provided in the exemplary embodiment may include a distorted image obtaining module 810, an image area dividing module 820, and a distorted image correction module 830. Wherein:
the distorted image obtaining module 810 is configured to obtain a distorted image to be corrected;
the image area dividing module 820 is configured to perform area division on the distorted image, and determine a first image area and a second image area in the distorted image; wherein the first image area comprises image content requiring correction;
the distorted image correction module 830 is configured to perform protection processing on the second image region and perform correction processing on the first image region, so as to avoid deformation of the second image region when the first image region is corrected, and complete correction of the distorted image.
In an exemplary embodiment, the image region division module 820 may include:
the portrait segmentation unit can be used for detecting the portrait of the distorted image and determining a target portrait area;
the image region determining unit may be configured to, if a target face area exists in the target portrait area, use the target face area as the first image area, and use a body area corresponding to the target face area in the target portrait area as the second image area.
In an exemplary embodiment, the portrait segmentation unit may be further configured to:
segmenting the distorted image into portrait areas, and taking portrait areas meeting the following conditions as the target portrait area:
a connected area is formed among the portrait areas;
the width of the connected area is greater than or equal to a preset width threshold; and
at least two sides of the boundary of the connected area intersect adjacent boundaries of the distorted image.
In an exemplary embodiment, the distorted image correction apparatus 800 may further include a target face region filtering unit, and the target face region filtering unit may be configured to:
carrying out face detection on the target portrait area to determine a face area;
taking the face region, of which the distance from the boundary of the distorted image is smaller than a preset distance threshold and the face region area is larger than or equal to a preset first area threshold, as a first face region;
taking a face area with the minimum distance from the boundary of the distorted image in the first face area as a second face area;
taking the face area which has the minimum distance with the second face area and has the corresponding body area larger than or equal to a preset second area threshold value in the first face area as a third face area;
and taking the second face area and the third face area as the target face area.
In an exemplary embodiment, the image region division module 820 may include:
the non-face portrait area determining unit may be configured to perform portrait detection and face detection on the distorted image and determine non-face portrait areas;
the first image region determining unit may be configured to take a non-face portrait area, in the non-face portrait areas, that intersects two adjacent boundaries of the distorted image or intersects one boundary of the distorted image as the first image area;
the second image region determining unit may be configured to take a non-face portrait area, in the non-face portrait areas, that intersects at least three boundaries of the distorted image or intersects two non-adjacent boundaries of the distorted image as the second image area.
In an exemplary embodiment, the second image region determining unit may be further configured to:
taking a non-face portrait area, in the non-face portrait areas, that does not intersect the boundary of the distorted image and is within the target field angle range as the second image area.
In an exemplary embodiment, the first image region determining unit may be further configured to:
calculating the ratio of the maximum width of a target area in the non-face portrait area to the height of the non-face portrait area;
if the ratio is greater than or equal to a preset ratio threshold, taking the whole non-face portrait area as the first image area; and
if the ratio is smaller than the preset ratio threshold, taking the target area as the first image area.
The specific details of each module in the above apparatus have been described in detail in the method section, and details that are not disclosed may refer to the contents of the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product including program code; when the program product is run on a terminal device, the program code causes the terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, for example, any one or more of the steps in fig. 3 to 7.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. A distorted image correction method, comprising:
acquiring a distorted image to be corrected;
dividing the distorted image into areas, and determining a first image area and a second image area in the distorted image; wherein the first image area comprises image content requiring correction;
and performing protection processing on the second image area and correction processing on the first image area so as to avoid deformation of the second image area when the first image area is corrected, and finishing correction of the distorted image.
2. The method of claim 1, wherein the area dividing the distorted image, and the determining the first image area and the second image area in the distorted image comprises:
detecting the portrait of the distorted image, and determining a target portrait area;
if a target human face area exists in the target portrait area, taking the target human face area as the first image area, and taking a body area corresponding to the target human face area in the target portrait area as the second image area.
3. The method of claim 2, wherein performing portrait detection on the distorted image to determine a target portrait area comprises:
segmenting the distorted image into portrait areas, and taking portrait areas meeting the following conditions as the target portrait area:
a connected area is formed among the portrait areas;
the width of the connected area is greater than or equal to a preset width threshold; and
at least two sides of the boundary of the connected area intersect adjacent boundaries of the distorted image.
4. The method of claim 2, further comprising:
carrying out face detection on the target portrait area to determine a face area;
taking the face region, of which the distance from the boundary of the distorted image is smaller than a preset distance threshold and the area of the face region is larger than or equal to a preset first area threshold, as a first face region;
taking a face area with the minimum distance from the boundary of the distorted image in the first face area as a second face area;
taking the face area which has the minimum distance with the second face area and has the corresponding body area larger than or equal to a preset second area threshold value in the first face area as a third face area;
and taking the second face area and the third face area as the target face area.
5. The method of claim 1, wherein the area dividing the distorted image, and the determining the first image area and the second image area in the distorted image comprises:
carrying out portrait detection and face detection on the distorted image, and determining non-face portrait areas;
taking a non-face portrait area, in the non-face portrait areas, that intersects two adjacent boundaries of the distorted image or intersects one boundary of the distorted image as the first image area; and
taking a non-face portrait area, in the non-face portrait areas, that intersects at least three boundaries of the distorted image or two non-adjacent boundaries of the distorted image as the second image area.
6. The method of claim 5, further comprising:
taking a non-face portrait area, in the non-face portrait areas, that does not intersect the boundary of the distorted image and is within the target field angle range as the second image area.
7. The method according to claim 5, wherein, after taking a non-face portrait area intersecting two adjacent boundaries of the distorted image or intersecting one boundary of the distorted image as the first image area, the method further comprises:
calculating the ratio of the maximum width of a target area in the non-face portrait area to the height of the non-face portrait area;
if the ratio is greater than or equal to a preset ratio threshold, taking the whole non-face portrait area as the first image area; and
if the ratio is smaller than the preset ratio threshold, taking the target area as the first image area.
8. A distorted image correction apparatus, comprising:
the distorted image acquisition module is used for acquiring a distorted image to be corrected;
the image area dividing module is used for dividing the distorted image into areas and determining a first image area and a second image area in the distorted image; wherein the first image area comprises image content requiring correction;
and the distorted image correction module is used for protecting the second image area and correcting the first image area so as to avoid deformation of the second image area when the first image area is corrected and finish correction of the distorted image.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 7 via execution of the executable instructions.
CN202110400359.8A 2021-04-14 2021-04-14 Distorted image correction method and device, computer readable medium and electronic equipment Pending CN115205131A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110400359.8A CN115205131A (en) 2021-04-14 2021-04-14 Distorted image correction method and device, computer readable medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN115205131A true CN115205131A (en) 2022-10-18

Family

ID=83574181




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination