WO2020259271A1 - Image distortion correction method and device - Google Patents

Image distortion correction method and device Download PDF

Info

Publication number
WO2020259271A1
WO2020259271A1 PCT/CN2020/095025 CN2020095025W WO2020259271A1 WO 2020259271 A1 WO2020259271 A1 WO 2020259271A1 CN 2020095025 W CN2020095025 W CN 2020095025W WO 2020259271 A1 WO2020259271 A1 WO 2020259271A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate
smoothing
image
weight
distance
Prior art date
Application number
PCT/CN2020/095025
Other languages
English (en)
French (fr)
Inventor
康健
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to EP20833621.4A priority Critical patent/EP3965054A4/en
Publication of WO2020259271A1 publication Critical patent/WO2020259271A1/zh
Priority to US17/525,628 priority patent/US11861813B2/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/685Vibration or motion blur correction performed by mechanical compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • G06T2207/30208Marker matrix

Definitions

  • This application relates to the field of image processing technology, and in particular to an image distortion correction method and device.
  • smart terminals are equipped with camera modules for users to take pictures.
  • Compared with a conventional lens camera, a wide-angle lens camera has a larger field of view (FOV), but its distortion is greater and the image edges are severely deformed.
  • The purpose of this application is to provide a solution to the technical problem in the prior art that processing the distorted image directly with an interpolation algorithm leads to low sharpness of the distortion-corrected image.
  • The first purpose of the present application is to propose an image distortion correction method to solve this technical problem.
  • the second purpose of this application is to provide an image distortion correction device.
  • the third purpose of this application is to propose an electronic device.
  • the fourth purpose of this application is to provide a non-transitory computer-readable storage medium.
  • An embodiment of the first aspect of the present application proposes an image distortion correction method, including the following steps: obtaining a distorted image to be corrected and the first coordinate of each pixel in the distorted image; obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtaining the distance between the first coordinate and the center coordinate point of the distorted image, and determining, according to a smoothing function, the smoothing coefficient corresponding to the distance, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
  • An embodiment of the second aspect of the present application proposes an image distortion correction device, including: a first acquisition module, configured to acquire a distorted image to be corrected and the first coordinate of each pixel in the distorted image; a second acquisition module, configured to acquire a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; a third acquisition module, configured to calculate the distance between the first coordinate and the center coordinate point of the distorted image; a determination module, configured to determine the smoothing coefficient corresponding to the distance according to a preset smoothing function, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and a correction module, configured to smoothly correct the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
  • An embodiment of the third aspect of the present application proposes an electronic device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor.
  • When the processor executes the computer program, the image distortion correction method described in the embodiment of the first aspect is implemented.
  • The image distortion correction method includes the following steps: obtaining a distorted image to be corrected and the first coordinate of each pixel in the distorted image; obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtaining the distance between the first coordinate and the center coordinate point of the distorted image, and determining the smoothing coefficient corresponding to the distance according to the smoothing function, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
  • An embodiment of the fourth aspect of the present application proposes a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the image distortion correction method described in the embodiment of the first aspect is implemented.
  • The image distortion correction method includes the following steps: obtaining a distorted image to be corrected and the first coordinate of each pixel in the distorted image; obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtaining the distance between the first coordinate and the center coordinate point of the distorted image, and determining the smoothing coefficient corresponding to the distance according to the smoothing function, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
  • The traditional wide-angle distortion correction algorithm is improved: while still using the bilinear interpolation algorithm, an additional weighted smoothing function is used to perform the distortion correction processing.
  • Compared with the traditional distortion correction algorithm, the distribution of distortion over the whole image is considered, and differentiated distortion correction is performed in different regions of the image.
  • While keeping the algorithm highly time-efficient, this not only reduces the loss of image sharpness after distortion correction but also ensures that image regions with larger distortion can have their distortion completely removed, achieving a better photo-taking experience.
  • FIG. 1 is a schematic diagram of a hardware flow provided by an embodiment of the application
  • Fig. 2 is a flowchart of an image distortion correction method according to an embodiment of the present application
  • Fig. 3 is a schematic diagram of a smoothing function according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of a bilinear interpolation algorithm according to an embodiment of the present application.
  • Fig. 5 is a flowchart of an image distortion correction method according to a specific embodiment of the present application.
  • Fig. 6 is a schematic structural diagram of an image distortion correction device according to a first embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of an image distortion correction device according to a second embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an image distortion correction device according to a third embodiment of the present application.
  • Fig. 9 is a schematic structural diagram of an image distortion correction device according to a fourth embodiment of the present application.
  • the image distortion correction method of the embodiment of the present application includes the following steps: obtaining a distorted image to be corrected, and a first coordinate of each pixel in the distorted image; obtaining a second coordinate corresponding to the first coordinate, where , The second coordinate is the undistorted coordinate corresponding to the first coordinate; the distance between the first coordinate and the center coordinate point of the distorted image is obtained, and the smoothing coefficient corresponding to the distance is determined according to the smoothing function, where the smoothing function is used to indicate the distance Proportional relationship with the smoothing coefficient; smoothly correct the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion corrected image.
  • obtaining the second coordinate corresponding to the first coordinate includes: determining the internal parameter of the camera module that takes the distorted image; calculating the internal parameter and the first coordinate according to a preset algorithm to obtain the second coordinate.
  • The smoothing function is: [equation image], where x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
  • smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain the distortion corrected image includes: calculating the smoothing coefficient, the second coordinate, and the first coordinate according to a preset algorithm, and determining and The floating-point coordinates corresponding to each first coordinate; the floating-point coordinate interpolation is calculated to obtain the integer coordinate point and pixel value of each pixel; the distortion correction image is obtained according to the integer coordinate point and the pixel value.
  • smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain the distortion corrected image includes: determining the first weight of the second coordinate and the second weight of the first coordinate according to the smoothing coefficient , Where the first weight is proportional to the smoothing coefficient, and the second weight is inversely proportional to the smoothing coefficient; calculate the first product of the first weight and the second coordinate, and the second product of the second weight and the first coordinate ; Perform smooth correction on the first coordinate according to the sum of the first product and the second product to obtain a distortion corrected image.
  • the image distortion correction device of the embodiment of the present application includes a first acquisition module 10, a second acquisition module 20, a third acquisition module 30, a determination module 40 and a correction module 50.
  • the first acquiring module 10 is used to acquire the distorted image to be corrected and the first coordinate of each pixel in the distorted image.
  • the second acquisition module 20 is configured to calculate a second coordinate corresponding to the first coordinate, where the second coordinate is an undistorted coordinate corresponding to the first coordinate.
  • the third acquiring module 30 is used to calculate the distance between the first coordinate and the center coordinate point of the distorted image.
  • the determining module 40 is configured to determine a smoothing coefficient corresponding to the distance according to a preset smoothing function, where the smoothing function is used to indicate a proportional relationship between the distance and the smoothing coefficient.
  • the correction module 50 is used to smoothly correct the first coordinate according to the smoothing processing coefficient and the second coordinate to obtain a distortion corrected image.
  • The second acquisition module 20 includes a first determining unit 21 and a first obtaining unit 22.
  • the first determining unit 21 is used to determine the internal parameters of the camera module that takes the distorted image.
  • the first obtaining unit 22 is configured to calculate the internal parameters and the first coordinate according to a preset algorithm to obtain the second coordinate.
  • The smoothing function is: [equation image], where x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
  • the correction module 50 includes a second determination unit 51, a first calculation unit 52, and a second acquisition unit 53.
  • the second determining unit 51 is configured to calculate the smoothing coefficient, the second coordinate, and the first coordinate according to a preset algorithm, and determine the floating-point coordinate corresponding to each first coordinate.
  • the first calculation unit 52 is configured to interpolate floating-point coordinates to obtain integer coordinate points and pixel values of each pixel point.
  • the second acquiring unit 53 is configured to acquire a distortion corrected image according to integer coordinate points and pixel values.
  • the correction module 50 includes a third determination unit 54, a second calculation unit 55 and a correction unit 56.
  • the third determining unit 54 is configured to determine the first weight of the second coordinate and the second weight of the first coordinate according to the smoothing coefficient, wherein the first weight and the smoothing coefficient are in a proportional relationship, and the second weight is inversely proportional to the smoothing coefficient relationship.
  • the second calculation unit 55 is used to calculate the first product of the first weight and the second coordinate, and the second product of the second weight and the first coordinate.
  • the correction unit 56 is configured to smoothly correct the first coordinate according to the sum of the first product and the second product to obtain a distortion corrected image.
  • the electronic device of the embodiment of the present application includes a memory, a processor, and a computer program stored on the memory and running on the processor.
  • the processor executes the computer program, the following steps are implemented: acquiring a distorted image to be corrected, and distorted image The first coordinate of each pixel; obtain the second coordinate corresponding to the first coordinate, where the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtain the distance between the first coordinate and the center coordinate point of the distorted image, according to Smoothing function, determine the smoothing coefficient corresponding to the distance, where the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; smoothly correct the first coordinate according to the smoothing coefficient and the second coordinate to obtain the distortion correction image.
  • the following steps may also be implemented: determining the internal parameters of the camera module that took the distorted image; calculating the internal parameters and the first coordinates according to a preset algorithm, and obtaining the second coordinates.
  • The smoothing function is: [equation image], where x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
  • the following steps may be implemented: calculate the smoothing coefficient, the second coordinate, and the first coordinate according to a preset algorithm, and determine the floating point type corresponding to each first coordinate. Coordinates; the floating-point coordinate interpolation is calculated to obtain the integer coordinate point and pixel value of each pixel; the distortion correction image is obtained according to the integer coordinate point and pixel value.
  • the following steps may be implemented: determining the first weight of the second coordinate and the second weight of the first coordinate according to the smoothing coefficient, wherein the first weight and the smoothing coefficient The second weight is inversely proportional to the smoothing coefficient; the first product of the first weight and the second coordinate, and the second product of the second weight and the first coordinate are calculated; according to the first product and the second product And smoothly correct the first coordinate to obtain a distortion corrected image.
  • a non-transitory computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the following steps are implemented: obtaining the distorted image to be corrected, and the value of each pixel in the distorted image The first coordinate; obtain the second coordinate corresponding to the first coordinate, where the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtain the distance between the first coordinate and the center coordinate point of the distorted image, and determine according to the smoothing function The smoothing coefficient corresponding to the distance, where the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; the first coordinate is smoothly corrected according to the smoothing coefficient and the second coordinate to obtain a distortion corrected image.
  • the following steps may be implemented: determining the internal parameters of the camera module that took the distorted image; calculating the internal parameters and the first coordinates according to a preset algorithm, and obtaining the second coordinates.
  • The smoothing function is: [equation image], where x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
  • the following steps may be implemented: calculate the smoothing coefficient, the second coordinate, and the first coordinate according to a preset algorithm, and determine the floating-point type corresponding to each first coordinate. Coordinates; the floating-point coordinate interpolation is calculated to obtain the integer coordinate point and pixel value of each pixel; the distortion correction image is obtained according to the integer coordinate point and pixel value.
  • the following steps may also be implemented: the first weight of the second coordinate and the second weight of the first coordinate are determined according to the smoothing coefficient, wherein the first weight and the smoothing coefficient The second weight is inversely proportional to the smoothing coefficient; the first product of the first weight and the second coordinate, and the second product of the second weight and the first coordinate are calculated; according to the first product and the second product And smoothly correct the first coordinate to obtain a distortion corrected image.
  • the application subject of the image distortion correction method in the embodiment of the present application is a smart terminal with a camera module including a wide-angle camera.
  • the smart terminal may be a mobile phone, a notebook computer, a smart wearable device, and the like.
  • This application proposes a new type of distortion correction method that, by introducing a weighted smoothing function, applies different degrees of distortion correction to different regions of the image, minimizing the loss of image sharpness while keeping the algorithm highly time-efficient.
  • The image distortion correction of the embodiments of this application is executed by the CPU of the smart terminal. As shown in the hardware flowchart of the solution in FIG. 1, on the smart terminal the CMOS sensor of the wide-angle camera first captures light and converts the optical signal into raw-format data; the ISP then processes the raw data and converts the image into YUV format; the CPU then performs the computation and applies distortion correction to the YUV image using the camera intrinsic parameters known in advance; finally, after the distortion correction processing, the YUV data is sent to the display for display and, at the same time, encoded into JPEG format by the encoder and stored in the memory of the smart terminal.
  • FIG. 2 is a flowchart of an image distortion correction method according to an embodiment of the present application.
  • The distorted-image processing in the embodiments of the present application is described by taking a distorted image captured by a wide-angle camera as an example. As shown in FIG. 2, the method includes:
  • Step 101 Obtain a distorted image to be corrected and the first coordinate of each pixel in the distorted image.
  • the distorted image taken before by the camera module can be read from the system memory, and the distorted image taken by the camera module in real time can also be obtained.
  • the distorted image can be an image after traditional de-distortion processing.
  • the image after distortion processing still has distortion, therefore, at this time, it is still defined as a distorted image in this application.
  • the first coordinate of each pixel in the distorted image is obtained based on the image recognition algorithm.
  • Step 102 Acquire a second coordinate corresponding to the first coordinate, where the second coordinate is an undistorted coordinate corresponding to the first coordinate.
  • The first coordinate of the distorted image is a coordinate with a certain amount of distortion. If the camera module that captured the distorted image had no distortion, the coordinate corresponding to the first coordinate would be the second coordinate; therefore, to correct the first coordinate, the undistorted second coordinate must be obtained.
  • the internal parameters determine the degree of distortion of the first coordinate.
  • the camera module is controlled to shoot the training object at multiple angles to obtain multiple reference images, where the training object has a relatively regular shape and contour marks, etc., so that it can be quickly found in the corresponding image.
  • the reference point for calibration may be a checkerboard pattern, so that the pixel points of each checkerboard corner can be easily detected, and the checkerboard corner in the checkerboard pattern can be used as the corresponding reference point.
  • the image coordinates corresponding to the reference points in the training object are obtained in each reference image.
  • the world coordinates of the reference points are measured in advance, and the internal parameters of the camera module can be calculated based on the world coordinates and image coordinates of the prestored reference points.
  • The internal parameters may include the x coordinate of the principal point cx, the y coordinate of the principal point cy, the normalized x-direction focal length fx, the normalized y-direction focal length fy, the radial distortion coefficients k1, k2, k3, and the tangential distortion coefficients p1, p2; further, the second coordinate is obtained by calculating the distorted first coordinate and the internal parameters according to a preset calculation formula.
  • the training object is a checkerboard
  • First, the camera is used to take 6-9 full-size images of the planar checkerboard pattern board at different angles, ensuring that the checkerboard pattern fills the camera's entire FOV, so that the pixel position of each checkerboard corner is easy to detect.
  • Sub-pixel checkerboard corner detection is performed on the captured reference images to obtain the image coordinates of the checkerboard corners of each picture. Since the calibration checkerboard is specially made, the coordinates of its corner points in three-dimensional world space are known in advance, so the world coordinates of the checkerboard reference points can be obtained. From the obtained image coordinates and world coordinates of the reference points, the internal parameters of the camera can be computed using the correspondence between the image plane and the checkerboard plane.
  • For the second coordinate (u0, v0), the corresponding camera coordinates (that is, the coordinates of the undistorted point in the camera coordinate system) are (x0, y0), where:
  • x0 = (u0 - cx)/fx; y0 = (v0 - cy)/fy
  • The coordinates of the distortion point corresponding to these camera coordinates are (x', y'), where:
  • x' = x0*(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*x0*y0 + p2*(r^2 + 2*x0^2);
  • y' = y0*(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p2*x0*y0 + p1*(r^2 + 2*y0^2);
  • where r^2 = x0^2 + y0^2.
  • Based on the obtained distortion point, its distortion coordinates (first coordinates) in the distorted image are calculated as follows:
  • ud = fx*x' + cx; vd = fy*y' + cy
  • In this way, the distortion coordinates (first coordinates) (ud, vd) corresponding to the undistorted coordinate point (second coordinates) (u0, v0) of the undistorted image are obtained. Based on this correspondence, the second coordinates can be calculated.
  • As another possible implementation, a deep model is trained in advance on a large number of sample images; the input of the model is the distorted first coordinate and the output is the undistorted second coordinate. Therefore, the deep model corresponding to the camera module can be obtained, and the corresponding second coordinate can be determined based on this model.
  • Step 103 Obtain the distance between the first coordinate and the center coordinate point of the distorted image, and determine the smoothing coefficient corresponding to the distance according to the smoothing function, where the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient.
  • The distance between the distortion coordinate and the center coordinate point of the distorted image can be calculated, and the smoothing coefficient is calculated from the preset smoothing function and the distance; the smoothing coefficient is used to correct the distorted image.
  • the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient, that is to say, the closer the area to the edge of the image, the greater the corresponding distance, the larger the corresponding smoothing coefficient , Will get a stronger correction process, the farther the area from the edge of the image, the smaller the corresponding distance, the smaller the corresponding smoothing coefficient, and the weaker the correction process will be. Therefore, it is obvious that the above-mentioned smoothing function can ensure that the degree of de-correction of the distorted image from the center to the edge is gradually increased, so as to ensure a smooth transition and improve the realness of the image after processing.
  • Based on the smoothing function, smooth correction of the distorted image can be achieved.
  • In an embodiment of the present application, the following formula (1) can be used to calculate the distance, where x is the normalized Euclidean distance from the current first coordinate point (ud, vd) to the image center coordinate point (u', v'): [equation image of formula (1)]
  • As a possible example, the smoothing function is the following formula (2): [equation image of formula (2)]
  • where x is the normalized Euclidean distance corresponding to the distance, and S(x) is the smoothing coefficient.
  • the corresponding smoothing function is shown in Figure 3.
  • the horizontal axis is the Euclidean distance
  • the vertical axis is the value of the smoothing coefficient. As shown in Figure 3, when the Euclidean distance is larger, the value of the smoothing coefficient is larger, and the value of the smoothing coefficient increases smoothly, which preserves the processing quality of subsequent images.
  • Step 104 Perform smoothing correction on the first coordinate according to the smoothing processing coefficient and the second coordinate to obtain a distortion corrected image.
  • the first coordinate is smoothed and corrected by combining the smoothing coefficient and the second coordinate.
  • Because the undistorted coordinates are combined during distortion correction, the sharpness of the image can be better preserved. The smoothing coefficient is essentially a positively correlated function of the distance, and since wide-angle camera images tend to have smaller distortion in the central area and larger distortion at the edges, while the human eye is more sensitive to the sharpness of the image center than to the edges, the correction of the central distortion is weakened and the degree of distortion correction increases smoothly from the image center to the image edge. This preserves the sharpness of the image center while still ensuring sufficient distortion correction at the image edges.
  • a preset formula is used to correct the first coordinate.
  • The preset formula (3) is as follows, where (u1, v1) are the floating-point coordinates, (u0, v0) are the undistorted coordinates, (ud, vd) are the distortion coordinates, and s is the smoothing coefficient. Based on the above description, the closer to the edge area, the larger s is and the closer the obtained (u1, v1) is to the distortion coordinates (ud, vd), so the higher the degree of correction; the closer to the central area, the smaller s is and the closer the obtained (u1, v1) is to the undistorted coordinates (u0, v0), so the smaller the degree of correction:
  • (u1, v1) = (ud, vd)*s + (u0, v0)*(1-s)    Formula (3)
  • the floating-point coordinate interpolation is calculated to obtain the integer coordinate point and pixel value of each pixel, and the de-distorted image is obtained according to the integer coordinate point and pixel value.
  • (u1, v1) are usually floating-point values, while actual image coordinates are integers, so the pixel gray value of the integer coordinate point (u2, v2) needs to be interpolated from the pixels neighboring the floating-point coordinates (u1, v1) (the RGB channels can be interpolated separately).
  • The bilinear interpolation method uses the gray values of the four pixels adjacent to the pixel to be determined to interpolate linearly in the x and y directions.
  • the schematic diagram is shown in Figure 4.
  • For an unknown integer coordinate point (u2, v2), its four known neighboring computed floating-point coordinate points in the u and v directions are (u1', v1'), (u1'', v1'), (u1', v1'') and (u1'', v1'').
  • In the first step, linear interpolation is performed in the u direction between (u1'', v1') and (u1', v1') to obtain (u2, v1'), and between (u1'', v1'') and (u1', v1'') to obtain (u2, v1''); in the second step, linear interpolation is performed in the v direction between (u2, v1') and (u2, v1'') to obtain the pixel gray value of the integer coordinate point (u2, v2). Traversing all pixel coordinate points (u2, v2) of the whole image yields the distortion-corrected image.
  • As an example, the overall flow of the algorithm is shown in FIG. 5: the camera intrinsic parameters are obtained by Zhang's calibration method (principal point coordinates cx, cy, focal lengths fx, fy, radial distortion parameters k1, k2, k3, and tangential distortion parameters p1, p2); the distortion coordinates (ud, vd) of the undistorted coordinates (u0, v0) in the distorted image are calculated; (ud, vd) and (u0, v0) are weighted and fused with the smoothing function proposed above to obtain the fused floating-point coordinates (u1, v1); and bilinear interpolation of (u1, v1) gives the final distortion-corrected image coordinates (u2, v2).
  • As another possible implementation, the first weight of the second coordinate and the second weight of the first coordinate are determined according to the smoothing coefficient, where the first weight is proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient; the first product of the first weight and the second coordinate and the second product of the second weight and the first coordinate are calculated, and the first coordinate is smoothly corrected according to the sum of the two products to obtain the distortion-corrected image.
  • A correction degree can be obtained, and a correction adjustment coefficient determined according to the correction adjustment degree. For example, a progress bar for the correction level can be provided, and the correction adjustment coefficient determined from the correspondence between the progress bar and the correction level; or the subject of the distorted image can be detected automatically and different correction levels applied based on the type and color of the subject. For example, when a face image is captured, the correction degree is higher, and when a night scene is captured, the correction degree is higher than for a daytime image.
  • The image distortion correction method of the embodiments of the present application obtains the distorted image to be corrected and the first coordinate of each pixel in the distorted image, calculates the second coordinate corresponding to the first coordinate, where the second coordinate is the undistorted coordinate corresponding to the first coordinate, then calculates the distance between the first coordinate and the center coordinate point of the distorted image and determines the smoothing coefficient corresponding to the distance according to the preset smoothing function, where the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient, and finally smoothly corrects the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
  • The distortion correction processing is performed by additionally using a weighted smoothing function.
  • Compared with the traditional distortion correction algorithm, it takes into account the distribution of distortion over the entire image and realizes differentiated distortion correction in different regions of the image.
  • While keeping the algorithm highly time-efficient, it not only reduces the loss of image sharpness after distortion correction but also ensures that image regions with larger distortion can have their distortion completely removed, achieving a better photo-taking experience.
  • FIG. 6 is a schematic structural diagram of an image distortion correction device according to an embodiment of the present application.
  • the image distortion correction device includes: a first acquisition module 10, a second acquisition module 20, a third acquisition module 30, a determination module 40, and a correction module 50, wherein,
  • the first acquiring module 10 is used to acquire the distorted image to be corrected and the first coordinate of each pixel in the distorted image.
  • the first acquiring module 10 can read the distorted image taken before by the camera module from the system memory, and can also acquire the distorted image taken by the camera module in real time.
  • the distorted image may be an image after traditional de-distortion processing. Since the image after the distortion processing in the prior art still has distortion, at this time, it is still defined as a distorted image in this application.
  • the first obtaining module 10 obtains the first coordinates of each pixel in the distorted image based on an image recognition algorithm.
  • the second acquisition module 20 is configured to calculate a second coordinate corresponding to the first coordinate, where the second coordinate is an undistorted coordinate corresponding to the first coordinate.
  • The first coordinate of the distorted image is a coordinate with a certain amount of distortion. If the camera module that captured the distorted image had no distortion, the coordinate corresponding to the first coordinate would be the second coordinate; therefore, to correct the first coordinate, the second acquisition module 20 can acquire the undistorted second coordinate.
  • The second acquisition module 20 includes: a first determining unit 21 and a first obtaining unit 22, wherein,
  • the first determining unit 21 is used to determine the internal parameters of the camera module that takes the distorted image.
  • the first obtaining unit 22 is configured to calculate the internal parameters and the first coordinate according to a preset algorithm to obtain the second coordinate.
  • the camera module is controlled to shoot the training object at multiple angles to obtain multiple reference images, where the training object has a relatively regular shape and contour marks, etc., so that it can be quickly found in the corresponding image.
  • the reference point for calibration may be a checkerboard pattern, so that the pixel points of each checkerboard corner can be easily detected, and the checkerboard corner in the checkerboard pattern can be used as the corresponding reference point.
  • the first determining unit 21 obtains the image coordinates corresponding to the reference points in the training object in each reference image.
  • The world coordinates of the reference points are measured in advance, so the internal parameters of the camera module can be calculated based on the pre-stored world coordinates and image coordinates of the reference points.
  • The internal parameters can include the x coordinate of the principal point cx, the y coordinate of the principal point cy, the normalized x-direction focal length fx, the normalized y-direction focal length fy, the radial distortion coefficients k1, k2, k3, and the tangential distortion coefficients p1, p2; further, the first obtaining unit 22 calculates the second coordinate from the distorted first coordinate and the internal parameters according to a preset calculation formula.
  • the third acquiring module 30 is used to calculate the distance between the first coordinate and the center coordinate point of the distorted image.
  • the third acquiring module 30 can calculate the distance between the distortion coordinates and the center coordinate point of the distorted image. Calculate the smoothing coefficient according to the preset smoothing function and distance, and the smoothing coefficient is used to correct the distorted image.
  • the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient, that is to say, the closer the area to the edge of the image, the greater the corresponding distance, the larger the corresponding smoothing coefficient , Will get a stronger correction process, the farther the area from the edge of the image, the smaller the corresponding distance, the smaller the corresponding smoothing coefficient, and the weaker the correction process will be. Therefore, it is obvious that the above-mentioned smoothing function can ensure that the degree of de-correction of the distorted image from the center to the edge is gradually increased, so as to ensure a smooth transition and improve the realness of the image after processing.
  • Based on the smoothing function, smooth correction of the distorted image can be achieved.
  • the determining module 40 is configured to determine a smoothing coefficient corresponding to the distance according to a preset smoothing function, where the smoothing function is used to indicate a proportional relationship between the distance and the smoothing coefficient.
  • the correction module 50 is configured to smoothly correct the first coordinate according to the smoothing processing coefficient and the second coordinate to obtain a distortion corrected image.
  • the first coordinate is smoothed and corrected by combining the smoothing coefficient and the second coordinate.
  • Because the undistorted coordinates are combined during distortion correction, the sharpness of the image can be better preserved. The smoothing coefficient is essentially a positively correlated function of the distance, and since wide-angle camera images tend to have smaller distortion in the central area and larger distortion at the edges, while the human eye is more sensitive to the sharpness of the image center than to the edges, the correction module 50 weakens the correction of the central distortion so that the degree of distortion correction increases smoothly from the image center to the image edge. This preserves the sharpness of the image center while still ensuring sufficient distortion correction at the image edges.
  • the correction module 50 includes: a second determination unit 51, a first calculation unit 52, and a second acquisition unit 53, wherein,
  • the second determining unit 51 is configured to calculate the smoothing processing coefficient, the second coordinate, and the first coordinate according to a preset algorithm, and determine the floating-point coordinate corresponding to each first coordinate.
  • the first calculation unit 52 is configured to interpolate floating-point coordinates to obtain integer coordinate points and pixel values of each pixel point.
  • the second acquiring unit 53 is configured to acquire a distortion corrected image according to integer coordinate points and pixel values.
  • the correction module 50 includes: a third determination unit 54, a second calculation unit 55, and a correction unit 56, where
  • The third determining unit 54 is configured to determine the first weight of the second coordinate and the second weight of the first coordinate according to the smoothing coefficient, wherein the first weight is proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient.
  • the second calculation unit 55 is configured to calculate the first product of the first weight and the second coordinate, and the second product of the second weight and the first coordinate.
  • the correction unit 56 is configured to smoothly correct the first coordinate according to the sum of the first product and the second product, and obtain a distortion corrected image.
  • the image distortion correction device of the embodiment of the present application is improved on the basis of the traditional wide-angle distortion correction algorithm, and on the basis of still using the bilinear interpolation algorithm, the distortion correction processing is performed by additionally using a weighted smoothing function.
  • Compared with the traditional distortion correction algorithm, it considers the distribution of distortion over the whole image and realizes differentiated distortion correction in different regions of the image.
  • While keeping the algorithm highly time-efficient, it not only reduces the loss of image sharpness after distortion correction but also ensures that image regions with larger distortion can have their distortion completely removed, achieving a better photo-taking experience.
  • this application also proposes an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor.
  • When the processor executes the computer program, the image distortion correction method described in the foregoing embodiments is implemented.
  • this application also proposes a non-transitory computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the image distortion correction method as described in the foregoing method embodiment is implemented.
  • The terms "first" and "second" are only used for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Therefore, features defined with "first" and "second" may explicitly or implicitly include at least one of the features. In the description of the present application, "a plurality of" means at least two, such as two, three, etc., unless specifically defined otherwise.
  • a "computer-readable medium” can be any device that can contain, store, communicate, propagate, or transmit a program for use by an instruction execution system, device, or device or in combination with these instruction execution systems, devices, or devices.
  • computer readable media include the following: electrical connections (electronic devices) with one or more wiring, portable computer disk cases (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable and editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in the computer memory.
  • each part of this application can be implemented by hardware, software, firmware, or a combination thereof.
  • Multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system, or by hardware such as discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
  • the functional units in the various embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer readable storage medium.
  • the aforementioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Geometry (AREA)

Abstract

An image distortion correction method, an image distortion correction device, an electronic device, and a computer-readable storage medium. The method includes: (101) obtaining a distorted image to be corrected and the first coordinate of each pixel in the distorted image; (102) obtaining a second coordinate corresponding to the first coordinate; (103) determining, according to a smoothing function, the smoothing coefficient corresponding to the distance; and (104) smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.

Description

Image distortion correction method and device
Priority Information
This application claims priority to and the benefit of Chinese patent application No. 201910550603.1, filed with the China National Intellectual Property Administration on June 24, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing technology, and in particular to an image distortion correction method and device.
Background
At present, with the progress of smart terminal manufacturing technology, smart terminals are equipped with camera modules for users to take pictures, and wide-angle cameras are commonly installed on smart terminals. Compared with a conventional lens camera, a wide-angle lens camera has a larger field of view (FOV), but the wide-angle lens produces greater distortion and the image edges are severely deformed. In the related art, to compensate for the distortion of images captured by a wide-angle camera, distortion correction processing must be applied to the image.
Summary of the Invention
This application aims to provide a solution to the technical problem in the prior art that processing the distorted image directly with an interpolation algorithm leads to low sharpness of the distortion-corrected image.
To this end, the first objective of this application is to propose an image distortion correction method to solve the technical problem in the prior art that processing the distorted image directly with an interpolation algorithm leads to low sharpness of the distortion-corrected image.
The second objective of this application is to propose an image distortion correction device.
The third objective of this application is to propose an electronic device.
The fourth objective of this application is to propose a non-transitory computer-readable storage medium.
To achieve the above objectives, an embodiment of the first aspect of this application proposes an image distortion correction method, including the following steps: obtaining a distorted image to be corrected and the first coordinate of each pixel in the distorted image; obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtaining the distance between the first coordinate and the center coordinate point of the distorted image, and determining, according to a smoothing function, the smoothing coefficient corresponding to the distance, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
An embodiment of the second aspect of this application proposes an image distortion correction device, including: a first acquisition module, configured to acquire a distorted image to be corrected and the first coordinate of each pixel in the distorted image; a second acquisition module, configured to acquire a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; a third acquisition module, configured to calculate the distance between the first coordinate and the center coordinate point of the distorted image; a determination module, configured to determine the smoothing coefficient corresponding to the distance according to a preset smoothing function, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and a correction module, configured to smoothly correct the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
An embodiment of the third aspect of this application proposes an electronic device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor; when the processor executes the computer program, the image distortion correction method described in the embodiment of the first aspect is implemented. The image distortion correction method includes the following steps: obtaining a distorted image to be corrected and the first coordinate of each pixel in the distorted image; obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtaining the distance between the first coordinate and the center coordinate point of the distorted image, and determining, according to a smoothing function, the smoothing coefficient corresponding to the distance, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
An embodiment of the fourth aspect of this application proposes a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the image distortion correction method described in the embodiment of the first aspect is implemented. The image distortion correction method includes the following steps: obtaining a distorted image to be corrected and the first coordinate of each pixel in the distorted image; obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtaining the distance between the first coordinate and the center coordinate point of the distorted image, and determining, according to a smoothing function, the smoothing coefficient corresponding to the distance, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
The technical solutions provided by the embodiments of this application may include the following beneficial effects:
The traditional wide-angle distortion correction algorithm is improved: while still using the bilinear interpolation algorithm, an additional weighted smoothing function is used to perform the distortion correction processing. Compared with the traditional distortion correction algorithm, the distribution of distortion over the whole image is considered and differentiated distortion correction is performed in different regions of the image. While keeping the algorithm highly time-efficient, this not only reduces the loss of image sharpness after distortion correction but also ensures that image regions with larger distortion can have their distortion completely removed, achieving a better photo-taking experience.
Additional aspects and advantages of this application will be partly given in the following description, partly become apparent from the following description, or be learned through practice of this application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a hardware flow provided by an embodiment of this application;
FIG. 2 is a flowchart of an image distortion correction method according to an embodiment of this application;
FIG. 3 is a schematic diagram of a smoothing function according to an embodiment of this application;
FIG. 4 is a schematic diagram of a bilinear interpolation algorithm according to an embodiment of this application;
FIG. 5 is a flowchart of an image distortion correction method according to a specific embodiment of this application;
FIG. 6 is a schematic structural diagram of an image distortion correction device according to a first embodiment of this application;
FIG. 7 is a schematic structural diagram of an image distortion correction device according to a second embodiment of this application;
FIG. 8 is a schematic structural diagram of an image distortion correction device according to a third embodiment of this application;
FIG. 9 is a schematic structural diagram of an image distortion correction device according to a fourth embodiment of this application.
Detailed Description of the Embodiments
Embodiments of this application are described in detail below, examples of which are shown in the accompanying drawings, where the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain this application, and should not be construed as limiting it.
Referring to FIG. 2, the image distortion correction method of an embodiment of this application includes the following steps: obtaining a distorted image to be corrected and the first coordinate of each pixel in the distorted image; obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtaining the distance between the first coordinate and the center coordinate point of the distorted image, and determining, according to a smoothing function, the smoothing coefficient corresponding to the distance, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
In some embodiments, obtaining the second coordinate corresponding to the first coordinate includes: determining the intrinsic parameters of the camera module that captured the distorted image; and calculating the second coordinate from the intrinsic parameters and the first coordinate according to a preset algorithm.
In some embodiments, the smoothing function is:
[Equation image: PCTCN2020095025-appb-000001]
where x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
In some embodiments, smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain the distortion-corrected image includes: calculating the smoothing coefficient, the second coordinate, and the first coordinate according to a preset algorithm to determine the floating-point coordinate corresponding to each first coordinate; interpolating the floating-point coordinates to obtain the integer coordinate point and pixel value of each pixel; and obtaining the distortion-corrected image from the integer coordinate points and pixel values.
In some embodiments, smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain the distortion-corrected image includes: determining a first weight of the second coordinate and a second weight of the first coordinate according to the smoothing coefficient, wherein the first weight is proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient; calculating a first product of the first weight and the second coordinate and a second product of the second weight and the first coordinate; and smoothly correcting the first coordinate according to the sum of the first product and the second product to obtain the distortion-corrected image.
Referring to FIG. 6, the image distortion correction device of an embodiment of this application includes a first acquisition module 10, a second acquisition module 20, a third acquisition module 30, a determination module 40, and a correction module 50. The first acquisition module 10 is configured to acquire a distorted image to be corrected and the first coordinate of each pixel in the distorted image. The second acquisition module 20 is configured to calculate a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate. The third acquisition module 30 is configured to calculate the distance between the first coordinate and the center coordinate point of the distorted image. The determination module 40 is configured to determine the smoothing coefficient corresponding to the distance according to a preset smoothing function, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient. The correction module 50 is configured to smoothly correct the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
Referring to FIG. 7, the second acquisition module 20 includes a first determining unit 21 and a first obtaining unit 22. The first determining unit 21 is configured to determine the intrinsic parameters of the camera module that captured the distorted image. The first obtaining unit 22 is configured to calculate the second coordinate from the intrinsic parameters and the first coordinate according to a preset algorithm.
In some embodiments, the smoothing function is:
[Equation image: PCTCN2020095025-appb-000002]
where x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
Referring to FIG. 8, in some embodiments, the correction module 50 includes a second determining unit 51, a first calculation unit 52, and a second obtaining unit 53. The second determining unit 51 is configured to calculate the smoothing coefficient, the second coordinate, and the first coordinate according to a preset algorithm and determine the floating-point coordinate corresponding to each first coordinate. The first calculation unit 52 is configured to interpolate the floating-point coordinates to obtain the integer coordinate point and pixel value of each pixel. The second obtaining unit 53 is configured to obtain the distortion-corrected image from the integer coordinate points and pixel values.
Referring to FIG. 9, in some embodiments, the correction module 50 includes a third determining unit 54, a second calculation unit 55, and a correction unit 56. The third determining unit 54 is configured to determine a first weight of the second coordinate and a second weight of the first coordinate according to the smoothing coefficient, wherein the first weight is proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient. The second calculation unit 55 is configured to calculate a first product of the first weight and the second coordinate and a second product of the second weight and the first coordinate. The correction unit 56 is configured to smoothly correct the first coordinate according to the sum of the first product and the second product to obtain the distortion-corrected image.
The electronic device of an embodiment of this application includes a memory, a processor, and a computer program stored on the memory and capable of running on the processor. When the processor executes the computer program, the following steps are implemented: obtaining a distorted image to be corrected and the first coordinate of each pixel in the distorted image; obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtaining the distance between the first coordinate and the center coordinate point of the distorted image, and determining, according to a smoothing function, the smoothing coefficient corresponding to the distance, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
In some embodiments, when the processor executes the computer program, the following steps may also be implemented: determining the intrinsic parameters of the camera module that captured the distorted image; and calculating the second coordinate from the intrinsic parameters and the first coordinate according to a preset algorithm.
In some embodiments, the smoothing function is:
[Equation image: PCTCN2020095025-appb-000003]
where x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
In some embodiments, when the processor executes the computer program, the following steps may also be implemented: calculating the smoothing coefficient, the second coordinate, and the first coordinate according to a preset algorithm to determine the floating-point coordinate corresponding to each first coordinate; interpolating the floating-point coordinates to obtain the integer coordinate point and pixel value of each pixel; and obtaining the distortion-corrected image from the integer coordinate points and pixel values.
In some embodiments, when the processor executes the computer program, the following steps may also be implemented: determining a first weight of the second coordinate and a second weight of the first coordinate according to the smoothing coefficient, wherein the first weight is proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient; calculating a first product of the first weight and the second coordinate and a second product of the second weight and the first coordinate; and smoothly correcting the first coordinate according to the sum of the first product and the second product to obtain the distortion-corrected image.
A non-transitory computer-readable storage medium of an embodiment of this application has a computer program stored thereon. When the computer program is executed by a processor, the following steps are implemented: obtaining a distorted image to be corrected and the first coordinate of each pixel in the distorted image; obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate; obtaining the distance between the first coordinate and the center coordinate point of the distorted image, and determining, according to a smoothing function, the smoothing coefficient corresponding to the distance, wherein the smoothing function is used to indicate the proportional relationship between the distance and the smoothing coefficient; and smoothly correcting the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
In some embodiments, when the computer program is executed by the processor, the following steps may also be implemented: determining the intrinsic parameters of the camera module that captured the distorted image; and calculating the second coordinate from the intrinsic parameters and the first coordinate according to a preset algorithm.
In some embodiments, the smoothing function is:
[Equation image: PCTCN2020095025-appb-000004]
where x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
In some embodiments, when the computer program is executed by the processor, the following steps may also be implemented: calculating the smoothing coefficient, the second coordinate, and the first coordinate according to a preset algorithm to determine the floating-point coordinate corresponding to each first coordinate; interpolating the floating-point coordinates to obtain the integer coordinate point and pixel value of each pixel; and obtaining the distortion-corrected image from the integer coordinate points and pixel values.
In some embodiments, when the computer program is executed by the processor, the following steps may also be implemented: determining a first weight of the second coordinate and a second weight of the first coordinate according to the smoothing coefficient, wherein the first weight is proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient; calculating a first product of the first weight and the second coordinate and a second product of the second weight and the first coordinate; and smoothly correcting the first coordinate according to the sum of the first product and the second product to obtain the distortion-corrected image.
The image distortion correction method and device of the embodiments of this application are described below with reference to the accompanying drawings. The image distortion correction method of the embodiments of this application is applied to a smart terminal whose camera module includes a wide-angle camera; the smart terminal may be a mobile phone, a notebook computer, a smart wearable device, and the like.
At present, on smart terminals, only the influence of the gray values of several direct neighbors around the sample point to be estimated is considered, while the influence of the rate of change of the gray value between neighboring points is ignored. As a result, the high-frequency components of the interpolated image are lost and the image edges become somewhat blurred. Compared with the input image, the output image obtained with this method still suffers from degraded image quality and limited computational accuracy caused by the inadequate design of the interpolation function.
To address the technical problem in the prior art that the sharpness of the distortion-corrected image obtained by directly using the bilinear interpolation algorithm is degraded, this application proposes a new distortion correction method: by introducing a weighted smoothing function, different regions of the image receive different degrees of distortion correction, which keeps the algorithm highly time-efficient while minimizing the loss of image sharpness.
The image distortion correction of the embodiments of this application is executed by the CPU of the smart terminal. As shown in the hardware flowchart of the solution in FIG. 1, on the smart terminal the CMOS sensor of the wide-angle camera first captures light and converts the optical signal into raw-format data; the ISP then processes the raw data and converts the image into YUV format; the CPU then performs the computation and applies distortion correction to the YUV image using the camera intrinsic parameters known in advance; finally, after the distortion correction processing, the YUV data is sent to the display for display and, at the same time, encoded into JPEG format by the encoder and stored in the memory of the smart terminal.
Specifically, FIG. 2 is a flowchart of an image distortion correction method according to an embodiment of this application. The distorted-image processing of the embodiments of this application is described by taking a distorted image captured by a wide-angle camera as an example. As shown in FIG. 2, the method includes:
Step 101: Obtain a distorted image to be corrected and the first coordinate of each pixel in the distorted image.
Specifically, a distorted image previously captured by the camera module can be read from the system memory, or a distorted image captured by the camera module in real time can be obtained. The distorted image may be an image that has already undergone traditional de-distortion processing; since such an image still contains distortion in the prior art, it is still defined as a distorted image in this application. The first coordinate of each pixel in the distorted image is then obtained based on an image recognition algorithm.
Step 102: Obtain a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate.
Specifically, the first coordinate of the distorted image is a coordinate with a certain amount of distortion. If the camera module that captured the distorted image had no distortion, the coordinate corresponding to the first coordinate would be the second coordinate; therefore, to correct the first coordinate, the undistorted second coordinate must be obtained.
As a possible implementation, the intrinsic parameters of the camera module that captured the distorted image are determined; these intrinsic parameters determine the degree of distortion of the first coordinate, and the second coordinate corresponding to the first coordinate is determined based on the correspondence between the intrinsic parameters and the degree of distortion.
Specifically, in this embodiment, the camera module is controlled to photograph a training object at multiple angles to obtain multiple reference images, where the training object has a relatively regular shape and contour marks so that the calibration reference points can be found quickly in the corresponding images; for example, it may be a checkerboard pattern, so that the pixel position of each checkerboard corner is easy to detect and the checkerboard corners can be used as the corresponding reference points. Further, the image coordinates corresponding to the reference points of the training object are obtained in each reference image; since the world coordinates of the reference points are measured in advance, the intrinsic parameters of the camera module can be calculated from the pre-stored world coordinates and image coordinates of the reference points. The intrinsic parameters may include the x coordinate of the principal point cx, the y coordinate of the principal point cy, the normalized x-direction focal length fx, the normalized y-direction focal length fy, the radial distortion coefficients k1, k2, k3, and the tangential distortion coefficients p1, p2; further, the second coordinate is obtained by calculating the distorted first coordinate and the intrinsic parameters according to a preset calculation formula.
For example, when the training object is a checkerboard, the camera is first used to take 6-9 full-size images of the planar checkerboard pattern board at different angles, ensuring that the checkerboard pattern fills the camera's entire FOV, where the pixel position of each checkerboard corner is easy to detect and the checkerboard corners are used as the corresponding reference points. Sub-pixel checkerboard corner detection is performed on the 6-9 captured full-size reference images to obtain the image coordinates of the checkerboard corners of each picture. Since the calibration checkerboard is specially made, the coordinates of its corner points in three-dimensional world space are known in advance, so the world coordinates of the checkerboard reference points can be obtained. From the obtained image coordinates and world coordinates of the reference points, the intrinsic parameters of the camera can be computed using the correspondence between the image plane and the checkerboard plane, as sketched below.
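As a concrete illustration of the calibration step just described, the following is a minimal sketch using OpenCV's implementation of Zhang's method; the 9x6 pattern size, the 25 mm square size, and the file naming are assumptions for illustration only, not values from this application.

```python
# Hedged sketch of the checkerboard calibration step using OpenCV (Zhang's
# method). Pattern size, square size and file names are assumed for illustration.
import glob
import cv2
import numpy as np

pattern = (9, 6)      # inner corners per row and column (assumed)
square = 25.0         # checkerboard square size in millimeters (assumed)

# World coordinates of the corners are known in advance (board lies in Z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("checkerboard_*.jpg"):   # the 6-9 full-FOV reference shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    # Refine the detected corners to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(objp)
    img_pts.append(corners)

# mtx contains fx, fy, cx, cy; dist contains k1, k2, p1, p2, k3 (OpenCV order).
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```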
Further, using the obtained camera intrinsic parameters (principal point x coordinate cx, principal point y coordinate cy, normalized x-direction focal length fx, normalized y-direction focal length fy, radial distortion coefficients k1, k2, k3, and tangential distortion coefficients p1, p2) and the known distorted image, the original undistorted image is computed. Specifically, for the second coordinate (u0, v0), the corresponding camera coordinates (that is, the coordinates of the undistorted point in the camera coordinate system) are (x0, y0), where:
x0 = (u0 - cx)/fx;
y0 = (v0 - cy)/fy;
The coordinates of the distortion point corresponding to these camera coordinates are (x', y'), where:
x' = x0*(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*x0*y0 + p2*(r^2 + 2*x0^2);
y' = y0*(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p2*x0*y0 + p1*(r^2 + 2*y0^2);
where r^2 = x0^2 + y0^2;
Further, based on the obtained distortion point, its distortion coordinates (first coordinates) in the distorted image are calculated as follows:
ud = fx*x' + cx;
vd = fy*y' + cy;
In this way, the distortion coordinates (first coordinates) (ud, vd) corresponding to the undistorted coordinate point (second coordinates) (u0, v0) of the undistorted image are obtained; based on this correspondence, the second coordinate can be calculated.
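A direct transcription of this per-pixel mapping, written as a small Python helper; the function name and scalar interface are illustrative, but the arithmetic follows the formulas above.

```python
def distort_point(u0, v0, cx, cy, fx, fy, k1, k2, k3, p1, p2):
    """Map an undistorted pixel (u0, v0) to its distorted pixel (ud, vd)
    using the intrinsic and distortion parameters defined above."""
    # Normalized camera coordinates of the undistorted point.
    x0 = (u0 - cx) / fx
    y0 = (v0 - cy) / fy
    r2 = x0 * x0 + y0 * y0                      # r^2 = x0^2 + y0^2
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # Radial plus tangential distortion in normalized coordinates.
    xd = x0 * radial + 2 * p1 * x0 * y0 + p2 * (r2 + 2 * x0 * x0)
    yd = y0 * radial + 2 * p2 * x0 * y0 + p1 * (r2 + 2 * y0 * y0)
    # Back to pixel coordinates of the distorted image.
    return fx * xd + cx, fy * yd + cy
```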
As another possible implementation, a deep model is trained in advance on a large number of sample images; the input of the model is the distorted first coordinate and the output is the undistorted second coordinate. Therefore, the deep model corresponding to the camera module can be obtained, and the corresponding second coordinate determined based on this model.
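The deep-model alternative is only mentioned in passing here; the sketch below shows one way such a coordinate-to-coordinate regressor could look. The architecture, layer sizes, and training loop are assumptions, not details given in this application.

```python
# Hedged sketch of a small coordinate regressor: distorted (ud, vd) in,
# undistorted (u0, v0) out. All sizes and hyper-parameters are assumed.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2))                       # (ud, vd) -> (u0, v0)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(distorted_xy, undistorted_xy):
    """One optimization step on a batch of coordinate pairs from sample images."""
    optimizer.zero_grad()
    loss = loss_fn(model(distorted_xy), undistorted_xy)
    loss.backward()
    optimizer.step()
    return loss.item()
```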
步骤103,获取第一坐标与畸变图像中心坐标点的距离,根据平滑处理函数,确定距离对应的平滑处理系数,其中,平滑处理函数用于指示距离与平滑处理系数之间的正比关系。
可以理解,由于相机模组拍摄机制,越靠近图像的边缘则畸变程度越高,越靠近中心区域,畸变程度越小,因此,可以计算畸变坐标距离畸变图像中心坐标点的距离,根据预设的平滑处理函数和距离计算平滑处理系数,该平滑处理系数用于对畸变图像进行校正处理。
需要强调的是,该平滑处理函数用于指示距离与平滑处理系数之间的正比关系,也就是说,越靠近图像的边缘的区域,由于对应的距离越大,则对应的平滑处理系数越大,会得到更强的校正处理,越远离图像的边缘的区域,由于对应的距离越小,则对应的平滑处理系数越小,会得到更弱的校正处理。因此,很显然,上述平滑处理函数可以保证畸变图像由中心到边缘的去校正程度逐渐增强,以保证平滑的过渡,提高图像处理后的真实度,基于平滑处理函数能够实现对畸变图像的平滑校正。
In one embodiment of the present application, the distance may be computed by the following formula (1), in which x is the normalized Euclidean distance from the current first coordinate point (ud, vd) to the image center coordinate point (u', v'):
[distance formula (1), published as image PCTCN2020095025-appb-000005 in the original filing]
Further, as one possible example, the smoothing function is the following formula (2):
[smoothing function formula (2), published as image PCTCN2020095025-appb-000006 in the original filing]
where x is the Euclidean distance corresponding to the distance and S(x) is the smoothing coefficient. In this example, the corresponding smoothing function is shown in FIG. 3, in which the horizontal axis is the Euclidean distance and the vertical axis is the value of the smoothing coefficient. As shown in FIG. 3, the larger the Euclidean distance, the larger the value of the smoothing coefficient, and the value of the smoothing coefficient increases smoothly, which guarantees the quality of subsequent image processing.
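Formulas (1) and (2) are reproduced only as images in the original filing, so the sketch below merely illustrates the stated behaviour of step 103: the distance is normalised to the range 0..1 and S(x) rises smoothly from 0 to 1 as the distance grows. Both the half-diagonal normalisation and the smoothstep-style curve are assumptions for illustration, not the patented formulas.

    import math

    def normalized_distance(ud, vd, width, height):
        cx_img, cy_img = width / 2.0, height / 2.0   # image centre (u', v')
        half_diag = math.hypot(cx_img, cy_img)       # assumed normaliser
        return math.hypot(ud - cx_img, vd - cy_img) / half_diag

    def smoothing_coefficient(x):
        # Illustrative stand-in: smooth, monotone, S(0) = 0 and S(1) = 1.
        x = min(max(x, 0.0), 1.0)
        return 3 * x ** 2 - 2 * x ** 3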
Step 104: perform smoothing correction on the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
Specifically, the smoothing coefficient and the second coordinate are combined to perform smoothing correction on the first coordinate. Because the undistorted coordinate is incorporated into the distortion correction, the sharpness of the image can be better improved. Moreover, the smoothing coefficient is essentially a positively correlated function of the distance; in a wide-angle camera image the distortion in the central region is usually small while the distortion in the edge region is large, and the human eye is more sensitive to sharpness in the central region of an image than in the edge region. The smoothing coefficient therefore weakens the degree of correction in the center and lets the degree of distortion correction increase smoothly from the center point of the image to its edge. In this way the sharpness of the image center is preserved while the degree of distortion correction at the image edge is also guaranteed.
As one possible implementation, the first coordinate is corrected using a preset formula, namely formula (3) below, in which (u1, v1) is the floating-point coordinate, (u0, v0) is the undistorted coordinate, (ud, vd) is the distorted coordinate, and s is the smoothing coefficient. Based on the description above, the closer to the edge region, the larger s is and the closer the resulting (u1, v1) is to the distorted coordinate (ud, vd), corresponding to a higher degree of correction; the closer to the central region, the smaller s is and the closer the resulting (u1, v1) is to the undistorted coordinate (u0, v0), corresponding to a lower degree of correction:
(u1, v1) = (ud, vd) * s + (u0, v0) * (1 - s)    Formula (3)
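Formula (3) translates directly into code; the minimal sketch below assumes the smoothing coefficient s for the pixel has already been computed as in step 103.

    def fuse_coordinates(ud, vd, u0, v0, s):
        # Blend the distorted and undistorted coordinates with the smoothing
        # coefficient to obtain the floating-point sampling coordinate (u1, v1).
        u1 = ud * s + u0 * (1 - s)
        v1 = vd * s + v0 * (1 - s)
        return u1, v1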
Based on the imaging principle, interpolation is performed on the floating-point coordinates to obtain an integer coordinate point and a pixel value for each pixel, and the de-distorted image is obtained from the integer coordinate points and pixel values.
Specifically, (u1, v1) is usually a floating-point value, whereas actual image coordinates are integers. The pixel gray value of the integer coordinate point (u2, v2) therefore needs to be obtained by interpolating from the pixels in the neighborhood of the floating-point coordinate (u1, v1) (the RGB channels can be interpolated separately). The bilinear interpolation method performs linear interpolation in the x and y directions using the gray values of the four pixels adjacent to the pixel to be computed.
A schematic diagram is shown in FIG. 4. For an unknown integer coordinate point (u2, v2), its four adjacent computed floating-point coordinate points in the u and v directions are (u1', v1'), (u1'', v1'), (u1', v1'') and (u1'', v1''). In the first step, linear interpolation is performed in the u direction on (u1'', v1') and (u1', v1') to obtain (u2, v1'), and on (u1'', v1'') and (u1', v1'') to obtain (u2, v1''); in the second step, linear interpolation is performed in the v direction on (u2, v1') and (u2, v1''), which yields the pixel gray value corresponding to the integer coordinate point (u2, v2). By letting (u2, v2) traverse the coordinate points of all pixels in the whole image, the distortion-corrected image is obtained.
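One common way to realise this resampling in practice is shown below as a sketch: for each integer output pixel, the distorted source image is sampled at the fused floating-point coordinate (u1, v1) by bilinearly blending its four integer neighbours. This is a simplification of the neighbour bookkeeping described above, not necessarily the exact scheme of FIG. 4; RGB channels are handled independently because the arithmetic is applied per channel.

    import numpy as np

    def bilinear_sample(img, u1, v1):
        # img is indexed as img[row = v, col = u]; works per channel for RGB.
        h, w = img.shape[:2]
        u1 = min(max(u1, 0.0), w - 1.0)
        v1 = min(max(v1, 0.0), h - 1.0)
        u_lo, v_lo = int(u1), int(v1)
        u_hi, v_hi = min(u_lo + 1, w - 1), min(v_lo + 1, h - 1)
        du, dv = u1 - u_lo, v1 - v_lo

        top = (1 - du) * img[v_lo, u_lo] + du * img[v_lo, u_hi]      # along u
        bottom = (1 - du) * img[v_hi, u_lo] + du * img[v_hi, u_hi]
        return (1 - dv) * top + dv * bottom                          # along v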
In this way, the loss of sharpness of the interpolated image is weakened as much as possible while the high efficiency of the algorithm is preserved. As an example, the overall flow of the algorithm is shown in FIG. 5: the camera intrinsic parameters, namely the principal point coordinates cx, cy, the focal lengths fx, fy, the radial distortion parameters k1, k2, k3 and the tangential distortion parameters p1, p2, are obtained by Zhang's calibration method; the distorted coordinate (ud, vd) of the undistorted coordinate (u0, v0) in the distorted image is then computed; next, (ud, vd) and (u0, v0) are weighted and fused using the smoothing function proposed in the above embodiments to obtain the fused floating-point coordinate (u1, v1); finally, bilinear interpolation is performed on (u1, v1) to obtain the distortion-corrected image coordinate (u2, v2). The complete distortion-corrected image is obtained by traversing all coordinate points.
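As an end-to-end sketch of the flow in FIG. 5, the fragment below ties together the helpers sketched earlier (undistorted_to_distorted, normalized_distance, smoothing_coefficient, fuse_coordinates) and delegates the final bilinear resampling to OpenCV's remap. The per-pixel Python loop and the use of cv2.remap are implementation choices for illustration only; a production implementation would vectorise the map construction.

    import cv2
    import numpy as np

    def correct_distortion(distorted, camera_matrix, dist_coeffs):
        h, w = distorted.shape[:2]
        fx, fy = camera_matrix[0, 0], camera_matrix[1, 1]
        cx, cy = camera_matrix[0, 2], camera_matrix[1, 2]
        k1, k2, p1, p2, k3 = dist_coeffs.ravel()[:5]   # OpenCV coefficient order

        map_x = np.empty((h, w), np.float32)
        map_y = np.empty((h, w), np.float32)
        for v0 in range(h):
            for u0 in range(w):
                ud, vd = undistorted_to_distorted(
                    u0, v0, cx, cy, fx, fy, k1, k2, k3, p1, p2)
                s = smoothing_coefficient(normalized_distance(ud, vd, w, h))
                map_x[v0, u0], map_y[v0, u0] = fuse_coordinates(ud, vd, u0, v0, s)

        # cv2.remap samples the source at (map_x, map_y) with bilinear interpolation.
        return cv2.remap(distorted, map_x, map_y, cv2.INTER_LINEAR)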
As another possible implementation, a first weight for the second coordinate and a second weight for the first coordinate are determined according to the smoothing coefficient, where the first weight is directly proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient; a first product of the first weight and the second coordinate and a second product of the second weight and the first coordinate are computed; and smoothing correction is performed on the first coordinate according to the sum of the first product and the second product to obtain the distortion-corrected image. In this way, the closer to the edge of the distorted image, the more the coordinate correction of the relevant pixels takes the second coordinate into account, and the closer to the center, the more the relevant pixels are preserved according to the original first coordinate; this preserves the realism of the image and improves the smoothness of the corrected picture.
Of course, in one embodiment of the present application, considering that in different scenes the size ratio between the high-sharpness central region and the strongly corrected edge region differs when de-distorting a distorted image, a correction degree may be obtained and a correction adjustment coefficient determined from it. For example, a progress bar for the correction degree may be provided and the correction adjustment coefficient determined from the correspondence between the progress bar and the correction degree; alternatively, the subject of the distorted image may be detected automatically and different correction degrees determined according to the type and color of the subject, for example a higher correction degree when a face image is captured, or a higher correction degree for a night-scene image than for an image captured in daylight.
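Purely as an illustration of such an adjustment, one way to expose a user- or scene-dependent correction degree would be to rescale the smoothing coefficient before the fusion step; neither the scaling rule nor the 0..1 strength value below comes from the patent text.

    def adjusted_coefficient(s, strength=1.0):
        # strength is a hypothetical 0..1 correction-degree setting.
        return min(max(s * strength, 0.0), 1.0)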
In summary, the image distortion correction method of the embodiments of the present application obtains a distorted image to be corrected and a first coordinate of each pixel in the distorted image, computes the second coordinate corresponding to the first coordinate, where the second coordinate is the undistorted coordinate corresponding to the first coordinate, then computes the distance between the first coordinate and the center coordinate point of the distorted image and determines, according to a preset smoothing function, the smoothing coefficient corresponding to the distance, where the smoothing function indicates a directly proportional relationship between the distance and the smoothing coefficient, and finally performs smoothing correction on the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image. The method thus improves on the conventional wide-angle distortion correction algorithm: while still using bilinear interpolation, it additionally applies a weighted smoothing function to perform the distortion correction. Compared with conventional distortion correction algorithms, it takes the distribution of distortion across the whole image into account and applies differentiated distortion correction to different regions of the image. While keeping the algorithm highly efficient, it not only weakens the loss of sharpness of the distortion-corrected image but also ensures that strongly distorted image regions are completely de-distorted, achieving a better photographing experience.
To implement the above embodiments, the present application further provides an image distortion correction apparatus. FIG. 6 is a schematic structural diagram of an image distortion correction apparatus according to one embodiment of the present application. As shown in FIG. 6, the apparatus includes a first acquiring module 10, a second acquiring module 20, a third acquiring module 30, a determining module 40 and a correcting module 50, wherein:
The first acquiring module 10 is configured to obtain a distorted image to be corrected and a first coordinate of each pixel in the distorted image.
Specifically, the first acquiring module 10 may read a distorted image previously captured by the camera module from the system memory, or obtain a distorted image captured by the camera module in real time. The distorted image may be an image that has already undergone conventional de-distortion processing; since an image processed by the prior art still contains distortion, it is still defined as a distorted image in the present application. The first acquiring module 10 then obtains the first coordinate of each pixel in the distorted image based on an image recognition algorithm.
The second acquiring module 20 is configured to compute the second coordinate corresponding to the first coordinate, where the second coordinate is the undistorted coordinate corresponding to the first coordinate.
Specifically, the first coordinate of the distorted image carries a certain amount of distortion; if the camera module that captured the distorted image had introduced no distortion, the coordinate corresponding to the first coordinate would be the second coordinate. Therefore, in order to correct the first coordinate, the second acquiring module 20 can obtain the distortion-free second coordinate.
In one embodiment of the present application, as shown in FIG. 7 and on the basis of FIG. 6, the second acquiring module 20 includes a first determining unit 21 and a first acquiring unit 22, wherein:
The first determining unit 21 is configured to determine the intrinsic parameters of the camera module that captured the distorted image.
The first acquiring unit 22 is configured to compute the intrinsic parameters and the first coordinate according to a preset algorithm to obtain the second coordinate.
Specifically, in this embodiment the camera module is controlled to capture a calibration target from multiple angles to obtain multiple reference images. The calibration target has a regular shape and contour markings so that the calibrated reference points can be found quickly in the corresponding images; for example, it may be a checkerboard pattern, in which every checkerboard corner pixel is easy to detect, so the checkerboard corners can serve as the corresponding reference points. The first determining unit 21 then obtains the image coordinates corresponding to the reference points of the calibration target from each reference image. Since the world coordinates of the reference points are measured in advance, the intrinsic parameters of the camera module can be computed from the pre-stored world coordinates of the reference points and the image coordinates; the intrinsic parameters may include the x coordinate of the principal point cx, the y coordinate of the principal point cy, the normalized focal length in the x direction fx, the normalized focal length in the y direction fy, the radial distortion coefficients k1, k2, k3, and the tangential distortion coefficients p1, p2. The first acquiring unit 22 then computes the distorted first coordinate and the intrinsic parameters according to a preset calculation formula to obtain the second coordinate.
The third acquiring module 30 is configured to compute the distance between the first coordinate and the center coordinate point of the distorted image.
It can be understood that, owing to the imaging mechanism of the camera module, the closer to the edge of the image, the higher the degree of distortion, and the closer to the center region, the smaller the degree of distortion. Therefore, the third acquiring module 30 can compute the distance between the distorted coordinate and the center coordinate point of the distorted image, and the smoothing coefficient is computed from this distance according to a preset smoothing function; the smoothing coefficient is used to correct the distorted image.
It should be emphasized that the smoothing function indicates a directly proportional relationship between the distance and the smoothing coefficient. That is, a region closer to the edge of the image corresponds to a larger distance and therefore a larger smoothing coefficient, and receives stronger correction; a region farther from the edge of the image corresponds to a smaller distance and therefore a smaller smoothing coefficient, and receives weaker correction. The smoothing function thus ensures that the degree of correction of the distorted image increases gradually from the center to the edge, guaranteeing a smooth transition and improving the realism of the processed image; smooth correction of the distorted image is achieved on the basis of the smoothing function.
The determining module 40 is configured to determine, according to a preset smoothing function, the smoothing coefficient corresponding to the distance, where the smoothing function indicates a directly proportional relationship between the distance and the smoothing coefficient.
The correcting module 50 is configured to perform smoothing correction on the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
Specifically, the smoothing coefficient and the second coordinate are combined to perform smoothing correction on the first coordinate. Because the undistorted coordinate is incorporated into the distortion correction, the sharpness of the image can be better improved. Moreover, the smoothing coefficient is essentially a positively correlated function of the distance; in a wide-angle camera image the distortion in the central region is usually small while the distortion in the edge region is large, and the human eye is more sensitive to sharpness in the central region of an image than in the edge region. The correcting module 50 therefore weakens the degree of correction in the center, and the smoothing coefficient lets the degree of distortion correction increase smoothly from the center point of the image to its edge. In this way the sharpness of the image center is preserved while the degree of distortion correction at the image edge is also guaranteed.
In one embodiment of the present application, as shown in FIG. 8 and on the basis of FIG. 6, the correcting module 50 includes a second determining unit 51, a first computing unit 52 and a second acquiring unit 53, wherein:
The second determining unit 51 is configured to compute the smoothing coefficient, the second coordinate and the first coordinate according to a preset algorithm to determine the floating-point coordinate corresponding to each first coordinate.
The first computing unit 52 is configured to perform interpolation on the floating-point coordinates to obtain an integer coordinate point and a pixel value for each pixel.
The second acquiring unit 53 is configured to obtain the distortion-corrected image from the integer coordinate points and pixel values.
As shown in FIG. 9 and on the basis of FIG. 6, the correcting module 50 includes a third determining unit 54, a second computing unit 55 and a correcting unit 56, wherein:
The third determining unit 54 is configured to determine, according to the smoothing coefficient, a first weight for the second coordinate and a second weight for the first coordinate, where the first weight is directly proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient.
The second computing unit 55 is configured to compute a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate.
The correcting unit 56 is configured to perform smoothing correction on the first coordinate according to the sum of the first product and the second product to obtain the distortion-corrected image.
It should be noted that the foregoing explanation of the image distortion correction method embodiments also applies to the image distortion correction apparatus of this embodiment, and is not repeated here.
In summary, the image distortion correction apparatus of the embodiments of the present application improves on the conventional wide-angle distortion correction algorithm: while still using bilinear interpolation, it additionally applies a weighted smoothing function to perform the distortion correction. Compared with conventional distortion correction algorithms, it takes the distribution of distortion across the whole image into account and applies differentiated distortion correction to different regions of the image. While keeping the algorithm highly efficient, it not only weakens the loss of sharpness of the distortion-corrected image but also ensures that strongly distorted image regions are completely de-distorted, achieving a better photographing experience.
To implement the above embodiments, the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the image distortion correction method described in the foregoing embodiments is implemented.
To implement the above embodiments, the present application further provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the image distortion correction method described in the foregoing method embodiments is implemented.
In the description of this specification, descriptions with reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine and integrate different embodiments or examples, and features of different embodiments or examples, described in this specification, provided they do not contradict each other.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code including one or more executable instructions for implementing steps of a custom logical function or process, and the scope of the preferred implementations of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit a program for use by, or in connection with, the instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wirings, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, otherwise processing it in a suitable way, and then stored in a computer memory.
It should be understood that the parts of the present application may be implemented in hardware, software, firmware or a combination thereof. In the above implementations, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another implementation, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit with logic gate circuits for implementing logical functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present application; those of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the present application.

Claims (20)

  1. An image distortion correction method, characterized by comprising the following steps:
    obtaining a distorted image to be corrected and a first coordinate of each pixel in the distorted image;
    obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate;
    obtaining a distance between the first coordinate and a center coordinate point of the distorted image, and determining, according to a smoothing function, a smoothing coefficient corresponding to the distance, wherein the smoothing function indicates a directly proportional relationship between the distance and the smoothing coefficient;
    performing smoothing correction on the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
  2. The method according to claim 1, characterized in that obtaining the second coordinate corresponding to the first coordinate comprises:
    determining intrinsic parameters of a camera module that captured the distorted image;
    computing the intrinsic parameters and the first coordinate according to a preset algorithm to obtain the second coordinate.
  3. The method according to claim 1, characterized in that the smoothing function is:
    [smoothing function formula, published as image PCTCN2020095025-appb-100001 in the original filing]
    wherein x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
  4. The method according to claim 1, characterized in that performing smoothing correction on the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image comprises:
    computing the smoothing coefficient, the second coordinate and the first coordinate according to a preset algorithm to determine a floating-point coordinate corresponding to each first coordinate;
    performing interpolation on the floating-point coordinates to obtain an integer coordinate point and a pixel value for each pixel;
    obtaining the distortion-corrected image from the integer coordinate points and pixel values.
  5. The method according to claim 1, characterized in that performing smoothing correction on the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image comprises:
    determining, according to the smoothing coefficient, a first weight for the second coordinate and a second weight for the first coordinate, wherein the first weight is directly proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient;
    computing a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate;
    performing smoothing correction on the first coordinate according to the sum of the first product and the second product to obtain the distortion-corrected image.
  6. An image distortion correction apparatus, characterized by comprising:
    a first acquiring module, configured to obtain a distorted image to be corrected and a first coordinate of each pixel in the distorted image;
    a second acquiring module, configured to obtain a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate;
    a third acquiring module, configured to obtain a distance between the first coordinate and a center coordinate point of the distorted image;
    a determining module, configured to determine, according to a preset smoothing function, a smoothing coefficient corresponding to the distance, wherein the smoothing function indicates a directly proportional relationship between the distance and the smoothing coefficient;
    a correcting module, configured to perform smoothing correction on the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
  7. The apparatus according to claim 6, characterized in that the second acquiring module comprises:
    a first determining unit, configured to determine intrinsic parameters of a camera module that captured the distorted image;
    a first acquiring unit, configured to compute the intrinsic parameters and the first coordinate according to a preset algorithm to obtain the second coordinate.
  8. The apparatus according to claim 6, characterized in that the smoothing function is:
    [smoothing function formula, published as image PCTCN2020095025-appb-100002 in the original filing]
    wherein x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
  9. The apparatus according to claim 6, characterized in that the correcting module comprises:
    a second determining unit, configured to compute the smoothing coefficient, the second coordinate and the first coordinate according to a preset algorithm to determine a floating-point coordinate corresponding to each first coordinate;
    a first computing unit, configured to perform interpolation on the floating-point coordinates to obtain an integer coordinate point and a pixel value for each pixel;
    a second acquiring unit, configured to obtain the distortion-corrected image from the integer coordinate points and pixel values.
  10. The apparatus according to claim 6, characterized in that the correcting module comprises:
    a third determining unit, configured to determine, according to the smoothing coefficient, a first weight for the second coordinate and a second weight for the first coordinate, wherein the first weight is directly proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient;
    a second computing unit, configured to compute a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate;
    a correcting unit, configured to perform smoothing correction on the first coordinate according to the sum of the first product and the second product to obtain the distortion-corrected image.
  11. An electronic device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when executing the computer program, the processor implements the following steps:
    obtaining a distorted image to be corrected and a first coordinate of each pixel in the distorted image;
    obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate;
    obtaining a distance between the first coordinate and a center coordinate point of the distorted image, and determining, according to a smoothing function, a smoothing coefficient corresponding to the distance, wherein the smoothing function indicates a directly proportional relationship between the distance and the smoothing coefficient;
    performing smoothing correction on the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
  12. The electronic device according to claim 11, characterized in that, when executing the computer program, the processor further implements the following steps:
    determining intrinsic parameters of a camera module that captured the distorted image;
    computing the intrinsic parameters and the first coordinate according to a preset algorithm to obtain the second coordinate.
  13. The electronic device according to claim 11, characterized in that the smoothing function is:
    [smoothing function formula, published as image PCTCN2020095025-appb-100003 in the original filing]
    wherein x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
  14. The electronic device according to claim 11, characterized in that, when executing the computer program, the processor further implements the following steps:
    computing the smoothing coefficient, the second coordinate and the first coordinate according to a preset algorithm to determine a floating-point coordinate corresponding to each first coordinate;
    performing interpolation on the floating-point coordinates to obtain an integer coordinate point and a pixel value for each pixel;
    obtaining the distortion-corrected image from the integer coordinate points and pixel values.
  15. The electronic device according to claim 11, characterized in that, when executing the computer program, the processor further implements the following steps:
    determining, according to the smoothing coefficient, a first weight for the second coordinate and a second weight for the first coordinate, wherein the first weight is directly proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient;
    computing a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate;
    performing smoothing correction on the first coordinate according to the sum of the first product and the second product to obtain the distortion-corrected image.
  16. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program implements the following steps:
    obtaining a distorted image to be corrected and a first coordinate of each pixel in the distorted image;
    obtaining a second coordinate corresponding to the first coordinate, wherein the second coordinate is the undistorted coordinate corresponding to the first coordinate;
    obtaining a distance between the first coordinate and a center coordinate point of the distorted image, and determining, according to a smoothing function, a smoothing coefficient corresponding to the distance, wherein the smoothing function indicates a directly proportional relationship between the distance and the smoothing coefficient;
    performing smoothing correction on the first coordinate according to the smoothing coefficient and the second coordinate to obtain a distortion-corrected image.
  17. The non-transitory computer-readable storage medium according to claim 16, characterized in that, when executed by the processor, the computer program further implements the following steps:
    determining intrinsic parameters of a camera module that captured the distorted image;
    computing the intrinsic parameters and the first coordinate according to a preset algorithm to obtain the second coordinate.
  18. The non-transitory computer-readable storage medium according to claim 16, characterized in that the smoothing function is:
    [smoothing function formula, published as image PCTCN2020095025-appb-100004 in the original filing]
    wherein x is the normalized distance corresponding to the distance, and S(x) is the smoothing coefficient.
  19. The non-transitory computer-readable storage medium according to claim 16, characterized in that, when executed by the processor, the computer program further implements the following steps:
    computing the smoothing coefficient, the second coordinate and the first coordinate according to a preset algorithm to determine a floating-point coordinate corresponding to each first coordinate;
    performing interpolation on the floating-point coordinates to obtain an integer coordinate point and a pixel value for each pixel;
    obtaining the distortion-corrected image from the integer coordinate points and pixel values.
  20. The non-transitory computer-readable storage medium according to claim 16, characterized in that, when executed by the processor, the computer program further implements the following steps:
    determining, according to the smoothing coefficient, a first weight for the second coordinate and a second weight for the first coordinate, wherein the first weight is directly proportional to the smoothing coefficient and the second weight is inversely proportional to the smoothing coefficient;
    computing a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate;
    performing smoothing correction on the first coordinate according to the sum of the first product and the second product to obtain the distortion-corrected image.
PCT/CN2020/095025 2019-06-24 2020-06-09 图像畸变校正方法和装置 WO2020259271A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20833621.4A EP3965054A4 (en) 2019-06-24 2020-06-09 IMAGE DISTORTION CORRECTION METHOD AND APPARATUS
US17/525,628 US11861813B2 (en) 2019-06-24 2021-11-12 Image distortion correction method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910550603.1A CN110276734B (zh) 2019-06-24 2019-06-24 图像畸变校正方法和装置
CN201910550603.1 2019-06-24

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/525,628 Continuation US11861813B2 (en) 2019-06-24 2021-11-12 Image distortion correction method and apparatus

Publications (1)

Publication Number Publication Date
WO2020259271A1 true WO2020259271A1 (zh) 2020-12-30

Family

ID=67961842

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/095025 WO2020259271A1 (zh) 2019-06-24 2020-06-09 图像畸变校正方法和装置

Country Status (4)

Country Link
US (1) US11861813B2 (zh)
EP (1) EP3965054A4 (zh)
CN (1) CN110276734B (zh)
WO (1) WO2020259271A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173056A (zh) * 2023-11-01 2023-12-05 欣瑞华微电子(上海)有限公司 用于解决信息丢失的图像处理方法、设备及可读存储介质

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276734B (zh) * 2019-06-24 2021-03-23 Oppo广东移动通信有限公司 图像畸变校正方法和装置
CN110728638A (zh) * 2019-09-25 2020-01-24 深圳疆程技术有限公司 一种图像的畸变矫正方法、车机及汽车
CN112862895B (zh) * 2019-11-27 2023-10-10 杭州海康威视数字技术股份有限公司 一种鱼眼摄像头标定方法、装置及系统
CN111080542B (zh) * 2019-12-09 2024-05-28 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备以及存储介质
CN111260567B (zh) * 2020-01-10 2024-01-19 昆山丘钛微电子科技有限公司 一种图像畸变校正的方法及装置
CN111325691B (zh) * 2020-02-20 2023-11-10 Oppo广东移动通信有限公司 图像校正方法、装置、电子设备和计算机可读存储介质
CN111355863B (zh) * 2020-04-07 2022-07-22 北京达佳互联信息技术有限公司 一种图像畸变校正方法、装置、电子设备及存储介质
US11335022B2 (en) 2020-06-10 2022-05-17 Snap Inc. 3D reconstruction using wide-angle imaging devices
CN111932622B (zh) * 2020-08-10 2022-06-28 浙江大学 一种无人机的飞行高度的确定装置、方法及系统
JP7551419B2 (ja) * 2020-09-23 2024-09-17 キヤノン株式会社 情報処理装置、情報処理方法及びプログラム
JP2022069967A (ja) * 2020-10-26 2022-05-12 住友重機械工業株式会社 歪曲収差補正処理装置、歪曲収差補正方法、及びプログラム
CN113222862B (zh) * 2021-06-04 2024-09-17 黑芝麻智能科技(上海)有限公司 图像畸变校正方法、装置、电子设备和存储介质
CN113487540B (zh) * 2021-06-15 2023-07-07 北京道达天际科技股份有限公司 一种空基大倾角图像的校正方法和装置
CN113658053B (zh) * 2021-07-04 2024-09-06 浙江大华技术股份有限公司 图像校正方法、装置、电子设备、计算机可读存储介质
CN114820787B (zh) * 2022-04-22 2024-05-28 聊城大学 一种面向大视场平面视觉测量的图像校正方法及系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1684499A (zh) * 2004-04-16 2005-10-19 夏普株式会社 图像处理装置、图像处理方法及其程序和记录介质
CN104182933A (zh) * 2013-05-28 2014-12-03 东北大学 一种基于逆向除法模型的广角镜头图像畸变校正方法
CN104994367A (zh) * 2015-06-30 2015-10-21 华为技术有限公司 一种图像矫正方法以及摄像头
CN106815869A (zh) * 2016-10-28 2017-06-09 北京鑫洋泉电子科技有限公司 鱼眼相机的光心确定方法及装置
US20170359573A1 (en) * 2016-06-08 2017-12-14 SAMSUNG SDS CO., LTD., Seoul, KOREA, REPUBLIC OF; Method and apparatus for camera calibration using light source
CN108090880A (zh) * 2017-12-29 2018-05-29 杭州联络互动信息科技股份有限公司 一种图像的反畸变处理方法以及装置
CN110276734A (zh) * 2019-06-24 2019-09-24 Oppo广东移动通信有限公司 图像畸变校正方法和装置

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5905530A (en) * 1992-08-24 1999-05-18 Canon Kabushiki Kaisha Image pickup apparatus
US6924816B2 (en) * 2000-03-17 2005-08-02 Sun Microsystems, Inc. Compensating for the chromatic distortion of displayed images
US7068852B2 (en) * 2001-01-23 2006-06-27 Zoran Corporation Edge detection and sharpening process for an image
JP4133029B2 (ja) * 2002-06-25 2008-08-13 富士フイルム株式会社 画像処理方法および装置
US7058237B2 (en) * 2002-06-28 2006-06-06 Microsoft Corporation Real-time wide-angle image correction system and method for computer image viewing
EP1940180B1 (en) * 2005-09-29 2012-05-30 Nikon Corporation Image processing apparatus and image processing method
JP5241698B2 (ja) * 2009-12-25 2013-07-17 キヤノン株式会社 画像処理装置および画像処理方法
US20130100310A1 (en) * 2010-07-05 2013-04-25 Nikon Corporation Image processing device, imaging device, and image processing program
CN103426149B (zh) * 2013-07-24 2016-02-03 玉振明 大视角图像畸变的校正处理方法
CN103559684B (zh) * 2013-10-08 2016-04-06 清华大学深圳研究生院 基于平滑校正的图像恢复方法
CN104636743B (zh) * 2013-11-06 2021-09-03 北京三星通信技术研究有限公司 文字图像校正的方法和装置
US20160065306A1 (en) * 2014-09-02 2016-03-03 Chin Sheng Henry Chong System and method for green communication for intelligent mobile internet of things
EP3189493B1 (en) * 2014-09-05 2018-11-07 PoLight AS Depth map based perspective correction in digital photos
CN105488775A (zh) * 2014-10-09 2016-04-13 东北大学 一种基于六摄像机环视的柱面全景生成装置及方法
US10115024B2 (en) * 2015-02-26 2018-10-30 Mobileye Vision Technologies Ltd. Road vertical contour detection using a stabilized coordinate frame
CN105046657B (zh) * 2015-06-23 2018-02-09 浙江大学 一种图像拉伸畸变自适应校正方法
CN107113376B (zh) * 2015-07-31 2019-07-19 深圳市大疆创新科技有限公司 一种图像处理方法、装置及摄像机
CN106683068B (zh) * 2015-11-04 2020-04-07 北京文博远大数字技术有限公司 一种三维数字化图像采集方法
CN107424126A (zh) * 2017-05-26 2017-12-01 广州视源电子科技股份有限公司 图像校正方法、装置、设备、系统及摄像设备和显示设备
CN108761777B (zh) * 2018-03-30 2021-04-20 京东方科技集团股份有限公司 一种确定光学装置畸变量、畸变校正的方法及设备
CN109035170B (zh) * 2018-07-26 2022-07-01 电子科技大学 基于单网格图分段映射的自适应广角图像校正方法及装置
CN109255760A (zh) * 2018-08-13 2019-01-22 青岛海信医疗设备股份有限公司 畸变图像校正方法及装置
CN109345461A (zh) 2018-09-30 2019-02-15 中国科学院长春光学精密机械与物理研究所 一种图像畸变校正方法、装置、设备及存储介质
CN109461126B (zh) * 2018-10-16 2020-06-30 重庆金山科技(集团)有限公司 一种图像畸变校正方法及系统
CN109840894B (zh) * 2019-01-30 2021-02-09 湖北亿咖通科技有限公司 视差图精修方法、装置及存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1684499A (zh) * 2004-04-16 2005-10-19 夏普株式会社 图像处理装置、图像处理方法及其程序和记录介质
CN104182933A (zh) * 2013-05-28 2014-12-03 东北大学 一种基于逆向除法模型的广角镜头图像畸变校正方法
CN104994367A (zh) * 2015-06-30 2015-10-21 华为技术有限公司 一种图像矫正方法以及摄像头
US20170359573A1 (en) * 2016-06-08 2017-12-14 SAMSUNG SDS CO., LTD., Seoul, KOREA, REPUBLIC OF; Method and apparatus for camera calibration using light source
CN106815869A (zh) * 2016-10-28 2017-06-09 北京鑫洋泉电子科技有限公司 鱼眼相机的光心确定方法及装置
CN108090880A (zh) * 2017-12-29 2018-05-29 杭州联络互动信息科技股份有限公司 一种图像的反畸变处理方法以及装置
CN110276734A (zh) * 2019-06-24 2019-09-24 Oppo广东移动通信有限公司 图像畸变校正方法和装置

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP3965054A4
XIANG KNOWS: "[Image] Detailed explanation of distortion correction", 14 April 2015 (2015-04-14), XP055774025, Retrieved from the Internet <URL:http://blog.csdn.net/humanking7/article/details/45037239> *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173056A (zh) * 2023-11-01 2023-12-05 欣瑞华微电子(上海)有限公司 用于解决信息丢失的图像处理方法、设备及可读存储介质
CN117173056B (zh) * 2023-11-01 2024-04-09 欣瑞华微电子(上海)有限公司 用于解决信息丢失的图像处理方法、设备及可读存储介质

Also Published As

Publication number Publication date
CN110276734A (zh) 2019-09-24
US20220076391A1 (en) 2022-03-10
EP3965054A1 (en) 2022-03-09
EP3965054A4 (en) 2022-07-13
CN110276734B (zh) 2021-03-23
US11861813B2 (en) 2024-01-02

Similar Documents

Publication Publication Date Title
WO2020259271A1 (zh) 图像畸变校正方法和装置
CN110264426B (zh) 图像畸变校正方法和装置
WO2021115071A1 (zh) 单目内窥镜图像的三维重建方法、装置及终端设备
US10997696B2 (en) Image processing method, apparatus and device
WO2019105262A1 (zh) 背景虚化处理方法、装置及设备
US8224069B2 (en) Image processing apparatus, image matching method, and computer-readable recording medium
JP5075757B2 (ja) 画像処理装置、画像処理プログラム、画像処理方法、および電子機器
JP5179398B2 (ja) 画像処理装置、画像処理方法、画像処理プログラム
WO2019011147A1 (zh) 逆光场景的人脸区域处理方法和装置
JP6347675B2 (ja) 画像処理装置、撮像装置、画像処理方法、撮像方法及びプログラム
WO2019105261A1 (zh) 背景虚化处理方法、装置及设备
WO2019042216A1 (zh) 图像虚化处理方法、装置及拍摄终端
JP4813517B2 (ja) 画像処理装置、画像処理プログラム、画像処理方法、および電子機器
JP7123736B2 (ja) 画像処理装置、画像処理方法、およびプログラム
WO2019105254A1 (zh) 背景虚化处理方法、装置及设备
WO2019232793A1 (zh) 双摄像头标定方法、电子设备、计算机可读存储介质
CN109859137B (zh) 一种广角相机非规则畸变全域校正方法
CN109785390B (zh) 一种用于图像矫正的方法和装置
WO2021147650A1 (zh) 拍照方法、装置、存储介质及电子设备
CN114390262A (zh) 用于拼接三维球面全景影像的方法及电子装置
JP2009301181A (ja) 画像処理装置、画像処理プログラム、画像処理方法、および電子機器
CN117058183A (zh) 一种基于双摄像头的图像处理方法、装置、电子设备及存储介质
JP2009302731A (ja) 画像処理装置、画像処理プログラム、画像処理方法、および電子機器
JP4789964B2 (ja) 画像処理装置、画像処理プログラム、画像処理方法、および電子機器
JP7321772B2 (ja) 画像処理装置、画像処理方法、およびプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20833621

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020833621

Country of ref document: EP

Effective date: 20211130

NENP Non-entry into the national phase

Ref country code: DE