WO2021195829A1 - Image processing method, device and movable platform - Google Patents

Image processing method, device and movable platform

Info

Publication number
WO2021195829A1
WO2021195829A1 PCT/CN2020/082015
Authority
WO
WIPO (PCT)
Prior art keywords
chromatic aberration
pixel
zoom factor
offset
compensation table
Prior art date
Application number
PCT/CN2020/082015
Other languages
English (en)
French (fr)
Inventor
潘晖
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN202080004864.4A
Priority to PCT/CN2020/082015
Publication of WO2021195829A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Definitions

  • This application relates to the field of image processing, and in particular to an image processing method, device and movable platform.
  • The imaging result will have obvious dispersion, which affects the imaging quality.
  • Dispersion: in the process of light propagation, because light of different wavelengths has different refractive indices, the rays converge at inconsistent points on the imaging plane.
  • Dispersion along the direction of the optical axis is called axial dispersion, and dispersion along the focal plane is called lateral dispersion.
  • The farther a point is from the imaging center, the more serious the dispersion. Therefore, chromatic dispersion correction needs to be performed on images taken by the camera to reduce the influence of dispersion on the imaging result.
  • This application provides an image processing method, device and movable platform.
  • an embodiment of the present application provides an image processing method, the method including:
  • obtaining the zoom factor of the photographing device when shooting an image to be processed, and acquiring the position information of pixels in the image to be processed; determining the chromatic aberration offset of the pixel according to the zoom factor and the position information; and performing chromatic dispersion correction on the pixel according to the chromatic aberration offset.
  • an image processing device including:
  • Storage device for storing program instructions
  • One or more processors call program instructions stored in the storage device, and when the program instructions are executed, the one or more processors are individually or collectively configured to implement the following operations:
  • obtaining the zoom factor of the photographing device when shooting an image to be processed, and acquiring the position information of pixels in the image to be processed; determining the chromatic aberration offset of the pixel according to the zoom factor and the position information; and performing chromatic dispersion correction on the pixel according to the chromatic aberration offset.
  • an embodiment of the present application provides a movable platform, including:
  • a power system connected to the body and used to provide power for the movement of the body;
  • An image processing device supported by the body;
  • the image processing device includes: a storage device for storing program instructions; and
  • One or more processors call program instructions stored in the storage device, and when the program instructions are executed, the one or more processors are individually or collectively configured to implement the following operations:
  • obtaining the zoom factor of the photographing device when shooting an image to be processed, and acquiring the position information of pixels in the image to be processed; determining the chromatic aberration offset of the pixel according to the zoom factor and the position information; and performing chromatic dispersion correction on the pixel according to the chromatic aberration offset.
  • The zoom factor of the camera when shooting the image to be processed is used, together with the pixel's position, to determine the chromatic aberration offset of the pixel, and chromatic dispersion correction is then performed on the pixel. This realizes dispersion correction over multiple focal segments and can effectively correct dispersion for a multifocal imaging device.
  • FIG. 1 is a schematic diagram of a method flow of an image processing method in an embodiment of the present application
  • FIG. 2 is a schematic diagram of an implementation process of determining the chromatic aberration offset of a pixel according to the zoom factor and position information in an embodiment of the present application;
  • Fig. 3 is a schematic diagram of a dot matrix (bitmap) image in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a calibration point in a bitmap image in an embodiment of the present application under different color channels;
  • Fig. 5 is a schematic diagram of a preset coordinate system in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a method flow of an image processing method in another embodiment of the present application.
  • FIG. 7 is a schematic diagram of a method flow of an image processing method in another embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an image processing device in another embodiment of the present application.
  • an embodiment of the present application provides an image processing method.
  • The method includes: obtaining the zoom factor of the photographing device when shooting an image to be processed, and acquiring the position information of pixels in the image to be processed; determining the chromatic aberration offset of the pixel according to the zoom factor and the position information; and performing dispersion correction on the pixel according to the chromatic aberration offset.
  • Determining the chromatic aberration offset of the pixel from the zoom factor and the position information, and then correcting the pixel's chromatic dispersion, realizes dispersion correction over multiple focal segments and can effectively correct the chromatic dispersion of a multifocal imaging device.
  • An embodiment of the present application also provides an image processing method, the method including: obtaining the position information of the pixel in the image to be processed; determining the chromatic aberration offset of the pixel by looking up a chromatic aberration compensation table according to the position information, where the compensation table records the chromatic aberration offsets of different pixel positions; and performing dispersion correction on the pixel according to the chromatic aberration offset.
  • The chromatic aberration offsets of different pixel positions are stored in the compensation table instead of the actual coordinates of those positions. Because an offset has a much smaller value than an actual coordinate, this is convenient for data merging and thereby simplifies the dispersion correction process.
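As an illustrative sketch of such a compensation-table lookup (the grid layout, the nearest-entry sampling and all names here are assumptions for illustration, not part of the disclosure):

```python
def chromatic_offset(table, pos, interval=100):
    """Look up the chromatic aberration offset recorded for the table
    entry nearest to pixel position `pos` (coordinates in pixels),
    assuming the table samples positions every `interval` pixels."""
    col = min(len(table[0]) - 1, round(pos[0] / interval))
    row = min(len(table) - 1, round(pos[1] / interval))
    return table[row][col]
```

Storing only the small (dx, dy) offsets per entry, rather than absolute coordinates, is what keeps the table compact.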
  • An embodiment of the present application also provides an image processing method, the method including: obtaining the position information of the pixel in the image to be processed; determining the chromatic aberration offset of the pixel by looking up a chromatic aberration compensation table, where the compensation table records the chromatic aberration offsets of different pixel positions, the pixel positions in the table are distributed over a preset area of a preset coordinate system, and the origin of the preset coordinate system coincides with the optical center of the camera; and performing dispersion correction on the pixel according to the chromatic aberration offset.
  • the embodiment of the present application saves storage space by storing the color difference offsets of different pixel positions of the preset area in the preset coordinate system in the color difference compensation table.
  • The execution subject of the image processing method of the embodiment of the present application may be any device with a data processing function, such as a photographing device, or another device with data processing capability, such as a computer or a mobile phone.
  • The image processing method of the embodiment of the present application can be executed online, that is, after an image is captured by the photographing device, the image is processed in real time by the photographing device or by a device communicating with it; it should be understood that the image processing method of the embodiment of the application can also be executed offline.
  • FIG. 1 is a schematic diagram of a method flow of an image processing method in an embodiment of the present application; please refer to FIG. 1, the image processing method of an embodiment of the present application may include steps S101 to S103.
  • the zoom factor of the photographing device when photographing the image to be processed is acquired, and the position information of the pixel in the image to be processed is acquired.
  • the photographing device in the embodiment of the present application is a photographing device with multiple focal lengths, that is, a zooming photographing device, and the photographing device has multiple zoom multiples.
  • the image to be processed can be the original RAW image, and the RAW image can retain more information; of course, the image to be processed can also be other types of images, such as RGB images.
  • the zoom factor can be obtained in different ways.
  • In some embodiments, the zoom factor is an external input, for example entered by a user or sent by the photographing device. In some embodiments, where the image processing method is executed online, the current zoom factor can be read directly from the photographing device. In some embodiments, the zoom factor is determined from the image information of the image to be processed; for example, when the photographing device captures the image to be processed, the zoom factor is stored in the image itself, or stored in correspondence with the image and packaged together with it.
  • the implementation process of obtaining the position information of the pixels in the image to be processed may include but is not limited to the following steps:
  • the method for acquiring the pixel coordinates can use an existing recognition algorithm, which is not specifically limited in the embodiment of the present application.
  • Ideally, the optical center of the camera coincides with the center of the lens, but due to processing errors, installation errors and the like, the actual optical center is offset from the lens center; this offset is called the optical center coordinate offset.
  • Correspondingly, the optical center is offset from the center of the image to be processed by the optical center coordinate offset.
  • In some embodiments, the optical center of the photographing device refers to the calibrated optical center coordinates, that is, the image coordinates of the point on the image sensor where the light beam converges after being focused by the lens.
  • the optical center coordinate offset can also be obtained in different ways.
  • the optical center coordinate offset is input from the outside.
  • the optical center coordinate offset is a fixed value.
  • For example, the imaging device may pre-store the optical center coordinate offset, and the executing device of the image processing method may obtain the offset from the imaging device, for example by reading it; alternatively, the user may input the optical center coordinate offset directly to the executing device of the image processing method.
  • In other embodiments, the optical center coordinate offset is determined according to an externally input optical center coordinate, where that coordinate is the actual optical center coordinate of the lens obtained by measurement; the optical center coordinate offset is then the difference between the externally input optical center coordinate and the ideal optical center coordinate (a known quantity).
  • the execution device of the image processing method obtains the pre-stored optical center coordinates from the photographing device; for example, the user inputs the optical center coordinates to the execution device of the image processing method.
  • The position information of a pixel is its position in a coordinate system established with the optical center of the camera as the origin, whereas the pixel coordinate of the pixel is its position in a coordinate system established with the center of the image to be processed as the origin. Because of the optical center coordinate offset, the position information of the pixel is the sum of its pixel coordinate and the optical center coordinate offset. Therefore, after the center of the image to be processed is obtained, the coordinates of the center are compensated by the optical center coordinate offset to determine the position in the image corresponding to the optical center of the camera.
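The compensation described above, where a pixel's position information is the sum of its pixel coordinate and the optical center coordinate offset, can be sketched as follows (function and parameter names are illustrative assumptions):

```python
def pixel_position(px, py, oc_dx, oc_dy):
    """Position of a pixel relative to the optical center of the camera:
    the pixel coordinate (px, py), taken relative to the image center,
    plus the optical center coordinate offset (oc_dx, oc_dy)."""
    return px + oc_dx, py + oc_dy
```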
  • the position information of the pixel point is the pixel coordinate of the pixel point.
  • the chromatic aberration shift amount of the pixel point is determined according to the zoom factor and the position information.
  • Figure 2 is a schematic diagram of an implementation process of determining the chromatic aberration offset of a pixel according to the zoom factor and position information in an embodiment of the present application. As shown in Figure 2, the implementation process may include the following steps.
  • the chromatic aberration compensation table is used to record the chromatic aberration offsets of different pixel positions under the corresponding zoom multiples;
  • By storing the chromatic aberration offsets of different pixel positions in the compensation table instead of the actual coordinates of those positions, the stored values are smaller than actual coordinates, which is convenient for data merging and thereby simplifies the dispersion correction process.
  • the actual coordinates described in the embodiments of the present application refer to coordinates in a coordinate system established with the optical center of the photographing device as the origin.
  • the zoom factor may be one of multiple preset zoom multiples, or it may be different from the multiple preset zoom multiples.
  • If the zoom factor equals one of the preset zoom multiples, the chromatic aberration compensation table of the zoom factor is the compensation table of that preset zoom multiple.
  • If the zoom factor differs from all of the preset zoom multiples, the preset zoom multiples include a first preset zoom multiple and a second preset zoom multiple, with the first preset zoom multiple less than the second. The chromatic aberration compensation table of the zoom factor is then determined from the compensation tables of the first and second preset zoom multiples, for example by linear interpolation or another interpolation method.
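A minimal sketch of determining the compensation table for an intermediate zoom factor by entry-wise linear interpolation between the tables of the two bracketing preset zoom multiples (the names and the entry-wise scheme are assumptions):

```python
def interp_table(table_lo, table_hi, z_lo, z_hi, z):
    """Linearly interpolate, entry by entry, between the compensation
    tables of a first preset zoom multiple z_lo and a second preset
    zoom multiple z_hi (z_lo < z < z_hi) for zoom factor z."""
    t = (z - z_lo) / (z_hi - z_lo)
    return [[(1 - t) * lo + t * hi for lo, hi in zip(row_lo, row_hi)]
            for row_lo, row_hi in zip(table_lo, table_hi)]
```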
  • In this embodiment, compensation tables for multiple preset zoom multiples are pre-calibrated to improve the accuracy of dispersion correction. Too many preset zoom multiples increase the calibration workload, while too few cannot cover all focal lengths of the camera; the number of preset zoom multiples should therefore be just sufficient to cover the camera's full focal range. For example, the number of preset zoom multiples is greater than or equal to 5 and less than or equal to 10.
  • The number of preset zoom multiples can be 5, 6, 7, 8, 9 or 10, so that the compensation tables of 5 to 10 preset zoom multiples extend dispersion correction over the full focal range. It should be understood that the number of preset zoom multiples can also be set to other values.
  • Calibration determines, under each preset zoom multiple, the chromatic aberration offset of each calibrated pixel position (referred to herein as a calibration position) in an image whose coordinate origin is the optical center of the camera. In some embodiments, the pixel positions in the compensation table of a preset zoom multiple correspond one-to-one with the calibration positions, that is, the number of pixel positions in the table is the same as the number of calibration positions.
  • In other embodiments, the chromatic aberration offsets of the pixel positions in the compensation table of a preset zoom multiple are obtained by down-sampling the offsets of the calibration positions under that zoom multiple, which reduces the amount of data in the table. To keep the table small, the down-sampling interval should not be too small; to ensure the dispersion correction effect, it should not be too large. In some embodiments, the sampling interval is 100 pixels, which reduces the data volume of the compensation table while preserving the dispersion correction effect.
  • For example, if the size of the calibration image is 4000*3000 and the sampling interval is 100 pixels, the size of the chromatic aberration compensation table is 40 (columns) * 30 (rows).
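The table size in the example follows directly from the image size and the sampling interval, e.g.:

```python
def table_shape(img_w, img_h, interval=100):
    """Number of columns and rows of the compensation table obtained by
    down-sampling an img_w x img_h calibration image every `interval`
    pixels; a 4000*3000 image yields a 40 (columns) * 30 (rows) table."""
    return img_w // interval, img_h // interval
```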
  • The chromatic aberration offsets of the calibration positions can be obtained using a dot matrix (bitmap) image containing multiple regularly arranged dots, as shown in Figure 3; the dots in the bitmap image are referred to herein as calibration points. The calibration image is not limited to a bitmap, and images with other patterns can also be used.
  • When the calibration image is a bitmap image, the calibration positions include the positions of at least most of the calibration points in the bitmap image under a preset zoom multiple, and the bitmap image under a preset zoom multiple is obtained by shooting a dot matrix pattern of corresponding size. For example, dot matrix patterns of different sizes are designed and photographed with the camera, so that under each zoom multiple a clear bitmap image occupying the entire field of view is obtained, providing enough calibration points.
  • The calibration positions may include at least part of the pixel positions in the bitmap image, where the pixel positions of the bitmap image include the calibration points; of course, besides the calibration points, the pixel positions may also include other locations.
  • the calibration position includes at least a part of the calibration point in the bitmap image.
  • the color difference offset of the calibration position includes the color difference offset of the calibration point, and the color difference offset of the calibration point is determined according to the color difference offset of the calibration point in different color channels.
  • the color channel includes three color channels of red R, green G, and blue B
  • the chromatic aberration offset of the calibration point is determined according to its chromatic aberration offsets in the R, G and B color channels.
  • The chromatic aberration offset of the calibration point in different color channels includes, for each color channel, an offset in a first direction and an offset in a second direction.
  • the first direction and the second direction are orthogonal.
  • The first direction is one of the length direction and the width direction of the bitmap image, and the second direction is the other; of course, the first direction and the second direction can also be set to other directions as required.
  • In some embodiments, the chromatic aberration offset in the first direction includes the first-direction component of the distance that the calibration point, under the given color channel, moves along the radial direction of the optical center of the camera (herein the radial movement distance), and the offset in the second direction includes the second-direction component of that radial movement, so as to correct the chromatic aberration of the calibration point along the radial direction of the optical center caused by dispersion.
  • Moving the calibration point, under each color channel, radially by the corresponding radial movement distance places the centers of the calibration point under the different color channels on the same circle centered at the optical center of the camera.
  • Figure 4 zooms in on one calibration point in the bitmap image: the calibration point appears as a different circle under each of the R, G and B color channels. Circle 1 is the circle of the calibration point under the G channel, circle 2 the circle under the R channel, and circle 3 the circle under the B channel; 11, 21 and 31 are the centers of circles 1, 2 and 3 respectively. The center of the calibration point under a color channel is the center of the corresponding circle.
  • The embodiment of the application takes the circle of the calibration point under the G channel as the reference. Moving the calibration point radially by its radial movement distance under the R channel, and likewise under the B channel, brings center 21 of circle 2 (R channel) and center 31 of circle 3 (B channel) onto the same circle as center 11 of circle 1 (G channel); this circle is centered at the optical center of the camera, with radius equal to the distance from the optical center to center 11.
  • After this radial movement, the centers of circles 1, 2 and 3 coincide in the radial direction. Because no movement is made in the normal direction, the centers may not coincide in the normal direction (the centers coincide completely only when they coincide in both the radial and the normal directions); nevertheless, making the centers coincide in the radial direction already eliminates most of the dispersion of the calibration point and satisfies the dispersion correction requirement.
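The radial-coincidence condition described above (the R- and B-channel circle centers lying on the circle through the G-channel center, centered at the optical center) can be checked numerically, for example (names and tolerance are assumptions):

```python
import math

def on_same_circle(center_g, center_c, optic, tol=1e-6):
    """True if the circle center of a calibration point under another
    color channel (center_c) lies on the circle centered at the optical
    center `optic` that passes through the G-channel center `center_g`."""
    return abs(math.dist(optic, center_g) - math.dist(optic, center_c)) <= tol
```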
  • The G color channel is used as the reference because the wavelength of the light received by the G channel lies between those of the R and B channels, so the calibration point can be regarded as having no chromatic aberration in the G channel; the result of chromatic aberration calibration referenced to the G channel is therefore more accurate.
  • In other embodiments, the chromatic aberration offset in the first direction includes the first-direction component of the distance that the calibration point, under the given color channel, moves along the normal direction of the optical center of the camera (herein the normal movement distance), and the offset in the second direction includes the second-direction component of that normal movement.
  • Moving the calibration point, under each color channel, along the normal direction by the corresponding normal movement distance places the centers of the calibration point under the different color channels on the same radial line from the optical center of the imaging device.
  • For example, the calibration point is moved along the normal direction by its normal movement distance under the R channel, and likewise under the B channel, so that center 21 of circle 2 (R channel), center 31 of circle 3 (B channel) and center 11 of circle 1 (G channel) lie on the same radial line, namely the line from the optical center of the camera through center 11.
  • The chromatic aberration offsets in the first direction and the second direction are thus used to correct the chromatic aberration of the calibration point caused by dispersion along the normal direction of the optical center of the imaging device.
  • In some embodiments, the chromatic aberration offset of the calibration point includes the distance between center 21 (the circle of the calibration point under the R channel) and center 11 (the circle under the G channel), and the distance between center 31 (the circle under the B channel) and center 11. Based on these distances, the circle of the calibration point under each of the other color channels is moved to coincide with the reference circle of the calibration point under the G channel, which realizes the dispersion correction.
  • In some embodiments, the first component is determined from the difference, in the first direction, between the pixel coordinates of the calibration point under the corresponding color channel and the optical center coordinates of the camera, together with the distance from the calibration point to the optical center under that channel; the second component is determined from the difference, in the second direction, between the same pixel coordinates and optical center coordinates, together with the same distance.
  • the first component is determined according to the first fitting function
  • the second component is determined according to the second fitting function.
  • The first fitting function takes as independent variables the difference, in the first direction, between the pixel coordinates of the calibration point under the color channel and the optical center coordinates of the camera, together with the distance from the calibration point to the optical center under that channel, and takes the first component as the dependent variable.
  • The second fitting function takes as independent variables the difference, in the second direction, between the pixel coordinates of the calibration point under the color channel and the optical center coordinates of the camera, together with the distance from the calibration point to the optical center under that channel, and takes the second component as the dependent variable. It should be understood that the calculation of the first and second components is not limited to the form of functions; other methods such as table lookup or model prediction can also be used.
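For the polynomial form given in formula (1), the coefficients of the first fitting function can be estimated from calibrated samples by least squares; the sketch below (numpy-based, with assumed names) is one possible solver, not the method fixed by the disclosure:

```python
import numpy as np

def fit_radial_coeffs(r, dx, dxr):
    """Least-squares estimate of k0, k1, k2 in
    dxr = (k0*r + k1*r^2 + k2*r^4) * dx
    from calibrated samples of r (distance to the optical center),
    dx (first-direction difference) and dxr (first component)."""
    r, dx, dxr = map(np.asarray, (r, dx, dxr))
    A = np.stack([r * dx, r**2 * dx, r**4 * dx], axis=1)
    k, *_ = np.linalg.lstsq(A, dxr, rcond=None)
    return k
```

The second fitting function can be fitted the same way with the second-direction difference in place of the first.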
  • The pixel coordinates of the calibration point under the corresponding color channel are coordinates in the coordinate system established with the optical center of the camera as the coordinate origin.
  • The chromatic aberration offsets obtained for the calibration points are discrete, so it is necessary to fit them to obtain the corresponding fitting functions, which makes it convenient to obtain calibration data over the full image.
  • the chromatic aberration offsets of all pixel positions in the bitmap image can be calculated according to the fitting function, and full-scale simulation can effectively remove chromatic dispersion.
  • the first fitting function may be a polynomial function.
  • In some embodiments, the first fitting function is a fifth-order polynomial function of the distance from the calibration point to the optical center of the camera under the corresponding color channel; of course, the first fitting function may also be a polynomial of another degree in that distance. It should be understood that the first fitting function may also be another type of function, and is not limited to polynomial form.
  • the second fitting function may be a polynomial function.
  • In some embodiments, the second fitting function is a fifth-order polynomial function of the distance from the calibration point to the optical center of the camera under the corresponding color channel; of course, the second fitting function may also be a polynomial of another degree in that distance. It should be understood that the second fitting function may also be another type of function, and is not limited to polynomial form.
  • the optical center coordinates are (x0, y0)
  • the pixel coordinates of the calibration point in the R color channel are (x, y)
  • the difference Δx between the pixel coordinates of the calibration point in the R color channel and the optical center coordinates in the first direction is |x − x0|
  • the difference Δy between those pixel coordinates and the optical center coordinates in the second direction is |y − y0|
  • the distance r from the calibration point to the optical center of the camera under the R color channel is sqrt(Δx^2 + Δy^2).
  • the calculation formula of the first fitting function can be:
  • dxr = (k0*r + k1*r^2 + k2*r^4) * Δx   (1);
  • dxr is the first component
  • k0, k1, and k2 are the coefficients of the corresponding terms.
  • the calculation formula of the second fitting function can be:
  • dyr = (m0*r + m1*r^2 + m2*r^4) * Δy   (2);
  • dyr is the second component
  • m0, m1, and m2 are the coefficients of the corresponding terms.
  • the calculation formula of the third component may be:
  • dxt = p0*(2Δx*Δy) + p1*(r^2 + 2Δx^2)   (3);
  • dxt is the third component
  • p0 and p1 are the coefficients of the corresponding terms.
  • the calculation formula of the fourth component can be:
  • dyt = q1*(2Δx*Δy) + q0*(r^2 + 2Δx^2)   (4);
  • dyt is the fourth component
  • q0 and q1 are the coefficients of the corresponding terms.
  • optionally, for the R color channel, the chromatic aberration offset in the first direction is dxr and in the second direction is dyr, to correct the radial chromatic aberration offset of the calibration point along the optical center of the camera caused by dispersion.
  • optionally, for the R color channel, the chromatic aberration offset in the first direction is dxt and in the second direction is dyt, to correct the normal-direction chromatic aberration offset of the calibration point along the optical center of the camera caused by dispersion.
  • optionally, for the R color channel, the chromatic aberration offset in the first direction is (dxr + dxt) and in the second direction is (dyr + dyt), to correct both the radial and the normal-direction chromatic aberration offsets of the calibration point along the optical center of the camera caused by dispersion.
  • the calibration process of the fitting functions for the pixel positions of the dot-chart image in the B color channel is similar to that in the R color channel, and is not repeated here.
  • a realization process of determining the chromatic aberration offset of a pixel according to the chromatic aberration compensation table based on the position information and the zoom factor may include, but is not limited to, the following steps:
  • the table-lookup position is determined from the quotient of the position information divided by the downsampling interval.
  • for example, if the sampling interval is 100 pixels, the lookup position for position information (x1, y1) is (x1/100, y1/100).
  • for instance, if the size of the image to be processed is 4000*3000, the pixel with position information (400, 300) corresponds to lookup position (4, 3). If the computed lookup position has a fractional part, it can be rounded to the nearest integer to determine the final lookup position.
  • if the chromatic aberration compensation table of the zoom factor contains a pixel position corresponding to the lookup position, the chromatic aberration offset recorded at that pixel position is taken as the chromatic aberration offset of the pixel.
  • if it does not, the chromatic aberration offset of the pixel is determined from the chromatic aberration offsets of the pixel positions adjacent to the lookup position in the table.
  • optionally, the chromatic aberration offset of the pixel is obtained by interpolating the chromatic aberration offsets of the pixel positions adjacent to the lookup position in the chromatic aberration compensation table of the zoom factor.
  • the adjacent pixel positions may be positions adjacent to the lookup position in row and/or column, selected as required.
  • the interpolation method may be linear interpolation or another interpolation method.
  • the chromatic aberration compensation table in the embodiments of this application may be a grid (mesh) table.
  • the reason for using a mesh table is that the interpolation process can be converted into addition and multiplication operations, which compute quickly and are easy to port to modules such as an FPGA; of course, the chromatic aberration compensation table may also take other forms.
  • the pixel positions in the chromatic aberration compensation table may be distributed over only part of the preset coordinate system; of course, they may also be distributed over all areas of the preset coordinate system.
  • the origin of the preset coordinate system coincides with the optical center of the camera at the corresponding zoom factor.
  • optionally, the pixel positions in the chromatic aberration compensation table are distributed in a preset area of the preset coordinate system, e.g. on one side of the first axis of the preset coordinate system; on the premise of ensuring accuracy, this reduces the storage space to half of that needed for the entire region.
  • further optionally, the pixel positions are also distributed on one side of the second axis of the preset coordinate system, i.e. on one side of the first axis and on one side of the second axis; on the premise of ensuring accuracy, this reduces the storage space to 1/4 of that needed for the entire region.
  • the chromatic aberration compensation table of this embodiment may be called a bidirectional mirror table.
  • the first axis and the second axis are orthogonal.
  • optionally, the first axis is parallel to the length direction of the corresponding image and the second axis is parallel to its width direction; optionally, the first axis is parallel to the width direction of the corresponding image and the second axis is parallel to its length direction.
  • the implementation of determining the chromatic aberration offset of the pixel according to the position information and the chromatic aberration compensation table of the zoom factor may include, but is not limited to, the following steps:
  • if the pixel lies on the other side of the first axis or of the second axis, a target pixel position is determined in the compensation table according to the pixel's position information, and the chromatic aberration offset of that target pixel position is obtained;
  • the target pixel position is symmetric to the pixel.
  • for example, for coordinate system XOY, the X axis is parallel to the width direction of image 100, the Y axis is parallel to the length direction of image 100, and point O is the optical center of the camera.
  • the coordinate system XOY divides image 100 into areas I, II, III, and IV; suppose the pixel positions in the chromatic aberration compensation table are distributed in area I.
  • for example, the compensation table records the chromatic aberration offset of pixel position 10; the pixel corresponds to pixel position 20, whose offset is not recorded; pixel position 10 and pixel position 20 are symmetric about the Y axis, so for this pixel the target pixel position is pixel position 10.
  • this embodiment determines the chromatic aberration offset of the pixel according to the coordinate relationship of the two-dimensional coordinate system. For example, if the offset of pixel position 10 is (dx, dy), the offset of the pixel corresponding to pixel position 20 in the image to be processed is (-dx, dy).
  • chromatic dispersion correction is performed on the pixel according to the chromatic aberration offset.
  • the dispersion in the embodiments of this application may include lateral dispersion; of course, the dispersion may also include longitudinal dispersion.
  • in the embodiments of this application, the dispersion is lateral dispersion, so that the lateral dispersion of the image to be processed is corrected.
  • one implementation of dispersion correction superimposes the chromatic aberration offset of the pixel onto its position information. For example, if the offset of the pixel is (dx1, dy1) and its position information is (x2, y2), then after correction the color information of the R color channel at (x2, y2) is replaced by the R-channel color information corresponding to (x2+dx1, y2+dy1), and the B-channel color information at (x2, y2) is replaced by that corresponding to (x2+dx1, y2+dy1); since the pixel can be considered to have no dispersion in the G color channel, its G-channel color information does not need to be replaced. It should be understood that after dispersion correction the position of the pixel does not change; what changes is the color information of its R and B channels.
  • an embodiment of this application further provides an image processing method; referring to FIG. 6, the method includes:
  • S601: acquire the position information of a pixel in the image to be processed;
  • S602: determine the chromatic aberration offset of the pixel in a chromatic aberration compensation table according to the position information, the table being used to record the chromatic aberration offsets of different pixel positions;
  • S603: perform dispersion correction on the pixel according to the chromatic aberration offset.
  • the compensation table stores the chromatic aberration offsets of different pixel positions rather than their actual coordinates; compared with actual coordinates, the offsets have smaller values, which facilitates data merging and thus simplifies the dispersion correction process.
  • an embodiment of this application further provides an image processing method; referring to FIG. 7, the method includes:
  • S701: acquire the position information of a pixel in the image to be processed;
  • S702: determine the chromatic aberration offset of the pixel in a chromatic aberration compensation table according to the position information; the table records the chromatic aberration offsets of different pixel positions, the pixel positions in the table are distributed in a preset area of a preset coordinate system, and the origin of the preset coordinate system coincides with the optical center of the camera;
  • S703: perform dispersion correction on the pixel according to the chromatic aberration offset.
  • this embodiment saves storage space by storing in the compensation table only the chromatic aberration offsets of pixel positions within the preset area of the preset coordinate system.
  • the image processing device includes: a storage device and one or more processors.
  • the storage device is used to store program instructions.
  • the storage device stores a computer program of executable instructions for the image processing method, and may include at least one type of storage medium.
  • the storage medium includes flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical discs, and the like.
  • the image processing device may also cooperate, over a network connection, with a network storage device that performs the storage function of the memory.
  • the memory may be an internal storage unit of the image processing device, such as its hard disk or internal memory.
  • the memory may also be an external storage device of the image processing device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the device.
  • the memory may also include both an internal storage unit of the image processing device and an external storage device.
  • the memory is used to store the computer program and other programs and data required by the device.
  • the memory may also be used to temporarily store data that has been or will be output.
  • the one or more processors call the program instructions stored in the storage device and, when the program instructions are executed, are individually or collectively configured to perform the following operations: obtain the zoom factor of the camera when shooting the image to be processed, and obtain the position information of a pixel in the image to be processed; determine the chromatic aberration offset of the pixel according to the zoom factor and position information; and perform dispersion correction on the pixel according to the chromatic aberration offset.
  • the processor of this embodiment can implement the image processing method of the embodiment shown in FIGS. 1 and 2 of the present application, and the image processing apparatus of this embodiment can be described with reference to the image processing method of the foregoing embodiment.
  • the one or more processors call the program instructions stored in the storage device and, when the program instructions are executed, are individually or collectively configured to perform the following operations: obtain the position information of a pixel in the image to be processed; determine the chromatic aberration offset of the pixel in a chromatic aberration compensation table according to the position information, the table recording the chromatic aberration offsets of different pixel positions; and perform dispersion correction on the pixel according to the chromatic aberration offset.
  • the processor of this embodiment can implement the image processing method of the embodiment shown in FIG. 6 of the present application, and the image processing apparatus of this embodiment can be described with reference to the image processing method of the foregoing embodiment.
  • the one or more processors call the program instructions stored in the storage device and, when the program instructions are executed, are individually or collectively configured to perform the following operations: obtain the position information of a pixel in the image to be processed; determine the chromatic aberration offset of the pixel in a chromatic aberration compensation table according to the position information, the table recording the chromatic aberration offsets of different pixel positions, the pixel positions in the table being distributed in a preset area of a preset coordinate system whose origin coincides with the optical center of the shooting device; and perform dispersion correction on the pixel according to the chromatic aberration offset.
  • the processor of this embodiment can implement the image processing method of the embodiment shown in FIG. 7 of the present application, and the image processing apparatus of this embodiment can be described with reference to the image processing method of the foregoing embodiment.
  • the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the image processing device in the embodiment of the present application may be a photographing device, or other devices with data processing functions, such as a computer, a mobile phone, and so on.
  • an embodiment of the present application also provides a movable platform, which includes a body, a power system, and the image processing device described in any one of the above.
  • the power system is connected with the body to provide power for the movement of the body, and the image processing device is supported by the body.
  • the image processing device may be a camera provided on the fuselage, or a photographing device mounted on the fuselage through a gimbal, used to perform dispersion correction on captured images.
  • the movable platform can not only support dispersion correction at multiple focal lengths, suiting different shooting scenarios, but also save computing resources, which helps reduce power consumption and extend the movable platform's operating time; it can further save storage space, leaving more room for captured images or other data.
  • the movable platform may be a drone.
  • an embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps of the image processing method of the foregoing embodiment are implemented.
  • the computer-readable storage medium may be an internal storage unit of the image processing apparatus described in any of the foregoing embodiments, such as a hard disk or a memory.
  • the computer-readable storage medium may also be an external storage device of the image processing apparatus, such as a plug-in hard disk, a Smart Media Card (SMC), an SD card, or a flash card equipped on the device.
  • the computer-readable storage medium may also include both an internal storage unit of the image processing apparatus and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the image processing apparatus, and can also be used to temporarily store data that has been output or will be output.
  • the program can be stored in a computer-readable storage medium and, when executed, may include the procedures of the above method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method, device and movable platform. The method includes: acquiring the zoom factor of a photographing device when capturing an image to be processed, and acquiring position information of a pixel in the image to be processed; determining the chromatic aberration offset of the pixel according to the zoom factor and the position information; and performing dispersion correction on the pixel according to the chromatic aberration offset. When performing dispersion correction, this application considers not only the differing dispersion of pixels at different positions, but also the zoom factor of the photographing device when capturing the image to be processed: the chromatic aberration offset of a pixel is determined from the zoom factor and the pixel's position information, and the pixel is then dispersion-corrected. Dispersion correction over multiple focal lengths is thus achieved, enabling effective dispersion correction for multi-focal-length photographing devices.

Description

Image processing method, device and movable platform — Technical field

This application relates to the field of image processing, and in particular to an image processing method, device and movable platform.

Background

Owing to various design considerations, the imaging result of a photographing device exhibits obvious dispersion, which degrades the imaging effect. During light propagation, light of different wavelengths has different refractive indices, so the convergence points on the imaging plane do not coincide: the mismatch along the optical axis is called axial dispersion, and that along the focal plane is called lateral dispersion. In general, the farther from the imaging center, the more severe the dispersion. Images captured by the photographing device therefore require dispersion correction to reduce the influence of dispersion on the imaging effect.
Summary of the invention

This application provides an image processing method, device and movable platform.

In a first aspect, an embodiment of this application provides an image processing method, the method including:

acquiring the zoom factor of a photographing device when capturing an image to be processed, and acquiring position information of a pixel in the image to be processed;

determining the chromatic aberration offset of the pixel according to the zoom factor and the position information;

performing dispersion correction on the pixel according to the chromatic aberration offset.

In a second aspect, an embodiment of this application provides an image processing device, the device including:

a storage device for storing program instructions; and

one or more processors that call the program instructions stored in the storage device and, when the program instructions are executed, are individually or collectively configured to perform the following operations:

acquiring the zoom factor of a photographing device when capturing an image to be processed, and acquiring position information of a pixel in the image to be processed;

determining the chromatic aberration offset of the pixel according to the zoom factor and the position information;

performing dispersion correction on the pixel according to the chromatic aberration offset.
In a third aspect, an embodiment of this application provides a movable platform, including:

a body;

a power system connected with the body to provide power for the movement of the body; and

an image processing device supported by the body;

wherein the image processing device includes: a storage device for storing program instructions; and

one or more processors that call the program instructions stored in the storage device and, when the program instructions are executed, are individually or collectively configured to perform the following operations:

acquiring the zoom factor of a photographing device when capturing an image to be processed, and acquiring position information of a pixel in the image to be processed;

determining the chromatic aberration offset of the pixel according to the zoom factor and the position information;

performing dispersion correction on the pixel according to the chromatic aberration offset.

According to the technical solutions provided by the embodiments of this application, dispersion correction takes into account not only the differing dispersion of pixels at different positions, but also the zoom factor of the photographing device when capturing the image to be processed. The chromatic aberration offset of a pixel is determined from the zoom factor and the pixel's position information, and the pixel is then dispersion-corrected, achieving dispersion correction over multiple focal lengths and enabling effective dispersion correction for multi-focal-length photographing devices.
Brief description of the drawings

To describe the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic flowchart of an image processing method in an embodiment of this application;

FIG. 2 is a schematic diagram of an implementation of determining the chromatic aberration offset of a pixel according to the zoom factor and position information in an embodiment of this application;

FIG. 3 is a schematic diagram of a dot chart in an embodiment of this application;

FIG. 4 is a schematic diagram of one calibration point of a dot-chart image under different color channels in an embodiment of this application;

FIG. 5 is a schematic diagram of a preset coordinate system in an embodiment of this application;

FIG. 6 is a schematic flowchart of an image processing method in another embodiment of this application;

FIG. 7 is a schematic flowchart of an image processing method in another embodiment of this application;

FIG. 8 is a schematic structural diagram of an image processing device in another embodiment of this application.

Detailed description
具体实施方式
Since dispersion affects the imaging effect, images captured by a photographing device need dispersion correction to reduce the influence of dispersion. At present, dispersion correction mostly targets fixed-focus photographing devices; dispersion correction for multi-focal-length devices is rare. For this, an embodiment of this application provides an image processing method including: acquiring the zoom factor of the photographing device when capturing an image to be processed, and acquiring position information of a pixel in the image to be processed; determining the chromatic aberration offset of the pixel according to the zoom factor and the position information; and performing dispersion correction on the pixel according to the chromatic aberration offset. When performing dispersion correction, the embodiments of this application consider not only the differing dispersion of pixels at different positions but also the zoom factor of the photographing device at capture time: the chromatic aberration offset of a pixel is determined from the zoom factor and the pixel's position information, and the pixel is then dispersion-corrected, achieving dispersion correction over multiple focal lengths and enabling effective correction for multi-focal-length photographing devices.
At present, for a fixed-focus photographing device, dispersion correction traverses the pixels of the image with a correction formula and computes each pixel's actual coordinates. Such formulas usually involve division, square roots, and other complex operations, consuming substantial computing resources, and the actual coordinate values are large. For this, an embodiment of this application further provides an image processing method including: acquiring position information of a pixel in the image to be processed; determining the chromatic aberration offset of the pixel in a chromatic aberration compensation table according to the position information, the table recording the chromatic aberration offsets of different pixel positions; and performing dispersion correction on the pixel according to the offset. By storing chromatic aberration offsets rather than actual coordinates in the compensation table, the values are smaller, which facilitates data merging and thus simplifies the dispersion correction process.
In addition, recording in a compensation table the chromatic aberration offsets of every pixel position in each image region would consume substantial storage. For this, an embodiment of this application further provides an image processing method including: acquiring position information of a pixel in the image to be processed; determining the chromatic aberration offset of the pixel in a chromatic aberration compensation table according to the position information, the table recording the chromatic aberration offsets of different pixel positions, the pixel positions in the table being distributed in a preset area of a preset coordinate system whose origin coincides with the optical center of the photographing device; and performing dispersion correction on the pixel according to the offset. Storing only the offsets of the pixel positions within the preset area saves storage space.
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.

It should be noted that, where no conflict arises, the features of the following embodiments and implementations may be combined with one another.

The image processing method of the embodiments of this application may be executed by any device with data processing capability, such as a photographing device or another device with data processing capability, e.g. a computer or a mobile phone. The method may be executed online, i.e. after the photographing device captures an image, the image is processed in real time by the photographing device or a device communicating with it; it should be understood that the method may also be executed offline.
FIG. 1 is a schematic flowchart of the image processing method in an embodiment of this application. Referring to FIG. 1, the method may include steps S101 to S103.

In S101, the zoom factor of the photographing device when capturing the image to be processed is acquired, and position information of a pixel in the image to be processed is acquired.

The photographing device of the embodiments of this application is a multi-focal-length, i.e. zoom, photographing device with multiple zoom factors.

The image to be processed may be a raw RAW image, which retains more information; of course, it may also be another type of image, such as an RGB image.

The zoom factor may be obtained in different ways. For example, in some embodiments it is an external input, e.g. entered by a user or sent by the photographing device; in some embodiments, when the method is executed online, the current zoom factor can be read directly from the photographing device; in some embodiments, the zoom factor is determined from the image information of the image to be processed, e.g. the photographing device stores the zoom factor in the image at capture time, or stores the zoom factor in correspondence with the image and packages them together.
The implementation of acquiring the position information of a pixel in the image to be processed may include, but is not limited to, the following steps:

(1) acquiring the pixel coordinates of the pixel in the image to be processed;

The pixel coordinates may be obtained with existing recognition algorithms, which the embodiments of this application do not specifically limit.

(2) acquiring the optical-center coordinate offset of the photographing device when capturing the image to be processed;

Ideally, the optical center of the photographing device is the center of the lens; however, owing to machining errors, installation errors and the like, the optical center is shifted relative to the lens center, and this shift is called the optical-center coordinate offset. The center offset of the image to be processed corresponds to the optical-center coordinate offset. The optical center of the photographing device is the calibrated optical-center coordinate, i.e. the image coordinate corresponding to the position, on the image sensor, where the focused center point of the light beam lands after passing through the lens.

The optical-center coordinate offset may also be obtained in different ways. For example, in some embodiments it is input externally. For a photographing device whose lens is already installed, the offset is a fixed value. For example, the photographing device pre-stores the offset, and the device executing the image processing method obtains it from the photographing device, e.g. by reading it; alternatively, a user inputs the offset to the executing device.

In some embodiments, the optical-center coordinate offset is determined from externally input optical-center coordinates, namely the measured actual optical-center coordinates of the lens; the offset is the difference between the input optical-center coordinates and the ideal optical-center coordinates (a known quantity). For example, the executing device obtains pre-stored optical-center coordinates from the photographing device, or a user inputs them to the executing device.

(3) determining the position information of the pixel according to the pixel coordinates and the optical-center coordinate offset.

The position information of a pixel is its position in a coordinate system whose origin is the optical center of the photographing device, while its pixel coordinates are its position in a coordinate system whose origin is the center of the image to be processed. Because of the optical-center offset, the position information of a pixel is the sum of its pixel coordinates and the optical-center coordinate offset. Therefore, after the center of the image is obtained, compensating the center coordinates by the optical-center coordinate offset determines the position in the image corresponding to the optical center of the photographing device.

It should be understood that, ideally, if the optical center of the photographing device has no offset, the center of the image to be processed has no offset either, and the position information of a pixel is simply its pixel coordinates.
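Under the sign convention stated above (position information = pixel coordinates + optical-center coordinate offset), this step can be sketched as below. This is a minimal illustration, not the patent's implementation; the function name and the sign convention of the offset are assumptions.

```python
def pixel_position(px, py, oc_dx, oc_dy):
    """Position information of a pixel in the optical-center frame:
    its pixel coordinates (image-center frame) plus the calibrated
    optical-center coordinate offset (oc_dx, oc_dy)."""
    return px + oc_dx, py + oc_dy

# With no optical-center offset, position info equals pixel coordinates.
assert pixel_position(100, 50, 0, 0) == (100, 50)
```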
In S102, the chromatic aberration offset of the pixel is determined according to the zoom factor and the position information.
FIG. 2 is a schematic diagram of an implementation of determining the chromatic aberration offset of a pixel according to the zoom factor and position information in an embodiment of this application. As shown in FIG. 2, the implementation may include:

S1021: determining the chromatic aberration compensation table of the zoom factor from pre-calibrated compensation tables of multiple preset zoom factors, a compensation table recording the chromatic aberration offsets of different pixel positions at the corresponding zoom factor;

S1022: determining the chromatic aberration offset of the pixel according to the position information and the compensation table of the zoom factor.

By storing in the compensation table the chromatic aberration offsets of different pixel positions rather than their actual coordinates, the values are smaller than actual coordinates, facilitating data merging and thereby simplifying the dispersion correction process. Meanwhile, because compensation tables for multiple preset zoom factors are calibrated in advance and the table for the current zoom factor is derived from them, determining that table is simple. It can be understood that the actual coordinates described in the embodiments of this application are coordinates in a coordinate system whose origin is the optical center of the photographing device.

The zoom factor may be one of the multiple preset zoom factors, or differ from all of them. When it is one of them, its compensation table is that of the corresponding preset zoom factor. When it differs from all of them — for example, when the preset zoom factors include a first preset zoom factor smaller than a second preset zoom factor, and the zoom factor is greater than the first and smaller than the second — the compensation table of the zoom factor is determined from the tables of the first and second preset zoom factors, e.g. by interpolating between them; linear interpolation or other interpolation methods may be used.
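The preset-table selection and interpolation just described can be sketched as follows. This is an illustrative sketch only: the list-of-pairs representation of the preset tables and the entrywise linear blend are assumptions, not the patent's implementation.

```python
def table_for_zoom(zoom, presets):
    """Derive a compensation table for an arbitrary zoom factor from
    pre-calibrated preset tables by linear interpolation.

    `presets` is a sorted list of (zoom_factor, {pos: (dx, dy)}) pairs
    whose tables share the same grid positions."""
    for z, tbl in presets:
        if z == zoom:
            return tbl                       # exact preset: use it directly
    # Find the bracketing presets z_lo < zoom < z_hi and blend entrywise.
    for (z_lo, t_lo), (z_hi, t_hi) in zip(presets, presets[1:]):
        if z_lo < zoom < z_hi:
            w = (zoom - z_lo) / (z_hi - z_lo)
            out = {}
            for pos, lo in t_lo.items():
                hi = t_hi[pos]
                out[pos] = (lo[0] + w * (hi[0] - lo[0]),
                            lo[1] + w * (hi[1] - lo[1]))
            return out
    raise ValueError("zoom factor outside calibrated range")
```

Other interpolation schemes could be substituted for the linear blend without changing the surrounding logic.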
Because the magnitude of dispersion changes across the focal lengths of the photographing device, using the same parameters would make the dispersion more severe at some focal lengths. The embodiments of this application therefore pre-calibrate compensation tables for multiple preset zoom factors to improve the accuracy of dispersion correction. Too many preset zoom factors increase the calibration workload, while too few cannot cover all focal lengths of the device, so the number should be just enough to cover all focal lengths. For example, the number of preset zoom factors is greater than or equal to 5 and less than or equal to 10, e.g. 5, 6, 7, 8, 9 or 10, so that the compensation tables of 5-10 preset zoom factors achieve dispersion correction over the full focal range. It should be understood that other numbers may also be set.

When calibrating the compensation table of a preset zoom factor, the chromatic aberration offset of each pixel position of the calibration image (herein called a calibration position) must be obtained at that zoom factor, with the optical center of the photographing device as the coordinate origin. In some embodiments, the offsets of the different pixel positions in the preset zoom factor's compensation table are the offsets of the corresponding calibration positions, i.e. the number of pixel positions in the table equals the number of calibration positions.

In some embodiments, the offsets of the pixel positions in the preset zoom factor's compensation table are obtained by downsampling the offsets of multiple calibration positions at that zoom factor, reducing the table's data volume. To keep the data volume small enough, the downsampling interval should not be too small; to preserve the correction effect, it should not be too large. In the embodiments of this application, the sampling interval is 100 pixels, which reduces the table's data volume while preserving the correction effect. For example, if the calibration image is 4000*3000 and the sampling interval is 100 pixels, the table size is 40 (columns) * 30 (rows).
The chromatic aberration offsets of the calibration positions can be obtained using a dot chart (comprising multiple regularly arranged dots, as shown in FIG. 3; the dots of a dot-chart image are herein called calibration points), although images of other patterns may also be used. Taking the dot chart as an example: in the embodiments of this application, the calibration image is a dot-chart image, the calibration positions include the positions of at least most of the calibration points in the dot-chart image at the preset zoom factor, and the dot-chart image at a preset zoom factor is captured from a dot chart of corresponding size. For example, dot charts of different sizes are designed and photographed separately so that, at each zoom factor, a sharp dot-chart image filling the entire field of view is obtained, yielding enough calibration points.

The calibration positions may include at least part of the pixel positions of the dot-chart image, where the pixel positions of the dot-chart image may include the calibration points and, besides them, other positions. For example, the calibration positions include at least part of the calibration points in the dot-chart image.

In the embodiments of this application, the offsets of the calibration positions include the offsets of the calibration points, and the offset of a calibration point is determined from its offsets under different color channels. For example, the color channels include the red R, green G and blue B channels, and the offset of a calibration point is determined from its offsets under the R, G and B channels.

In the embodiments of this application, the offsets of a calibration point under different color channels include, for each channel, a chromatic aberration offset in a first direction and one in a second direction, the two directions being orthogonal. For example, the first direction is one of the length and width directions of the dot-chart image and the second direction the other; of course, they may also be set to other directions as required.

In some embodiments, the offset in the first direction includes the first component, in the first direction, of the distance the calibration point moves radially with respect to the optical center of the photographing device under the color channel (herein the radial movement distance), and the offset in the second direction includes the second component, in the second direction, of that radial movement distance, so as to correct the radial chromatic aberration offset of the calibration point caused by dispersion. The radial movement distance under a color channel is such that the centers of the calibration point under the different color channels lie on a circle centered at the optical center. As shown in FIG. 4, magnifying one calibration point of the dot-chart image, the point appears as different circles under the R, G and B channels: 1 is its circle under G, 2 under R, 3 under B; 11, 21 and 31 are the centers of the circles under G, R and B respectively; the center of a calibration point under a color channel means the center of its circle under that channel. Taking the circle under G as the reference, the radial movement distances under R and under B are such that center 21 of circle 2 (R), center 31 of circle 3 (B) and center 11 of circle 1 (G) lie on the same circle, centered at the optical center with radius equal to the distance from the optical center to center 11. In this embodiment the centers of circles 1, 2 and 3 coincide radially; although no normal-direction movement is applied to make them coincide in the normal direction as well (when the centers coincide both radially and normally, they coincide fully), radial coincidence alone essentially eliminates the dispersion of the calibration point and satisfies the correction requirement. It should also be noted that calibration takes the G channel as the reference because the wavelength of light received by the G channel lies between those of the R and B channels, so the calibration point can be considered to have no chromatic aberration under G; calibrating against the G channel therefore gives more accurate results.
In some embodiments, the offset in the first direction includes the third component, in the first direction, of the distance the calibration point moves in the normal direction with respect to the optical center under the color channel (herein the normal movement distance), and the offset in the second direction includes the fourth component, in the second direction, of that normal movement distance. The normal movement distance under a color channel is such that the centers of the calibration point under the different color channels lie on one radial line of a circle centered at the optical center. Continuing with FIG. 4, the normal movement distances under R and under B are such that center 21 of circle 2 (R), center 31 of circle 3 (B) and center 11 of circle 1 (G) lie on the same radial line of the same circle, centered at the optical center with radius equal to the distance from the optical center to center 11. Through the offsets in the first and second directions, the radial and normal-direction chromatic aberration offsets of the calibration point caused by dispersion are corrected.

In some embodiments, besides correcting the radial chromatic aberration offset of the calibration point caused by dispersion, the normal-direction offset must also be corrected. Since dispersion makes the circles of a calibration point under R, G and B not coincide, continuing with FIG. 4 and taking the circle under G as the reference, the distance between center 21 (R) and center 11 (G), and the distance between center 31 (B) and center 11 (G), are calibrated; the offset of the calibration point includes these two distances, and according to them the circle of the calibration point under the corresponding channel is moved to coincide with the reference circle, achieving dispersion correction. In this embodiment, the offset in the first direction is determined from the sum of the first and third components, and the offset in the second direction from the sum of the second and fourth components.
The first and second components may be computed in a manner designed as required. For example, in some embodiments, the first component is determined from the difference, in the first direction, between the calibration point's pixel coordinates under the corresponding channel and the optical-center coordinates, together with the distance from the calibration point to the optical center under that channel; the second component is determined likewise from the difference in the second direction and the same distance. For example, the first component is determined from a first fitting function and the second component from a second fitting function. The first fitting function takes as independent variables the first-direction difference between the calibration point's pixel coordinates under the channel and the optical-center coordinates, and the distance from the calibration point to the optical center under the channel, with the first component as the dependent variable; the second fitting function is analogous with the second-direction difference. It should be understood that the computation of the first and second components is not limited to function form and may also use table lookup, model prediction, etc. The pixel coordinates of the calibration point under the corresponding channel are coordinates in a coordinate system established with the optical center of the photographing device as the coordinate origin.

Taking the function form as an example: the calibration-point offsets obtained at calibration are discrete, so the offsets of the calibration points are fitted to obtain the corresponding fitting functions, making full-size calibration data convenient to obtain. In the embodiments of this application, the offsets of all pixel positions of the dot-chart image can be computed from the fitting functions, and full-size simulation can effectively remove dispersion.

The first fitting function may be a polynomial, e.g. a fifth-order polynomial in the distance from the calibration point (under the corresponding channel) to the optical center of the photographing device; of course, it may also be a polynomial of another degree in that distance. It should be understood that the first fitting function may also be another type of function and is not limited to polynomial form.

The second fitting function may likewise be a polynomial, e.g. a fifth-order polynomial in the same distance, or a polynomial of another degree, or another type of function.

In an exemplary embodiment, taking the R channel as an example, the optical-center coordinates are (x0, y0) and the pixel coordinates of the calibration point under R are (x, y); the first-direction difference Δx between those pixel coordinates and the optical-center coordinates is |x−x0|, the second-direction difference Δy is |y−y0|, and the distance r from the calibration point to the optical center under R is sqrt(Δx^2+Δy^2).
The first fitting function may be:

dxr = (k0*r + k1*r^2 + k2*r^4) * Δx   (1);

in formula (1), dxr is the first component and k0, k1, k2 are the coefficients of the corresponding terms.

The second fitting function may be:

dyr = (m0*r + m1*r^2 + m2*r^4) * Δy   (2);

in formula (2), dyr is the second component and m0, m1, m2 are the coefficients of the corresponding terms.

Obtaining the optical-center coordinates (x0, y0) and the pixel coordinates (x, y) of multiple calibration points under the R channel, and substituting them into formulas (1) and (2), determines k0, k1, k2 and m0, m1, m2, thereby calibrating the first and second fitting functions and enabling computation of full-size calibration data.
Further, the third component may be computed as:

dxt = p0*(2Δx*Δy) + p1*(r^2 + 2Δx^2)   (3);

in formula (3), dxt is the third component and p0, p1 are the coefficients of the corresponding terms.

The fourth component may be computed as:

dyt = q1*(2Δx*Δy) + q0*(r^2 + 2Δx^2)   (4);

in formula (4), dyt is the fourth component and q0, q1 are the coefficients of the corresponding terms.

Obtaining the optical-center coordinates (x0, y0) and the pixel coordinates (x, y) of multiple calibration points under the R channel, and substituting them into formulas (3) and (4), determines p0, p1 and q0, q1, thereby fitting the functions for the third and fourth components.
Optionally, for the R color channel, the offset in the first direction is dxr and in the second direction dyr, correcting the radial chromatic aberration offset of the calibration point with respect to the optical center caused by dispersion; optionally, the offset in the first direction is dxt and in the second direction dyt, correcting the normal-direction chromatic aberration offset caused by dispersion; optionally, the offset in the first direction is (dxr+dxt) and in the second direction (dyr+dyt), correcting both the radial and the normal-direction chromatic aberration offsets caused by dispersion.

In the embodiments of this application, the calibration of the fitting functions for the pixel positions of the dot-chart image under the B color channel is similar to that under the R color channel and is not repeated here.
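The per-channel offset model of formulas (1)-(4) can be sketched as follows for one color channel. This is a minimal illustration, not the patent's implementation; the coefficient values are hypothetical placeholders that would, in practice, come from fitting the calibration-point data.

```python
import math

# Hypothetical coefficient values; in practice k*, m*, p*, q* come
# from calibrating against the dot-chart image.
K = (1e-4, -2e-8, 3e-15)   # k0, k1, k2  (radial, first direction)
M = (1e-4, -2e-8, 3e-15)   # m0, m1, m2  (radial, second direction)
P = (1e-7, -1e-7)          # p0, p1      (tangential/normal, first direction)
Q = (1e-7, -1e-7)          # q0, q1      (tangential/normal, second direction)

def chromatic_offset(x, y, x0, y0, k=K, m=M, p=P, q=Q):
    """Combined radial + normal chromatic-aberration offset of one
    calibration point in a single color channel, per formulas (1)-(4)."""
    dx_ = abs(x - x0)          # Δx: first-direction distance to optical center
    dy_ = abs(y - y0)          # Δy: second-direction distance
    r = math.hypot(dx_, dy_)   # distance to the optical center
    dxr = (k[0] * r + k[1] * r ** 2 + k[2] * r ** 4) * dx_         # formula (1)
    dyr = (m[0] * r + m[1] * r ** 2 + m[2] * r ** 4) * dy_         # formula (2)
    dxt = p[0] * (2 * dx_ * dy_) + p[1] * (r ** 2 + 2 * dx_ ** 2)  # formula (3)
    dyt = q[1] * (2 * dx_ * dy_) + q[0] * (r ** 2 + 2 * dx_ ** 2)  # formula (4)
    return dxr + dxt, dyr + dyt   # (first-direction, second-direction) offsets
```

As the text notes, the offset vanishes at the optical center and grows with the distance r from it.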
An implementation of determining the chromatic aberration offset of the pixel according to the position information and the compensation table of the zoom factor may include, but is not limited to, the following steps:

(1) determining the table-lookup position of the pixel according to the position information and the downsampling interval;

the lookup position is the quotient of the position information divided by the sampling interval. For example, with a sampling interval of 100 pixels, the lookup position is (x1/100, y1/100); for an image to be processed of size 4000*3000, the pixel with position information (400, 300) has lookup position (4, 3). If the computed lookup position has a fractional part, it can be rounded to the nearest integer to determine the final lookup position.

(2) determining the chromatic aberration offset of the pixel according to the lookup position and the compensation table of the zoom factor.

In some embodiments, if the compensation table of the zoom factor contains a pixel position corresponding to the lookup position, the offset recorded at that pixel position is taken as the pixel's chromatic aberration offset; that is, in this case, the pixel's offset equals the offset of the pixel position corresponding to the lookup position in the table.

In some embodiments, if the table does not contain such a pixel position, the pixel's offset is determined from the offsets of the pixel positions adjacent to the lookup position in the table; optionally, it is obtained by interpolating those offsets. The adjacent pixel positions may be those adjacent to the lookup position in row and/or column, selected as required; linear interpolation or another interpolation method may be used.
The chromatic aberration compensation table of the embodiments of this application may be a grid (mesh) table; a mesh table is used because the interpolation process can then be converted into addition and multiplication operations, which compute quickly and are easy to port to modules such as an FPGA. Of course, the compensation table may also take other forms.

Since the pixel positions in an image are symmetric about the optical center of the photographing device, and the offsets of two symmetric pixel positions are related in magnitude, to save storage the pixel positions in the compensation table need only be distributed over part of the preset coordinate system; of course, if storage allows, they may also be distributed over all areas of the preset coordinate system. The origin of the preset coordinate system coincides with the optical center of the photographing device at the corresponding zoom factor.

For example, the pixel positions in the table are distributed in a preset area of the preset coordinate system: optionally, on one side of the first axis, which on the premise of ensuring accuracy reduces the storage space to half of that of the whole region; further optionally, also on one side of the second axis — i.e. on one side of the first axis and on one side of the second axis — which on the premise of ensuring accuracy reduces the storage space to 1/4 of that of the whole region. The compensation table of this embodiment may be called a bidirectional mirror table. The first axis and the second axis are orthogonal. Optionally, the first axis is parallel to the length direction of the corresponding image and the second axis to its width direction; optionally, the first axis is parallel to the width direction and the second axis to the length direction.
Further, in some embodiments, the implementation of determining the chromatic aberration offset of the pixel according to the position information and the compensation table of the zoom factor may include, but is not limited to, the following steps:

(1) if the pixel lies on the other side of the first axis or of the second axis, determining a target pixel position in the compensation table according to the pixel's position information, and obtaining the chromatic aberration offset of that target pixel position;

wherein the target pixel position is symmetric to the pixel. For example, referring to FIG. 5, for coordinate system XOY, the X axis is parallel to the width direction of image 100, the Y axis is parallel to its length direction, and point O is the optical center of the photographing device. The coordinate system XOY divides image 100 into areas I, II, III and IV; suppose the pixel positions in the compensation table are distributed in area I. For example, the table records the chromatic aberration offset of pixel position 10; the pixel corresponds to pixel position 20, whose offset is not recorded; pixel positions 10 and 20 are symmetric about the Y axis, so for this pixel the target pixel position is pixel position 10.

(2) determining the chromatic aberration offset of the pixel according to the positional relationship between the pixel and the target pixel position, and the offset of the target pixel position.

Continuing the embodiment of FIG. 5, the pixel's offset is determined from the coordinate relationship of the two-dimensional coordinate system: for example, if the offset of pixel position 10 is (dx, dy), the offset of the pixel corresponding to pixel position 20 in the image to be processed is (-dx, dy).
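The mirror-table lookup can be sketched as follows, assuming the table stores only quadrant I (x ≥ 0, y ≥ 0) offsets keyed by absolute coordinates — a simplification of the bidirectional mirror table described above, with the sign-flip rule matching the (dx, dy) → (-dx, dy) example.

```python
def mirrored_offset(table, x, y):
    """Fetch the offset of (x, y) from a bidirectional mirror table
    that only stores quadrant I of a coordinate system whose origin is
    the optical center. `table` maps (|x|, |y|) -> (dx, dy)."""
    dx, dy = table[(abs(x), abs(y))]
    # Reflecting about an axis flips the sign of that axis component
    # of the stored offset.
    if x < 0:
        dx = -dx
    if y < 0:
        dy = -dy
    return dx, dy
```

Storing one quadrant this way reduces the table to 1/4 of the full-region storage, as the text notes.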
In S103, dispersion correction is performed on the pixel according to the chromatic aberration offset.

The dispersion of the embodiments of this application may include lateral dispersion; of course, it may also include longitudinal dispersion. In the embodiments of this application, the dispersion is lateral dispersion, so that the lateral dispersion of the image to be processed is corrected.

One implementation of dispersion correction superimposes the chromatic aberration offset of the pixel onto its position information. For example, if the offset of the pixel is (dx1, dy1) and its position information is (x2, y2), then after correction the color information of the R color channel at (x2, y2) is replaced by the R-channel color information corresponding to (x2+dx1, y2+dy1), and the B-channel color information at (x2, y2) is replaced by that corresponding to (x2+dx1, y2+dy1); since the pixel can be considered free of dispersion in the G color channel, its G-channel color information does not need to be replaced. It should be understood that after dispersion correction the position of the pixel does not change; what changes is the color information of its R and B channels.
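The channel-replacement step can be sketched as below. The plane representation (a mapping from coordinates to channel values) and the nearest-pixel rounding of the fractional offset are illustrative assumptions, not the patent's implementation.

```python
def correct_pixel(r_plane, b_plane, x, y, dx, dy):
    """Apply dispersion correction to one pixel: replace its R and B
    values with the values sampled at the offset position; the G
    channel is treated as dispersion-free and left unchanged.
    Planes are mappings indexed as plane[(x, y)]."""
    sx, sy = round(x + dx), round(y + dy)   # nearest-pixel source position
    r_plane[(x, y)] = r_plane[(sx, sy)]
    b_plane[(x, y)] = b_plane[(sx, sy)]
```

Note that only the R and B color information of the pixel changes; its position is untouched, consistent with the description above.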
An embodiment of this application further provides an image processing method; referring to FIG. 6, the method includes:

S601: acquiring the position information of a pixel in the image to be processed;

S602: determining the chromatic aberration offset of the pixel in a chromatic aberration compensation table according to the position information, the table recording the chromatic aberration offsets of different pixel positions;

S603: performing dispersion correction on the pixel according to the chromatic aberration offset.

By storing in the compensation table the chromatic aberration offsets of different pixel positions rather than their actual coordinates, the values are smaller, facilitating data merging and thereby simplifying the dispersion correction process.

For parts not elaborated here, refer to the descriptions of the corresponding parts of the above embodiments, which are not repeated.
An embodiment of this application further provides an image processing method; referring to FIG. 7, the method includes:

S701: acquiring the position information of a pixel in the image to be processed;

S702: determining the chromatic aberration offset of the pixel in a chromatic aberration compensation table according to the position information, the table recording the chromatic aberration offsets of different pixel positions, the pixel positions in the table being distributed in a preset area of a preset coordinate system whose origin coincides with the optical center of the photographing device;

S703: performing dispersion correction on the pixel according to the chromatic aberration offset.

This embodiment saves storage space by storing in the compensation table the chromatic aberration offsets of different pixel positions within the preset area of the preset coordinate system.

For parts not elaborated here, refer to the descriptions of the corresponding parts of the above embodiments, which are not repeated.
An embodiment of this application provides an image processing apparatus; referring to FIG. 8, the apparatus includes a storage device and one or more processors.

The storage device is configured to store program instructions. It stores an executable computer program of the image processing method, and may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, and so on. Moreover, the image processing apparatus may cooperate with a network storage device that performs the storage function of the memory over a network connection. The memory may be an internal storage unit of the image processing apparatus, such as its hard disk or internal memory; it may also be an external storage device of the apparatus, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card or flash card provided on the apparatus. Further, the memory may include both the internal storage unit and the external storage device of the image processing apparatus. The memory is used to store the computer program and the other programs and data required by the device, and may also be used to temporarily store data that has been output or is to be output.
In some embodiments, the one or more processors invoke the program instructions stored in the storage device and, when the program instructions are executed, are individually or jointly configured to perform the following operations: acquiring a zoom factor of a photographing device when capturing an image to be processed, and acquiring position information of a pixel in the image to be processed; determining a chromatic aberration offset of the pixel according to the zoom factor and the position information; and performing dispersion correction on the pixel according to the chromatic aberration offset. The processor of this embodiment can implement the image processing method of the embodiments shown in FIGS. 1 and 2 of this application, and the image processing apparatus of this embodiment may be understood with reference to the image processing method of those embodiments.

In some embodiments, the one or more processors invoke the program instructions stored in the storage device and, when the program instructions are executed, are individually or jointly configured to perform the following operations: acquiring position information of a pixel in an image to be processed; determining a chromatic aberration offset of the pixel from the position information in a chromatic aberration compensation table, the table being used to record the chromatic aberration offsets of different pixel positions; and performing dispersion correction on the pixel according to the chromatic aberration offset. The processor of this embodiment can implement the image processing method of the embodiment shown in FIG. 6 of this application, and the image processing apparatus of this embodiment may be understood with reference to the image processing method of that embodiment.

In some embodiments, the one or more processors invoke the program instructions stored in the storage device and, when the program instructions are executed, are individually or jointly configured to perform the following operations: acquiring position information of a pixel in an image to be processed; determining a chromatic aberration offset of the pixel from the position information in a chromatic aberration compensation table, the table being used to record the chromatic aberration offsets of different pixel positions, the pixel positions in the table being distributed in a preset region of a preset coordinate system, and the origin of the preset coordinate system coinciding with the optical center of the photographing device; and performing dispersion correction on the pixel according to the chromatic aberration offset. The processor of this embodiment can implement the image processing method of the embodiment shown in FIG. 7 of this application, and the image processing apparatus of this embodiment may be understood with reference to the image processing method of that embodiment.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.

The image processing apparatus of the embodiments of this application may be a photographing device, or another device with data processing capability, such as a computer or a mobile phone.
Further, an embodiment of this application provides a movable platform that includes a body, a power system, and any of the image processing apparatuses described above. The power system is connected to the body and provides power for the body's movement, and the image processing apparatus is supported by the body.

Specifically, the image processing apparatus may be a camera arranged on the body, or a photographing device mounted on the body through a gimbal, used to perform dispersion correction on captured images. With the image processing apparatus of the embodiments of this application, the movable platform not only supports dispersion correction over multiple focal ranges, suiting different shooting scenarios; it also saves computing resources, which helps reduce power consumption and extend the platform's travel time, and it saves storage space, leaving more room for captured images and other data.

The movable platform may be an unmanned aerial vehicle.
In addition, an embodiment of this application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the image processing method of the above embodiments are implemented.

The computer-readable storage medium may be an internal storage unit of the image processing apparatus of any of the foregoing embodiments, such as a hard disk or internal memory. It may also be an external storage device of the image processing apparatus, such as a plug-in hard disk, Smart Media Card (SMC), SD card or flash card provided on the device. Further, the computer-readable storage medium may include both the internal storage unit and the external storage device of the image processing apparatus. The medium is used to store the computer program and the other programs and data required by the image processing apparatus, and may also be used to temporarily store data that has been output or is to be output.

Those of ordinary skill in the art will understand that all or part of the processes of the above method embodiments can be accomplished by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

What is disclosed above is merely some of the embodiments of this application and of course cannot limit the scope of its rights; equivalent changes made according to the claims of this application therefore remain within the scope it covers.

Claims (63)

  1. An image processing method, characterized in that the method comprises:
    acquiring a zoom factor of a photographing device when capturing an image to be processed, and acquiring position information of a pixel in the image to be processed;
    determining a chromatic aberration offset of the pixel according to the zoom factor and the position information; and
    performing dispersion correction on the pixel according to the chromatic aberration offset.
  2. The method according to claim 1, characterized in that the dispersion comprises lateral dispersion.
  3. The method according to claim 1, characterized in that the determining a chromatic aberration offset of the pixel according to the zoom factor and the position information comprises:
    determining a chromatic aberration compensation table for the zoom factor according to pre-calibrated chromatic aberration compensation tables for a plurality of preset zoom factors, the chromatic aberration compensation table being used to record chromatic aberration offsets of different pixel positions at the corresponding zoom factor; and
    determining the chromatic aberration offset of the pixel according to the position information and the chromatic aberration compensation table for the zoom factor.
  4. The method according to claim 3, characterized in that the chromatic aberration offsets of the different pixel positions in the chromatic aberration compensation table for a preset zoom factor are obtained by down-sampling chromatic aberration offsets of a plurality of calibration positions at the preset zoom factor.
  5. The method according to claim 4, characterized in that the sampling interval of the down-sampling is 100 pixels.
  6. The method according to claim 4, characterized in that the determining the chromatic aberration offset of the pixel according to the position information and the chromatic aberration compensation table for the zoom factor comprises:
    determining a table-lookup position of the pixel according to the position information and the sampling interval of the down-sampling; and
    determining the chromatic aberration offset of the pixel according to the table-lookup position and the chromatic aberration compensation table for the zoom factor.
  7. The method according to claim 6, characterized in that the table-lookup position is determined according to the quotient obtained by dividing the position information by the sampling interval.
  8. The method according to claim 6, characterized in that the determining the chromatic aberration offset of the pixel according to the table-lookup position and the chromatic aberration compensation table for the zoom factor comprises:
    if a pixel position corresponding to the table-lookup position exists in the chromatic aberration compensation table for the zoom factor, determining the chromatic aberration offset of the pixel position corresponding to the table-lookup position in that table as the chromatic aberration offset of the pixel.
  9. The method according to claim 6, characterized in that the determining the chromatic aberration offset of the pixel according to the table-lookup position and the chromatic aberration compensation table for the zoom factor comprises:
    if no pixel position corresponding to the table-lookup position exists in the chromatic aberration compensation table for the zoom factor, determining the chromatic aberration offset of the pixel according to the chromatic aberration offsets of the pixel positions adjacent to the table-lookup position in that table.
  10. The method according to claim 9, characterized in that the chromatic aberration offset of the pixel is obtained by interpolation according to the chromatic aberration offsets of the pixel positions adjacent to the table-lookup position in the chromatic aberration compensation table for the zoom factor.
  11. The method according to claim 4, characterized in that the calibration positions comprise the positions of all calibration points in a dot-chart image at the preset zoom factor, the dot-chart image at the preset zoom factor being obtained by photographing a dot chart of corresponding size.
  12. The method according to claim 11, characterized in that the chromatic aberration offsets of the calibration positions comprise chromatic aberration offsets of the calibration points, the chromatic aberration offset of a calibration point being determined according to the chromatic aberration offsets of the calibration point in different color channels.
  13. The method according to claim 12, characterized in that the chromatic aberration offsets of the calibration point in different color channels comprise, for the calibration point in each color channel, a chromatic aberration offset in a first direction and a chromatic aberration offset in a second direction;
    wherein the first direction and the second direction are orthogonal.
  14. The method according to claim 13, characterized in that the chromatic aberration offset in the first direction comprises a first component, in the first direction, of the distance by which the calibration point in the color channel moves along the radial direction of the optical center of the dot-chart image;
    the chromatic aberration offset in the second direction comprises a second component, in the second direction, of the distance by which the calibration point in the color channel moves along the radial direction of the optical center of the dot-chart image; and
    moving the calibration point in the color channel by said distance along the radial direction of the optical center of the dot-chart image can bring the centers of the calibration point in the different color channels onto a circle centered on the optical center of the photographing device.
  15. The method according to claim 14, characterized in that the first component is determined according to the difference, in the first direction, between the pixel coordinates of the calibration point in the color channel and the optical-center coordinates of the optical center of the photographing device, and the distance from the calibration point in the color channel to the optical center of the photographing device; and
    the second component is determined according to the difference, in the second direction, between the pixel coordinates of the calibration point in the color channel and the optical-center coordinates, and the distance from the calibration point in the color channel to the optical center of the photographing device.
  16. The method according to claim 15, characterized in that the first component is determined according to a first fitting function and the second component is determined according to a second fitting function;
    the first fitting function takes, as independent variables, the difference in the first direction between the pixel coordinates of the calibration point in the color channel and the optical-center coordinates of the optical center of the photographing device and the distance from the calibration point in the color channel to the optical center of the photographing device, with the first component as the dependent variable; and
    the second fitting function takes, as independent variables, the difference in the second direction between the pixel coordinates of the calibration point in the color channel and the optical-center coordinates of the optical center of the photographing device and the distance from the calibration point in the color channel to the optical center of the photographing device, with the second component as the dependent variable.
  17. The method according to claim 16, characterized in that the first fitting function is a polynomial function; and/or
    the second fitting function is a polynomial function.
  18. The method according to claim 17, characterized in that the first fitting function is a fifth-order polynomial function of the distance from the calibration point in the color channel to the optical center of the photographing device; and/or
    the second fitting function is a fifth-order polynomial function of the distance from the calibration point in the color channel to the optical center of the photographing device.
  19. The method according to claim 3, characterized in that the number of preset zoom factors is greater than or equal to 5 and less than or equal to 10.
  20. The method according to claim 3, characterized in that, when the zoom factor is one of the plurality of preset zoom factors, the chromatic aberration compensation table for the zoom factor is the chromatic aberration compensation table for the corresponding preset zoom factor.
  21. The method according to claim 3, characterized in that the preset zoom factors comprise a first preset zoom factor and a second preset zoom factor, the first preset zoom factor being smaller than the second preset zoom factor; and
    when the zoom factor is greater than the first preset zoom factor and smaller than the second preset zoom factor, the chromatic aberration compensation table for the zoom factor is determined according to the chromatic aberration compensation table for the first preset zoom factor and the chromatic aberration compensation table for the second preset zoom factor.
  22. The method according to claim 21, characterized in that the chromatic aberration compensation table for the zoom factor is determined by interpolation according to the chromatic aberration compensation tables for the first and second preset zoom factors.
  23. The method according to claim 3, characterized in that the chromatic aberration compensation table is a grid (mesh) table.
  24. The method according to claim 3 or 23, characterized in that the pixel positions in the chromatic aberration compensation table are distributed on one side of a first axis of a preset coordinate system;
    wherein the origin of the preset coordinate system coincides with the optical center of the photographing device at the corresponding zoom factor.
  25. The method according to claim 24, characterized in that the pixel positions in the chromatic aberration compensation table are distributed on one side of a second axis of the preset coordinate system;
    wherein the first axis and the second axis are orthogonal.
  26. The method according to claim 25, characterized in that the determining the chromatic aberration offset of the pixel according to the position information and the chromatic aberration compensation table for the zoom factor comprises:
    if the pixel is located on the other side of the first axis or the other side of the second axis, determining a target pixel position in the chromatic aberration compensation table according to the position information of the pixel, and obtaining the chromatic aberration offset of the target pixel position; and
    determining the chromatic aberration offset of the pixel according to the positional relationship between the pixel and the target pixel position and the chromatic aberration offset of the target pixel position.
  27. The method according to claim 1, characterized in that the zoom factor is an external input.
  28. The method according to claim 1, characterized in that the acquiring position information of a pixel in the image to be processed comprises:
    acquiring pixel coordinates of the pixel in the image to be processed;
    acquiring an optical-center coordinate offset of the photographing device when capturing the image to be processed; and
    determining the position information of the pixel according to the pixel coordinates and the optical-center coordinate offset.
  29. The method according to claim 28, characterized in that the optical-center coordinate offset is determined according to externally input optical-center coordinates.
  30. The method according to claim 1, characterized in that the performing dispersion correction on the pixel according to the chromatic aberration offset of the pixel comprises:
    superimposing the chromatic aberration offset of the pixel on the position information of the pixel, so as to perform dispersion correction on the pixel.
  31. The method according to claim 1, characterized in that the image to be processed is a raw (RAW) image.
  32. An image processing apparatus, characterized in that the apparatus comprises:
    a storage device configured to store program instructions; and
    one or more processors that invoke the program instructions stored in the storage device and, when the program instructions are executed, are individually or jointly configured to perform the following operations:
    acquiring a zoom factor of a photographing device when capturing an image to be processed, and acquiring position information of a pixel in the image to be processed;
    determining a chromatic aberration offset of the pixel according to the zoom factor and the position information; and
    performing dispersion correction on the pixel according to the chromatic aberration offset.
  33. The apparatus according to claim 32, characterized in that the dispersion comprises lateral dispersion.
  34. The apparatus according to claim 32, characterized in that, when determining the chromatic aberration offset of the pixel according to the zoom factor and the position information, the one or more processors are individually or jointly further configured to perform the following operations:
    determining a chromatic aberration compensation table for the zoom factor according to pre-calibrated chromatic aberration compensation tables for a plurality of preset zoom factors, the chromatic aberration compensation table being used to record chromatic aberration offsets of different pixel positions at the corresponding zoom factor; and
    determining the chromatic aberration offset of the pixel according to the position information and the chromatic aberration compensation table for the zoom factor.
  35. The apparatus according to claim 34, characterized in that the chromatic aberration offsets of the different pixel positions in the chromatic aberration compensation table for a preset zoom factor are obtained by down-sampling chromatic aberration offsets of a plurality of calibration positions at the preset zoom factor.
  36. The apparatus according to claim 35, characterized in that the sampling interval of the down-sampling is 100 pixels.
  37. The apparatus according to claim 35, characterized in that, when determining the chromatic aberration offset of the pixel according to the position information and the chromatic aberration compensation table for the zoom factor, the one or more processors are individually or jointly further configured to perform the following operations:
    determining a table-lookup position of the pixel according to the position information and the sampling interval of the down-sampling; and
    determining the chromatic aberration offset of the pixel according to the table-lookup position and the chromatic aberration compensation table for the zoom factor.
  38. The apparatus according to claim 37, characterized in that the table-lookup position is determined according to the quotient obtained by dividing the position information by the sampling interval.
  39. The apparatus according to claim 37, characterized in that, when determining the chromatic aberration offset of the pixel according to the table-lookup position and the chromatic aberration compensation table for the zoom factor, the one or more processors are individually or jointly further configured to perform the following operation:
    if a pixel position corresponding to the table-lookup position exists in the chromatic aberration compensation table for the zoom factor, determining the chromatic aberration offset of the pixel position corresponding to the table-lookup position in that table as the chromatic aberration offset of the pixel.
  40. The apparatus according to claim 37, characterized in that, when determining the chromatic aberration offset of the pixel according to the table-lookup position and the chromatic aberration compensation table for the zoom factor, the one or more processors are individually or jointly further configured to perform the following operation:
    if no pixel position corresponding to the table-lookup position exists in the chromatic aberration compensation table for the zoom factor, determining the chromatic aberration offset of the pixel according to the chromatic aberration offsets of the pixel positions adjacent to the table-lookup position in that table.
  41. The apparatus according to claim 40, characterized in that the chromatic aberration offset of the pixel is obtained by interpolation according to the chromatic aberration offsets of the pixel positions adjacent to the table-lookup position in the chromatic aberration compensation table for the zoom factor.
  42. The apparatus according to claim 35, characterized in that the calibration positions comprise the positions of all calibration points in a dot-chart image at the preset zoom factor, the dot-chart image at the preset zoom factor being obtained by photographing a dot chart of corresponding size.
  43. The apparatus according to claim 42, characterized in that the chromatic aberration offsets of the calibration positions comprise chromatic aberration offsets of the calibration points, the chromatic aberration offset of a calibration point being determined according to the chromatic aberration offsets of the calibration point in different color channels.
  44. The apparatus according to claim 43, characterized in that the chromatic aberration offsets of the calibration point in different color channels comprise, for the calibration point in each color channel, a chromatic aberration offset in a first direction and a chromatic aberration offset in a second direction;
    wherein the first direction and the second direction are orthogonal.
  45. The apparatus according to claim 44, characterized in that the chromatic aberration offset in the first direction comprises a first component, in the first direction, of the distance by which the calibration point in the color channel moves along the radial direction of the optical center of the dot-chart image;
    the chromatic aberration offset in the second direction comprises a second component, in the second direction, of the distance by which the calibration point in the color channel moves along the radial direction of the optical center of the dot-chart image; and
    moving the calibration point in the color channel by said distance along the radial direction of the optical center of the dot-chart image can bring the centers of the calibration point in the different color channels onto a circle centered on the optical center of the photographing device.
  46. The apparatus according to claim 45, characterized in that the first component is determined according to the difference, in the first direction, between the pixel coordinates of the calibration point in the color channel and the optical-center coordinates of the optical center of the photographing device, and the distance from the calibration point in the color channel to the optical center of the photographing device; and
    the second component is determined according to the difference, in the second direction, between the pixel coordinates of the calibration point in the color channel and the optical-center coordinates, and the distance from the calibration point in the color channel to the optical center of the photographing device.
  47. The apparatus according to claim 46, characterized in that the first component is determined according to a first fitting function and the second component is determined according to a second fitting function;
    the first fitting function takes, as independent variables, the difference in the first direction between the pixel coordinates of the calibration point in the color channel and the optical-center coordinates of the optical center of the photographing device and the distance from the calibration point in the color channel to the optical center of the photographing device, with the first component as the dependent variable; and
    the second fitting function takes, as independent variables, the difference in the second direction between the pixel coordinates of the calibration point in the color channel and the optical-center coordinates of the optical center of the photographing device and the distance from the calibration point in the color channel to the optical center of the photographing device, with the second component as the dependent variable.
  48. The apparatus according to claim 47, characterized in that the first fitting function is a polynomial function; and/or
    the second fitting function is a polynomial function.
  49. The apparatus according to claim 48, characterized in that the first fitting function is a fifth-order polynomial function of the distance from the calibration point in the color channel to the optical center of the photographing device; and/or
    the second fitting function is a fifth-order polynomial function of the distance from the calibration point in the color channel to the optical center of the photographing device.
  50. The apparatus according to claim 34, characterized in that the number of preset zoom factors is greater than or equal to 5 and less than or equal to 10.
  51. The apparatus according to claim 34, characterized in that, when the zoom factor is one of the plurality of preset zoom factors, the chromatic aberration compensation table for the zoom factor is the chromatic aberration compensation table for the corresponding preset zoom factor.
  52. The apparatus according to claim 34, characterized in that the preset zoom factors comprise a first preset zoom factor and a second preset zoom factor, the first preset zoom factor being smaller than the second preset zoom factor; and
    when the zoom factor is greater than the first preset zoom factor and smaller than the second preset zoom factor, the chromatic aberration compensation table for the zoom factor is determined according to the chromatic aberration compensation table for the first preset zoom factor and the chromatic aberration compensation table for the second preset zoom factor.
  53. The apparatus according to claim 52, characterized in that the chromatic aberration compensation table for the zoom factor is determined by interpolation according to the chromatic aberration compensation tables for the first and second preset zoom factors.
  54. The apparatus according to claim 34, characterized in that the chromatic aberration compensation table is a grid (mesh) table.
  55. The apparatus according to claim 34 or 54, characterized in that the pixel positions in the chromatic aberration compensation table are distributed on one side of a first axis of a preset coordinate system;
    wherein the origin of the preset coordinate system coincides with the optical center of the photographing device at the corresponding zoom factor.
  56. The apparatus according to claim 55, characterized in that the pixel positions in the chromatic aberration compensation table are distributed on one side of a second axis of the preset coordinate system;
    wherein the first axis and the second axis are orthogonal.
  57. The apparatus according to claim 56, characterized in that, when determining the chromatic aberration offset of the pixel according to the position information and the chromatic aberration compensation table for the zoom factor, the one or more processors are individually or jointly further configured to perform the following operations:
    if the pixel is located on the other side of the first axis or the other side of the second axis, determining a target pixel position in the chromatic aberration compensation table according to the position information of the pixel, and obtaining the chromatic aberration offset of the target pixel position; and
    determining the chromatic aberration offset of the pixel according to the positional relationship between the pixel and the target pixel position and the chromatic aberration offset of the target pixel position.
  58. The apparatus according to claim 32, characterized in that the zoom factor is an external input.
  59. The apparatus according to claim 32, characterized in that, when acquiring the position information of the pixel in the image to be processed, the one or more processors are individually or jointly further configured to perform the following operations:
    acquiring pixel coordinates of the pixel in the image to be processed;
    acquiring an optical-center coordinate offset of the photographing device when capturing the image to be processed; and
    determining the position information of the pixel according to the pixel coordinates and the optical-center coordinate offset.
  60. The apparatus according to claim 59, characterized in that the optical-center coordinate offset is determined according to externally input optical-center coordinates.
  61. The apparatus according to claim 32, characterized in that, when performing dispersion correction on the pixel according to the chromatic aberration offset of the pixel, the one or more processors are individually or jointly further configured to perform the following operation:
    superimposing the chromatic aberration offset of the pixel on the position information of the pixel, so as to perform dispersion correction on the pixel.
  62. The apparatus according to claim 32, characterized in that the image to be processed is a raw (RAW) image.
  63. A movable platform, characterized by comprising:
    a body;
    a power system connected to the body and configured to provide power for movement of the body; and
    the image processing apparatus according to any one of claims 32 to 62, supported by the body.
PCT/CN2020/082015, filed 2020-03-30: Image processing method, apparatus and movable platform (WO2021195829A1)

Priority Applications (2)
- CN202080004864.4A, filed 2020-03-30
- PCT/CN2020/082015, filed 2020-03-30

Publication: WO2021195829A1, published 2021-10-07
Family ID: 75291255
Country status: CN 112640424 (zh); WO 2021195829 A1 (zh)

Cited by: CN114283077A (凌云光技术股份有限公司), 2022-04-05: A method for correcting lateral chromatic aberration of an image
Family citing: CN115017070A (青岛信芯微电子科技股份有限公司), 2022-09-06: Image correction method, image correction module, laser projection device and storage medium


Also published as: CN112640424A, published 2021-04-09

