WO2016065632A1 - An image processing method and device (一种图像处理方法和设备) - Google Patents
An image processing method and device
- Publication number
- WO2016065632A1, PCT/CN2014/090094 (CN2014090094W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- optical distortion
- coordinate value
- distortion
- value
- Prior art date
Classifications
- G06T5/70
- G06T5/80
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo (H04N5/00—Details of television systems; H04N5/14—Picture signal circuitry for video frequency region)
- G06T5/00—Image enhancement or restoration (G06T—Image data processing or generation, in general)
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration (G06T7/00—Image analysis)
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N25/61—Noise processing, the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4" (H04N25/00—Circuitry of solid-state image sensors [SSIS]; H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise)
Definitions
- the present invention relates to the field of wireless communication technologies, and in particular, to an image processing method and apparatus.
- With the development of technology, terminal devices such as cameras and video cameras have become widespread.
- As an interface between the real world and the virtual world, the camera has become an important part of the smartphone.
- Every day, countless people record new things, related to themselves or not, through the camera function of their smartphones.
- When capturing a face image, the terminal device is prone to optical distortion, so the facial features in the obtained face image are deformed.
- Barrel distortion occurs when, in practical applications, the terminal device uses a wide-angle lens to acquire the image: the facial features in the captured image bulge outward in a barrel shape. Pincushion distortion occurs when the terminal device uses a telephoto lens: the facial features in the captured image are pinched inward in a pincushion shape.
- The severity of the distortion directly affects the quality of the face image.
- The embodiments of the present invention provide an image processing method and device, to solve the problem of low image quality in existing solutions.
- an image processing method, comprising:
- selecting, according to a mapping relationship between at least one set of lens optical distortion models and re-projection error values, a lens optical distortion model whose re-projection error value is smaller than a set threshold, wherein the lens optical distortion model includes an optical distortion type, a distortion order, and a distortion coefficient, and the re-projection error value characterizes, for a calibration object, the difference between the theoretical distortion image coordinate values of the calibration object and its actual distortion image coordinate values;
- performing optical distortion correction on the acquired distorted image by using the lens optical distortion model, to obtain an optically corrected image.
- performing optical distortion correction on the acquired distorted image by using the lens optical distortion model includes:
- determining an ideal image coordinate value of the photographed object corresponding to the acquired distorted image includes:
- performing coordinate conversion on the determined ideal image coordinate values of the photographed object by using the determined lens optical distortion model, to obtain theoretical optical distortion image coordinate values corresponding to the ideal image coordinate values, includes:
- performing coordinate transformation on the ideal image coordinate values of the selected grid points by using the internal parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse of the internal parameter matrix of the terminal device, to obtain theoretical optical distortion image coordinate values, includes:
- converting the distorted second pinhole-plane coordinate values to obtain the theoretical optical distortion image coordinate values.
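The coordinate-conversion chain in these steps (image coordinates to pinhole-plane coordinates via the inverse internal parameter matrix, distortion applied on the pinhole plane, then back to image coordinates via the internal parameter matrix) can be sketched as follows. This is an illustrative Python sketch, not part of the patent text; the coefficient names `k1`, `k2`, `p1`, `p2` and the sample intrinsic matrix are assumptions.

```python
import numpy as np

def distort_ideal_coords(ideal_xy, K, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Map an ideal (undistorted) pixel coordinate to its theoretical
    distorted pixel coordinate: pixel -> first pinhole-plane coordinates
    (via K^-1) -> distorted second pinhole-plane coordinates -> pixel
    (via K). Coefficient names are illustrative, not from the patent."""
    u, v = ideal_xy
    # Step 1: pixel coordinates -> first pinhole-plane coordinates.
    x, y, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])
    r2 = x * x + y * y
    # Step 2: radial + tangential terms -> second pinhole-plane coords.
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Step 3: distorted pinhole-plane coordinates -> pixel coordinates.
    ud, vd, _ = K @ np.array([xd, yd, 1.0])
    return ud, vd

# Illustrative intrinsic matrix (focal lengths and principal point assumed).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
print(distort_ideal_coords((400.0, 300.0), K, k1=0.1))
```

With all coefficients zero the chain reduces to the identity, which is a quick sanity check on the matrix round trip.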
- With reference to any one of the first to fourth possible implementations of the first aspect of the invention, in a fifth possible implementation, searching, according to the theoretical optical distortion image coordinate values and the actual optical distortion image coordinate values of the pixel points included in the acquired distorted image, for pixel points whose distance between the actual optical distortion image coordinate values and the theoretical optical distortion image coordinate values is less than a set threshold, includes:
- With reference to any one of the first to fifth possible implementations of the first aspect of the invention, in a sixth possible implementation, calculating, according to the pixel values of the found pixel points, the pixel value corresponding to the ideal image coordinate value of the photographed object, includes:
- With reference to any one of the second to sixth possible implementations of the first aspect of the invention, in a seventh possible implementation, obtaining the image after optical distortion correction includes:
- the obtained ideal image is used as an image obtained by optically correcting the acquired distortion image.
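The "calculate a pixel value from the found pixel points" step is, in practice, an interpolation over neighboring pixels. A minimal bilinear-interpolation sketch follows; it is illustrative only, since the patent text does not fix a particular interpolation scheme.

```python
def bilinear_sample(img, xd, yd):
    """Sample image `img` (a list of rows of gray values) at a
    non-integer coordinate (xd, yd) using the four surrounding pixels.
    Assumes (xd, yd) lies strictly inside the image bounds."""
    x0, y0 = int(xd), int(yd)
    dx, dy = xd - x0, yd - y0
    p00 = img[y0][x0]          # top-left neighbor
    p01 = img[y0][x0 + 1]      # top-right neighbor
    p10 = img[y0 + 1][x0]      # bottom-left neighbor
    p11 = img[y0 + 1][x0 + 1]  # bottom-right neighbor
    top = p00 * (1 - dx) + p01 * dx
    bottom = p10 * (1 - dx) + p11 * dx
    return top * (1 - dy) + bottom * dy

img = [[0, 10], [20, 30]]
print(bilinear_sample(img, 0.5, 0.5))  # center of a 2x2 patch
```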
- establishing the mapping relationship between the lens optical distortion model and the re-projection error values includes:
- a calibration object is selected
- a mapping relationship between the lens optical distortion model and the determined re-projection error value is established.
- the method further includes:
- performing region distortion correction on the optically corrected image by using the selected region distortion correction parameter, to obtain an image after region distortion correction.
- determining, according to the set object, the strength and direction of the region distortion occurring in the acquired distorted image includes:
- determining the strength and direction of the region distortion of the set object in the acquired image according to the coordinate values of the pixels in the second position coordinate set.
- performing region distortion correction on the optically corrected image by using the selected region distortion correction parameter, to obtain the image after region distortion correction, includes:
- performing region distortion correction on the optically corrected image by using the determined conversion rule, to obtain the image after region distortion correction.
- performing region distortion correction on the optically corrected image by using the determined conversion rule includes:
- generating a virtual region-distortion-corrected mesh image, wherein the number of grid points in the region-distortion-corrected mesh image is the same as the number of pixel points in the optically corrected image, and the coordinate values of grid points and pixel points at the same position are the same;
- an image processing apparatus comprising: an imaging device, an image sensor, and a processor, wherein the image sensor and the processor are connected by a bus;
- the imaging device is configured to map a subject to the image sensor
- the image sensor is configured to acquire an image in which a subject is distorted
- the processor is configured to select, according to a mapping relationship between at least one set of lens optical distortion models and re-projection error values, a lens optical distortion model whose re-projection error value is smaller than a set threshold, where the lens optical distortion model includes an optical distortion type, a distortion order, and a distortion coefficient, and the re-projection error value characterizes, for a calibration object, the difference between the theoretical distortion image coordinate values of the calibration object and its actual distortion image coordinate values;
- and to perform optical distortion correction on the distorted image acquired by the image sensor by using the lens optical distortion model, to obtain an optically corrected image.
- the processor performs optical distortion correction on the acquired distorted image by using the lens optical distortion model, and is specifically configured to:
- the processor determines an ideal image coordinate value of the photographed object corresponding to the acquired distorted image, and is specifically configured to:
- the processor performs coordinate conversion on the determined ideal image coordinate values of the photographed object by using the lens optical distortion model, to obtain theoretical optical distortion image coordinate values corresponding to the ideal image coordinate values, and is specifically configured to:
- the processor performs coordinate transformation on the ideal image coordinate values of the selected grid points by using the internal parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse of the internal parameter matrix, to obtain theoretical optical distortion image coordinate values, and is specifically configured to:
- converting the distorted second pinhole-plane coordinate values to obtain the theoretical optical distortion image coordinate values.
- the processor searches, according to the theoretical optical distortion image coordinate values and the actual optical distortion image coordinate values of the pixel points included in the acquired distorted image, for pixel points whose distance between the actual and theoretical optical distortion image coordinate values is less than a set threshold, and is specifically configured to:
- With reference to any one of the first to fifth possible implementations of the second aspect of the invention:
- the processor calculates, according to the pixel values of the found pixel points, the pixel value corresponding to the ideal image coordinate value of the photographed object, and is specifically configured to:
- With reference to any one of the second to sixth possible implementations of the second aspect of the invention:
- the processor is specifically configured to:
- the obtained ideal image is used as an image obtained by optically correcting the acquired distortion image.
- the mapping relationship between the lens optical distortion model and the re-projection error value includes:
- a calibration object is selected
- a mapping relationship between the lens optical distortion model and the determined re-projection error value is established.
- performing region distortion correction on the optically corrected image by using the selected region distortion correction parameter, to obtain an image after region distortion correction.
- the processor determines the strength and direction of the region distortion caused by the set object in the acquired distorted image, and is specifically configured to:
- the processor performs region distortion correction on the optically corrected image by using the selected region distortion correction parameter, to obtain the image after region distortion correction, and is specifically configured to:
- performing region distortion correction on the optically corrected image by using the determined conversion rule, to obtain the image after region distortion correction.
- the processor performs region distortion correction on the optically corrected image by using the determined conversion rule, and is specifically configured to:
- generating a virtual region-distortion-corrected mesh image, wherein the number of grid points in the region-distortion-corrected mesh image is the same as the number of pixel points in the optically corrected image, and the coordinate values of grid points and pixel points at the same position are the same;
- calculating the pixel value of the selected grid point in the mesh image according to the pixel values of the found pixel points.
- an image processing apparatus comprising:
- An acquisition module configured to acquire an image in which a subject is distorted
- a selection module configured to select, according to a mapping relationship between at least one set of lens optical distortion models and re-projection error values, a lens optical distortion model whose re-projection error value is less than a set threshold, wherein the lens optical distortion model includes an optical distortion type, a distortion order, and a distortion coefficient, and the re-projection error value characterizes, for a calibration object, the difference between the theoretical distortion image coordinate values of the calibration object and its actual distortion image coordinate values;
- a processing module configured to perform optical distortion correction on the acquired distortion image by using the lens optical distortion model to obtain an image after optical distortion correction.
- the processing module performs optical distortion correction on the acquired distorted image by using the lens optical distortion model, and is specifically configured to:
- the processing module determines an ideal image coordinate value of the photographed object corresponding to the acquired distorted image, and is specifically configured to:
- the processing module performs coordinate conversion on the determined ideal image coordinate values of the photographed object by using the lens optical distortion model, to obtain theoretical optical distortion image coordinate values corresponding to the ideal image coordinate values, and is specifically configured to:
- the processing module performs coordinate transformation on the ideal image coordinate values of the selected grid points by using the internal parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse of the internal parameter matrix, to obtain theoretical optical distortion image coordinate values, and is specifically configured to:
- converting the ideal image coordinate values of the selected grid points to obtain first pinhole-plane coordinate values;
- converting the distorted second pinhole-plane coordinate values to obtain the theoretical optical distortion image coordinate values.
- the processing module searches, according to the theoretical optical distortion image coordinate values and the actual optical distortion image coordinate values of the pixel points included in the acquired distorted image, for pixel points whose distance between the actual and theoretical optical distortion image coordinate values is less than a set threshold, and is specifically configured to:
- With reference to any one of the first to fifth possible implementations of the third aspect of the invention:
- the processing module calculates, according to the pixel values of the found pixel points, the pixel value corresponding to the ideal image coordinate value of the photographed object, and is specifically configured to:
- the processing module is specifically configured to: after obtaining the pixel value of each grid point in the ideal image, use the obtained ideal image as the image obtained by performing optical distortion correction on the acquired distorted image.
- establishing the mapping relationship between the lens optical distortion model and the re-projection error values includes:
- a calibration object is selected
- a mapping relationship between the lens optical distortion model and the determined re-projection error value is established.
- after obtaining the optically corrected image, the processing module is further configured to:
- perform region distortion correction on the optically corrected image by using the selected region distortion correction parameter, to obtain an image after region distortion correction.
- the processing module determines the strength and direction of the region distortion of the set object in the acquired distorted image, and is specifically configured to:
- the processing module performs region distortion correction on the optically corrected image by using the selected region distortion correction parameter, to obtain the image after region distortion correction, and is specifically configured to:
- determining, according to the corrected first position coordinate set and the second position coordinate set, a conversion rule between the coordinate values of the pixels of the set object in the corrected first position coordinate set and the coordinate values of the pixels in the second position coordinate set;
- performing region distortion correction on the optically corrected image by using the determined conversion rule, to obtain the image after region distortion correction.
- the processing module performs region distortion correction on the optically corrected image by using the determined conversion rule, and is specifically configured to:
- generating a virtual region-distortion-corrected mesh image, wherein the number of grid points in the region-distortion-corrected mesh image is the same as the number of pixel points in the optically corrected image, and the coordinate values of grid points and pixel points at the same position are the same;
- calculating the pixel value of the selected grid point in the mesh image according to the pixel values of the found pixel points.
- In the embodiments of the present invention, an image in which the subject is distorted is acquired, and a lens optical distortion model whose re-projection error value is smaller than a set threshold is selected according to the mapping relationship between at least one set of lens optical distortion models and re-projection error values, wherein
- the lens optical distortion model includes an optical distortion type, a distortion order, and a distortion coefficient, and the re-projection error value characterizes, for any calibration object, the difference between the theoretical distortion image coordinate values of the calibration object and its actual distortion
- image coordinate values; the selected lens optical distortion model is then used to perform optical distortion correction on the acquired distorted image, to obtain an optically corrected image.
- Because the optical distortion correction is performed with a lens optical distortion model whose re-projection error value is smaller than the set threshold, the optical distortion caused by the optical imaging principle of the imaging device while acquiring the image of the subject is effectively eliminated;
- and because the re-projection error value corresponding to the selected optical distortion model is smaller than the set threshold, the accuracy of the optical distortion correction is improved, and the quality of the captured image is improved.
- FIG. 1 is a schematic flowchart of an image processing method according to Embodiment 1 of the present invention.
- FIG. 2 is a schematic diagram of a reprojection error corresponding to a lens optical distortion model
- Figure 3 (a) is a displacement vector standard diagram of an optical distortion correction image
- Figure 3 (b) is a displacement vector change diagram of the optical distortion correction image
- FIG. 4 is a schematic structural diagram of an image processing apparatus according to Embodiment 2 of the present invention.
- FIG. 5 is a schematic structural diagram of an image processing apparatus according to Embodiment 3 of the present invention.
- The embodiments of the present invention provide an image processing method and apparatus that perform optical distortion correction on a distorted image by using a lens optical distortion model whose re-projection error value is smaller than a set threshold.
- This eliminates the optical distortion caused by the optical imaging principle of the imaging device while acquiring the image of the subject; and because the re-projection error value corresponding to the selected lens optical distortion model is smaller than the set threshold, the accuracy of the optical distortion correction is improved, and the quality of the captured image is improved.
- the spatial coordinate value of the object to be photographed in the embodiment of the present invention refers to a coordinate value of the object to be photographed in three-dimensional space.
- the spatial coordinate value may include a longitude value, a latitude value, and a height value.
- The ideal image coordinate values of the subject are the coordinate values of the grid points obtained by mapping the subject into a grid image without any distortion.
- The theoretical distortion image coordinate values of the subject are the coordinate values obtained by performing coordinate conversion on the ideal image coordinate values of the subject using the lens optical distortion model.
- The actual distortion image coordinate values of the subject are obtained by using the imaging function of the optical imaging device to map the subject onto the image sensor, producing an image in which optical distortion has actually occurred; the coordinate value of each pixel in that image
- may be referred to as an actual distortion image coordinate value of the subject.
- A plurality of lens optical distortion models can be stored locally on a terminal device, and different lens optical distortion models can be determined by known camera calibration methods (for example, the Zhang Zhengyou calibration method or the Tsai calibration method).
- the lens optical distortion model includes optical distortion type, distortion order and distortion coefficient.
- The optical distortion type includes one or both of radial distortion and tangential distortion.
- the radial distortion refers to the change of the vector endpoint along the length direction
- the tangential distortion refers to the change of the vector endpoint along the tangential direction, that is, the angle changes.
- The lens optical distortion models stored locally by different terminal devices may be the same or different.
- Two lens optical distortion models differ when they include different optical distortion types, or when they include the same optical distortion types but with different distortion orders.
- Two lens optical distortion models are the same when they include the same optical distortion types and the same distortion orders; even for the same lens optical distortion model, the distortion coefficients determined by the camera calibration method for different terminal devices
- may be the same or different for the same optical distortion type and distortion order.
- For example, a lens optical distortion model containing only second-order radial distortion can be expressed as: x_rd = x(1 + K1·r²), y_rd = y(1 + K1·r²), where (x, y) is the ideal image coordinate value of the subject; (x_rd, y_rd) is the coordinate value after radial distortion of the ideal image coordinate value of the subject; r is the polar radius of (x, y); and K1 is the radial distortion coefficient.
- For different terminal devices, the K1 determined by calibration may be the same or different.
- Embodiment 1:
- FIG. 1 is a schematic flowchart of an image processing method according to Embodiment 1 of the present invention.
- The execution body of the method may be a terminal device. The method comprises:
- Step 101 Acquire an image in which the subject is distorted.
- In step 101, during the image capturing stage, the terminal device maps the subject onto the image sensor using the imaging function of the imaging unit to obtain a distorted image, and the image sensor sends the resulting distorted image
- to the processor of the terminal device.
- The terminal device may be any device having an image capture function, such as a camera, a video camera, or a mobile phone, and the imaging unit may be a lens in the terminal device.
- The image formed from the subject is easily distorted.
- Specifically, the optical principle of lens imaging causes the image formed from the subject to undergo optical distortion.
- Step 102: Select a lens optical distortion model whose re-projection error value is smaller than a set threshold, according to the mapping relationship between at least one lens optical distortion model and re-projection error values.
- The re-projection error value is, for any calibration object, the difference between the theoretical distortion image coordinate values of the calibration object and its actual distortion image coordinate values.
- At least one lens optical distortion model may be stored in the memory.
- In step 102, the mapping relationship between the at least one lens optical distortion model and the re-projection error values can be obtained by learning the lens optical distortion models in advance.
- Specifically, a lens optical distortion model is selected, and the re-projection error values corresponding to that lens optical distortion model are calculated for different calibration objects.
- The lens optical distortion model and the calculated re-projection error values are stored.
- Alternatively, the average (or another aggregate) of the obtained re-projection error values is determined as the re-projection error value corresponding to the lens optical distortion model, and the mapping relationship between the lens optical distortion model and the determined
- re-projection error value is stored.
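The selection in step 102 amounts to a lookup over stored (model, re-projection error) pairs. The following Python sketch is illustrative only; the model names, error values, and threshold are assumptions, not values from the patent.

```python
import numpy as np

def reprojection_error(theoretical_pts, actual_pts):
    """Mean Euclidean distance between theoretical distorted image
    coordinates and actual distorted image coordinates of a
    calibration object's points."""
    diff = np.asarray(theoretical_pts) - np.asarray(actual_pts)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

def select_model(model_error_map, threshold):
    """Return the stored model whose re-projection error is below the
    threshold; among the candidates, pick the smallest error."""
    candidates = {m: e for m, e in model_error_map.items() if e < threshold}
    if not candidates:
        return None
    return min(candidates, key=candidates.get)

# Illustrative mapping: model name -> averaged re-projection error (pixels).
errors = {"2nd-order radial": 0.82,
          "(2+4)-order radial": 0.31,
          "(2+4+6)-order radial + tangential": 0.29}
print(select_model(errors, threshold=0.5))
```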
- Multiple lens optical distortion models can be stored in the memory.
- A lens optical distortion model can be obtained by combining different optical distortion types.
- For example, a lens optical distortion model can be obtained by combining radial distortion and tangential distortion.
- The radial distortion model corresponding to radial distortion can be: x_rd = x(1 + K1·r² + K2·r⁴ + … + K_N·r^(2N)), y_rd = y(1 + K1·r² + K2·r⁴ + … + K_N·r^(2N)) (Formula 1);
- where (x, y) is the ideal image coordinate value of the subject;
- (x_rd, y_rd) is the coordinate value after radial distortion of the ideal image coordinate value of the subject;
- r is the polar radius of (x, y);
- K_i is the radial distortion coefficient, 2i in r^(2i) is the order of the radial distortion, and i takes values 1 to N, where N is a positive integer.
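Formula 1 can be evaluated for any number of coefficients. A minimal Python sketch (function name and sample coefficient values are illustrative, not from the patent):

```python
def radial_distort(x, y, K_coeffs):
    """Formula 1: (x, y) are ideal pinhole-plane coordinates,
    K_coeffs = [K1, K2, ..., KN] are radial distortion coefficients.
    Returns (x_rd, y_rd), the radially distorted coordinates."""
    r2 = x * x + y * y
    factor = 1.0
    rpow = 1.0
    for K in K_coeffs:
        rpow *= r2            # r^(2i) for i = 1..N
        factor += K * rpow
    return x * factor, y * factor

print(radial_distort(0.1, 0.075, [0.1]))  # 2nd-order model, K1 = 0.1
```

With an empty coefficient list the factor stays 1 and the point is unchanged, matching the undistorted case.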
- The tangential distortion model corresponding to tangential distortion can be: x_pd = x + [2P1·xy + P2(r² + 2x²)](1 + P3·r² + P4·r⁴ + …), y_pd = y + [P1(r² + 2y²) + 2P2·xy](1 + P3·r² + P4·r⁴ + …) (Formula 2);
- where (x, y) is the ideal image coordinate value of the subject;
- (x_pd, y_pd) is the coordinate value after tangential distortion of the ideal image coordinate value of the subject;
- r is the polar radius of (x, y);
- P1, P2, P3, P4, … are the tangential distortion coefficients;
- and the index of r represents the order of the tangential distortion.
- the combined lens optical distortion model is: (Formula 3);
- (x rd , y rd ) is the coordinate value after the radial distortion of the ideal image coordinate value of the subject;
- (x pd , y pd ) is the coordinate value after the tangential distortion of the ideal image coordinate value of the subject; (x d , y d ) is the coordinate value after the lens optical distortion occurs in the ideal image coordinate value of the subject.
- the order of the radial distortion is different, and/or the order of the tangential distortion is different, and the obtained lens optical distortion model is also different.
- Table 1:

| Lens optical distortion model number | Radial distortion model and tangential distortion model corresponding to the lens optical distortion model |
| --- | --- |
| 1 | 2nd-order radial distortion model and 0th-order tangential distortion model |
| 2 | 4th-order radial distortion model and 0th-order tangential distortion model |
| 3 | 6th-order radial distortion model and 0th-order tangential distortion model |
| 4 | (2+4)-order radial distortion model and 0th-order tangential distortion model |
| 5 | (2+6)-order radial distortion model and 0th-order tangential distortion model |
| 6 | (4+6)-order radial distortion model and 0th-order tangential distortion model |
| 7 | (2+4+6)-order radial distortion model and 0th-order tangential distortion model |
- the 2nd-order radial distortion model retains only the 2nd-order term (K 1 r 2 ) of the radial distortion model; the 4th-order model retains only the 4th-order term (K 2 r 4 ); the 6th-order model retains only the 6th-order term (K 3 r 6 ); the (2+4)-order model retains the 2nd- and 4th-order terms; the (2+6)-order model retains the 2nd- and 6th-order terms; the (4+6)-order model retains the 4th- and 6th-order terms; and the (2+4+6)-order model retains the 2nd-, 4th- and 6th-order terms.
- the 0th-order tangential distortion model refers to the model in which all tangential distortion coefficients are zero, that is, no tangential distortion term is applied.
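The radial, tangential, and combined models described above (Formulas 1 to 3) can be sketched in code. Since the formula images are not reproduced in this text, the sketch below assumes the common Brown-Conrady / OpenCV-style coefficient convention; only the coefficient names K i and P i are taken from the description above.

```python
def apply_lens_distortion(x, y, k=(0.0, 0.0, 0.0), p=(0.0, 0.0)):
    """Map an ideal (undistorted) normalized coordinate (x, y) to the
    distorted coordinate (x_d, y_d) using a Brown-Conrady style model.

    k: radial distortion coefficients (K1, K2, K3) for the r^2, r^4, r^6
       terms; setting some of them to zero yields the lower-order models.
    p: tangential distortion coefficients (P1, P2).
    """
    r2 = x * x + y * y  # squared polar radius of (x, y)
    radial = 1.0 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3
    # Radial component: points are displaced along the ray from the center.
    x_rd, y_rd = x * radial, y * radial
    # Tangential (decentering) component, OpenCV convention (an assumption).
    x_pd = 2.0 * p[0] * x * y + p[1] * (r2 + 2.0 * x * x)
    y_pd = p[0] * (r2 + 2.0 * y * y) + 2.0 * p[1] * x * y
    # Combined model: radial result plus tangential displacement.
    return x_rd + x_pd, y_rd + y_pd

# With all coefficients zero the mapping is the identity.
print(apply_lens_distortion(0.1, 0.2))  # -> (0.1, 0.2)
```

With the 0th-order tangential model, p stays (0.0, 0.0) and only the radial terms act.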
- a terminal device determines the radial distortion coefficients, tangential distortion coefficients, and polar radius of different lens optical distortion models by an existing camera calibration method (for example, the Zhang Zhengyou calibration method or the Tsai calibration method).
- mapping relationship between the optical distortion model of the lens and the corresponding re-projection error value can be established as follows:
- a calibration object is selected.
- a mapping relationship between the lens optical distortion model and the determined re-projection error value is established.
- determining a re-projection error value corresponding to the lens optical distortion model includes:
- the difference between the theoretical distortion image coordinate value and the actual distortion image coordinate value is determined as the re-projection error value corresponding to the lens optical distortion model.
- when the calibration object is composed of a plurality of points, for each point of the calibration object,
- the difference between the theoretical distortion image coordinate value and the actual distortion image coordinate value representing that same point is calculated;
- the average or weighted average of the obtained plurality of differences is determined as the re-projection error value corresponding to the lens optical distortion model.
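The averaging of per-point differences described above can be sketched as follows. This is a minimal sketch; the function name and the use of Euclidean distance as the "difference" between coordinate values are assumptions.

```python
import numpy as np

def reprojection_error(theoretical, actual, weights=None):
    """Average (optionally weighted) distance between the theoretical and
    actual distorted image coordinates of the same calibration points."""
    theoretical = np.asarray(theoretical, dtype=float)
    actual = np.asarray(actual, dtype=float)
    diffs = np.linalg.norm(theoretical - actual, axis=1)  # per-point error
    return float(np.average(diffs, weights=weights))

theory = [(100.0, 100.0), (200.0, 150.0)]    # model-predicted coordinates
observed = [(101.0, 100.0), (200.0, 152.0)]  # measured coordinates
print(reprojection_error(theory, observed))  # -> 1.5
```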
- For example: if the lens optical model includes the 2nd-order radial distortion model and the 0th-order tangential distortion model, the calculated re-projection error value is 0.6; if it includes the 4th-order radial distortion model and the 0th-order tangential distortion model, the calculated re-projection error value is 0.67; if it includes the 6th-order radial distortion model and the 0th-order tangential distortion model, the calculated re-projection error value is 1.1;
- if it includes the (2+4)-order radial distortion model and the 0th-order tangential distortion model, the calculated re-projection error value is 0.51; if it includes the (2+6)-order radial distortion model and the 0th-order tangential distortion model, the calculated re-projection error value is 0.54; if it includes the (4+6)-order radial distortion model and the 0th-order tangential distortion model, the calculated re-projection error value is 0.51; if it includes the (2+4+6)-order radial distortion model and the 0th-order tangential distortion model, the calculated re-projection error value is 0.49.
- As shown in FIG. 2, it is a schematic diagram of the re-projection error values corresponding to the different lens optical distortion models.
- the lens optical distortion models with a re-projection error value less than 0.52 include lens optical distortion models 4, 6 and 7; among them,
- the lens optical distortion model obtained by combining the (2+4+6)-order radial distortion model and the 0th-order tangential distortion model is lens optical distortion model 7.
- the lens optical distortion model 7 can be used as the preferred lens optical distortion model; however, the complexity of the model is also the highest.
- lens optical distortion model 4 or lens optical distortion model 6 can be preferentially selected on a device with limited computing resources, slightly sacrificing correction accuracy in exchange for reduced computational complexity.
- In addition, the terminal device may further use the object corresponding to the acquired distorted image as a calibration object, calculate the re-projection error values corresponding to different lens optical distortion models, and, according to the calculated re-projection error values, select a lens optical distortion model whose re-projection error value is smaller than a set threshold, or select the lens optical distortion model corresponding to the minimum re-projection error value.
- the object to be photographed is used as a calibration object.
- the actual distortion image coordinate value corresponding to the object to be photographed is determined.
- the theoretical distortion image coordinate values corresponding to the subject are obtained by using different lens optical distortion models respectively; the difference between each theoretical distortion image coordinate value and the actual distortion image coordinate value is calculated to obtain the re-projection error value corresponding to each lens optical distortion model.
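The selection rule above (take a model whose re-projection error is below the set threshold, or else the minimum-error model) might be sketched like this, reusing the example error values; the function name and the dictionary representation are illustrative, not from the patent.

```python
def select_model(model_errors, threshold=0.52):
    """Pick the lens optical distortion model whose re-projection error is
    below the threshold; fall back to the minimum-error model overall."""
    candidates = {m: e for m, e in model_errors.items() if e < threshold}
    pool = candidates if candidates else model_errors
    # Among the remaining candidates, take the smallest error.
    return min(pool, key=pool.get)

# Error values from the example above (models 1-7).
errors = {1: 0.6, 2: 0.67, 3: 1.1, 4: 0.51, 5: 0.54, 6: 0.51, 7: 0.49}
print(select_model(errors))  # -> 7
```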
- Step 103 Perform optical distortion correction on the acquired image that is distorted by using the selected lens optical distortion model to obtain an image after optical distortion correction.
- step 103 optical distortion correction is performed on the acquired image that is distorted by using the lens optical distortion model, including:
- the spatial coordinate value of each point included in the acquired object corresponding to the acquired image may be determined first, and then the ideal image coordinate value corresponding to the spatial coordinate value of each point is calculated.
- the ideal image coordinate value refers to the coordinate value of the subject in an image free of optical distortion;
- the spatial coordinate value refers to a coordinate value of the subject in three-dimensional space.
- the ideal image coordinate value corresponding to the spatial coordinate value of each point can be calculated by:
- a virtual grid image without optical distortion is mapped, and the object to be photographed is mapped in the grid image to obtain an ideal image of the object, and an ideal image coordinate value of each grid point in the ideal image is determined.
- coordinate conversion is performed on the ideal image coordinate values of the selected grid points to obtain the theoretical optical distortion image coordinate values, which specifically includes:
- Step 1: Using the inverse matrix of the internal parameter matrix of the terminal device, convert the ideal image coordinate value of the selected grid point to obtain the first pinhole plane coordinate value.
- Step 2: Using the selected lens optical distortion model, convert the first pinhole plane coordinate value to obtain a distorted second pinhole plane coordinate value, where the distorted second pinhole plane coordinate value results from applying, based on the selected lens optical distortion model, optical distortion to the first pinhole plane coordinate value corresponding to the selected grid point.
- Step 3: Using the internal parameter matrix of the terminal device, convert the distorted second pinhole plane coordinate value to obtain the theoretical optical distortion image coordinate value.
- the pinhole plane coordinate refers to the coordinates of the point determined in the coordinate system established based on the terminal device.
- the coordinate system established based on the terminal device is defined as follows: the optical center of the imaging unit of the terminal device is the origin; the optical axis, perpendicular to the imaging plane, is the Z axis, with the imaging direction as the positive direction;
- the X axis of the coordinate system is parallel to the x-axis of the image physical coordinate system in the imaging plane;
- the Y axis of the coordinate system is parallel to the y-axis of the image physical coordinate system in the imaging plane.
- the ideal image coordinate values of the selected grid points are converted into the first pinhole plane coordinate values corresponding to the selected grid points by:
- (x, y, 1) is the homogeneous coordinate of the ideal image coordinate value of the selected grid point; (X, Y, Z) is the first pinhole plane coordinate value; A is a 3*3 upper triangular matrix denoting the internal parameter matrix output during the calibration of the terminal device; A -1 is the inverse matrix of A.
- (x, y, 1) is obtained by homogeneous coordinate transformation of (x, y), and (x, y) is an ideal image coordinate value of the selected grid point.
- the first pinhole plane coordinate value is coordinate-converted by using the selected lens optical distortion model to obtain a distorted second pinhole plane coordinate value.
- the selected lens optical distortion model is the lens optical distortion model in Table 1:
- the radial distortion model is: (Formula 4); the tangential distortion model is: (Formula 5); the combined lens optical distortion model is: (Formula 6).
- the second pinhole plane coordinate value of the distortion is converted into a theoretical optical distortion image coordinate value by:
- (X d , Y d , 1) is the homogeneous coordinate of (X d , Y d ) calculated in the second step.
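The three-step conversion above (inverse internal parameter matrix, lens optical distortion model, internal parameter matrix) can be sketched as follows. The sample intrinsic values are made up for illustration, and the distortion model is passed in as a function so any of the models in Table 1 could be plugged in.

```python
import numpy as np

def ideal_to_distorted_pixel(x, y, A, distort):
    """Three-step conversion from an ideal image coordinate (x, y) to the
    theoretical optical-distortion image coordinate.

    A: 3x3 upper-triangular internal parameter (intrinsic) matrix.
    distort: function mapping a pinhole-plane (X, Y) to distorted (X_d, Y_d).
    """
    # Step 1: homogeneous coordinate (x, y, 1) through the inverse of A
    # gives the first pinhole-plane coordinate.
    X, Y, _ = np.linalg.inv(A) @ np.array([x, y, 1.0])
    # Step 2: apply the selected lens optical distortion model.
    X_d, Y_d = distort(X, Y)
    # Step 3: map the homogeneous (X_d, Y_d, 1) back through A.
    u, v, w = A @ np.array([X_d, Y_d, 1.0])
    return u / w, v / w

# Illustrative intrinsic matrix (focal lengths 800, principal point 320/240).
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
identity = lambda X, Y: (X, Y)  # no distortion: output equals input
print(ideal_to_distorted_pixel(100.0, 50.0, A, identity))
```

With the identity "distortion", the round trip through A⁻¹ and A returns the original pixel coordinate, which is a quick sanity check of the step order.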
- homogeneous coordinate transformation refers to the way that an n-dimensional vector is represented by an n+1-dimensional vector.
- searching for pixel points for which the distance between the actual optical distortion image coordinate value and the theoretical optical distortion image coordinate value is less than the set threshold includes:
- the pixel value corresponding to the ideal image coordinate value of the captured object is calculated according to the pixel value of the found pixel, including:
- assume (x d , y d ) is the theoretical optical distortion image coordinate value, and the pixel points whose actual optical distortion image coordinate values are within the set threshold distance of it are found; if those actual optical distortion image coordinate values are expressed as (x1, y1), (x2, y2), (x3, y3), (x4, y4), then the pixel values of (x1, y1), (x2, y2), (x3, y3) and (x4, y4) are interpolated to obtain the pixel value corresponding to the selected grid point (x, y) in the ideal image.
- the interpolation method may be bilinear interpolation, bicubic interpolation, or a more complicated interpolation method based on edge statistical information; this is not specifically limited herein.
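As one concrete option, bilinear interpolation over the four neighbouring pixels can be sketched as:

```python
def bilinear_interpolate(img, x, y):
    """Bilinear interpolation of the pixel value at a non-integer position
    (x, y). img is indexed img[row][col], i.e. img[y][x]."""
    x0, y0 = int(x), int(y)
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0  # fractional offsets within the cell
    # Weighted average of the four surrounding pixels.
    return (img[y0][x0] * (1 - fx) * (1 - fy) +
            img[y0][x1] * fx * (1 - fy) +
            img[y1][x0] * (1 - fx) * fy +
            img[y1][x1] * fx * fy)

img = [[0, 10],
       [20, 30]]
print(bilinear_interpolate(img, 0.5, 0.5))  # -> 15.0
```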
- the obtained ideal image is used as an image obtained by optically correcting the acquired distortion image.
- Step 104: When the image after optical distortion correction is obtained, detect whether the image after optical distortion correction includes a set object; if yes, execute step 105; otherwise, output the image obtained by optical distortion correction.
- the setting object may be a face feature image, an image of a specific object, or the like, which is not limited herein.
- Step 105: Determine the intensity and direction of the regional distortion that the set object undergoes in the acquired distorted image.
- the direction in which the set object undergoes regional distortion in the acquired distorted image includes: the set object moving from the center of the acquired distorted image toward its periphery, or the set object moving from the periphery of the acquired distorted image toward its center;
- the intensity of the regional distortion that the set object undergoes in the acquired distorted image includes one or more of a displacement value and a displacement change amount.
- the regional distortion may refer to image distortion caused by the spatial distance or the shooting angle between the subject and the terminal device in the process of converting the subject into an image by the imaging function of the optical imaging unit.
- determining the intensity and direction of the regional distortion occurring in the acquired image in which the set object is distorted includes:
- Step 1: Determine a first position coordinate set of the set object in the acquired distorted image, and determine a second position coordinate set of the set object in the image after optical distortion correction.
- the pixel points belonging to the face feature are determined in the acquired image, and the determined coordinate set of the pixel points belonging to the face feature is obtained as the first position coordinate set; after the optical distortion correction The pixel points belonging to the face feature are determined in the image, and the determined coordinate set of the pixel points belonging to the face feature is obtained as the second position coordinate set.
- the coordinates of the pixel points of the face feature included in the first position coordinate set and in the second position coordinate set may be the coordinates of all pixel points of the subject that represent the face feature, or of only some of them;
- the pixel points whose coordinates are included in the first position coordinate set and the pixel points whose coordinates are included in the second position coordinate set should represent the same face features.
- For example, suppose the pixel points representing the eye of the face feature in the subject are numbered 1 to 10. If the first position coordinate set contains the coordinates of pixel point number 1 representing the eye, then the second position coordinate set also contains the coordinates of pixel point number 1 representing the eye.
- Step 2: For at least one pixel point in the set object, determine the coordinate value of the at least one pixel point in the first position coordinate set and the coordinate value of the at least one pixel point in the second position coordinate set.
- For example, the coordinate value of pixel point number 1 representing the eye of the face feature, included in the first position coordinate set, is (a, b);
- the coordinate value of pixel point number 1 representing the eye of the face feature, included in the second position coordinate set, is (c, d).
- Step 3: According to the coordinate value of the at least one pixel point in the first position coordinate set and its coordinate value in the second position coordinate set, determine the intensity and direction of the regional distortion that the set object undergoes in the acquired distorted image.
- For example, when the set object is in the four corners of the acquired distorted image, the direction in which the set object undergoes regional distortion in the acquired distorted image is from the center of the acquired distorted image toward its periphery, and the speed of the distortion first increases and then decreases.
- When the set object is stretched toward the four corners, the intensity of the regional distortion of the set object increases;
- when the set object is compressed toward the center, the intensity of the regional distortion of the set object decreases.
- FIG. 3(a) is a displacement vector standard diagram of an optical distortion correction image.
- FIG. 3(b) is a displacement vector change diagram of the optical distortion correction image.
- Step 106: Select a regional distortion correction parameter according to the determined intensity and direction of the regional distortion of the set object.
- the regional distortion correction parameter can be used to describe the regional distortion correction direction and the regional distortion correction strength.
- In step 106, according to the correspondence between the intensity and direction of regional distortion and regional distortion correction parameters, the regional distortion correction parameter corresponding to the determined intensity and direction of the regional distortion of the set object is obtained.
- Step 107 Perform area distortion correction on the image after the optical distortion correction by using the selected region distortion correction parameter to obtain an image after the regional distortion correction.
- step 107 the region distortion correction is performed on the image after the optical distortion correction by using the selected region distortion correction parameter, which specifically includes:
- Step 1: Correct the coordinate value of each pixel point included in the first position coordinate set by using the selected regional distortion correction parameter.
- selecting a pixel point from the first set of position coordinates, and correcting coordinate values of the selected pixel point by:
- F ldc is the coordinate value of the selected pixel point in the second position coordinate set
- F d is the coordinate value of the selected pixel point in the first position coordinate set before correction
- Alpha is a regional distortion correction parameter that includes the direction of regional correction and the strength of regional correction.
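The text names the inputs F d, F ldc and the parameter Alpha but does not reproduce the correction formula itself. One plausible reading, shown purely as an illustrative sketch, blends the pre-correction coordinate toward the optically corrected coordinate, with Alpha encoding both correction direction (sign) and strength (magnitude):

```python
def correct_coordinate(F_d, F_ldc, alpha):
    """Blend the pre-correction coordinate F_d (first position coordinate
    set) toward the optically corrected coordinate F_ldc (second position
    coordinate set), giving the corrected coordinate F_d'. This exact
    blending form is an assumption; the patent text names only the inputs
    and the parameter, not the formula."""
    x_d, y_d = F_d
    x_l, y_l = F_ldc
    return (x_d + alpha * (x_l - x_d), y_d + alpha * (y_l - y_d))

# alpha = 0.5 moves the point halfway toward its optically corrected position.
print(correct_coordinate((10.0, 10.0), (12.0, 14.0), 0.5))  # -> (11.0, 12.0)
```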
- Step 2: According to the corrected first position coordinate set and the second position coordinate set, determine a conversion rule between the coordinate values of the pixel points of the set object in the corrected first position coordinate set and their coordinate values in the second position coordinate set.
- For example, a homography matrix H describes the spatial transformation relationship between the coordinate values corresponding to the same pixel point in F d ' and F ldc ;
- M is the number of pixel point pairs included in {F d ', F ldc }.
- the least squares method or the gradient descent method can be used to solve for h, and the homography matrix H is then obtained, where H represents the conversion rule between the corrected pixel coordinates of the set object and its coordinates in the second position coordinate set.
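Solving for the vector h (and hence the matrix H) from M point pairs in the least-squares sense, as mentioned above, is commonly done with the direct linear transform (DLT); this sketch uses an SVD rather than gradient descent, which is one standard choice, not necessarily the patent's.

```python
import numpy as np

def solve_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (M >= 4 point
    pairs) by solving the DLT linear system in the least-squares sense."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in h.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(rows, dtype=float)
    # h is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

# A pure translation by (5, -3) is itself a homography.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(x + 5, y - 3) for x, y in src]
H = solve_homography(src, dst)
print(np.round(H, 6))
```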
- Step 3: Using the determined conversion rule, perform regional distortion correction on the image after optical distortion correction to obtain the image after regional distortion correction, which specifically includes:
- a grid image after regional distortion correction is virtualized,
- where the number of grid points included in the grid image after regional distortion correction is the same as the number of pixel points included in the image after optical distortion correction, and the coordinate values of grid points and pixel points at the same position are the same.
- using the determined conversion rule, the coordinate value of a selected grid point is converted to obtain its regional distortion coordinate value, and the pixel points near that coordinate value are found;
- the pixel value of the selected grid point in the grid image is then calculated according to the pixel values of the found pixel points.
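The grid-based resampling described above might be sketched as follows, with a homography standing in for the conversion rule and nearest-neighbour sampling standing in for the interpolation step (both simplifications of the text):

```python
import numpy as np

def warp_with_homography(img, H):
    """Build the region-distortion-corrected image: for each grid point of
    the output, map its coordinate through the conversion rule H back into
    the source image and sample the nearest pixel (bilinear sampling would
    be used in practice)."""
    h, w = img.shape
    out = np.zeros_like(img)
    Hinv = np.linalg.inv(H)  # output-grid -> source-image mapping
    for y in range(h):
        for x in range(w):
            sx, sy, sw = Hinv @ np.array([x, y, 1.0])
            sx, sy = sx / sw, sy / sw
            ix, iy = int(round(sx)), int(round(sy))
            if 0 <= ix < w and 0 <= iy < h:
                out[y, x] = img[iy, ix]
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
shift = np.array([[1.0, 0.0, 1.0],   # translate by one pixel in x
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
print(warp_with_homography(img, shift))
```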
- In this way, an image on which both optical distortion correction and regional distortion correction have been performed is obtained from the acquired distorted image.
- the method further includes: performing display adjustment on the obtained image by using the display parameters, so that the resolution of the obtained image is the same as the image resolution of the terminal device, and outputting the adjusted image.
- the display parameters may include display size, display resolution, and the like.
- Embodiment 2:
- FIG. 4 is a schematic structural diagram of an image processing apparatus according to Embodiment 2 of the present invention.
- the image processing apparatus has the functions of Embodiment 1 of the present invention, and these functions can be implemented by a general purpose computer.
- the image processing device entity includes an imaging device 31, an image sensor 32, and a processor 33.
- the image sensor 32 and the processor 33 are connected by a bus 34.
- the imaging device 31 is configured to map a subject onto the image sensor 32;
- the image sensor 32 is configured to acquire an image in which distortion occurs
- the processor 33 is configured to select, according to a mapping relationship between at least one set of lens optical distortion models and re-projection error values, a lens optical distortion model whose re-projection error value is less than a set threshold, where the lens optical distortion model includes an optical distortion type, a distortion order, and a distortion coefficient, and the re-projection error value is used to characterize, for any calibration object, the difference between the theoretical distortion image coordinate value of the calibration object and the actual distortion image coordinate value of the calibration object;
- the distortion image obtained by the image sensor 32 is optically corrected by the lens optical distortion model to obtain an image after optical distortion correction.
- the image processing apparatus may further include a memory 35, and the memory 35 and the processor 33 are connected by a bus 34.
- the memory 35 is configured to store the distorted image acquired by the image sensor 32.
- the memory 35 is further configured to send the stored distorted image to the processor 33.
- the image processing apparatus may further include a display 36, and the display 36 and the processor 33 are connected by a bus 34.
- the display 36 is configured to output an image obtained by correcting the optical distortion obtained by the processor 33.
- the processor 33 performs optical distortion correction on the acquired image that is distorted by using the lens optical distortion model, specifically for:
- the processor 33 determines an ideal image coordinate value of the captured object corresponding to the acquired image that is distorted, specifically for:
- the processor 33 performs coordinate conversion on the determined ideal image coordinate value of the captured object by using the lens optical distortion model to obtain a theoretical optical distortion image coordinate value corresponding to the ideal image coordinate value, specifically Used for:
- the processor 33 performs coordinate conversion on the ideal image coordinate values of the selected grid points by using the internal parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse matrix of the internal parameter matrix of the terminal device, to obtain theoretical optical distortion image coordinate values, specifically for:
- the distorted second pinhole plane coordinate value is converted to obtain a theoretical optical distortion image coordinate value.
- the processor 33 searches, according to the theoretical optical distortion image coordinate value and the actual optical distortion image coordinate values of the pixel points included in the acquired distorted image, for pixel points whose distance between the actual optical distortion image coordinate value and the theoretical optical distortion image coordinate value is less than the set threshold, specifically for:
- the processor 33 calculates, according to the pixel value of the found pixel point, a pixel value corresponding to the ideal image coordinate value of the captured object, specifically for:
- the processor 33 is specifically configured to:
- the obtained ideal image is used as an image obtained by optically correcting the acquired distortion image.
- mapping relationship between the lens optical distortion model and the re-projection error value includes:
- a calibration object is selected
- a mapping relationship between the lens optical distortion model and the determined re-projection error value is established.
- the processor 33 is further configured to: when obtaining an image after optical distortion correction:
- the region distortion correction is performed on the image after the optical distortion correction by using the selected region distortion correction parameter, and the image after the regional distortion correction is obtained.
- the processor 33 determines the strength and direction of the regional distortion caused by the setting object in the acquired distorted image, specifically for:
- the processor 33 performs regional distortion correction on the image after the optical distortion correction by using the selected region distortion correction parameter, and obtains an image after the regional distortion correction, specifically for:
- the image distortion correction is performed on the image after the optical distortion correction by using the determined conversion rule, and the image after the regional distortion correction is obtained.
- the processor 33 performs regional distortion correction on the optical distortion corrected image by using the determined conversion rule, specifically for:
- a grid image after regional distortion correction is virtualized, where the number of grid points included in the grid image after regional distortion correction is the same as the number of pixel points included in the image after optical distortion correction, and the coordinate values of grid points and pixel points at the same position are the same;
- the display 36 is further configured to display an image after the area distortion correction.
- the processor 33 may be a general purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the present invention.
- the memory 35 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it may also be an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
- These memories are connected to the processor via the bus.
- By means of the image processing device, not only is the optical distortion caused by the lens device corrected by the lens optical distortion model, but the regional distortion caused by the shooting angle is also corrected by the regional distortion correction parameter, thereby improving the quality of the image acquired by the acquisition device.
- Embodiment 3:
- As shown in FIG. 5, it is a schematic structural diagram of an image processing device according to Embodiment 3 of the present invention.
- the image processing device includes: an obtaining module 41, a selecting module 42 and a processing module 43, wherein:
- the acquiring module 41 is configured to acquire an image in which the subject is distorted
- the selection module 42 is configured to select, according to a mapping relationship between at least one set of lens optical distortion models and re-projection error values, a lens optical distortion model whose re-projection error value is less than a set threshold, where the lens optical distortion model includes an optical distortion type, a distortion order, and a distortion coefficient, and the re-projection error value is used to characterize, for a calibration object, the difference between the theoretical distortion image coordinate value of the calibration object and the actual distortion image coordinate value of the calibration object;
- a processing module 43, configured to perform optical distortion correction on the acquired distorted image by using the lens optical distortion model, to obtain an image after optical distortion correction.
- the processing module 43 performs optical distortion correction on the acquired image that is distorted by using the lens optical distortion model, specifically for:
- the processing module 43 determines an ideal image coordinate value of the captured object corresponding to the acquired image that is distorted, specifically for:
- the processing module 43 performs coordinate conversion on the determined ideal image coordinate value of the captured object by using the lens optical distortion model to obtain a theoretical optical distortion image coordinate value corresponding to the ideal image coordinate value, specifically Used for:
- the processing module 43 performs coordinate transformation on the ideal image coordinate values of the selected grid points by using the internal parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse matrix of the internal parameter matrix of the terminal device, to obtain theoretical optical distortion image coordinate values, specifically for:
- the distorted second pinhole plane coordinate value is converted to obtain a theoretical optical distortion image coordinate value.
- the processing module 43 searches, according to the theoretical optical distortion image coordinate value and the actual optical distortion image coordinate values of the pixel points included in the acquired distorted image, for pixel points whose distance between the actual optical distortion image coordinate value and the theoretical optical distortion image coordinate value is less than the set threshold, specifically for:
- the processing module 43 calculates, according to the pixel value of the found pixel point, a pixel value corresponding to the ideal image coordinate value of the captured object, specifically for:
- the processing module 43 is specifically configured to: when obtaining the pixel value of each grid point in the ideal image, use the obtained ideal image as an image obtained by performing optical distortion correction on the acquired distortion image. .
- the mapping relationship between the lens optical distortion model and the reprojection error value is established as follows:
- a calibration object is selected for a given lens optical distortion model;
- a mapping relationship between the lens optical distortion model and the determined reprojection error value is established.
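Selecting a model by reprojection error, as outlined above, can be illustrated with a small sketch. The RMS-distance error metric and the `select_model` helper are assumptions made here for illustration; the text only requires some measure of the difference between theoretical and actual distorted coordinates of the calibration object.

```python
import numpy as np

def reprojection_error(model, ideal_pts, actual_pts):
    """RMS distance between theoretical and actual distorted coordinates.

    `model` maps an ideal coordinate (u, v) to a theoretical distorted
    coordinate for the calibration object.
    """
    theo = np.array([model(u, v) for u, v in ideal_pts])
    return float(np.sqrt(np.mean(np.sum((theo - np.asarray(actual_pts)) ** 2,
                                        axis=1))))

def select_model(models, ideal_pts, actual_pts, threshold):
    """Return the first candidate model whose reprojection error is below
    the set threshold, mirroring the mapping-relationship lookup."""
    for name, model in models:
        if reprojection_error(model, ideal_pts, actual_pts) < threshold:
            return name
    return None
```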
- the processing module 43 is further configured to:
- regional distortion correction is performed on the image after optical distortion correction by using the selected regional distortion correction parameter to obtain an image after regional distortion correction.
- the processing module 43, when determining the strength and direction of the regional distortion caused by the set object in the acquired distorted image, is specifically configured to:
- the processing module 43, when performing regional distortion correction on the image after optical distortion correction by using the selected regional distortion correction parameter to obtain the image after regional distortion correction, is specifically configured to:
- regional distortion correction is performed on the image after optical distortion correction by using the determined conversion rule to obtain the image after regional distortion correction.
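One plausible form for the "conversion rule" between the corrected first position coordinate set and the second position coordinate set is a least-squares affine fit, sketched below. The affine choice is an assumption made here for illustration, since the text leaves the rule's form open.

```python
import numpy as np

def fit_conversion_rule(src, dst):
    """Least-squares affine transform mapping `src` points onto `dst`.

    A simple stand-in for the conversion rule between the corrected first
    position coordinate set and the second position coordinate set.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Homogeneous design matrix [x, y, 1]; solve A so that dst ≈ src_h @ A.
    src_h = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    # The returned rule applies the fitted affine map to a single point.
    return lambda p: np.asarray(p, dtype=float) @ A[:2] + A[2]
```

With four or more non-degenerate correspondences the fit is exact for a true affine relation, so a scale-plus-shift test recovers the expected point.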
- the processing module 43, when performing regional distortion correction on the image after optical distortion correction by using the determined conversion rule, is specifically configured to:
- virtualize a grid image after regional distortion correction, wherein the number of grid points included in the grid image after regional distortion correction is the same as the number of pixels included in the image after optical distortion correction, and grid points and pixels at the same position have the same coordinate values;
- calculate, according to the pixel values of the found pixels, the pixel value of the selected grid point in the grid image.
- the image processing device corrects not only the optical distortion introduced by the lens device, by using the lens optical distortion model, but also the regional distortion caused by the shooting angle, by using the regional distortion correction parameter, thereby improving the quality of the image captured by the acquisition device.
- the image processing device may be a logical component integrated in the terminal device and implemented by hardware or software, or may be a device independent of the terminal device; this is not limited herein.
- embodiments of the present invention may be provided as a method, an apparatus (device), or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) that contain computer-usable program code.
- these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
- these computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Abstract
Description
Lens optical distortion model No. | Radial distortion model and tangential distortion model corresponding to the lens optical distortion model |
1 | 2nd-order radial distortion model and 0th-order tangential distortion model |
2 | 4th-order radial distortion model and 0th-order tangential distortion model |
3 | 6th-order radial distortion model and 0th-order tangential distortion model |
4 | (2+4)-order radial distortion model and 0th-order tangential distortion model |
5 | (2+6)-order radial distortion model and 0th-order tangential distortion model |
6 | (4+6)-order radial distortion model and 0th-order tangential distortion model |
7 | (2+4+6)-order radial distortion model and 0th-order tangential distortion model |
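The seven candidate model combinations in the table can be encoded as plain data, with each entry naming the radial orders it combines. The dictionary layout and the coefficient handling in `radial_factor` are illustrative assumptions; actual coefficient values would be estimated per model during calibration.

```python
# Candidate lens optical distortion models: each pairs a radial distortion
# model (some combination of 2nd-, 4th-, and 6th-order terms) with a
# 0th-order (i.e. absent) tangential model.
CANDIDATE_MODELS = {
    1: {"radial_orders": (2,),      "tangential_order": 0},
    2: {"radial_orders": (4,),      "tangential_order": 0},
    3: {"radial_orders": (6,),      "tangential_order": 0},
    4: {"radial_orders": (2, 4),    "tangential_order": 0},
    5: {"radial_orders": (2, 6),    "tangential_order": 0},
    6: {"radial_orders": (4, 6),    "tangential_order": 0},
    7: {"radial_orders": (2, 4, 6), "tangential_order": 0},
}

def radial_factor(r2, coeffs):
    """Radial scaling 1 + sum(k * r**order) for squared radius r2.

    `coeffs` maps an even radial order to its coefficient, so r**order
    is r2 raised to order // 2.
    """
    return 1.0 + sum(k * r2 ** (order // 2) for order, k in coeffs.items())
```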
Claims (39)
- An image processing method, comprising: acquiring a distorted image of a photographed object; selecting, according to a mapping relationship between at least one group of lens optical distortion models and reprojection error values, a lens optical distortion model whose reprojection error value is less than a set threshold, wherein the lens optical distortion model comprises an optical distortion type, a distortion order, and a distortion coefficient, and the reprojection error value characterizes, for a calibration object, a difference between a theoretical distortion image coordinate value of the calibration object and an actual distortion image coordinate value of the calibration object; and performing optical distortion correction on the acquired distorted image by using the lens optical distortion model to obtain an image after optical distortion correction.
- The method according to claim 1, wherein performing optical distortion correction on the acquired distorted image by using the lens optical distortion model comprises: determining an ideal image coordinate value of the photographed object corresponding to the acquired distorted image, wherein the ideal image coordinate value characterizes a coordinate value of the photographed object in an image in which no optical distortion occurs; performing coordinate conversion on the determined ideal image coordinate value of the photographed object by using the lens optical distortion model to obtain a theoretical optical distortion image coordinate value corresponding to the ideal image coordinate value; searching, according to the theoretical optical distortion image coordinate value and actual optical distortion image coordinate values of pixels included in the acquired distorted image, for pixels whose distance between the actual optical distortion image coordinate value and the theoretical optical distortion image coordinate value is less than a set limit; and calculating, according to pixel values of the found pixels, a pixel value corresponding to the ideal image coordinate value of the photographed object.
- The method according to claim 2, wherein determining the ideal image coordinate value of the photographed object corresponding to the acquired distorted image comprises: virtualizing a grid image in which no optical distortion occurs, and mapping the photographed object into the grid image to obtain an ideal image of the photographed object; and determining an ideal image coordinate value of each grid point in the ideal image.
- The method according to claim 3, wherein performing coordinate conversion on the determined ideal image coordinate value of the photographed object by using the lens optical distortion model to obtain the theoretical optical distortion image coordinate value corresponding to the ideal image coordinate value comprises: reading an intrinsic parameter matrix of a terminal device and an inverse matrix of the intrinsic parameter matrix; and for the ideal image coordinate value of each grid point in the ideal image, performing: selecting a grid point from the ideal image, and performing coordinate conversion on the ideal image coordinate value of the selected grid point by using the intrinsic parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse matrix of the intrinsic parameter matrix of the terminal device to obtain a theoretical optical distortion image coordinate value.
- The method according to claim 4, wherein performing coordinate conversion on the ideal image coordinate value of the selected grid point by using the intrinsic parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse matrix of the intrinsic parameter matrix of the terminal device to obtain the theoretical optical distortion image coordinate value comprises: converting the ideal image coordinate value of the selected grid point into a first pinhole plane coordinate value by using the inverse matrix of the intrinsic parameter matrix of the terminal device; converting the first pinhole plane coordinate value into a distorted second pinhole plane coordinate value by using the selected lens optical distortion model, wherein the distorted second pinhole plane coordinate value is obtained by applying optical distortion, based on the selected lens optical distortion model, to the first pinhole plane coordinate value corresponding to the selected grid point; and converting the distorted second pinhole plane coordinate value into the theoretical optical distortion image coordinate value by using the intrinsic parameter matrix of the terminal device.
- The method according to any one of claims 2 to 5, wherein searching, according to the theoretical optical distortion image coordinate value and the actual optical distortion image coordinate values of the pixels included in the acquired distorted image, for the pixels whose distance between the actual optical distortion image coordinate value and the theoretical optical distortion image coordinate value is less than the set limit comprises: calculating distance values between the theoretical optical distortion image coordinate value and the actual optical distortion image coordinate value of each pixel included in the acquired distorted image, and determining the pixels whose calculated distance values are less than the set limit.
- The method according to any one of claims 2 to 6, wherein calculating, according to the pixel values of the found pixels, the pixel value corresponding to the ideal image coordinate value of the photographed object comprises: performing interpolation on the pixel values of the found pixels to obtain a pixel value, in the ideal image, of the ideal image coordinate value of the photographed object.
- The method according to any one of claims 3 to 7, wherein obtaining the image after optical distortion correction comprises: when a pixel value of each grid point in the ideal image is obtained, using the obtained ideal image as an image obtained after performing optical distortion correction on the acquired distorted image.
- The method according to any one of claims 1 to 8, wherein the mapping relationship between the lens optical distortion model and the reprojection error value is established by: selecting a calibration object for a lens optical distortion model; mapping the calibration object into a grid image to obtain ideal image coordinate values of the calibration object; converting the obtained ideal image coordinate values of the calibration object into theoretical distortion image coordinate values by using the lens optical distortion model; mapping the calibration object onto an image sensor through an imaging function of an optical imaging device to obtain an optically distorted image, and determining actual distortion image coordinate values of pixels in the optically distorted image; determining, according to differences between the theoretical distortion image coordinate values and the actual distortion image coordinate values, a reprojection error value corresponding to the lens optical distortion model; and establishing a mapping relationship between the lens optical distortion model and the determined reprojection error value.
- The method according to any one of claims 1 to 9, wherein when the image after optical distortion correction is obtained, the method further comprises: when it is determined that the acquired distorted image contains a set object, determining a strength and a direction of regional distortion of the set object in the acquired distorted image; selecting a regional distortion correction parameter according to the determined strength and direction of the regional distortion of the set object; and performing regional distortion correction on the image after optical distortion correction by using the selected regional distortion correction parameter to obtain an image after regional distortion correction.
- The method according to claim 10, wherein determining the strength and direction of the regional distortion of the set object in the acquired distorted image comprises: determining a first position coordinate set of the set object in the acquired distorted image, and determining a second position coordinate set of the set object in the image after optical distortion correction; for at least one pixel of the set object, separately determining a coordinate value of the at least one pixel in the first position coordinate set and a coordinate value of the at least one pixel in the second position coordinate set; and determining, according to the coordinate value of the at least one pixel in the first position coordinate set and the coordinate value of the at least one pixel in the second position coordinate set, the strength and direction of the regional distortion of the set object in the acquired distorted image.
- The method according to claim 10 or 11, wherein performing regional distortion correction on the image after optical distortion correction by using the selected regional distortion correction parameter to obtain the image after regional distortion correction comprises: correcting, by using the selected regional distortion correction parameter, the coordinate value of each pixel included in the first position coordinate set; determining, according to the corrected first position coordinate set and the second position coordinate set, a conversion rule between coordinate values of pixels of the set object in the corrected first position coordinate set and coordinate values of the pixels in the second position coordinate set; and performing regional distortion correction on the image after optical distortion correction by using the determined conversion rule to obtain the image after regional distortion correction.
- The method according to claim 12, wherein performing regional distortion correction on the image after optical distortion correction by using the determined conversion rule comprises: virtualizing, according to the image after optical distortion correction, a grid image after regional distortion correction, wherein the number of grid points included in the grid image after regional distortion correction is the same as the number of pixels included in the image after optical distortion correction, and grid points and pixels at the same position have the same coordinate values; and for each grid point in the grid image, performing the following operations: selecting a grid point from the grid image, and converting a coordinate value of the grid point into a regional distortion coordinate value by using the determined conversion rule; searching, according to the regional distortion coordinate value and coordinate values of the pixels included in the image after optical distortion correction, for pixels whose distance between their coordinate values and the regional distortion coordinate value is less than a set distance value; and calculating, according to pixel values of the found pixels, a pixel value of the selected grid point in the grid image.
- An image processing device, comprising an imaging device, an image sensor, and a processor, wherein the image sensor and the processor are connected by a bus; the imaging device is configured to map a photographed object onto the image sensor; the image sensor is configured to acquire a distorted image of the photographed object; and the processor is configured to: select, according to a mapping relationship between at least one group of lens optical distortion models and reprojection error values, a lens optical distortion model whose reprojection error value is less than a set threshold, wherein the lens optical distortion model comprises an optical distortion type, a distortion order, and a distortion coefficient, and the reprojection error value characterizes, for a calibration object, a difference between a theoretical distortion image coordinate value of the calibration object and an actual distortion image coordinate value of the calibration object; and perform optical distortion correction on the distorted image acquired by the image sensor by using the lens optical distortion model to obtain an image after optical distortion correction.
- The image processing device according to claim 14, wherein when performing optical distortion correction on the acquired distorted image by using the lens optical distortion model, the processor is specifically configured to: determine an ideal image coordinate value of the photographed object corresponding to the acquired distorted image, wherein the ideal image coordinate value characterizes a coordinate value of the photographed object in an image in which no optical distortion occurs; perform coordinate conversion on the determined ideal image coordinate value of the photographed object by using the lens optical distortion model to obtain a theoretical optical distortion image coordinate value corresponding to the ideal image coordinate value; search, according to the theoretical optical distortion image coordinate value and actual optical distortion image coordinate values of pixels included in the acquired distorted image, for pixels whose distance between the actual optical distortion image coordinate value and the theoretical optical distortion image coordinate value is less than a set limit; and calculate, according to pixel values of the found pixels, a pixel value corresponding to the ideal image coordinate value of the photographed object.
- The image processing device according to claim 15, wherein when determining the ideal image coordinate value of the photographed object corresponding to the acquired distorted image, the processor is specifically configured to: virtualize a grid image in which no optical distortion occurs, and map the photographed object into the grid image to obtain an ideal image of the photographed object; and determine an ideal image coordinate value of each grid point in the ideal image.
- The image processing device according to claim 16, wherein when performing coordinate conversion on the determined ideal image coordinate value of the photographed object by using the lens optical distortion model to obtain the theoretical optical distortion image coordinate value corresponding to the ideal image coordinate value, the processor is specifically configured to: read an intrinsic parameter matrix of a terminal device and an inverse matrix of the intrinsic parameter matrix; and for the ideal image coordinate value of each grid point in the ideal image, perform: selecting a grid point from the ideal image, and performing coordinate conversion on the ideal image coordinate value of the selected grid point by using the intrinsic parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse matrix of the intrinsic parameter matrix of the terminal device to obtain a theoretical optical distortion image coordinate value.
- The image processing device according to claim 17, wherein when performing coordinate conversion on the ideal image coordinate value of the selected grid point by using the intrinsic parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse matrix of the intrinsic parameter matrix of the terminal device to obtain the theoretical optical distortion image coordinate value, the processor is specifically configured to: convert the ideal image coordinate value of the selected grid point into a first pinhole plane coordinate value by using the inverse matrix of the intrinsic parameter matrix of the terminal device; convert the first pinhole plane coordinate value into a distorted second pinhole plane coordinate value by using the selected lens optical distortion model, wherein the distorted second pinhole plane coordinate value is obtained by applying optical distortion, based on the selected lens optical distortion model, to the first pinhole plane coordinate value corresponding to the selected grid point; and convert the distorted second pinhole plane coordinate value into the theoretical optical distortion image coordinate value by using the intrinsic parameter matrix of the terminal device.
- The image processing device according to any one of claims 15 to 18, wherein when searching, according to the theoretical optical distortion image coordinate value and the actual optical distortion image coordinate values of the pixels included in the acquired distorted image, for the pixels whose distance between the actual optical distortion image coordinate value and the theoretical optical distortion image coordinate value is less than the set limit, the processor is specifically configured to: calculate distance values between the theoretical optical distortion image coordinate value and the actual optical distortion image coordinate value of each pixel included in the acquired distorted image, and determine the pixels whose calculated distance values are less than the set limit.
- The image processing device according to any one of claims 15 to 19, wherein when calculating, according to the pixel values of the found pixels, the pixel value corresponding to the ideal image coordinate value of the photographed object, the processor is specifically configured to: perform interpolation on the pixel values of the found pixels to obtain a pixel value, in the ideal image, of the ideal image coordinate value of the photographed object.
- The image processing device according to any one of claims 15 to 20, wherein the processor is specifically configured to: when a pixel value of each grid point in the ideal image is obtained, use the obtained ideal image as an image obtained after performing optical distortion correction on the acquired distorted image.
- The image processing device according to any one of claims 14 to 21, wherein the mapping relationship between the lens optical distortion model and the reprojection error value is established by: selecting a calibration object for a lens optical distortion model; mapping the calibration object into a grid image to obtain ideal image coordinate values of the calibration object; converting the obtained ideal image coordinate values of the calibration object into theoretical distortion image coordinate values by using the lens optical distortion model; mapping the calibration object onto an image sensor through an imaging function of an optical imaging device to obtain an optically distorted image, and determining actual distortion image coordinate values of pixels in the optically distorted image; determining, according to differences between the theoretical distortion image coordinate values and the actual distortion image coordinate values, a reprojection error value corresponding to the lens optical distortion model; and establishing a mapping relationship between the lens optical distortion model and the determined reprojection error value.
- The image processing device according to any one of claims 14 to 22, wherein when the image after optical distortion correction is obtained, the processor is further configured to: when it is determined that the acquired distorted image contains a set object, determine a strength and a direction of regional distortion of the set object in the acquired distorted image; select a regional distortion correction parameter according to the determined strength and direction of the regional distortion of the set object; and perform regional distortion correction on the image after optical distortion correction by using the selected regional distortion correction parameter to obtain an image after regional distortion correction.
- The image processing device according to claim 23, wherein when determining the strength and direction of the regional distortion of the set object in the acquired distorted image, the processor is specifically configured to: determine a first position coordinate set of the set object in the acquired distorted image, and determine a second position coordinate set of the set object in the image after optical distortion correction; for at least one pixel of the set object, separately determine a coordinate value of the at least one pixel in the first position coordinate set and a coordinate value of the at least one pixel in the second position coordinate set; and determine, according to the coordinate value of the at least one pixel in the first position coordinate set and the coordinate value of the at least one pixel in the second position coordinate set, the strength and direction of the regional distortion of the set object in the acquired distorted image.
- The image processing device according to claim 23 or 24, wherein when performing regional distortion correction on the image after optical distortion correction by using the selected regional distortion correction parameter to obtain the image after regional distortion correction, the processor is specifically configured to: correct, by using the selected regional distortion correction parameter, the coordinate value of each pixel included in the first position coordinate set; determine, according to the corrected first position coordinate set and the second position coordinate set, a conversion rule between coordinate values of pixels of the set object in the corrected first position coordinate set and coordinate values of the pixels in the second position coordinate set; and perform regional distortion correction on the image after optical distortion correction by using the determined conversion rule to obtain the image after regional distortion correction.
- The image processing device according to claim 25, wherein when performing regional distortion correction on the image after optical distortion correction by using the determined conversion rule, the processor is specifically configured to: virtualize, according to the image after optical distortion correction, a grid image after regional distortion correction, wherein the number of grid points included in the grid image after regional distortion correction is the same as the number of pixels included in the image after optical distortion correction, and grid points and pixels at the same position have the same coordinate values; and for each grid point in the grid image, perform the following operations: selecting a grid point from the grid image, and converting a coordinate value of the grid point into a regional distortion coordinate value by using the determined conversion rule; searching, according to the regional distortion coordinate value and coordinate values of the pixels included in the image after optical distortion correction, for pixels whose distance between their coordinate values and the regional distortion coordinate value is less than a set distance value; and calculating, according to pixel values of the found pixels, a pixel value of the selected grid point in the grid image.
- An image processing device, comprising: an acquiring module, configured to acquire a distorted image of a photographed object; a selecting module, configured to select, according to a mapping relationship between at least one group of lens optical distortion models and reprojection error values, a lens optical distortion model whose reprojection error value is less than a set threshold, wherein the lens optical distortion model comprises an optical distortion type, a distortion order, and a distortion coefficient, and the reprojection error value characterizes, for a calibration object, a difference between a theoretical distortion image coordinate value of the calibration object and an actual distortion image coordinate value of the calibration object; and a processing module, configured to perform optical distortion correction on the acquired distorted image by using the lens optical distortion model to obtain an image after optical distortion correction.
- The image processing device according to claim 27, wherein when performing optical distortion correction on the acquired distorted image by using the lens optical distortion model, the processing module is specifically configured to: determine an ideal image coordinate value of the photographed object corresponding to the acquired distorted image, wherein the ideal image coordinate value characterizes a coordinate value of the photographed object in an image in which no optical distortion occurs; perform coordinate conversion on the determined ideal image coordinate value of the photographed object by using the lens optical distortion model to obtain a theoretical optical distortion image coordinate value corresponding to the ideal image coordinate value; search, according to the theoretical optical distortion image coordinate value and actual optical distortion image coordinate values of pixels included in the acquired distorted image, for pixels whose distance between the actual optical distortion image coordinate value and the theoretical optical distortion image coordinate value is less than a set limit; and calculate, according to pixel values of the found pixels, a pixel value corresponding to the ideal image coordinate value of the photographed object.
- The image processing device according to claim 28, wherein when determining the ideal image coordinate value of the photographed object corresponding to the acquired distorted image, the processing module is specifically configured to: virtualize a grid image in which no optical distortion occurs, and map the photographed object into the grid image to obtain an ideal image of the photographed object; and determine an ideal image coordinate value of each grid point in the ideal image.
- The image processing device according to claim 29, wherein when performing coordinate conversion on the determined ideal image coordinate value of the photographed object by using the lens optical distortion model to obtain the theoretical optical distortion image coordinate value corresponding to the ideal image coordinate value, the processing module is specifically configured to: read an intrinsic parameter matrix of a terminal device and an inverse matrix of the intrinsic parameter matrix; and for the ideal image coordinate value of each grid point in the ideal image, perform: selecting a grid point from the ideal image, and performing coordinate conversion on the ideal image coordinate value of the selected grid point by using the intrinsic parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse matrix of the intrinsic parameter matrix of the terminal device to obtain a theoretical optical distortion image coordinate value.
- The image processing device according to claim 30, wherein when performing coordinate conversion on the ideal image coordinate value of the selected grid point by using the intrinsic parameter matrix of the terminal device, the selected lens optical distortion model, and the inverse matrix of the intrinsic parameter matrix of the terminal device to obtain the theoretical optical distortion image coordinate value, the processing module is specifically configured to: convert the ideal image coordinate value of the selected grid point into a first pinhole plane coordinate value by using the inverse matrix of the intrinsic parameter matrix of the terminal device; convert the first pinhole plane coordinate value into a distorted second pinhole plane coordinate value by using the selected lens optical distortion model, wherein the distorted second pinhole plane coordinate value is obtained by applying optical distortion, based on the selected lens optical distortion model, to the first pinhole plane coordinate value corresponding to the selected grid point; and convert the distorted second pinhole plane coordinate value into the theoretical optical distortion image coordinate value by using the intrinsic parameter matrix of the terminal device.
- The image processing device according to any one of claims 28 to 31, wherein when searching, according to the theoretical optical distortion image coordinate value and the actual optical distortion image coordinate values of the pixels included in the acquired distorted image, for the pixels whose distance between the actual optical distortion image coordinate value and the theoretical optical distortion image coordinate value is less than the set limit, the processing module is specifically configured to: calculate distance values between the theoretical optical distortion image coordinate value and the actual optical distortion image coordinate value of each pixel included in the acquired distorted image, and determine the pixels whose calculated distance values are less than the set limit.
- The image processing device according to any one of claims 28 to 32, wherein when calculating, according to the pixel values of the found pixels, the pixel value corresponding to the ideal image coordinate value of the photographed object, the processing module is specifically configured to: perform interpolation on the pixel values of the found pixels to obtain a pixel value, in the ideal image, of the ideal image coordinate value of the photographed object.
- The image processing device according to any one of claims 29 to 33, wherein the processing module is specifically configured to: when a pixel value of each grid point in the ideal image is obtained, use the obtained ideal image as an image obtained after performing optical distortion correction on the acquired distorted image.
- The image processing device according to any one of claims 27 to 34, wherein the mapping relationship between the lens optical distortion model and the reprojection error value is established by: selecting a calibration object for a lens optical distortion model; mapping the calibration object into a grid image to obtain ideal image coordinate values of the calibration object; converting the obtained ideal image coordinate values of the calibration object into theoretical distortion image coordinate values by using the lens optical distortion model; mapping the calibration object onto an image sensor through an imaging function of an optical imaging device to obtain an optically distorted image, and determining actual distortion image coordinate values of pixels in the optically distorted image; determining, according to differences between the theoretical distortion image coordinate values and the actual distortion image coordinate values, a reprojection error value corresponding to the lens optical distortion model; and establishing a mapping relationship between the lens optical distortion model and the determined reprojection error value.
- The image processing device according to any one of claims 27 to 35, wherein when the image after optical distortion correction is obtained, the processing module is further configured to: when it is determined that the acquired distorted image contains a set object, determine a strength and a direction of regional distortion of the set object in the acquired distorted image; select a regional distortion correction parameter according to the determined strength and direction of the regional distortion of the set object; and perform regional distortion correction on the image after optical distortion correction by using the selected regional distortion correction parameter to obtain an image after regional distortion correction.
- The image processing device according to claim 36, wherein when determining the strength and direction of the regional distortion of the set object in the acquired distorted image, the processing module is specifically configured to: determine a first position coordinate set of the set object in the acquired distorted image, and determine a second position coordinate set of the set object in the image after optical distortion correction; for at least one pixel of the set object, separately determine a coordinate value of the at least one pixel in the first position coordinate set and a coordinate value of the at least one pixel in the second position coordinate set; and determine, according to the coordinate value of the at least one pixel in the first position coordinate set and the coordinate value of the at least one pixel in the second position coordinate set, the strength and direction of the regional distortion of the set object in the acquired distorted image.
- The image processing device according to claim 36 or 37, wherein when performing regional distortion correction on the image after optical distortion correction by using the selected regional distortion correction parameter to obtain the image after regional distortion correction, the processing module is specifically configured to: correct, by using the selected regional distortion correction parameter, the coordinate value of each pixel included in the first position coordinate set; determine, according to the corrected first position coordinate set and the second position coordinate set, a conversion rule between coordinate values of pixels of the set object in the corrected first position coordinate set and coordinate values of the pixels in the second position coordinate set; and perform regional distortion correction on the image after optical distortion correction by using the determined conversion rule to obtain the image after regional distortion correction.
- The image processing device according to claim 38, wherein when performing regional distortion correction on the image after optical distortion correction by using the determined conversion rule, the processing module is specifically configured to: virtualize, according to the image after optical distortion correction, a grid image after regional distortion correction, wherein the number of grid points included in the grid image after regional distortion correction is the same as the number of pixels included in the image after optical distortion correction, and grid points and pixels at the same position have the same coordinate values; and for each grid point in the grid image, perform the following operations: selecting a grid point from the grid image, and converting a coordinate value of the grid point into a regional distortion coordinate value by using the determined conversion rule; searching, according to the regional distortion coordinate value and coordinate values of the pixels included in the image after optical distortion correction, for pixels whose distance between their coordinate values and the regional distortion coordinate value is less than a set distance value; and calculating, according to pixel values of the found pixels, a pixel value of the selected grid point in the grid image.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP14904981.9A EP3200148B1 (en) | 2014-10-31 | 2014-10-31 | Image processing method and device |
KR1020177012306A KR101921672B1 (ko) | 2014-10-31 | 2014-10-31 | 이미지 처리 방법 및 장치 |
US15/522,930 US10262400B2 (en) | 2014-10-31 | 2014-10-31 | Image processing method and device using reprojection error values |
CN201480001341.9A CN104363986B (zh) | 2014-10-31 | 2014-10-31 | 一种图像处理方法和设备 |
PCT/CN2014/090094 WO2016065632A1 (zh) | 2014-10-31 | 2014-10-31 | 一种图像处理方法和设备 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2014/090094 WO2016065632A1 (zh) | 2014-10-31 | 2014-10-31 | 一种图像处理方法和设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016065632A1 true WO2016065632A1 (zh) | 2016-05-06 |
Family
ID=52530962
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/090094 WO2016065632A1 (zh) | 2014-10-31 | 2014-10-31 | 一种图像处理方法和设备 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10262400B2 (zh) |
EP (1) | EP3200148B1 (zh) |
KR (1) | KR101921672B1 (zh) |
CN (1) | CN104363986B (zh) |
WO (1) | WO2016065632A1 (zh) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10754149B2 (en) | 2018-04-20 | 2020-08-25 | Amd (Shanghai) Co., Ltd. | Efficient radial lens shading correction |
CN112435303A (zh) * | 2020-12-09 | 2021-03-02 | 华中科技大学 | 振镜系统校正表构建方法、构建系统及振镜系统校正方法 |
CN112465917A (zh) * | 2020-11-30 | 2021-03-09 | 北京紫光展锐通信技术有限公司 | 镜头模组的畸变标定方法、系统、设备及存储介质 |
CN112509035A (zh) * | 2020-11-26 | 2021-03-16 | 江苏集萃未来城市应用技术研究所有限公司 | 一种光学镜头和热成像镜头的双镜头图像像素点匹配方法 |
CN112509034A (zh) * | 2020-11-26 | 2021-03-16 | 江苏集萃未来城市应用技术研究所有限公司 | 一种基于图像像素点匹配的大范围行人体温精准检测方法 |
US11272146B1 (en) | 2020-08-28 | 2022-03-08 | Advanced Micro Devices, Inc. | Content adaptive lens shading correction method and apparatus |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016065632A1 (zh) * | 2014-10-31 | 2016-05-06 | 华为技术有限公司 | 一种图像处理方法和设备 |
CN105869110B (zh) | 2016-03-28 | 2018-09-28 | 腾讯科技(深圳)有限公司 | 图像显示方法和装置、异形曲面幕布的定制方法和装置 |
RU2623806C1 (ru) * | 2016-06-07 | 2017-06-29 | Акционерное общество Научно-производственный центр "Электронные вычислительно-информационные системы" (АО НПЦ "ЭЛВИС") | Способ и устройство обработки стереоизображений |
CN106161941B (zh) * | 2016-07-29 | 2022-03-11 | 南昌黑鲨科技有限公司 | 双摄像头自动追焦方法、装置及终端 |
CN106447602B (zh) * | 2016-08-31 | 2020-04-03 | 浙江大华技术股份有限公司 | 一种图像拼接方法及装置 |
CN107544147A (zh) * | 2016-11-30 | 2018-01-05 | 深圳市虚拟现实技术有限公司 | 基于图像刻度的景深激光设置的方法及装置 |
CN108171673B (zh) * | 2018-01-12 | 2024-01-23 | 京东方科技集团股份有限公司 | 图像处理方法、装置、车载抬头显示系统及车辆 |
CN108399606B (zh) * | 2018-02-02 | 2020-06-26 | 北京奇艺世纪科技有限公司 | 一种图像调整的方法及装置 |
CN110290285B (zh) * | 2018-03-19 | 2021-01-22 | 京东方科技集团股份有限公司 | 图像处理方法、图像处理装置、图像处理系统及介质 |
WO2020014881A1 (zh) * | 2018-07-17 | 2020-01-23 | 华为技术有限公司 | 一种图像校正方法和终端 |
CN109285190B (zh) * | 2018-09-06 | 2021-06-04 | 广东天机工业智能系统有限公司 | 对象定位方法、装置、电子设备和存储介质 |
CN109447908A (zh) * | 2018-09-25 | 2019-03-08 | 上海大学 | 一种基于立体视觉的钢卷识别定位方法 |
KR102118173B1 (ko) * | 2018-10-31 | 2020-06-02 | 중앙대학교 산학협력단 | 왜곡계수 추정을 통한 영상 보정 시스템 및 방법 |
CN111243028B (zh) * | 2018-11-09 | 2023-09-08 | 杭州海康威视数字技术股份有限公司 | 一种电子设备及镜头关联方法、装置 |
US11741584B2 (en) * | 2018-11-13 | 2023-08-29 | Genesys Logic, Inc. | Method for correcting an image and device thereof |
CN111263137B (zh) * | 2018-11-30 | 2022-04-19 | 南京理工大学 | 单图像的畸变检测处理方法 |
CN109799073B (zh) | 2019-02-13 | 2021-10-22 | 京东方科技集团股份有限公司 | 一种光学畸变测量装置及方法、图像处理系统、电子设备和显示设备 |
CN111756984A (zh) * | 2019-03-26 | 2020-10-09 | 深圳市赛格导航科技股份有限公司 | 一种实现倒车实时图像半全景的图像处理方法和系统 |
CN110473159B (zh) * | 2019-08-20 | 2022-06-10 | Oppo广东移动通信有限公司 | 图像处理方法和装置、电子设备、计算机可读存储介质 |
CN110519486B (zh) * | 2019-09-19 | 2021-09-03 | Oppo广东移动通信有限公司 | 基于广角镜头的畸变补偿方法、装置、及相关设备 |
CN112541861A (zh) * | 2019-09-23 | 2021-03-23 | 华为技术有限公司 | 图像处理方法、装置、设备及计算机存储介质 |
CN110751609B (zh) * | 2019-10-25 | 2022-03-11 | 浙江迅实科技有限公司 | 一种基于智能光畸变校正的dlp打印精度提升方法 |
CN111355863B (zh) * | 2020-04-07 | 2022-07-22 | 北京达佳互联信息技术有限公司 | 一种图像畸变校正方法、装置、电子设备及存储介质 |
CN113724141B (zh) * | 2020-05-26 | 2023-09-05 | 杭州海康威视数字技术股份有限公司 | 一种图像校正方法、装置及电子设备 |
KR102267696B1 (ko) * | 2020-06-16 | 2021-06-21 | 남서울대학교 산학협력단 | 사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환 시스템 및 그 방법 |
CN114648449A (zh) * | 2020-12-18 | 2022-06-21 | 华为技术有限公司 | 一种图像重映射方法以及图像处理装置 |
CN112871703B (zh) * | 2020-12-30 | 2022-09-06 | 天津德通电气股份有限公司 | 一种智能管理选煤平台及其方法 |
CN113284196B (zh) * | 2021-07-20 | 2021-10-22 | 杭州先奥科技有限公司 | 一种摄像机畸变逐像素标定方法 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102169573A (zh) * | 2011-03-23 | 2011-08-31 | 北京大学 | 高精度的宽视场镜头实时畸变矫正方法及系统 |
CN102622747A (zh) * | 2012-02-16 | 2012-08-01 | 北京航空航天大学 | 一种用于视觉测量的摄像机参数优化方法 |
CN102750697A (zh) * | 2012-06-08 | 2012-10-24 | 华为技术有限公司 | 一种参数标定方法及装置 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6819333B1 (en) * | 2000-05-12 | 2004-11-16 | Silicon Graphics, Inc. | System and method for displaying an image using display distortion correction |
CN1316427C (zh) | 2001-07-12 | 2007-05-16 | 杜莱布斯公司 | 产生与装置链的装置的缺陷相关的格式化信息的方法和系统 |
ATE354837T1 (de) | 2001-07-12 | 2007-03-15 | Do Labs | Verfahren und system zur herstellung von auf geometrischen verzerrungen bezogenen formatierten informationen |
US6856449B2 (en) * | 2003-07-10 | 2005-02-15 | Evans & Sutherland Computer Corporation | Ultra-high resolution light modulation control system and method |
JP2005327220A (ja) * | 2004-05-17 | 2005-11-24 | Sharp Corp | イラスト図作成装置、イラスト図作成方法、制御プログラムおよび可読記録媒体 |
US7742624B2 (en) | 2006-04-25 | 2010-06-22 | Motorola, Inc. | Perspective improvement for image and video applications |
JP4714174B2 (ja) | 2007-03-27 | 2011-06-29 | 富士フイルム株式会社 | 撮像装置 |
KR101014572B1 (ko) * | 2007-08-27 | 2011-02-16 | 주식회사 코아로직 | 영상 왜곡 보정 방법 및 그 보정 방법을 채용한 영상처리장치 |
JP2011049733A (ja) | 2009-08-26 | 2011-03-10 | Clarion Co Ltd | カメラキャリブレーション装置および映像歪み補正装置 |
CN102075785B (zh) | 2010-12-28 | 2012-05-23 | 武汉大学 | 一种atm机广角摄像机镜头畸变校正方法 |
WO2013015699A1 (en) * | 2011-07-25 | 2013-01-31 | Universidade De Coimbra | Method and apparatus for automatic camera calibration using one or more images of a checkerboard pattern |
CN103685951A (zh) * | 2013-12-06 | 2014-03-26 | 华为终端有限公司 | 一种图像处理方法、装置及终端 |
WO2016065632A1 (zh) * | 2014-10-31 | 2016-05-06 | 华为技术有限公司 | 一种图像处理方法和设备 |
CN107004261B (zh) * | 2015-09-15 | 2020-01-21 | 华为技术有限公司 | 图像畸变校正方法及装置 |
US10373297B2 (en) * | 2016-10-26 | 2019-08-06 | Valve Corporation | Using pupil location to correct optical lens distortion |
2014
- 2014-10-31 WO PCT/CN2014/090094 patent/WO2016065632A1/zh active Application Filing
- 2014-10-31 EP EP14904981.9A patent/EP3200148B1/en active Active
- 2014-10-31 CN CN201480001341.9A patent/CN104363986B/zh active Active
- 2014-10-31 US US15/522,930 patent/US10262400B2/en active Active
- 2014-10-31 KR KR1020177012306A patent/KR101921672B1/ko active IP Right Grant
Non-Patent Citations (1)
Title |
---|
See also references of EP3200148A4 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10754149B2 (en) | 2018-04-20 | 2020-08-25 | Amd (Shanghai) Co., Ltd. | Efficient radial lens shading correction |
US11272146B1 (en) | 2020-08-28 | 2022-03-08 | Advanced Micro Devices, Inc. | Content adaptive lens shading correction method and apparatus |
CN112509035A (zh) * | 2020-11-26 | 2021-03-16 | 江苏集萃未来城市应用技术研究所有限公司 | 一种光学镜头和热成像镜头的双镜头图像像素点匹配方法 |
CN112509034A (zh) * | 2020-11-26 | 2021-03-16 | 江苏集萃未来城市应用技术研究所有限公司 | 一种基于图像像素点匹配的大范围行人体温精准检测方法 |
CN112465917A (zh) * | 2020-11-30 | 2021-03-09 | 北京紫光展锐通信技术有限公司 | 镜头模组的畸变标定方法、系统、设备及存储介质 |
CN112465917B (zh) * | 2020-11-30 | 2023-02-28 | 北京紫光展锐通信技术有限公司 | 镜头模组的畸变标定方法、系统、设备及存储介质 |
CN112435303A (zh) * | 2020-12-09 | 2021-03-02 | 华中科技大学 | 振镜系统校正表构建方法、构建系统及振镜系统校正方法 |
CN112435303B (zh) * | 2020-12-09 | 2024-03-19 | 华中科技大学 | 振镜系统校正表构建方法、构建系统及振镜系统校正方法 |
Also Published As
Publication number | Publication date |
---|---|
US20170330308A1 (en) | 2017-11-16 |
CN104363986A (zh) | 2015-02-18 |
KR20170063953A (ko) | 2017-06-08 |
KR101921672B1 (ko) | 2019-02-13 |
EP3200148A1 (en) | 2017-08-02 |
EP3200148B1 (en) | 2019-08-28 |
CN104363986B (zh) | 2017-06-13 |
EP3200148A4 (en) | 2017-08-02 |
US10262400B2 (en) | 2019-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016065632A1 (zh) | 一种图像处理方法和设备 | |
JP5437311B2 (ja) | 画像補正方法、画像補正システム、角度推定方法、および角度推定装置 | |
CN107705333B (zh) | 基于双目相机的空间定位方法及装置 | |
KR102227583B1 (ko) | 딥 러닝 기반의 카메라 캘리브레이션 방법 및 장치 | |
CN105635588B (zh) | 一种稳像方法及装置 | |
JP6677098B2 (ja) | 全天球動画の撮影システム、及びプログラム | |
CN104917955A (zh) | 一种图像转换和多视图输出系统及方法 | |
CN110717942A (zh) | 图像处理方法和装置、电子设备、计算机可读存储介质 | |
CN102156969A (zh) | 图像纠偏处理方法 | |
JP2012114849A (ja) | 画像処理装置、及び画像処理方法 | |
JP2011049733A (ja) | カメラキャリブレーション装置および映像歪み補正装置 | |
JP2019536151A (ja) | 広角画像を修正するシステム及び方法 | |
WO2021142843A1 (zh) | 图像扫描方法及装置、设备、存储介质 | |
TWI517094B (zh) | 影像校正方法及影像校正電路 | |
KR20160040330A (ko) | 어안 렌즈로부터 획득한 왜곡 영상을 동심원 형태의 표준 패턴을 이용하여 보정하는 방법 | |
US20180047133A1 (en) | Image processing apparatus, image processing method, and storage medium | |
CN115086625B (zh) | 投影画面的校正方法、装置、系统、校正设备和投影设备 | |
CN115272124A (zh) | 一种畸变图像校正方法、装置 | |
CN103929584B (zh) | 图像校正方法及图像校正电路 | |
US20130343636A1 (en) | Image processing apparatus, control method of the same and non-transitory computer-readable storage medium | |
TW201843648A (zh) | 影像視角轉換方法及其系統 | |
CN113191975A (zh) | 图像畸变矫正方法及装置 | |
JP2012134668A (ja) | 画像処理装置、画像処理方法 | |
CN114697542A (zh) | 视频处理方法、装置、终端设备及存储介质 | |
WO2024101429A1 (ja) | カメラパラメータ算出装置、カメラパラメータ算出方法、カメラパラメータ算出プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14904981; Country of ref document: EP; Kind code of ref document: A1 |
| REEP | Request for entry into the european phase | Ref document number: 2014904981; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 15522930; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 20177012306; Country of ref document: KR; Kind code of ref document: A |