WO2019011027A1 - Image calibration method and apparatus applied to a three-dimensional camera - Google Patents

Image calibration method and apparatus applied to a three-dimensional camera

Info

Publication number
WO2019011027A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
phase difference
distance
pixel point
target
Prior art date
Application number
PCT/CN2018/083761
Other languages
English (en)
French (fr)
Inventor
简羽鹏
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司 filed Critical 深圳市道通智能航空技术有限公司
Priority to EP18832958.5A priority Critical patent/EP3640892B1/en
Publication of WO2019011027A1 publication Critical patent/WO2019011027A1/zh
Priority to US16/740,152 priority patent/US10944956B2/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/246Calibration of cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the present disclosure relates to the field of image sensor application technologies, and in particular, to an image calibration method and apparatus applied to a three-dimensional camera.
  • A three-dimensional camera emits modulated near-infrared light or laser light from a modulated light source; the light is reflected by the object to be measured, and by calculating the time difference or phase difference of the emitted light and the reflected light propagating between the three-dimensional camera and the measured object, the distance information of the measured object can be obtained.
  • In the prior art, the object to be measured is photographed by a three-dimensional camera based on TOF (Time of Flight) technology, and the distance of the object to be measured is calculated using the time difference or phase difference of the propagation of frequency-modulated light pulses; that is, the image of the measured object obtained by the three-dimensional camera essentially represents the distance between the measured object and the three-dimensional camera.
  • However, because the photosensitive area of the three-dimensional camera is a pixel matrix formed by an image sensor, the distances between the object to be measured and the pixel points in the edge area versus the central area of the photosensitive area are not exactly the same, which causes the image of the measured object obtained by the three-dimensional camera to be distorted to some extent.
  • the embodiment of the present invention provides an image calibration method and apparatus applied to the three-dimensional camera.
  • An image calibration method applied to a three-dimensional camera comprising:
  • the measurement deviation value corresponding to the pixel point is obtained from a preset measurement deviation set, and the depth information is corrected according to the measurement deviation value.
  • Obtaining, for the pixel corresponding to the measured object, the depth information corresponding to the pixel point includes:
  • Before the measurement deviation value corresponding to each of the pixel points is obtained from the pre-stored measurement deviation set, the method further includes:
  • the calculating an average reference phase difference according to a reference phase difference corresponding to each reference pixel in the reference region includes:
  • the method further calculates a target phase difference corresponding to the target pixel point according to a target distance between the target pixel point and the preset reflective surface in the photosensitive region.
  • the determining a field of view corresponding to the pixel distance according to a pixel distance between the target pixel point and a center reference point comprises:
  • An image calibration device applied to a three-dimensional camera comprising:
  • An imaging module configured to capture an object to be measured by a three-dimensional camera, obtain an image of the measured object in a photosensitive area of the three-dimensional camera, and determine, from the image of the measured object, the pixel points corresponding to the measured object in the photosensitive area;
  • An acquiring module configured to acquire, according to a pixel point corresponding to the measured object, depth information corresponding to the pixel point, where the depth information indicates a distance between the measured object and the pixel point;
  • a correction module configured to acquire a measurement deviation value corresponding to the pixel point from a preset measurement deviation set, and correct the depth information according to the measurement deviation value.
  • the apparatus further includes:
  • a calculation module configured to calculate a phase difference of the preset modulated light propagating between the pixel point and the measured object, to calculate the obtained phase difference as the depth information corresponding to the pixel point.
  • the apparatus further includes:
  • An average reference phase difference acquisition module configured to select a reference region from the photosensitive area, and calculate an average reference phase difference according to a reference phase difference corresponding to each reference pixel in the reference region, where the reference phase difference indicates a reference distance between a preset reflective surface and the reference pixel;
  • a target phase difference acquisition module configured to calculate a target phase difference corresponding to the target pixel point according to a target distance between the target pixel point in the photosensitive area and the preset reflective surface, where the target pixel point is any one of all the pixels in the photosensitive area;
  • a comparison module configured to compare the obtained target phase difference with the average reference phase difference to obtain a measurement deviation value corresponding to the target pixel point;
  • a storage module configured to store a measurement deviation value corresponding to the target pixel point to the measurement deviation set.
  • the average reference phase difference acquisition module is specifically configured to:
  • the apparatus further includes:
  • a field of view angle calculation module configured to determine a field of view angle corresponding to the pixel distance according to a pixel distance between the target pixel and a center reference point, where the center reference point represents the reference pixel at the center position of the reference area;
  • a target distance calculation module configured to calculate a target distance between the target pixel point and the preset reflective surface according to the field of view angle and the reference distance between the center reference point and the preset reflective surface.
  • the field of view angle calculation module further includes:
  • a unit field of view angle calculation unit for calculating a unit field of view angle between adjacent pixel points in the photosensitive area;
  • the field of view angle calculating unit is configured to calculate an angle of view corresponding to the pixel distance according to the pixel distance and the unit field of view angle.
  • An image calibration apparatus for a three-dimensional camera comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to cause the at least one processor to perform the image calibration method applied to the three-dimensional camera as described above.
  • The technical solution provided by the embodiments of the present disclosure may include the following beneficial effects: an object to be measured is captured by a three-dimensional camera, an image of the measured object is obtained in the photosensitive area of the three-dimensional camera, and the pixel points corresponding to the measured object in the photosensitive area are determined from the image of the measured object.
  • For the pixels corresponding to the measured object, the depth information corresponding to the pixel point is obtained; the depth information indicates the distance between the measured object and the pixel.
  • The measurement deviation value corresponding to the pixel point is obtained from the pre-stored measurement deviation set, and the depth information is corrected according to the measurement deviation value.
  • In this way, by correcting the depth information of the pixels corresponding to the measured object, the image of the measured object obtained by the three-dimensional camera is calibrated to eliminate the distortion in the image of the measured object.
  • FIG. 1 is a flow chart of an image calibration method applied to a three-dimensional camera, shown in an exemplary embodiment.
  • FIG. 2 is a flow chart of an image calibration method applied to a three-dimensional camera, shown in another exemplary embodiment.
  • FIG. 3 is a flow chart of an image calibration method applied to a three-dimensional camera, shown in another exemplary embodiment.
  • FIG. 4 is a flow diagram of an embodiment of step 1421 in accordance with the embodiment of FIG. 3.
  • FIG. 5 is a schematic diagram of a specific implementation of selecting a reference area and setting a reference distance in an application scenario.
  • FIG. 6 is a schematic diagram of a specific implementation of calculating a target distance corresponding to a target pixel point in an application scenario.
  • FIG. 7 is a schematic diagram of a specific implementation of calculating pixel distance and field of view in an application scenario.
  • FIG. 8 is a block diagram of an image calibration apparatus applied to a three-dimensional camera, shown in an exemplary embodiment.
  • FIG. 9 is a block diagram of an image calibration apparatus applied to a three-dimensional camera, shown in another exemplary embodiment.
  • FIG. 10 is a block diagram of an image calibration apparatus applied to a three-dimensional camera, shown in another exemplary embodiment.
  • FIG. 11 is a block diagram of one embodiment of the field of view angle calculation unit in the apparatus according to the embodiment of FIG. 10.
  • FIG. 1 is a flow chart showing an image calibration method applied to a three-dimensional camera, according to an exemplary embodiment. As shown in FIG. 1, the method includes but is not limited to the following steps:
  • In step 1100, the object to be measured is photographed by the three-dimensional camera, an image of the object to be measured is obtained in the photosensitive area of the three-dimensional camera, and the pixel points corresponding to the object to be measured are determined from the image of the object to be measured.
  • a three-dimensional camera refers to a camera that uses an image sensor technology to take an object to be measured and obtain an image of the object to be measured.
  • The three-dimensional camera emits modulated near-infrared light or laser light from a modulated light source; the light is reflected by the object to be measured, and by calculating the time difference or phase difference of the emitted light and the reflected light propagating between the three-dimensional camera and the measured object, the distance information of the measured object can be obtained.
  • the photosensitive area refers to an area in a three-dimensional camera for photographic imaging of an object to be measured, and the photosensitive area is composed of a pixel matrix of the image sensor.
  • the image sensor includes a CCD photosensor, a CMOS photosensor, and the like.
  • The object to be measured is photographed by the three-dimensional camera: the preset modulated light is emitted by the modulated light source, reflected by the measured object to the photosensitive area of the three-dimensional camera, and the image of the measured object is obtained in the photosensitive area of the three-dimensional camera; the pixels corresponding to the object to be measured in the photosensitive area can then be determined from the image of the measured object.
  • the preset modulated light may be near-infrared light or laser light modulated by different modulation frequencies.
  • The pixels corresponding to the measured object, as determined from the image of the measured object, are only a part of all the pixels in the photosensitive area, so that the pixels subsequently subjected to image calibration are also only that part of the pixels related to the measured object.
  • In step 1300, depth information corresponding to the pixel point is acquired for the pixel corresponding to the measured object.
  • the depth information refers to the distance information of the measured object represented by the image of the measured object in the photosensitive area of the three-dimensional camera, that is, the depth information indicates the distance between the measured object and the pixel corresponding to the measured object.
  • the image of the measured object obtained by the 3D camera based on TOF technology can reflect the distance between the measured object and the 3D camera, and represent different distances by different colors to record and represent the depth information of the corresponding pixel of the measured object. Thereby, the depth information corresponding to the pixel point can be obtained by the image of the measured object.
  • the calculated time difference or phase difference is used as the depth information corresponding to the pixel point.
  • In step 1500, the measurement deviation value corresponding to the pixel point is obtained from the pre-stored measurement deviation set, and the depth information is corrected according to the measurement deviation value.
  • The measurement deviation set includes measurement deviation values corresponding to a number of pixel points; a measurement deviation value reflects the deviation between the distances from the object to be measured to the pixel points in the edge area versus the central area of the photosensitive area.
  • the measurement deviation set is stored in advance in a storage medium of the three-dimensional camera, for example, the storage medium includes a read only memory, a random access memory, a flash memory, and the like.
  • The measurement deviation set of the three-dimensional camera differs for different modulation frequencies of the preset modulated light. Therefore, when the depth information is corrected using the measurement deviation value corresponding to each pixel point obtained from the pre-stored measurement deviation set, the measurement deviation values contained in the measurement deviation set corresponding to the current modulation frequency should be read.
  • After the three-dimensional camera photographs the object to be measured and obtains its image, image calibration can be performed by reading the measurement deviation values contained in the measurement deviation set in the storage medium, that is, the depth information is corrected according to the measurement deviation values.
  • As a result, the distances between the object to be measured and the pixel points in the edge regions versus the central region of the photosensitive area become substantially the same, thereby avoiding the distortion in the image of the object to be measured.
  • Through the above process, the measurement deviation value corresponding to the pixel point is obtained from the pre-stored measurement deviation set, and the depth information is corrected according to the measurement deviation value.
  • By correcting the depth information of the pixels corresponding to the measured object, the image of the object to be measured obtained by the three-dimensional camera is calibrated to eliminate the distortion in the image of the object to be measured.
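To make the correction step concrete, the following is a minimal Python sketch, not part of the patent: the `deviation_sets` dictionary, the 320x240 resolution, and the modulation-frequency key are illustrative assumptions; only the per-pixel subtraction itself follows the description above.

    import numpy as np

    # Hypothetical pre-stored deviation sets, one per modulation frequency (Hz).
    # Each array holds the measurement deviation value diff[p] for every pixel of
    # an assumed 320x240 photosensitive area, in the same phase units as the map.
    deviation_sets = {
        20_000_000: np.zeros((240, 320)),  # placeholder; read from camera storage in practice
    }

    def correct_depth(phase_map, modulation_frequency):
        """Correct a raw per-pixel phase (depth) map with the stored deviation set."""
        diff = deviation_sets[modulation_frequency]  # set matching the current frequency
        return phase_map - diff                      # corrected phase for every pixel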
  • In an image calibration method applied to a three-dimensional camera illustrated in an exemplary embodiment, the method further includes the following step:
  • by calculating the phase difference of the preset modulated light propagating between the pixel point and the measured object, the depth information corresponding to the pixel point, i.e. the distance between the measured object and the pixel point, is obtained.
  • The distance between the measured object and the pixel point can be calculated by the following formula:
  • D = C × φ / (4π × f)
  • where φ represents the phase difference of the preset modulated light propagating between the pixel and the measured object, ranging from -π to π; D is the distance between the measured object and the pixel; C is the speed of light; and f is the frequency of the preset modulated light.
  • The phase difference of the preset modulated light propagating between the pixel point and the measured object specifically refers to the phase difference between the emitted light and the reflected light, which are generated as the preset modulated light propagates between the pixel point and the measured object; that is, the preset modulated light emitted by the modulated light source toward the measured object forms the emitted light, and the preset modulated light reflected back to the three-dimensional camera by the measured object forms the reflected light.
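A small Python sketch of the phase-to-distance conversion follows; it implements the time-of-flight relation D = C × φ / (4π × f) reconstructed above, and the 20 MHz example frequency is an assumption chosen for illustration.

    import math

    C = 299_792_458.0  # speed of light, m/s

    def phase_to_distance(phi, f):
        """Distance D between the measured object and a pixel, from the phase
        difference phi (range -pi..pi) of modulated light at frequency f."""
        return C * phi / (4.0 * math.pi * f)

    # Example: a quarter-period phase shift at an assumed 20 MHz modulation frequency
    d = phase_to_distance(math.pi / 2, 20e6)  # about 1.87 m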
  • FIG. 2 is a flow chart of an image calibration method applied to a three-dimensional camera, shown in another exemplary embodiment. As shown in FIG. 2, the method further includes the following steps:
  • In step 1410, a reference region is selected from the photosensitive area, and an average reference phase difference is calculated according to the reference phase difference corresponding to each pixel point in the reference region; the reference phase difference indicates the reference distance between the preset reflective surface and the pixel point.
  • As shown in FIG. 5, an area at the center position of the photosensitive area is selected as the reference area, the pixel points in the reference area are defined as reference pixel points, and the reference pixel point at the center position of the reference area is defined as the center reference point.
  • The reference phase difference corresponding to each reference pixel point is obtained by calculating the phase difference of the preset modulated light propagating between that reference pixel point and the preset reflective surface; the average reference phase difference corresponding to all reference pixels in the reference region is then calculated from the reference phase differences corresponding to the individual reference pixel points.
  • The average reference phase difference corresponding to all reference pixel points in the reference area can be calculated by the following formula:
  • diff_average = (1/k) × Σ phase_i (summed over i = 1, …, k)
  • where diff_average represents the average reference phase difference, k is the number of reference pixels included in the reference region, and phase_i is the reference phase difference corresponding to each reference pixel, indicating the reference distance between the preset reflective surface and that reference pixel.
  • The size of the reference area of the photosensitive area, that is, the number k of reference pixel points included in the reference area, can be flexibly adjusted according to the distance between the three-dimensional camera and the preset reflective surface; for example, the closer the distance between the three-dimensional camera and the preset reflective surface, the larger k is.
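A minimal Python sketch of this averaging step is shown below. The square region shape, its half-size parameter, and the 320x240 phase map are illustrative assumptions; the computation itself is the plain mean diff_average = (1/k) × Σ phase_i from the formula above.

    import numpy as np

    def average_reference_phase(phase_map, center, half_size):
        """Mean reference phase difference diff_average over a square reference
        region of k = (2*half_size + 1)**2 pixels centred on `center` (row, col)."""
        r, c = center
        region = phase_map[r - half_size:r + half_size + 1,
                           c - half_size:c + half_size + 1]
        return region.mean()  # (1/k) * sum of phase_i

    # Example: 320x240 sensor, reference region centred on the pixel matrix centre
    phase_map = np.random.uniform(-np.pi, np.pi, size=(240, 320))
    diff_average = average_reference_phase(phase_map, center=(120, 160), half_size=5)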
  • In step 1430, the target phase difference corresponding to the target pixel point is calculated according to the target distance between the target pixel point in the photosensitive area and the preset reflective surface.
  • the target pixel is any one of all the pixels in the photosensitive area.
  • Since the distances between the measured object and the pixel points in the edge area versus the central area of the photosensitive region are not exactly the same, the distance between an arbitrary target pixel point in the photosensitive region and the preset reflective surface does not always coincide with the reference distance dist between the center reference point and the preset reflective surface; the distance between the target pixel point and the preset reflective surface is therefore defined as the target distance.
  • As shown in FIG. 6, P is an arbitrarily selected target pixel point in the photosensitive area, and P is also the image point corresponding to the object point A on the preset reflective surface; that is, the distance between P and the preset reflective surface is essentially the distance between P and the object point A.
  • The target phase difference can be calculated by the following formula:
  • phase_real = dist_real × max_phase / max_distance
  • where phase_real is the target phase difference corresponding to the target pixel point, dist_real is the target distance between the target pixel point in the photosensitive area and the preset reflective surface, max_phase is the maximum phase of the preset modulated light, and max_distance is the maximum distance at which the three-dimensional camera can accurately photograph the measured object at that maximum phase.
  • max_phase is related to the chip of the image sensor, and max_distance is related to the modulation frequency of the preset modulated light; that is, if the chip of the image sensor differs, max_phase differs, and if the modulation frequency of the preset modulated light differs, max_distance also differs.
  • In different application scenarios, max_phase and max_distance can therefore be flexibly adjusted according to the chip of the image sensor and the modulation frequency of the preset modulated light, so that the image calibration achieves the best effect.
  • Since the phase of the preset modulated light is a periodic function that changes with time, it should be ensured that the reference distance between the center reference point and the preset reflective surface lies within a certain range, so that the calculated target phase differences of the preset modulated light corresponding to the target pixel points all fall within the same period.
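The target-phase computation is a single proportionality, sketched below in Python; the max_phase of 2π and the 7.5 m max_distance are assumptions standing in for the chip-dependent and frequency-dependent values just discussed (7.5 m being the value the description's Table 1 pairs with a 20 MHz modulation frequency).

    import math

    def target_phase(dist_real, max_phase, max_distance):
        """Target phase difference: phase_real = dist_real * max_phase / max_distance."""
        return dist_real * max_phase / max_distance

    # Example with assumed values: a 2 m target distance, max_phase = 2*pi,
    # and max_distance = 7.5 m (the table value for 20 MHz modulation)
    phase_real = target_phase(dist_real=2.0, max_phase=2 * math.pi, max_distance=7.5)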
  • In step 1450, the obtained target phase difference is compared with the average reference phase difference to obtain the measurement deviation value corresponding to the target pixel point.
  • The difference between the target phase difference corresponding to each target pixel point in the photosensitive area and the average reference phase difference is computed; this difference is the measurement deviation value corresponding to the target pixel point, and the measurement deviation values of the target pixel points then constitute the measurement deviation set of the three-dimensional camera.
  • The measurement deviation value can be calculated by the following formula:
  • diff[p] = phase_real - diff_average
  • where P denotes any target pixel point in the photosensitive area, diff[p] represents the measurement deviation value corresponding to the target pixel point P, diff_average represents the average reference phase difference, and phase_real represents the target phase difference corresponding to the target pixel point P.
  • In step 1470, the measurement deviation value corresponding to the target pixel point is stored into the measurement deviation set.
  • Since the size and shape of the measured object cannot be predicted in advance when an object is photographed by the three-dimensional camera, the measurement deviation set includes the measurement deviation values of all target pixel points in the photosensitive area. The measurement deviation value corresponding to each target pixel point is therefore stored into the measurement deviation set, so that when an object is subsequently photographed by the three-dimensional camera, the target pixel points corresponding to the measured object in the photosensitive area can be determined, the measurement deviation values corresponding to those target pixel points can be obtained from the measurement deviation set, and the image of the measured object can be corrected.
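Combining the two preceding formulas, a calibration pass over the whole photosensitive area reduces to one array subtraction. A Python sketch under the same illustrative assumptions as the earlier snippets:

    import numpy as np

    def build_deviation_set(target_phase_map, diff_average):
        """Measurement deviation set: diff[p] = phase_real(p) - diff_average for
        every pixel p of the photosensitive area."""
        return target_phase_map - diff_average

    # target_phase_map would hold phase_real for each of the assumed 240x320 pixels,
    # computed from the per-pixel target distances; the resulting array is what gets
    # stored (e.g. in flash), one such set per supported modulation frequency.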
  • FIG. 3 is a flow chart of an image calibration method applied to a three-dimensional camera, shown in another exemplary embodiment. As shown in FIG. 3, it includes:
  • In step 1421, the field of view angle corresponding to the pixel distance is determined based on the pixel distance between the target pixel point and the center reference point.
  • the central reference point represents a reference pixel point of the center position of the reference area.
  • the pixel distance refers to the number of pixels spaced between a certain pixel point of the pixel matrix and the center reference point.
  • For example, if the pixel matrix constituting the photosensitive area is a matrix with a length of 320 and a width of 240, the diagonal length of the pixel matrix is 400; the pixel points at the four corners of the pixel matrix then correspond to a pixel distance of 200, that is, they are 200 pixel points away from the center reference point.
  • If the field of view angle between two adjacent pixel points, i.e. the unit field of view angle, is known, and the pixel points in the pixel matrix are evenly distributed, the field of view angle corresponding to the pixel distance can be calculated from the pixel distance between the pixel point and the center reference point.
  • In step 1423, the target distance between the target pixel point and the preset reflective surface is calculated according to the field of view angle and the reference distance between the center reference point and the preset reflective surface.
  • The target distance can be calculated by the following formula:
  • dist_real = dist / cos α
  • where dist_real is the target distance, dist is the reference distance between the center reference point and the preset reflective surface, and α is the field of view angle corresponding to the pixel distance.
  • FIG. 4 is a flow diagram of an embodiment of step 1421 in accordance with the embodiment of FIG. 3. As shown in FIG. 4, it includes:
  • In step 4211, the unit field of view angle between adjacent pixel points in the photosensitive region is calculated.
  • Once the focal length of the three-dimensional camera is determined, the unit field of view angle between adjacent pixel points in the photosensitive region is related only to the distance between adjacent pixel points.
  • In one specific implementation, the photosensitive area of the three-dimensional camera is a pixel matrix with a length of 320 and a width of 240, and the field of view angle between adjacent pixels can be calculated by the following formula:
  • θ = FOV / (180 × n) rad/pixel
  • where n represents the number of pixels in one row of the pixel matrix constituting the photosensitive region (for a matrix with a length of 320 and a width of 240, n is 320), θ is the unit field of view angle, rad is the radian unit, and pixel represents a single pixel point.
  • In step 4213, the field of view angle corresponding to the pixel distance is calculated from the pixel distance and the unit field of view angle.
  • As shown in FIG. 7, the object point on the preset reflective surface corresponding to the target pixel point P is A, and the object point on the preset reflective surface corresponding to the center reference point is B. The distance between the target pixel point P and the object point A is defined as the target distance, and the distance between the center reference point and the object point B is defined as the reference distance.
  • If the pixel distance between the target pixel point P and the center reference point is Δz, and the field of view angle corresponding to the pixel distance Δz is α, the field of view angle corresponding to the pixel distance can be calculated by the following formula:
  • α = Δz × θ rad
  • where α represents the field of view angle, Δz represents the pixel distance, θ represents the unit field of view angle, and rad is the radian unit.
  • In one specific implementation, the pixel points located at the four corners of the pixel matrix correspond to a pixel distance of 200, that is, they are 200 pixel points away from the center reference point. If the calculated unit field of view angle is A radians, the field of view angle corresponding to that pixel distance is 200 × A radians.
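The geometric chain of steps 4211, 4213 and 1423 can be sketched in a few lines of Python. The 320x240 matrix, its (120, 160) centre, and the 90-degree FOV are illustrative assumptions; the formulas θ = FOV/(180 × n), α = Δz × θ and dist_real = dist/cos α are the ones given above.

    import math

    def unit_fov(fov_deg, n=320):
        """Unit field of view angle theta = FOV / (180 * n), in rad per pixel,
        for a pixel-matrix row of n pixels (as in the description)."""
        return fov_deg / (180.0 * n)

    def pixel_distance(row, col, center=(120, 160)):
        """Pixel distance delta_z between a pixel and the center reference point."""
        return math.hypot(row - center[0], col - center[1])

    def target_distance(dist, row, col, fov_deg):
        """dist_real = dist / cos(alpha), with alpha = delta_z * theta."""
        alpha = pixel_distance(row, col) * unit_fov(fov_deg)
        return dist / math.cos(alpha)

    # Example: a corner pixel of the 320x240 matrix lies 200 pixels from the centre
    d = target_distance(dist=1.0, row=0, col=0, fov_deg=90.0)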
  • FIG. 8 is a block diagram of an image calibration apparatus applied to a three-dimensional camera, shown in an exemplary embodiment.
  • the device includes, but is not limited to, an imaging module 6100, an acquisition module 6300, and a correction module 6500.
  • An imaging module 6100 configured to capture an object to be measured by using a three-dimensional camera, obtain an image of the measured object in a photosensitive area of the three-dimensional camera, and determine, from the image of the measured object, the pixel points corresponding to the measured object in the photosensitive area;
  • the obtaining module 6300 is configured to acquire, according to a pixel point corresponding to the measured object, depth information corresponding to the pixel point, where the depth information indicates a distance between the measured object and the pixel point;
  • the correction module 6500 is configured to obtain a measurement deviation value corresponding to the pixel point from a preset measurement deviation set, and correct the depth information according to the measurement deviation value.
  • An image calibration apparatus applied to a three-dimensional camera shown in another exemplary embodiment further includes a calculation module configured to calculate the phase difference of the preset modulated light propagating between the pixel point and the measured object, and to use the calculated phase difference as the depth information corresponding to the pixel point.
  • FIG. 9 is a block diagram of an image calibration apparatus applied to a three-dimensional camera, shown in another exemplary embodiment.
  • the apparatus includes, but is not limited to, an average reference phase difference acquisition module 6410, a target phase difference acquisition module 6430, a comparison module 6450, and a storage module 6470.
  • the average reference phase difference obtaining module 6410 is configured to select a reference region from the photosensitive area, and calculate an average reference phase difference according to a reference phase difference corresponding to each reference pixel in the reference region, where the reference phase difference indicates the reference distance between the preset reflective surface and the reference pixel;
  • a target phase difference acquisition module 6430 configured to calculate a target phase difference corresponding to the target pixel point according to a target distance between the target pixel point in the photosensitive area and the preset reflective surface, where the target pixel point is any one of all the pixels in the photosensitive area;
  • the comparing module 6450 is configured to compare the obtained target phase difference with the average reference phase difference to obtain a measured deviation value corresponding to the target pixel point;
  • the storage module 6470 is configured to store the measured deviation value corresponding to the target pixel point to the measurement deviation set.
  • FIG. 10 is a block diagram of an image calibration apparatus applied to a three-dimensional camera, shown in another exemplary embodiment.
  • the apparatus includes, but is not limited to, a field of view angle calculation module 6421 and a target distance calculation module 6423.
  • the field of view angle calculation module 6421 is configured to determine the field of view angle corresponding to the pixel distance according to a pixel distance between the target pixel point and a center reference point, where the center reference point represents the reference pixel at the center position of the reference area;
  • a target distance calculation module 6423 configured to calculate the target distance between the target pixel point and the preset reflective surface according to the field of view angle and the reference distance between the center reference point and the preset reflective surface.
  • Figure 11 is a block diagram of another embodiment of a field of view angle calculation module 6421 in an apparatus in accordance with a corresponding embodiment of Figure 10.
  • the field of view angle calculation module 6421 includes, but is not limited to, a unit angle of view calculation unit 4211, and a field of view angle calculation unit 4213.
  • a unit field of view angle calculating unit 4211 configured to calculate a unit field of view angle between adjacent pixel points in the photosensitive area
  • the field of view angle calculating unit 4213 is configured to calculate an angle of view corresponding to the pixel distance according to the pixel distance and the unit field of view angle.
  • An embodiment of the present invention also provides an image calibration apparatus for a three-dimensional camera, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to cause the at least one processor to perform the image calibration method applied to the three-dimensional camera as described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An image calibration method and apparatus applied to a three-dimensional camera, wherein the method includes: photographing an object to be measured with a three-dimensional camera, obtaining an image of the measured object in the photosensitive area of the three-dimensional camera, and determining, from the image of the measured object, the pixel points corresponding to the measured object in the photosensitive area; for the pixel points corresponding to the measured object, acquiring the depth information corresponding to the pixel point (1300), the depth information indicating the distance between the measured object and the pixel point; and obtaining the measurement deviation value corresponding to the pixel point from a pre-stored measurement deviation set, and correcting the depth information according to the measurement deviation value (1500). In this way, by correcting the depth information of the pixel points corresponding to the measured object, the image of the measured object obtained by the three-dimensional camera is calibrated, eliminating the distortion in the image of the measured object.

Description

Image calibration method and apparatus applied to a three-dimensional camera
This application claims priority to Chinese Patent Application No. 201710561888.X, filed on July 11, 2017 and entitled "Image calibration method and apparatus applied to a three-dimensional camera", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image sensor application technologies, and in particular to an image calibration method and apparatus applied to a three-dimensional camera.
Background
With the development of image sensor technology, three-dimensional cameras have found increasingly wide application. A three-dimensional camera emits modulated near-infrared light or laser light from a modulated light source; the light is reflected after reaching the object to be measured, and by calculating the time difference or phase difference of the emitted light and the reflected light propagating between the three-dimensional camera and the measured object, the distance information of the measured object can be obtained.
In the prior art, the object to be measured is photographed by a three-dimensional camera based on TOF (Time of Flight) technology, and the distance of the measured object is calculated using the time difference or phase difference of the propagation of frequency-modulated light pulses; that is, the image of the measured object obtained by the three-dimensional camera essentially represents the distance between the measured object and the three-dimensional camera.
However, since the photosensitive area of the three-dimensional camera is a pixel matrix formed by an image sensor, the distances between the measured object and the pixel points located in the edge area versus the central area of the photosensitive area are not exactly the same, which causes the image of the measured object obtained by the three-dimensional camera to be distorted to some extent.
Summary of the Invention
In order to solve the technical problem in the related art that the image of the measured object obtained by a three-dimensional camera is distorted to some extent, the disclosed embodiments of the present invention provide an image calibration method and apparatus applied to a three-dimensional camera.
An image calibration method applied to a three-dimensional camera, wherein the method includes:
photographing an object to be measured with a three-dimensional camera, obtaining an image of the measured object in the photosensitive area of the three-dimensional camera, and determining, from the image of the measured object, the pixel points corresponding to the measured object in the photosensitive area;
for the pixel points corresponding to the measured object, acquiring the depth information corresponding to the pixel point, the depth information indicating the distance between the measured object and the pixel point;
obtaining the measurement deviation value corresponding to the pixel point from a pre-stored measurement deviation set, and correcting the depth information according to the measurement deviation value.
In one exemplary embodiment, acquiring, for the pixel points corresponding to the measured object, the depth information corresponding to the pixel point includes:
calculating the phase difference of preset modulated light propagating between the pixel point and the measured object, and using the calculated phase difference as the depth information corresponding to the pixel point.
In one exemplary embodiment, before the measurement deviation value corresponding to each pixel point is obtained from the pre-stored measurement deviation set, the method further includes:
selecting a reference region from the photosensitive area, and calculating an average reference phase difference according to the reference phase difference corresponding to each reference pixel point in the reference region, the reference phase difference indicating the reference distance between a preset reflective surface and the reference pixel point;
calculating, according to the target distance between a target pixel point in the photosensitive area and the preset reflective surface, the target phase difference corresponding to the target pixel point, the target pixel point being any one of all the pixel points in the photosensitive area;
comparing the obtained target phase difference with the average reference phase difference to obtain the measurement deviation value corresponding to the target pixel point;
storing the measurement deviation value corresponding to the target pixel point into the measurement deviation set.
In one exemplary embodiment, calculating the average reference phase difference according to the reference phase difference corresponding to each reference pixel point in the reference region includes:
calculating the phase difference of the preset modulated light propagating between each reference pixel point and the preset reflective surface, to obtain the reference phase difference corresponding to each reference pixel point;
calculating, according to the reference phase differences corresponding to the reference pixel points in the reference region, the average reference phase difference corresponding to all reference pixel points in the reference region.
In one exemplary embodiment, before calculating the target phase difference corresponding to the target pixel point according to the target distance between the target pixel point in the photosensitive area and the preset reflective surface, the method further includes:
determining, according to the pixel distance between the target pixel point and a center reference point, the field of view angle corresponding to the pixel distance, the center reference point representing the reference pixel point at the center position of the reference region;
calculating the target distance between the target pixel point and the preset reflective surface according to the field of view angle and the reference distance between the center reference point and the preset reflective surface.
In one exemplary embodiment, determining the field of view angle corresponding to the pixel distance according to the pixel distance between the target pixel point and the center reference point includes:
calculating the unit field of view angle between adjacent pixel points in the photosensitive area;
calculating the field of view angle corresponding to the pixel distance according to the pixel distance and the unit field of view angle.
An image calibration apparatus applied to a three-dimensional camera, the apparatus including:
an imaging module configured to photograph an object to be measured with a three-dimensional camera, obtain an image of the measured object in the photosensitive area of the three-dimensional camera, and determine, from the image of the measured object, the pixel points corresponding to the measured object in the photosensitive area;
an acquisition module configured to acquire, for the pixel points corresponding to the measured object, the depth information corresponding to the pixel point, the depth information indicating the distance between the measured object and the pixel point;
a correction module configured to obtain the measurement deviation value corresponding to the pixel point from a pre-stored measurement deviation set, and correct the depth information according to the measurement deviation value.
In one exemplary embodiment, the apparatus further includes:
a calculation module configured to calculate the phase difference of preset modulated light propagating between the pixel point and the measured object, and use the calculated phase difference as the depth information corresponding to the pixel point.
In one exemplary embodiment, the apparatus further includes:
an average reference phase difference acquisition module configured to select a reference region from the photosensitive area, and calculate an average reference phase difference according to the reference phase difference corresponding to each reference pixel point in the reference region, the reference phase difference indicating the reference distance between a preset reflective surface and the reference pixel point;
a target phase difference acquisition module configured to calculate, according to the target distance between a target pixel point in the photosensitive area and the preset reflective surface, the target phase difference corresponding to the target pixel point, the target pixel point being any one of all the pixel points in the photosensitive area;
a comparison module configured to compare the obtained target phase difference with the average reference phase difference, to obtain the measurement deviation value corresponding to the target pixel point;
a storage module configured to store the measurement deviation value corresponding to the target pixel point into the measurement deviation set.
In one exemplary embodiment, the average reference phase difference acquisition module is specifically configured to:
calculate the phase difference of the preset modulated light propagating between each reference pixel point and the preset reflective surface, to obtain the reference phase difference corresponding to each reference pixel point;
calculate, according to the reference phase differences corresponding to the reference pixel points in the reference region, the average reference phase difference corresponding to all reference pixel points in the reference region.
In one exemplary embodiment, the apparatus further includes:
a field of view angle calculation module configured to determine, according to the pixel distance between the target pixel point and a center reference point, the field of view angle corresponding to the pixel distance, the center reference point representing the reference pixel point at the center position of the reference region;
a target distance calculation module configured to calculate the target distance between the target pixel point and the preset reflective surface according to the field of view angle and the reference distance between the center reference point and the preset reflective surface.
In one exemplary embodiment, the field of view angle calculation module further includes:
a unit field of view angle calculation unit configured to calculate the unit field of view angle between adjacent pixel points in the photosensitive area;
a field of view angle calculation unit configured to calculate the field of view angle corresponding to the pixel distance according to the pixel distance and the unit field of view angle.
An image calibration apparatus applied to a three-dimensional camera, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to cause the at least one processor to perform the image calibration method applied to the three-dimensional camera described above.
The technical solutions provided by the disclosed embodiments of the present invention may include the following beneficial effects: an object to be measured is photographed with a three-dimensional camera, an image of the measured object is obtained in the photosensitive area of the three-dimensional camera, and the pixel points corresponding to the measured object in the photosensitive area are determined from the image of the measured object. For the pixel points corresponding to the measured object, the depth information corresponding to the pixel point is acquired, the depth information indicating the distance between the measured object and the pixel point. The measurement deviation value corresponding to the pixel point is obtained from a pre-stored measurement deviation set, and the depth information is corrected according to the measurement deviation value. In this way, by correcting the depth information of the pixel points corresponding to the measured object, the image of the measured object obtained by the three-dimensional camera is calibrated, eliminating the distortion in the image of the measured object.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the present invention. The following briefly introduces the drawings needed for describing the embodiments or the prior art; obviously, the drawings in the following description are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive drawings of other embodiments from them without creative effort.
FIG. 1 is a flow chart of an image calibration method applied to a three-dimensional camera according to an exemplary embodiment.
FIG. 2 is a flow chart of an image calibration method applied to a three-dimensional camera according to another exemplary embodiment.
FIG. 3 is a flow chart of an image calibration method applied to a three-dimensional camera according to another exemplary embodiment.
FIG. 4 is a flow chart of one embodiment of step 1421 in the embodiment corresponding to FIG. 3.
FIG. 5 is a schematic diagram of a specific implementation of selecting the reference region and setting the reference distance in an application scenario.
FIG. 6 is a schematic diagram of a specific implementation of calculating the target distance corresponding to a target pixel point in an application scenario.
FIG. 7 is a schematic diagram of a specific implementation of calculating the pixel distance and the field of view angle in an application scenario.
FIG. 8 is a block diagram of an image calibration apparatus applied to a three-dimensional camera according to an exemplary embodiment.
FIG. 9 is a block diagram of an image calibration apparatus applied to a three-dimensional camera according to another exemplary embodiment.
FIG. 10 is a block diagram of an image calibration apparatus applied to a three-dimensional camera according to another exemplary embodiment.
FIG. 11 is a block diagram of one embodiment of the field of view angle calculation unit in the apparatus according to the embodiment corresponding to FIG. 10.
Detailed Description
Exemplary embodiments will be described in detail here, with examples shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present invention as detailed in the appended claims.
FIG. 1 is a flow chart of an image calibration method applied to a three-dimensional camera according to an exemplary embodiment. As shown in FIG. 1, the method includes, but is not limited to, the following steps:
In step 1100, an object to be measured is photographed with a three-dimensional camera, an image of the measured object is obtained in the photosensitive area of the three-dimensional camera, and the pixel points corresponding to the measured object in the photosensitive area are determined from the image of the measured object.
A three-dimensional camera is a camera that uses image sensor technology to photograph an object to be measured and obtain an image of the measured object. A three-dimensional camera emits modulated near-infrared light or laser light from a modulated light source; the light is reflected after reaching the measured object, and by calculating the time difference or phase difference of the emitted light and the reflected light propagating between the three-dimensional camera and the measured object, the distance information of the measured object can be obtained.
The photosensitive area is the area of the three-dimensional camera used for photosensitive imaging of the measured object, and it is formed by the pixel matrix of the image sensor. For example, the image sensor includes a CCD photosensor, a CMOS photosensor, and the like.
Specifically, the measured object is photographed by the three-dimensional camera: preset modulated light is emitted by the modulated light source, the preset modulated light is reflected by the measured object to the photosensitive area of the three-dimensional camera, and the image of the measured object is obtained in the photosensitive area; the pixel points corresponding to the measured object in the photosensitive area can then be determined from the image of the measured object. The preset modulated light may be near-infrared light or laser light modulated at different modulation frequencies.
It should be noted that the pixel points corresponding to the measured object, as determined from the image of the measured object, are only a part of all the pixel points in the photosensitive area, so that the pixel points subsequently subjected to image calibration are also only that part of the pixel points related to the measured object.
In step 1300, for the pixel points corresponding to the measured object, the depth information corresponding to the pixel point is acquired.
The depth information is the distance information of the measured object represented by the image of the measured object in the photosensitive area of the three-dimensional camera; that is, the depth information indicates the distance between the measured object and the pixel point corresponding to the measured object.
The image of the measured object obtained by a three-dimensional camera based on TOF technology can reflect the distance between the measured object and the three-dimensional camera, with different colors representing different distances, so as to record and represent the depth information of the pixel points corresponding to the measured object; the depth information corresponding to a pixel point can thus be obtained from the image of the measured object.
Further, the time difference or phase difference of the preset modulated light propagating between a pixel point of the photosensitive area and the measured object is calculated, and the calculated time difference or phase difference is used as the depth information corresponding to the pixel point.
In step 1500, the measurement deviation value corresponding to the pixel point is obtained from the pre-stored measurement deviation set, and the depth information is corrected according to the measurement deviation value.
The measurement deviation set contains the measurement deviation values corresponding to a number of pixel points; a measurement deviation value reflects the deviation between the distances from the measured object to the pixel points in the edge area versus the central area of the photosensitive area.
Further, the measurement deviation set is stored in advance in a storage medium of the three-dimensional camera; for example, the storage medium includes a read-only memory, a random access memory, a flash memory, and the like.
Furthermore, the measurement deviation set of the three-dimensional camera differs for different modulation frequencies of the preset modulated light. Therefore, when the depth information is corrected using the measurement deviation value of each pixel point obtained from the pre-stored measurement deviation set, the measurement deviation values contained in the measurement deviation set corresponding to the current modulation frequency should be read.
Thus, after the three-dimensional camera photographs the measured object and obtains its image, image calibration can be performed by reading the measurement deviation values contained in the measurement deviation set in the storage medium; that is, the depth information is corrected according to the measurement deviation values, making the distances between the measured object and the pixel points in the edge area versus the central area of the photosensitive area substantially the same and thereby avoiding distortion in the image of the measured object.
Through the above process, the measurement deviation value corresponding to a pixel point is obtained from the pre-stored measurement deviation set, and the depth information is corrected according to the measurement deviation value. By correcting the depth information of the pixel points corresponding to the measured object, the image of the measured object obtained by the three-dimensional camera is calibrated, eliminating the distortion in the image of the measured object.
In an image calibration method applied to a three-dimensional camera illustrated in an exemplary embodiment, the method further includes the following step:
by calculating the phase difference of the preset modulated light propagating between the pixel point and the measured object, the depth information corresponding to the pixel point, i.e. the distance between the measured object and the pixel point, is obtained.
Specifically, the distance between the measured object and the pixel point can be calculated by the following formula:
D = C × φ / (4π × f)
where φ denotes the phase difference of the preset modulated light propagating between the pixel point and the measured object, with a value range of -π to π, D is the distance between the measured object and the pixel point, C is the speed of light, and f is the frequency of the preset modulated light.
It should be noted that the phase difference of the preset modulated light propagating between the pixel point and the measured object specifically refers to the phase difference between the emitted light and the reflected light, which are generated as the preset modulated light propagates between the pixel point and the measured object; that is, the preset modulated light emitted by the modulated light source toward the measured object forms the emitted light, and the preset modulated light reflected back to the three-dimensional camera by the measured object forms the reflected light.
FIG. 2 is a flow chart of an image calibration method applied to a three-dimensional camera according to another exemplary embodiment. As shown in FIG. 2, the method further includes the following steps:
In step 1410, a reference region is selected from the photosensitive area, and an average reference phase difference is calculated according to the reference phase difference corresponding to each pixel point in the reference region; the reference phase difference indicates the reference distance between the preset reflective surface and the pixel point.
As shown in FIG. 5, an area at the center position of the photosensitive area is selected as the reference region, the pixel points in the reference region are defined as reference pixel points, and the reference pixel point at the center position of the reference region is defined as the center reference point. A white wall is set as the preset reflective surface, dist is the reference distance between the preset reflective surface and the center reference point, and FOV (Field of View) denotes the field of view angle of the photosensitive area.
The phase difference of the preset modulated light propagating between each reference pixel point and the preset reflective surface is calculated to obtain the reference phase difference corresponding to each reference pixel point, and the average reference phase difference corresponding to all reference pixel points in the reference region is then calculated from the reference phase differences corresponding to the individual reference pixel points.
Specifically, the average reference phase difference corresponding to all reference pixel points in the reference region can be calculated by the following formula:
diff_average = (1/k) × Σ phase_i (summed over i = 1, …, k)
where diff_average denotes the average reference phase difference, k is the number of reference pixel points included in the reference region, and phase_i is the reference phase difference corresponding to each reference pixel point, indicating the reference distance between the preset reflective surface and that reference pixel point.
Furthermore, the size of the reference region of the photosensitive area, that is, the number k of reference pixel points included in the reference region, can be flexibly adjusted according to the distance between the three-dimensional camera and the preset reflective surface; for example, the closer the distance between the three-dimensional camera and the preset reflective surface, the larger k is.
In step 1430, the target phase difference corresponding to the target pixel point is calculated according to the target distance between the target pixel point in the photosensitive area and the preset reflective surface.
Here, the target pixel point is any one of all the pixel points in the photosensitive area.
Since the distances between the measured object and the pixel points located in the edge area versus the central area of the photosensitive area are not exactly the same, the distance between an arbitrary target pixel point in the photosensitive area and the preset reflective surface does not always coincide with the reference distance dist between the center reference point and the preset reflective surface; the distance between the target pixel point and the preset reflective surface is therefore defined as the target distance. As shown in FIG. 6, P is an arbitrarily selected target pixel point in the photosensitive area, and P is also the image point corresponding to the object point A on the preset reflective surface; that is, the distance between P and the preset reflective surface is essentially the distance between P and the object point A.
Specifically, the target phase difference can be calculated by the following formula:
phase_real = dist_real × max_phase / max_distance
where phase_real is the target phase difference corresponding to the target pixel point, dist_real is the target distance between the target pixel point in the photosensitive area and the preset reflective surface, max_phase is the maximum phase of the preset modulated light, and max_distance is the maximum distance at which the three-dimensional camera can accurately photograph the measured object at that maximum phase.
Further, max_phase is related to the chip of the image sensor, and max_distance is related to the modulation frequency of the preset modulated light; that is, if the chip of the image sensor differs, max_phase differs, and if the modulation frequency of the preset modulated light differs, max_distance also differs. In other words, in different application scenarios, max_phase and max_distance can be flexibly adjusted according to the chip of the image sensor and the modulation frequency of the preset modulated light, so that the image calibration achieves the best effect.
For example, as shown in Table 1, different modulation frequencies have different values of max_distance.
Table 1: Relationship between modulation frequency and max_distance
Modulation frequency | max_distance
20.00 MHz | 7.5 m
10.00 MHz | 15 m
5.00 MHz | 30 m
2.50 MHz | 60 m
1.25 MHz | 120 m
Furthermore, since the phase of the preset modulated light is a periodic function of time, it should be ensured that the reference distance between the center reference point and the preset reflective surface lies within a certain range, so that the calculated target phase differences of the preset modulated light corresponding to the target pixel points all fall within the same period.
In step 1450, the obtained target phase difference is compared with the average reference phase difference to obtain the measurement deviation value corresponding to the target pixel point.
Specifically, the difference between the target phase difference corresponding to each target pixel point in the photosensitive area and the average reference phase difference is computed; this difference is the measurement deviation value corresponding to the target pixel point, and the measurement deviation values of the target pixel points then constitute the measurement deviation set of the three-dimensional camera.
The measurement deviation value can be calculated by the following formula:
diff[p] = phase_real - diff_average
where P denotes any target pixel point in the photosensitive area, diff[p] represents the measurement deviation value corresponding to the target pixel point P, diff_average represents the average reference phase difference, and phase_real represents the target phase difference corresponding to the target pixel point P.
In step 1470, the measurement deviation value corresponding to the target pixel point is stored into the measurement deviation set.
Since the size and shape of the measured object cannot be predicted in advance when it is photographed by the three-dimensional camera, the measurement deviation set includes the measurement deviation values of all target pixel points in the photosensitive area. The measurement deviation value corresponding to each target pixel point is therefore stored into the measurement deviation set, so that when an object is subsequently photographed by the three-dimensional camera, the target pixel points corresponding to the measured object in the photosensitive area can be determined, the measurement deviation values corresponding to those target pixel points can be obtained from the measurement deviation set, and the image of the measured object can be corrected.
FIG. 3 is a flow chart of an image calibration method applied to a three-dimensional camera according to another exemplary embodiment. As shown in FIG. 3, it includes:
In step 1421, the field of view angle corresponding to the pixel distance is determined according to the pixel distance between the target pixel point and the center reference point.
Here, the center reference point represents the reference pixel point at the center position of the reference region.
The pixel distance is the number of pixel points between a given pixel point of the pixel matrix and the center reference point. For example, if the pixel matrix constituting the photosensitive area is a matrix with a length of 320 and a width of 240, the diagonal length of the pixel matrix is 400; the pixel points at the four corners of the pixel matrix then correspond to a pixel distance of 200, that is, they are 200 pixel points away from the center reference point.
Specifically, if the field of view angle between two adjacent pixel points, i.e. the unit field of view angle, is known, and the pixel points in the pixel matrix are evenly distributed, the field of view angle corresponding to a pixel distance can be calculated from the pixel distance between the pixel point and the center reference point.
In step 1423, the target distance between the target pixel point and the preset reflective surface is calculated according to the field of view angle and the reference distance between the center reference point and the preset reflective surface.
The target distance can be calculated by the following formula:
dist_real = dist / cos α
where dist_real is the target distance, dist is the reference distance between the center reference point and the preset reflective surface, and α is the field of view angle corresponding to the pixel distance.
FIG. 4 is a flow chart of one embodiment of step 1421 in the embodiment corresponding to FIG. 3. As shown in FIG. 4, it includes:
In step 4211, the unit field of view angle between adjacent pixel points in the photosensitive area is calculated.
Specifically, once the focal length of the three-dimensional camera is determined, the unit field of view angle between adjacent pixel points in the photosensitive area is related only to the distance between adjacent pixel points.
In one specific implementation, the photosensitive area of the three-dimensional camera is a pixel matrix with a length of 320 and a width of 240, and the field of view angle between adjacent pixel points can be calculated by the following formula:
θ = FOV/(180 × n) rad/pixel
where n denotes the number of pixel points in one row of the pixel matrix constituting the photosensitive area. For a pixel matrix with a length of 320 and a width of 240, n is 320; θ is the unit field of view angle, rad is the radian unit, and pixel denotes a single pixel point.
In step 4213, the field of view angle corresponding to the pixel distance is calculated from the pixel distance and the unit field of view angle.
As shown in FIG. 7, the object point on the preset reflective surface corresponding to the target pixel point P is A, and the object point on the preset reflective surface corresponding to the center reference point is B. The distance between the target pixel point P and the object point A is defined as the target distance, and the distance between the center reference point and the object point B is defined as the reference distance.
If the pixel distance between the target pixel point P and the center reference point is Δz, and the field of view angle corresponding to the pixel distance Δz is α, then the field of view angle corresponding to the pixel distance can be calculated by the following formula:
α = Δz × θ rad
where α denotes the field of view angle, Δz denotes the pixel distance, θ denotes the unit field of view angle, and rad is the radian unit.
In one specific implementation, the pixel points at the four corners of the pixel matrix correspond to a pixel distance of 200, that is, they are 200 pixel points away from the center reference point. If the calculated unit field of view angle is A radians, the field of view angle corresponding to that pixel distance is 200 × A radians.
FIG. 8 is a block diagram of an image calibration apparatus applied to a three-dimensional camera according to an exemplary embodiment. As shown in FIG. 8, the apparatus includes, but is not limited to: an imaging module 6100, an acquisition module 6300 and a correction module 6500.
The imaging module 6100 is configured to photograph an object to be measured with a three-dimensional camera, obtain an image of the measured object in the photosensitive area of the three-dimensional camera, and determine, from the image of the measured object, the pixel points corresponding to the measured object in the photosensitive area.
The acquisition module 6300 is configured to acquire, for the pixel points corresponding to the measured object, the depth information corresponding to the pixel point, the depth information indicating the distance between the measured object and the pixel point.
The correction module 6500 is configured to obtain the measurement deviation value corresponding to the pixel point from a pre-stored measurement deviation set, and correct the depth information according to the measurement deviation value.
In an image calibration apparatus applied to a three-dimensional camera according to another exemplary embodiment, the apparatus further includes a calculation module configured to calculate the phase difference of preset modulated light propagating between the pixel point and the measured object, and to use the calculated phase difference as the depth information corresponding to the pixel point.
FIG. 9 is a block diagram of an image calibration apparatus applied to a three-dimensional camera according to another exemplary embodiment. As shown in FIG. 9, the apparatus includes, but is not limited to: an average reference phase difference acquisition module 6410, a target phase difference acquisition module 6430, a comparison module 6450 and a storage module 6470.
The average reference phase difference acquisition module 6410 is configured to select a reference region from the photosensitive area, and calculate an average reference phase difference according to the reference phase difference corresponding to each reference pixel point in the reference region, the reference phase difference indicating the reference distance between the preset reflective surface and the reference pixel point.
The target phase difference acquisition module 6430 is configured to calculate, according to the target distance between a target pixel point in the photosensitive area and the preset reflective surface, the target phase difference corresponding to the target pixel point, the target pixel point being any one of all the pixel points in the photosensitive area.
The comparison module 6450 is configured to compare the obtained target phase difference with the average reference phase difference, to obtain the measurement deviation value corresponding to the target pixel point.
The storage module 6470 is configured to store the measurement deviation value corresponding to the target pixel point into the measurement deviation set.
FIG. 10 is a block diagram of an image calibration apparatus applied to a three-dimensional camera according to another exemplary embodiment. As shown in FIG. 10, the apparatus includes, but is not limited to: a field of view angle calculation module 6421 and a target distance calculation module 6423.
The field of view angle calculation module 6421 is configured to determine, according to the pixel distance between the target pixel point and a center reference point, the field of view angle corresponding to the pixel distance, the center reference point representing the reference pixel point at the center position of the reference region.
The target distance calculation module 6423 is configured to calculate the target distance between the target pixel point and the preset reflective surface according to the field of view angle and the reference distance between the center reference point and the preset reflective surface.
FIG. 11 is a block diagram of the field of view angle calculation module 6421 in the apparatus according to the embodiment corresponding to FIG. 10, in another embodiment. As shown in FIG. 11, the field of view angle calculation module 6421 includes, but is not limited to: a unit field of view angle calculation unit 4211 and a field of view angle calculation unit 4213.
The unit field of view angle calculation unit 4211 is configured to calculate the unit field of view angle between adjacent pixel points in the photosensitive area.
The field of view angle calculation unit 4213 is configured to calculate the field of view angle corresponding to the pixel distance according to the pixel distance and the unit field of view angle.
An embodiment of the present invention further provides an image calibration apparatus applied to a three-dimensional camera, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to cause the at least one processor to perform the image calibration method applied to the three-dimensional camera described above.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.

Claims (13)

  • 1. An image calibration method applied to a three-dimensional camera, wherein the method comprises:
    photographing an object to be measured with a three-dimensional camera, obtaining an image of the measured object in a photosensitive area of the three-dimensional camera, and determining, from the image of the measured object, the pixel points corresponding to the measured object in the photosensitive area;
    for the pixel points corresponding to the measured object, acquiring depth information corresponding to the pixel point, the depth information indicating the distance between the measured object and the pixel point;
    obtaining a measurement deviation value corresponding to the pixel point from a pre-stored measurement deviation set, and correcting the depth information according to the measurement deviation value.
  • 2. The method according to claim 1, wherein acquiring, for the pixel points corresponding to the measured object, the depth information corresponding to the pixel point comprises:
    calculating the phase difference of preset modulated light propagating between the pixel point and the measured object, and using the calculated phase difference as the depth information corresponding to the pixel point.
  • 3. The method according to claim 2, wherein before the measurement deviation value corresponding to the pixel point is obtained from the pre-stored measurement deviation set and the depth information is corrected according to the measurement deviation value, the method further comprises:
    selecting a reference region from the photosensitive area, and calculating an average reference phase difference according to the reference phase difference corresponding to each reference pixel point in the reference region, the reference phase difference indicating the reference distance between a preset reflective surface and the reference pixel point;
    calculating, according to the target distance between a target pixel point in the photosensitive area and the preset reflective surface, a target phase difference corresponding to the target pixel point, the target pixel point being any one of all the pixel points in the photosensitive area;
    comparing the obtained target phase difference with the average reference phase difference to obtain a measurement deviation value corresponding to the target pixel point;
    storing the measurement deviation value corresponding to the target pixel point into the measurement deviation set.
  • 4. The method according to claim 3, wherein calculating the average reference phase difference according to the reference phase difference corresponding to each reference pixel point in the reference region comprises:
    calculating the phase difference of the preset modulated light propagating between each reference pixel point and the preset reflective surface, to obtain the reference phase difference corresponding to each reference pixel point;
    calculating, according to the reference phase differences corresponding to the reference pixel points in the reference region, the average reference phase difference corresponding to all the reference pixel points in the reference region.
  • 5. The method according to claim 3, wherein before calculating the target phase difference corresponding to the target pixel point according to the target distance between the target pixel point in the photosensitive area and the preset reflective surface, the method further comprises:
    determining, according to the pixel distance between the target pixel point and a center reference point, the field of view angle corresponding to the pixel distance, the center reference point representing the reference pixel point at the center position of the reference region;
    calculating the target distance between the target pixel point and the preset reflective surface according to the field of view angle and the reference distance between the center reference point and the preset reflective surface.
  • 6. The method according to claim 5, wherein determining the field of view angle corresponding to the pixel distance according to the pixel distance between the target pixel point and the center reference point comprises:
    calculating the unit field of view angle between adjacent pixel points in the photosensitive area;
    calculating the field of view angle corresponding to the pixel distance according to the pixel distance and the unit field of view angle.
  • 7. An image calibration apparatus applied to a three-dimensional camera, wherein the apparatus comprises:
    an imaging module configured to photograph an object to be measured with a three-dimensional camera, obtain an image of the measured object in a photosensitive area of the three-dimensional camera, and determine, from the image of the measured object, the pixel points corresponding to the measured object in the photosensitive area;
    an acquisition module configured to acquire, for the pixel points corresponding to the measured object, depth information corresponding to the pixel point, the depth information indicating the distance between the measured object and the pixel point;
    a correction module configured to obtain a measurement deviation value corresponding to the pixel point from a pre-stored measurement deviation set, and correct the depth information according to the measurement deviation value.
  • 8. The apparatus according to claim 7, wherein the apparatus further comprises:
    a calculation module configured to calculate the phase difference of preset modulated light propagating between the pixel point and the measured object, and use the calculated phase difference as the depth information corresponding to the pixel point.
  • 9. The apparatus according to claim 7, wherein the apparatus further comprises:
    an average reference phase difference acquisition module configured to select a reference region from the photosensitive area, and calculate an average reference phase difference according to the reference phase difference corresponding to each reference pixel point in the reference region, the reference phase difference indicating the reference distance between a preset reflective surface and the reference pixel point;
    a target phase difference acquisition module configured to calculate, according to the target distance between a target pixel point in the photosensitive area and the preset reflective surface, a target phase difference corresponding to the target pixel point, the target pixel point being any one of all the pixel points in the photosensitive area;
    a comparison module configured to compare the obtained target phase difference with the average reference phase difference, to obtain a measurement deviation value corresponding to the target pixel point;
    a storage module configured to store the measurement deviation value corresponding to the target pixel point into the measurement deviation set.
  • 10. The apparatus according to claim 9, wherein the average reference phase difference acquisition module is specifically configured to:
    calculate the phase difference of the preset modulated light propagating between each reference pixel point and the preset reflective surface, to obtain the reference phase difference corresponding to each reference pixel point;
    calculate, according to the reference phase differences corresponding to the reference pixel points in the reference region, the average reference phase difference corresponding to all the reference pixel points in the reference region.
  • 11. The apparatus according to claim 9, wherein the apparatus further comprises:
    a field of view angle calculation module configured to determine, according to the pixel distance between the target pixel point and a center reference point, the field of view angle corresponding to the pixel distance, the center reference point representing the reference pixel point at the center position of the reference region;
    a target distance calculation module configured to calculate the target distance between the target pixel point and the preset reflective surface according to the field of view angle and the reference distance between the center reference point and the preset reflective surface.
  • 12. The apparatus according to claim 11, wherein the field of view angle calculation module further comprises:
    a unit field of view angle calculation unit configured to calculate the unit field of view angle between adjacent pixel points in the photosensitive area;
    a field of view angle calculation unit configured to calculate the field of view angle corresponding to the pixel distance according to the pixel distance and the unit field of view angle.
  • 13. An image calibration apparatus applied to a three-dimensional camera, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to cause the at least one processor to perform the image calibration method applied to a three-dimensional camera according to any one of claims 1 to 6.
PCT/CN2018/083761 2017-07-11 2018-04-19 Image calibration method and apparatus applied to a three-dimensional camera WO2019011027A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18832958.5A EP3640892B1 (en) 2017-07-11 2018-04-19 Image calibration method and device applied to three-dimensional camera
US16/740,152 US10944956B2 (en) 2017-07-11 2020-01-10 Image calibration method and apparatus applied to three-dimensional camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710561888.XA 2017-07-11 2017-07-11 Image calibration method and apparatus applied to a three-dimensional camera
CN201710561888.X 2017-07-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/740,152 Continuation US10944956B2 (en) 2017-07-11 2020-01-10 Image calibration method and apparatus applied to three-dimensional camera

Publications (1)

Publication Number Publication Date
WO2019011027A1 true WO2019011027A1 (zh) 2019-01-17

Family

ID=65002343

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/083761 2017-07-11 2018-04-19 Image calibration method and apparatus applied to a three-dimensional camera

Country Status (4)

Country Link
US (1) US10944956B2 (zh)
EP (1) EP3640892B1 (zh)
CN (1) CN109242901B (zh)
WO (1) WO2019011027A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991459A (zh) * 2021-03-09 2021-06-18 北京百度网讯科技有限公司 Camera calibration method, apparatus, device, and storage medium
CN113256512A (zh) * 2021-04-30 2021-08-13 北京京东乾石科技有限公司 Depth image completion method and apparatus, and inspection robot
EP3865981A1 (de) * 2020-02-12 2021-08-18 Valeo Comfort and Driving Assistance Method and device for calibrating a 3D image sensor

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11669724B2 (en) 2018-05-17 2023-06-06 Raytheon Company Machine learning using informed pseudolabels
US11068747B2 (en) * 2019-09-27 2021-07-20 Raytheon Company Computer architecture for object detection using point-wise labels
CN110703270B (zh) * 2019-10-08 2022-02-22 歌尔光学科技有限公司 Depth module ranging method and apparatus, readable storage medium, and depth camera
CN111095914B (zh) 2019-12-06 2022-04-29 深圳市汇顶科技股份有限公司 Three-dimensional image sensing system, related electronic device, and time-of-flight ranging method
US11676391B2 (en) 2020-04-16 2023-06-13 Raytheon Company Robust correlation of vehicle extents and locations when given noisy detections and limited field-of-view image frames
CN113865481B (zh) * 2020-06-30 2024-05-07 北京小米移动软件有限公司 Object size measurement method and apparatus, and storage medium
CN112008234B (zh) * 2020-09-07 2022-11-08 广州黑格智造信息科技有限公司 Laser marking method and marking system for production of invisible orthodontic aligners
CN114189670B (zh) * 2020-09-15 2024-01-23 北京小米移动软件有限公司 Display method, display apparatus, display device, and storage medium
US11562184B2 (en) 2021-02-22 2023-01-24 Raytheon Company Image-based vehicle classification
CN115131215A (zh) * 2021-03-24 2022-09-30 奥比中光科技集团股份有限公司 Image correction method and under-display system
CN113298785A (zh) * 2021-05-25 2021-08-24 Oppo广东移动通信有限公司 Correction method, electronic device, and computer-readable storage medium
CN114023232B (zh) * 2021-11-05 2024-03-15 京东方科技集团股份有限公司 Display calibration method and display calibration apparatus, display, and smart watch

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722080A * 2012-06-27 2012-10-10 绍兴南加大多媒体通信技术研发有限公司 Multi-purpose stereoscopic imaging method based on multi-lens photography
CN103581648A * 2013-10-18 2014-02-12 清华大学深圳研究生院 Hole filling method for rendering new viewpoints
US20150036105A1 (en) * 2012-12-26 2015-02-05 Citizen Holdings Co., Ltd. Projection apparatus
CN104537627A * 2015-01-08 2015-04-22 北京交通大学 Post-processing method for depth images
CN106767933A * 2017-02-10 2017-05-31 深圳奥比中光科技有限公司 Measurement system, measurement method, evaluation method, and compensation method for depth camera errors

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19727281C1 * 1997-06-27 1998-10-22 Deutsch Zentr Luft & Raumfahrt Method and device for geometric calibration of CCD cameras
JP3417377B2 * 1999-04-30 2003-06-16 日本電気株式会社 Three-dimensional shape measurement method and apparatus, and recording medium
KR100960294B1 * 2002-10-23 2010-06-07 코닌클리케 필립스 일렉트로닉스 엔.브이. Method for post-processing a digital video signal, and computer-readable recording medium storing a computer program therefor
US20090095047A1 (en) * 2007-10-16 2009-04-16 Mehul Patel Dimensioning and barcode reading system
KR101497659B1 * 2008-12-04 2015-03-02 삼성전자주식회사 Method and apparatus for correcting a depth image
KR101310213B1 * 2009-01-28 2013-09-24 한국전자통신연구원 Method and apparatus for improving the quality of a depth image
KR101590767B1 * 2009-06-09 2016-02-03 삼성전자주식회사 Image processing apparatus and method
KR101669840B1 * 2010-10-21 2016-10-28 삼성전자주식회사 Disparity estimation system and method for estimating consistent disparity from multi-view video
US8866889B2 (en) * 2010-11-03 2014-10-21 Microsoft Corporation In-home depth camera calibration
JP5773944B2 * 2012-05-22 2015-09-02 株式会社ソニー・コンピュータエンタテインメント Information processing apparatus and information processing method
US9842423B2 (en) * 2013-07-08 2017-12-12 Qualcomm Incorporated Systems and methods for producing a three-dimensional face model
KR20150037366A * 2013-09-30 2015-04-08 삼성전자주식회사 Method for reducing noise in a depth image, and image processing apparatus and image generating apparatus using the same
CN103824318B * 2014-02-13 2016-11-23 西安交通大学 Depth perception method for a multi-camera array
CN104010178B * 2014-06-06 2017-01-04 深圳市墨克瑞光电子研究院 Binocular image disparity adjustment method and apparatus, and binocular camera
FR3026591B1 * 2014-09-25 2016-10-21 Continental Automotive France Method for extrinsic calibration of cameras of an on-board stereo image forming system
EP3016076A1 (en) * 2014-10-31 2016-05-04 Thomson Licensing Method and apparatus for removing outliers from a main view of a scene during 3D scene reconstruction
CN104506838B * 2014-12-23 2016-06-29 宁波盈芯信息科技有限公司 Depth perception method, apparatus, and system using symbol-array surface structured light
CN104596444B * 2015-02-15 2017-03-22 四川川大智胜软件股份有限公司 Three-dimensional photography system and method based on coded pattern projection
WO2016202295A1 * 2015-06-19 2016-12-22 上海图漾信息科技有限公司 Depth data detection and monitoring apparatus
CN108369639B * 2015-12-11 2022-06-21 虞晶怡 Image-based rendering method and system using a multi-camera and depth camera array
US10298912B2 (en) * 2017-03-31 2019-05-21 Google Llc Generating a three-dimensional object localization lookup table
CN109242782B * 2017-07-11 2022-09-09 深圳市道通智能航空技术股份有限公司 Noise processing method and apparatus
CN109961406B * 2017-12-25 2021-06-25 深圳市优必选科技有限公司 Image processing method, apparatus, and terminal device
CN108344376A * 2018-03-12 2018-07-31 广东欧珀移动通信有限公司 Laser projection module, depth camera, and electronic device
DE102019118457A1 * 2018-07-13 2020-01-16 Sony Semiconductor Solutions Corporation TOF (time-of-flight) camera, electronic device, and calibration method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722080A * 2012-06-27 2012-10-10 绍兴南加大多媒体通信技术研发有限公司 Multi-purpose stereoscopic imaging method based on multi-lens photography
US20150036105A1 (en) * 2012-12-26 2015-02-05 Citizen Holdings Co., Ltd. Projection apparatus
CN103581648A * 2013-10-18 2014-02-12 清华大学深圳研究生院 Hole filling method for rendering new viewpoints
CN104537627A * 2015-01-08 2015-04-22 北京交通大学 Post-processing method for depth images
CN106767933A * 2017-02-10 2017-05-31 深圳奥比中光科技有限公司 Measurement system, measurement method, evaluation method, and compensation method for depth camera errors

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3640892A4

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3865981A1 (de) * 2020-02-12 2021-08-18 Valeo Comfort and Driving Assistance Method and device for calibrating a 3D image sensor
CN112991459A (zh) * 2021-03-09 2021-06-18 北京百度网讯科技有限公司 Camera calibration method, apparatus, device, and storage medium
CN112991459B (zh) * 2021-03-09 2023-12-12 阿波罗智联(北京)科技有限公司 Camera calibration method, apparatus, device, and storage medium
CN113256512A (zh) * 2021-04-30 2021-08-13 北京京东乾石科技有限公司 Depth image completion method and apparatus, and inspection robot

Also Published As

Publication number Publication date
EP3640892B1 (en) 2022-06-01
CN109242901A (zh) 2019-01-18
US10944956B2 (en) 2021-03-09
EP3640892A4 (en) 2020-06-10
EP3640892A1 (en) 2020-04-22
US20200267373A1 (en) 2020-08-20
CN109242901B (zh) 2021-10-22

Similar Documents

Publication Publication Date Title
WO2019011027A1 (zh) Image calibration method and apparatus applied to a three-dimensional camera
US11461930B2 (en) Camera calibration plate, camera calibration method and device, and image acquisition system
CN110689581B (zh) 结构光模组标定方法、电子设备、计算机可读存储介质
US11763518B2 (en) Method and system for generating a three-dimensional image of an object
TWI624170B (zh) 影像掃描系統及其方法
US6876775B2 (en) Technique for removing blurring from a captured image
CN112634374B (zh) 双目相机的立体标定方法、装置、系统及双目相机
JP6394005B2 (ja) 投影画像補正装置、投影する原画像を補正する方法およびプログラム
KR101497659B1 (ko) 깊이 영상을 보정하는 방법 및 장치
JP5633058B1 (ja) 3次元計測装置及び3次元計測方法
US20110310376A1 (en) Apparatus and method to correct image
WO2020038255A1 (en) Image processing method, electronic apparatus, and computer-readable storage medium
JP2012504222A (ja) 較正
KR20180032989A (ko) ToF(time of flight) 촬영 장치 및 다중 반사에 의한 깊이 왜곡 저감 방법
JP6612740B2 (ja) モデリング構成及び三次元表面のトポグラフィーをモデリングする方法及びシステム
US11538151B2 (en) Method for measuring objects in digestive tract based on imaging system
JP2016100698A (ja) 校正装置、校正方法、プログラム
WO2019033777A1 (zh) 一种提升3d图像深度信息的方法、装置及无人机
US20150341520A1 (en) Image reading apparatus, image reading method, and medium
US10281265B2 (en) Method and system for scene scanning
KR101818104B1 (ko) 카메라 및 카메라 캘리브레이션 방법
US11143499B2 (en) Three-dimensional information generating device and method capable of self-calibration
JP2022024688A (ja) デプスマップ生成装置及びそのプログラム、並びに、デプスマップ生成システム
WO2023007625A1 (ja) 3次元計測システム、装置、方法及びプログラム
KR20240054051A (ko) 엑스레이 스캐너 및 그 운용 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18832958

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018832958

Country of ref document: EP

Effective date: 20200115