CN209787294U - Multispectral three-dimensional imaging system - Google Patents

Publication number: CN209787294U
Application number: CN201920048300.5U
Applicant/Assignee: Nanjing Tech University
Inventors: 孔慧, 杨健, 高以成, 顾硕, 张翼弓, 郭宇斌
Original language: Chinese (zh)
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion)

Abstract

A multispectral three-dimensional imaging system comprises a visual information acquisition system and a visual information processing system. The visual information acquisition system comprises a bracket on which two identical multispectral cameras are mounted, aligned in the same direction with parallel optical axes. Each multispectral camera comprises a visible light-near infrared camera, a far infrared camera, and a coated beam splitter. The visual information processing system comprises a vision processing computer, a single-chip microcomputer, and data transmission lines: the vision processing computer is connected to the microcomputer through a data transmission line and sends it start-acquisition and stop-acquisition signals; the microcomputer is connected to each camera through data transmission lines; and each camera is connected to the vision processing computer. The utility model uses a single optical path to obtain information from multiple spectral bands simultaneously, so that shooting is not confined to static scenes and is not restricted in shooting distance or illumination conditions, while a high degree of overlap between the images is guaranteed.

Description

Multispectral three-dimensional imaging system
Technical Field
The utility model belongs to the field of spectral imaging, and specifically relates to a multispectral three-dimensional imaging system.
Background
With the continuous development of imaging technology and visual sensor manufacturing processes, various types of cameras have gradually entered our lives and are widely applied in fields such as traffic, manufacturing, security, inspection, document analysis, medical diagnosis, and the military. The most common vision sensors are visible light cameras, including visible light cameras that capture grayscale images and color (RGB) cameras that capture color images. However, visible light is only a small part of the solar spectrum, so the information acquired by a visible light camera is relatively limited and its acquisition conditions are demanding: it is sensitive to the illumination of the environment and has difficulty shooting effectively at night, under strong illumination, or in haze. Vision sensors therefore exist for capturing invisible light in different bands, including ultraviolet, near-infrared, mid-infrared, and far-infrared cameras. Because a single sensor has a single information source and limited collection capability, multispectral cameras capable of collecting several spectra have been developed. A multispectral camera uses optical filters to separate specific wave bands from the light gathered from a scene and exposes each band on a dedicated imaging device, thereby obtaining information from different parts of the spectrum. In terms of the spectra of the acquired images, existing multispectral cameras concentrate on the visible, ultraviolet, and near-infrared bands, with only a few covering the mid-infrared band.
In particular, some multispectral cameras limit the spectra they acquire to particular wavelengths, such as 880 nm near infrared, 650 nm red, 550 nm green, and 450 nm blue, and capture images only at those specific wavelengths.
In terms of how multiple spectra are acquired, existing multispectral cameras fall into three categories. The first type takes the form of a camera array, with each position in the array occupied by an ordinary or multispectral camera. Such a camera combines units that capture different spectra, so several kinds of spectral information in a scene can be obtained simultaneously; but because each camera in the array has a different optical path, it is difficult to aim the whole assembly at the same target object, and the overlap accuracy of the acquired images is poor.
The second type of multispectral camera comprises only one lens and one imaging component (i.e., a single optical path is used). It obtains spectral information of different bands at different moments by switching optical filters, usually with a high-speed rotating filter wheel. The wheel generally holds 6-8 interchangeable filters and can spin at up to 100 revolutions per second, and the scene must remain static during shooting so that high overlap accuracy can be maintained between the images of the different spectra.
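The wheel-timing figures above imply very short per-band exposure windows, which is why the scene must be static. A quick back-of-the-envelope sketch (the numbers are illustrative, not taken from any specific camera's datasheet):

```python
def filter_dwell_time_ms(revolutions_per_second: float, filters_per_wheel: int) -> float:
    """Time each filter spends in the optical path per rotation, in milliseconds.

    Assumes the filters occupy equal sectors of the wheel; purely illustrative.
    """
    period_ms = 1000.0 / revolutions_per_second   # duration of one full rotation
    return period_ms / filters_per_wheel          # one filter sector per rotation

# At 100 rev/s with 8 filters, each band is exposed for only 1.25 ms per pass,
# so any scene motion between successive band exposures degrades the overlap.
```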
The third type of multispectral camera comprises a single lens but multiple optical filters and imaging components. A beam splitter decomposes the light gathered from the scene into several beams, and each beam passes through its own filter onto its own imaging component, yielding an image of a specific spectrum. Such cameras mainly acquire 2-3 kinds of spectral images simultaneously in the visible, ultraviolet, and near-infrared bands. The overlap accuracy between the spectral images is high and the scene need not be static, but the cost is high and scales directly with the number of optical paths after splitting.
All of the above multispectral cameras image by passively receiving the spectra present in a scene. In addition, cameras that sense the three-dimensional characteristics of a scene by receiving actively emitted infrared light, structured light, or laser light are available on the market; these can be collectively referred to as RGB-D cameras. They acquire the color information of scene objects with a visible light camera and the depth information with an active transmitting/receiving part, thus capturing visible light and depth information simultaneously. However, they can only passively sense the visible light in a scene, and their accurate depth detection range is usually 0.4 m-4.0 m and at most about 10 m, so they are suitable only for indoor environments.
A camera that obtains the depth information of objects through imaging alone, i.e., a stereo camera, is based on the binocular stereo imaging principle: from the difference between the imaging positions of a scene object in the left and right cameras, it computes the positional deviation between corresponding image points to recover depth. This measuring method has the advantages of high efficiency, adequate precision, a simple system structure, and low cost; the measuring distance can theoretically reach infinity, and the method is suitable for measuring moving objects. Existing binocular stereo cameras, however, receive only visible light and cannot be used when illumination is poor (e.g., at night, under strong illumination, or in haze).
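The binocular principle above reduces to the standard pinhole relation Z = f·B/d, which also explains the "theoretically infinite" range claim. A minimal sketch (the symbols and values are illustrative, not taken from the patent):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic binocular stereo relation Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the two
    camera centers; disparity_px: horizontal offset between corresponding
    image points. All values here are illustrative.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A 1-pixel disparity with f = 800 px and B = 0.2 m puts the point 160 m away;
# as disparity approaches 0 the recovered depth grows without bound.
```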
In summary, existing multispectral cameras do not cover the acquisition of far-infrared band information and lack the capability of acquiring visible light, near-infrared, and far-infrared information simultaneously.
Disclosure of Invention
The aim of the utility model is to provide a multispectral three-dimensional imaging system that uses a single optical path to obtain information from multiple spectral bands simultaneously, so that shooting is not confined to static scenes and is not restricted in shooting distance or illumination conditions, while a high degree of overlap between the images is guaranteed.
The utility model adopts the following technical solution:
A multispectral three-dimensional imaging system comprises a visual information acquisition system and a visual information processing system. The visual information acquisition system comprises a bracket on which two identical multispectral cameras are mounted; the two multispectral cameras are aligned in the same direction and their optical axes are parallel.
Each multispectral camera comprises a visible light-near infrared (RGB-NIR) camera, a far infrared (FIR) camera, and a coated beam splitter; the beam splitter transmits light in the far-infrared band and reflects light in the visible and near-infrared bands.
The visual information processing system comprises a vision processing computer, a single-chip microcomputer, and data transmission lines. The vision processing computer is connected to the microcomputer through a data transmission line and sends it start-acquisition and stop-acquisition signals. The microcomputer is connected to each camera (two visible light-near infrared cameras and two far infrared cameras in total) through data transmission lines and sends periodic trigger signals to each camera. Each camera is connected to the vision processing computer through a gigabit Ethernet port (via a gigabit network cable) or a USB 3.0 interface.
Further, the geometric relationship among the visible light-near infrared camera, the far infrared camera, and the beam splitter is determined by joint calibration of the two cameras (the visible light-near infrared camera and the far infrared camera). The calibration proceeds as follows. First, a calibration plate is made by attaching two layers of material to a flat plate: the bottom layer is a sheet of paper printed with a checkerboard pattern, and the other layer is tinfoil. Before calibration starts, the white squares of the checkerboard are covered with tinfoil; during calibration the plate is cooled to 15 °C with cold air. Because the emissivities of the two materials differ considerably at low temperature, the checkerboard pattern is clearly visible in the far infrared image, while the black-and-white contrast makes it equally clear in the visible image. Next, the corner coordinates are detected in the visible and far infrared images and fed to a camera calibration algorithm to obtain the internal parameters of the two cameras, after which the images from both cameras are undistorted; because the resolution of the visible camera is higher than that of the infrared camera, the undistorted visible image must be resized to the same size as the undistorted infrared image. Finally, the poses of the two cameras are adjusted with an adjuster, aligning first the principal points of the two images and then the remaining corner points, so that pixel-level registration is achieved between the images and the geometric relationship among the visible light-near infrared camera, the far infrared camera, and the beam splitter is thereby determined.
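One practical detail in the resizing step above: when the undistorted visible image is scaled down to the infrared resolution, the visible camera's intrinsic matrix must be rescaled accordingly. A minimal sketch under the usual pinhole model (the matrix values are made-up examples, not calibration results from the patent):

```python
def rescale_intrinsics(K, sx: float, sy: float):
    """Scale a 3x3 pinhole intrinsic matrix when the image is resized.

    K is [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] as nested lists. sx, sy are
    the width/height scale factors, e.g. 640/1280 and 512/960 when resizing
    a 1280x960 visible image to match a 640x512 infrared image.
    """
    fx, cx = K[0][0] * sx, K[0][2] * sx
    fy, cy = K[1][1] * sy, K[1][2] * sy
    return [[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]]

# Hypothetical visible-camera intrinsics for a 1280x960 sensor:
K_vis = [[1000.0, 0.0, 640.0], [0.0, 1000.0, 480.0], [0.0, 0.0, 1.0]]
K_resized = rescale_intrinsics(K_vis, 640 / 1280, 512 / 960)
# The principal point lands at (320, 256), the center of the 640x512 frame.
```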
Further, the optical axis of the visible light-near infrared (RGB-NIR) camera is perpendicular to that of the far infrared (FIR) camera, and the beam splitter is a square with a side length of 6.5 cm.
The utility model has the following advantages:
(1) Good synchronization. The system outputs the visible light, near-infrared, far-infrared, and depth information of the observed scene at the same instant, without additional timestamp matching or stereo matching.
(2) High pixel registration. Because the design splits a single optical path, the acquired multispectral data and depth data are aligned at the pixel level, reducing the difficulty of subsequent multispectral processing.
(3) High precision. Visible, near-infrared, and far-infrared information are fused during stereo matching, so the matching quality is better than that of a traditional binocular camera acquiring visible light alone, improving stereo matching accuracy.
(4) Wide operating conditions. The system collects visible, near-infrared, and far-infrared light, so its illumination constraints are looser than those of traditional single-spectrum and visible light cameras; it can operate under poor illumination and capture dynamic scenes at night and in similar conditions.
(5) Low cost. A single visible/near-infrared/far-infrared hybrid camera unit costs less than a multispectral camera that splits one optical path three ways.
(6) Simple fabrication. Only the mounting bracket and the light-splitting assembly need machining, making the system easy to manufacture and assemble.
(7) Based on the light-splitting principle, a single optical path captures the visible, near-infrared, and far-infrared information of a scene simultaneously, so shooting is not limited to static scenes and a high degree of overlap among the collected visible, near-infrared, and far-infrared images is guaranteed.
(8) Two multispectral cameras form a stereo camera, enabling long-range measurement under a variety of illumination conditions (e.g., day and night) and guaranteeing ranging in outdoor environments.
The principle for reducing the size of the beam splitter is as follows:
When the optical axes of the two camera assemblies are perpendicular, the length of the beam splitter is, for a given camera field of view, proportional to the camera-to-beam-splitter distance. When that distance is fixed, the length of the beam splitter decreases as the included angle between the optical axes of the two camera assemblies increases.
The size of the beam splitter can therefore be reduced by shortening the camera-to-beam-splitter distance and by enlarging the included angle between the two optical axes. However, when the two optical axes are not perpendicular, the field of view becomes asymmetric, the cameras tend to occlude each other, and machined parts such as the camera mounts must be custom made. The system of the utility model therefore adopts perpendicular optical axes, for which the beam splitter side length works out to 6.5 cm (taking a square beam splitter as an example).
Drawings
FIG. 1 is a schematic diagram of the external structure of the visual information acquisition subsystem of the present invention;
FIG. 2 is a schematic diagram of the beam splitter size calculation of the present invention;
FIG. 3 shows the relationship between the size of the beam splitter and the included angle between the optical axes of the two camera assemblies;
FIG. 4 is a schematic diagram of the internal structure of the visual information acquisition subsystem of the present invention;
FIG. 5 is a schematic diagram of the multispectral inter-image registration process of the present invention;
FIG. 6 is a flow chart of the multispectral stereoscopic vision information processing of the present invention.
Detailed Description
As shown in FIG. 1, a multispectral stereo imaging system comprises two parts, a visual information acquisition system and a visual information processing system. The visual information acquisition subsystem consists of a bracket and two multispectral cameras mounted on it; the two multispectral cameras are arranged one above the other with parallel optical axes, so that a sufficiently large common field of view remains between the two camera units. Each multispectral camera comprises a Point Grey Flea3 color camera (FL3-GE-13S2C-C), a FLIR A65 far infrared camera, and a coated beam splitter. The Flea3 camera has a resolution of 1280 x 960, a sensitive wavelength range of 400-700 nm, and a field angle of about 40 degrees; the FLIR A65 camera has a resolution of 640 x 512, a sensitive wavelength range of 7.5-13 μm, and a field angle of about 45 degrees. The beam splitter is square, and its coating transmits light in the far-infrared band while reflecting light in the visible and near-infrared bands. Because the far infrared and visible cameras composing the multispectral camera are off-the-shelf products, the utility model reduces the size of the multispectral camera by making the beam splitter as small as possible, its size being determined as shown in FIG. 2, where three different arrangements are considered. In the first case, shown in FIG. 2(a), the optical axes of the two cameras are perpendicular, the field angle of each camera is θ, and the distances from the cameras to the beam splitter are d1 and d2 with d1 = d2. The length of the beam splitter can then be calculated as

    l = 2√2·d1·tan(θ/2) / (1 − tan^2(θ/2))    (1)
Clearly, for a given camera field of view, the length of the beam splitter is directly proportional to d1. If both cameras are moved closer to the beam splitter, as in FIG. 2(b), let the camera-to-beam-splitter distances be d1′ and d2′ with d1′ = d2′; the length of the beam splitter is then

    l′ = 2√2·d1′·tan(θ/2) / (1 − tan^2(θ/2))    (2)
If d1′ = d1/2, the length of the beam splitter is reduced to half of that in FIG. 2(a). As long as the widths of the perceived fields of view in FIGS. 2(a) and 2(b) do not differ much, the hybrid camera arrangement of FIG. 2(b) is better than that of FIG. 2(a), and the hybrid camera is smaller. Considering a scene at distance d from the camera, the perceived widths of the hybrid cameras in FIGS. 2(a) and 2(b) are respectively
    s = 2·(d + d1)·tan(θ/2)    (3)

and

    s′ = 2·(d + d1′)·tan(θ/2)    (4)
In practical application scenarios, d is several hundred times d1 and d1′, so s and s′ are approximately equal; that is, the perceived widths in FIGS. 2(a) and 2(b) are almost the same. This confirms that the hybrid camera arrangement of FIG. 2(b) is better than that of FIG. 2(a), with a smaller hybrid camera.
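The claim that s and s′ are approximately equal follows directly from equations (3) and (4); the numbers below are illustrative, not from the patent:

```python
import math

def perceived_width(scene_dist: float, cam_to_splitter: float, fov_deg: float) -> float:
    """Width of the perceived field of view at a plane `scene_dist` in front
    of the beam splitter, per equations (3)/(4): s = 2 * (d + d1) * tan(theta/2)."""
    return 2.0 * (scene_dist + cam_to_splitter) * math.tan(math.radians(fov_deg) / 2.0)

# Scene at d = 20 m, theta = 40 degrees, with d1 = 0.20 m versus d1' = 0.10 m:
s  = perceived_width(20.0, 0.20, 40.0)
s2 = perceived_width(20.0, 0.10, 40.0)
# The relative difference is about 0.5%, so halving d1 barely changes the
# perceived width while halving the beam splitter length.
```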
Beyond the arrangement of FIG. 2(b), the beam splitter in the arrangement of FIG. 2(c) can be made smaller still. In FIG. 2(c), the camera-to-beam-splitter distances are assumed to be the same as in FIG. 2(b): d1* = d2* = d1′. Unlike the scheme of FIG. 2(b), the two cameras are rotated by β degrees counterclockwise and clockwise respectively, and the length of the beam splitter can then be calculated as

    l* = √2·d1*·[ tan(θ/2 − β)/(1 − tan(θ/2 − β)) + tan(θ/2 + β)/(1 + tan(θ/2 + β)) ]    (5)
When θ is 50 degrees, we vary β from 0 to θ/2. FIG. 3(a) shows the value of the first term of equation (5) as β changes from 0 to θ/2, FIG. 3(b) shows the second term, and FIG. 3(c) shows the value of the full expression. Provided that the camera rotation causes no mutual occlusion, the scheme of FIG. 2(c) coincides with that of FIG. 2(b) when β = 0, and the beam splitter is smallest when β = θ/2. However, the field of view of the scheme of FIG. 2(c) is asymmetric, mutual occlusion occurs easily, and machined parts such as the camera mounts must be custom made, whereas the scheme of FIG. 2(b) can use standard parts. The system therefore adopts the scheme of FIG. 2(b); accordingly, the hybrid camera uses a square coated beam splitter with a side length of about 6.5 cm, and the multispectral camera measures 10 cm x 21 cm x 20 cm.
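The behaviour attributed to equation (5) (coinciding with the FIG. 2(b) scheme at β = 0 and reaching its minimum at β = θ/2) can be checked numerically. Equation (5) itself is not legible in this text, so the closed form below is a re-derivation from the 45-degree mirror geometry and should be treated as an assumption rather than the patent's original formula:

```python
import math

def splitter_length(d1: float, fov_deg: float, beta_deg: float) -> float:
    """Assumed reconstruction of equation (5): length of a 45-degree beam
    splitter when both cameras are rotated by beta degrees. The two terms
    correspond to the mirror segments on either side of the optical axis,
    as plotted separately in FIG. 3(a) and FIG. 3(b)."""
    a = math.radians(fov_deg) / 2.0
    b = math.radians(beta_deg)
    t1 = math.tan(a - b) / (1.0 - math.tan(a - b))
    t2 = math.tan(a + b) / (1.0 + math.tan(a + b))
    return math.sqrt(2.0) * d1 * (t1 + t2)

# theta = 50 degrees as in the text: sweep beta from 0 to theta/2 in 5-degree steps.
lengths = [splitter_length(1.0, 50.0, b) for b in range(0, 26, 5)]
# The length shrinks monotonically, so beta = theta/2 gives the smallest splitter.
```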
The internal layout of the visual information acquisition subsystem is shown in FIG. 4. After the incident light is split by the coated beam splitter, light in the 7.5-13 μm band passes through to the imaging device of the FLIR A65 camera, yielding the far infrared image; light in the 400-700 nm range is reflected toward the Flea3 camera, where an internal beam-splitting prism separates it again into near-infrared and visible beams, and the corresponding imaging devices of the Flea3 camera produce the visible and near-infrared images.
The far infrared camera, the visible light camera, and the beam splitter sit on a common base; the bodies of the visible light camera and the far infrared camera are fixed on their respective mounts, and each camera mount is connected to the base of the whole hybrid camera through two adjustable supports. A multi-degree-of-freedom adjustment frame on the adjusting table is attached to the camera support and is used to adjust the relative pose of the cameras so as to register the two images.
Let P be a point in three-dimensional space, and let p1 = (u1, v1, 1)^T and p2 = (u2, v2, 1)^T be the projections of P in the visible light camera and the far infrared camera, respectively. Let P1 = (x1, y1, z1)^T and P2 = (x2, y2, z2)^T be the coordinates of P in the visible camera coordinate system and the far infrared camera coordinate system. These quantities are related by

    z1·p1 = K1·P1,  z2·p2 = K2·P2    (6)
where K1 and K2 are the internal parameter matrices of the visible and far infrared cameras, respectively. If the two camera assemblies are perfectly registered, then z1 = z2 and P1 = P2, and equation (6) simplifies to

    p1 = K1·K2^-1·p2    (7)
Equation (7) gives the conversion between the visible light camera and the far infrared camera: the transformation matrix K1·K2^-1 maps the far infrared image onto the visible light image, and the transformation matrix K2·K1^-1 maps the visible light image onto the far infrared image. The internal parameters K1 and K2 are obtained by a camera calibration algorithm, and the images of the two cameras are undistorted according to K1 and K2. Since the resolution of the visible light camera is higher than that of the infrared camera, the undistorted visible image is resized to the same size as the undistorted infrared image. The poses of the two cameras are then adjusted with the adjuster, aligning first the principal points of the two images and then the remaining corner points, achieving pixel-level registration between the images. Each camera assembly and the beam splitter are fixed with strong adhesive; after the adhesive has fully cured, thermal expansion and contraction slightly change the relative pose between the two camera assemblies (a small rotation and a small translation). Suppose the transformation between P1 and P2 is P1 = R·P2 + t; equation (6) can then be rewritten as

    z1·p1 = K1·(R·K2^-1·z2·p2 + t)    (8)
In general, the translational offset between the cameras caused by thermal expansion and contraction is very small, and z1 is much larger than the magnitude of t, so with z1 ≈ z2 and t ≈ 0, equation (8) can be approximated as

    p1 ≈ K1·R·K2^-1·p2    (9)
Accurate registration between the multispectral images can therefore be achieved by estimating only the rotation matrix between the cameras, which is computed through a homography mapping, as shown in FIG. 5.
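The approximate mapping p1 ≈ K1·R·K2^-1·p2 is a homography, and applying it per pixel is mechanical 3x3 matrix algebra. A minimal pure-Python sketch (all matrix values are illustrative, not calibration results from the patent):

```python
import math

def mat3_mul(A, B):
    """3x3 matrix product on nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat3_vec(A, v):
    """3x3 matrix times 3-vector."""
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def K_inv(K):
    """Closed-form inverse of a pinhole intrinsic matrix [[fx,0,cx],[0,fy,cy],[0,0,1]]."""
    fx, fy, cx, cy = K[0][0], K[1][1], K[0][2], K[1][2]
    return [[1 / fx, 0.0, -cx / fx], [0.0, 1 / fy, -cy / fy], [0.0, 0.0, 1.0]]

def rot_z(deg):
    """Small in-plane rotation, e.g. a residual misalignment after gluing."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def map_pixel(K1, R, K2, p2):
    """Map a far-infrared pixel p2 = (u2, v2, 1) into the visible image via
    the homography approximation p1 ~ K1 * R * K2^-1 * p2 of equation (9)."""
    H = mat3_mul(mat3_mul(K1, R), K_inv(K2))
    x = mat3_vec(H, p2)
    return [x[0] / x[2], x[1] / x[2]]  # dehomogenize

K1 = [[500.0, 0.0, 320.0], [0.0, 500.0, 256.0], [0.0, 0.0, 1.0]]  # visible (resized), illustrative
K2 = [[480.0, 0.0, 320.0], [0.0, 480.0, 256.0], [0.0, 0.0, 1.0]]  # far infrared, illustrative
u, v = map_pixel(K1, rot_z(0.0), K2, [320.0, 256.0, 1.0])
# With zero rotation the FIR principal point maps exactly onto the visible principal point.
```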
The visual information processing system comprises a vision processing computer, a single-chip microcomputer, and data transmission lines; the vision processing computer contains a multi-core CPU, a GPU, a gigabit Ethernet card, and USB 3.0 interfaces. The vision processing computer is connected to the microcomputer through a data transmission line and sends it start-acquisition and stop-acquisition signals; the microcomputer is connected to each camera through data transmission lines and sends periodic trigger signals to each camera; each camera is connected to the vision processing computer through a gigabit Ethernet port (via a gigabit network cable) or a USB 3.0 interface, and transmits the visual information it acquires to the computer.
FIG. 6 is the flow chart of multispectral stereoscopic vision information processing. After the vision processing computer receives a data acquisition instruction, it first sends a signal to each camera unit in the visual information acquisition subsystem, performs camera initialization, configures the relevant parameters, and sets up a shared buffer in the computer for the temporary storage of multispectral visual data. After initialization, the vision processing computer signals the single-chip microcomputer to generate a stable periodic square wave, which serves as the trigger signal and is sent synchronously to every camera unit. Upon receiving the synchronous trigger signal, each camera unit acquires images and continuously streams them into the shared buffer in the vision processing computer. The vision processing computer receives each spectral image, uses the image timestamps to match the images acquired by the cameras at the same moment, and passes them to the image processing unit in the computer. After the image processing unit preprocesses, rectifies, and registers the multispectral data, stereo matching between the two multispectral camera units is performed to obtain the depth image of the scene. Finally, the vision processing computer outputs the pixel-aligned multispectral and depth images and returns to process the next frame of multispectral image data.
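The timestamp-matching step in the flow above can be sketched as follows. The patent only states that frames are matched by timestamp, so the nearest-neighbour grouping and the 5 ms tolerance below are assumptions for illustration:

```python
def match_by_timestamp(streams, tolerance_s: float = 0.005):
    """Group frames from several camera streams into synchronized sets.

    streams: dict of stream name -> sorted list of (timestamp_s, frame) tuples.
    For each frame of the first stream, picks the nearest-in-time frame from
    every other stream and keeps the set only if all lie within tolerance_s.
    A simplified sketch of the shared-buffer matching step.
    """
    names = list(streams)
    matched = []
    for t_ref, frame in streams[names[0]]:
        group = {names[0]: frame}
        ok = True
        for name in names[1:]:
            t_near, f_near = min(streams[name], key=lambda tf: abs(tf[0] - t_ref))
            if abs(t_near - t_ref) > tolerance_s:
                ok = False
                break
            group[name] = f_near
        if ok:
            matched.append((t_ref, group))
    return matched

# Hypothetical streams: two frames pair up; the third visible frame has no
# FIR frame within 5 ms and is dropped.
streams = {
    "visible_left": [(0.000, "v0"), (0.033, "v1"), (0.066, "v2")],
    "fir_left":     [(0.001, "f0"), (0.034, "f1"), (0.090, "f2")],
}
synced = match_by_timestamp(streams, tolerance_s=0.005)
```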

Claims (2)

1. A multispectral three-dimensional imaging system, characterized by comprising a visual information acquisition system and a visual information processing system, wherein the visual information acquisition system comprises a bracket (1); two identical multispectral cameras (2) are mounted on the bracket (1), the two multispectral cameras are aligned in the same direction, and their optical axes are parallel;
Each multispectral camera (2) comprises a visible light-near infrared (RGB-NIR) camera (3), a far infrared (FIR) camera (4), and a coated beam splitter (5), wherein the beam splitter transmits light in the far-infrared band and reflects light in the visible and near-infrared bands;
The visual information processing system comprises a vision processing computer, a single-chip microcomputer, and data transmission lines, wherein the vision processing computer is connected to the microcomputer through a data transmission line and sends it start-acquisition and stop-acquisition signals; the microcomputer is connected to each camera through data transmission lines and sends periodic trigger signals to each camera; and each camera is connected to the vision processing computer through a gigabit Ethernet port (via a gigabit network cable) or a USB 3.0 interface.
2. The multispectral stereo imaging system according to claim 1, wherein the optical axis of the visible-near infrared camera (3) is perpendicular to that of the far infrared camera (4), and the coated beam splitter (5) is square with a side length of 6.5 cm.
CN201920048300.5U 2019-01-11 2019-01-11 Multispectral three-dimensional imaging system Expired - Fee Related CN209787294U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201920048300.5U CN209787294U (en) 2019-01-11 2019-01-11 Multispectral three-dimensional imaging system


Publications (1)

Publication Number Publication Date
CN209787294U (en) 2019-12-13

Family

ID=68792268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201920048300.5U Expired - Fee Related CN209787294U (en) 2019-01-11 2019-01-11 Multispectral three-dimensional imaging system

Country Status (1)

Country Link
CN (1) CN209787294U (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109688342A (en) * 2019-01-11 2019-04-26 南京理工大学 A kind of multispectral stereo imaging system
CN110840386A (en) * 2019-12-19 2020-02-28 中国科学院长春光学精密机械与物理研究所 Visible light and near-infrared fluorescence 3D common imaging endoscope system based on single detector
CN113687369A (en) * 2021-07-14 2021-11-23 南京大学 Synchronous acquisition system and method for spectral information and depth information
CN117201949A (en) * 2023-11-08 2023-12-08 荣耀终端有限公司 Image processing method, electronic device, spectroscopic device, and storage medium
CN117768634A (en) * 2024-02-22 2024-03-26 长春市榣顺科技有限公司 vehicle-mounted stereoscopic vision camera based on binocular camera and laser radar and imaging method


Similar Documents

Publication Publication Date Title
CN209787294U (en) Multispectral three-dimensional imaging system
CA3157194C (en) Systems and methods for augmentation of sensor systems and imaging systems with polarization
CN109688342A (en) A kind of multispectral stereo imaging system
US7769205B2 (en) Fast three dimensional recovery method and apparatus
JP4495041B2 (en) A method for determining projector pixels associated with laser points on a display surface by pinhole projection
WO2016037486A1 (en) Three-dimensional imaging method and system for human body
WO2017121058A1 (en) All-optical information acquisition system
CN116194866A (en) Alignment of images from separate cameras using 6DOF pose information
JP2023526239A (en) Method and system for imaging a scene, such as a medical scene, and tracking objects in the scene
CN110425983B (en) Monocular vision three-dimensional reconstruction distance measurement method based on polarization multispectral
US20210118177A1 (en) Method and system for calibrating a plenoptic camera system
WO2012030815A2 (en) Single-shot photometric stereo by spectral multiplexing
CN111854636A (en) Multi-camera array three-dimensional detection system and method
US7491935B2 (en) Thermally-directed optical processing
CN115359127A (en) Polarization camera array calibration method suitable for multilayer medium environment
Zhang et al. Building a stereo and wide-view hybrid RGB/FIR imaging system for autonomous vehicle
WO2020163742A1 (en) Integrated spatial phase imaging
CN111272101A (en) Four-dimensional hyperspectral depth imaging system
CN103630118B (en) A kind of three-dimensional Hyperspectral imaging devices
JP2008128771A (en) Apparatus and method for simultaneously acquiring spectroscopic information and shape information
CN106644074B (en) A kind of 3 D stereo spectrum imaging system
JP2005275789A (en) Three-dimensional structure extraction method
US20220295038A1 (en) Multi-modal and multi-spectral stereo camera arrays
CN112396687B (en) Binocular stereoscopic vision three-dimensional reconstruction system and method based on infrared micro-polarizer array
CN204334737U (en) Camera assembly

Legal Events

Date Code Title Description
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20191213

Termination date: 20210111