CN210201926U - Double-fisheye panoramic image acquisition device

Publication number: CN210201926U
Application number: CN201920310777.6U
Authority: CN (China)
Prior art keywords: image, image sensor, panoramic, sensor, light
Legal status: Active (granted utility model)
Filing/priority date: 2019-03-12
Publication date: 2020-03-27
Inventors: Enze Zhang (张恩泽), Wenjie Lai (赖文杰), Zhifa Hu (胡志发), Yin Cheng (成茵)
Assignee: Chengdu Visionertech Co Ltd
Original language: Chinese (zh)

Classifications

  • Studio Devices (AREA)

Abstract

A double-fisheye panoramic image acquisition device. Through a purpose-built design of the fisheye imaging group, the device transmits different optical information along the light path of the same shared lens system and, through careful design, images both fisheye lens systems simultaneously onto the same image sensor for each type of optical information. This preserves imaging quality while keeping the lenses synchronized, and makes fusing different optical information far more practical. In addition, because each sensor captures the same type of optical information from both lenses simultaneously, panoramic optical information can be fused quickly and simply, greatly broadening the kinds of optical information that can be integrated in the panoramic field, effectively reducing the cost of panoramic equipment, and expanding its fields of application.

Description

Double-fisheye panoramic image acquisition device
Technical Field
The utility model relates to a double-fisheye panoramic image acquisition device for use in panoramic cameras, and in particular to the design of the two fisheye lenses and the transmission and processing of the final image.
Background
Existing optical structures for camera exposure compensation, TOF, and the like are usually realized as a single lens combined with a sensor. This form forces product designs to reserve space for multiple optical lens positions, so the structure of the whole product grows more complicated as its functional requirements increase. Moreover, in terms of functional implementation, information from multiple optical paths must be fused after being captured separately; fusion accuracy places high demands on mechanical alignment and algorithmic precision, and the resulting errors are often hard to eliminate. In the panoramic field, panoramic lenses are comparatively complex to begin with, so adding more optical channels to the optical structure makes the actual system design difficult to realize.
One mainstream optical structure in panoramic solutions is a back-to-back pair of fisheye lenses, with two main implementations. In the first, each fisheye lens module has its own independent image sensor; because two independently imaging optical systems must be processed, this approach introduces image-quality differences and time-synchronization problems, and cannot fully satisfy the demands of real-time panoramic shooting. In the second, the panoramic image is acquired by a single sensor through dual fisheye lenses and prism reflection, which solves the image-quality balance, time-synchronization, and data-volume problems of two independent imaging sensors. However, the dual-fisheye, prism-reflection, single-sensor scheme can so far only capture plain real-time panoramic images; extensions such as panoramic exposure or panoramic depth do not yet exist.
SUMMARY OF THE UTILITY MODEL
To overcome the difficulty that existing double-fisheye panoramic image acquisition devices have in fusing different optical information, the utility model provides a panoramic image acquisition device. Through a purpose-built design of the fisheye imaging group, the device transmits different optical information along the light path of the same shared lens system and, through careful design, images both fisheye lens systems simultaneously onto the same image sensor for each type of optical information, preserving imaging quality while keeping the lenses synchronized. This makes fusing different optical information far more practical. In addition, because each sensor captures the same type of optical information from both lenses simultaneously, panoramic optical information can be fused quickly and simply, greatly broadening the kinds of optical information that can be integrated in the panoramic field, effectively reducing the cost of panoramic equipment, and expanding its fields of application.
The technical scheme the utility model adopts to solve this problem is: two optical signals pass through the two ultra-wide-angle lens groups and a prism, and the light paths are projected onto sensors above and below the prism. Each single sensor acquires complete panoramic optical image information, and the two sets of panoramic image information are fused, improving the camera's imaging performance and effectively improving the usefulness of panoramic images across a variety of application requirements.
The beneficial effects of the utility model are that the specially designed panoramic optical lens module lets the acquisition device guarantee fisheye image quality and panoramic stitching while keeping the two fisheye lenses synchronized, enhancing the final displayed result, reducing the complexity of post-processing algorithms, and producing a superior panoramic effect through the fused processing of two kinds of optical information.
Drawings
FIG. 1 is a schematic sectional view of the double-fisheye panoramic image acquisition device (11: front-field wide-angle lens; 12: rear-field wide-angle lens; 2: main image sensor; 21: main image sensor filter; 3: auxiliary image sensor; 31: auxiliary image sensor filter; 4: triple prism)
FIG. 2 is a schematic view of the light path through the cube beam splitter to the main and auxiliary image sensors (411: antireflection film; 412: prism; 413: beam-splitting and reflecting surface; the structure is symmetrical)
FIG. 3 is a schematic diagram of imaging through the cube beam splitter onto the main and auxiliary image sensors (203: front-field lens image on the main image sensor; 214: front-field lens image on the auxiliary image sensor; 204: rear-field lens image on the main image sensor; 213: rear-field lens image on the auxiliary image sensor)
FIG. 4 is a schematic view of image registration (10: main image sensor image; 20: auxiliary image sensor image after registration; 5: image after registration)
FIG. 5 is a flow chart of the super-resolution construction algorithm
Detailed Description
In fig. 1 there are two groups of fisheye lenses whose optical axes coincide; they are mounted facing opposite directions at the two ends of the same optical axis, and a prism is placed on the line joining the optical centers of the two fisheye lens modules. The prism contains a beam splitter that reflects part of the incident light and transmits the rest. Light is guided through the lens structures 11 and 12 to the prism 4, whose surface is the beam splitter: part of each light path is reflected to the main image sensor 2, while the part guided into the prism is transmitted, internally reflected, and then projected onto the auxiliary image sensor 3. Matching optical filters above sensors 2 and 3 filter the optical signals each sensor receives, so different functions can be realized for different optical-path combinations in an actual scene.
The field angle of each fisheye lens's front group is larger than 180 degrees, and the optical axes of the front-field and rear-field wide-angle lenses coincide. Light reflected by the prism is projected onto the main image sensor above the prism, forming a dual circular fisheye picture; light transmitted by the prism is projected, after internal reflection and transmission within the prism, onto the auxiliary image sensor below it, forming another dual circular fisheye picture. The gap between the two circular fisheye images is controlled by the prism's position; in practice, the two images can be separated by an image isolation strip narrower than 0.3 mm at the prism.
The prism portion may be realized in the following form:
As shown in fig. 2, light enters from both ends (one end's light path is illustrated; the actual light path is left-right symmetrical) and first passes through the prism's entrance face 411, which is coated with an antireflection film so that as much light as possible passes through. The transmitted light reaches the splitting surface 413 and is divided: one part is reflected toward the main image sensor for imaging, while the other part is transmitted, reflected off the back of the opposite splitting surface, and projected directly onto the auxiliary image sensor for imaging. In practice, the beam splitter's material and manufacturing process can be chosen so that its reflected and transmitted light are distributed in a specified ratio. The resulting image layout is shown schematically in fig. 3: the left half of the main sensor's dual-fisheye image and the right half of the auxiliary sensor's dual-fisheye image are formed by the front-field lens's light path, the two pictures being rotated 180 degrees relative to each other; the right half of the main sensor's dual-fisheye image and the left half of the auxiliary sensor's dual-fisheye image are formed by the rear-field lens's light path, again rotated 180 degrees relative to each other.
Regarding the choice of the beam splitter's reflection/transmission ratio: in this light path, the main image sensor's imaging path undergoes one reflection at the beam splitter, while the auxiliary image sensor's path undergoes one transmission and one reflection. Ignoring energy losses at the beam splitter itself, let the incident light intensity be I, the beam splitter's reflectivity be α, and its transmissivity be 1 - α; then the main image sensor images with intensity I·α and the auxiliary sensor with intensity I·α·(1 - α)·β, where 0 ≤ α ≤ 1 and β (0 ≤ β ≤ 1) is a fixed ratio accounting for the light path's other energy losses. Since I·α ≥ I·α·(1 - α)·β, the auxiliary sensor always images with the weaker intensity; to make that intensity as large as possible, α is set to 0.5, giving the main image sensor an imaging intensity of 0.5·I and the auxiliary sensor 0.25·β·I. With β ≥ 0.95, attainable with actual materials, the auxiliary sensor receives roughly a quarter of the incident light; this loss can be compensated at the lens, since halving the working F-number (for example, from F2 to F1) quadruples the light intensity reaching the sensor. A short derivation follows.
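As a check on the paragraph above, a minimal derivation of the optimal split ratio restated in standard notation; the numerical value assumes the text's β ≈ 0.95:

```latex
% Auxiliary-sensor intensity I_aux(alpha) = I * alpha * (1 - alpha) * beta
% is maximized where its derivative with respect to alpha vanishes:
\[
\frac{d}{d\alpha}\Bigl[I\,\alpha(1-\alpha)\beta\Bigr]
  = I\beta\,(1 - 2\alpha) = 0
  \;\Longrightarrow\; \alpha = \tfrac{1}{2}.
\]
\[
I_{\text{main}} = \alpha I = 0.5\,I, \qquad
I_{\text{aux}} = \alpha(1-\alpha)\beta I = 0.25\,\beta I \approx 0.24\,I.
\]
```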
The optical filter above each image sensor is matched to the light that sensor must sense: an infrared cut filter for a visible-color sensor, an all-pass filter for a luminance sensor, an infrared-pass filter for an infrared imaging sensor, a matching infrared band-pass filter for a TOF sensor, and the like.
The image signals are connected to an external system through a transmission module. After the pictures from the two image sensors are acquired and synchronized in time, the synchronized pictures undergo the appropriate image-information fusion, yielding panoramic video and panoramic images that fuse multiple kinds of information in real time. During fusion, the main image sensor's image serves as the reference, and the picture acquired by the auxiliary image sensor is fused onto it.
Image fusion exploits the two images' correlation in space and time and their complementarity in information; the fused image describes the scene more comprehensively and clearly, which benefits both human viewing and automatic machine detection. Fusion generally includes an image registration step, so that images from different imaging sources are brought into positional agreement before information at the same positions is fused. Fusion methods include spatial-domain and frequency-domain approaches, among others; the following is a spatial-domain flow combining feature-point-based matching and fusion (a code sketch follows the list):
1. Feature point matching: to obtain an accurate correspondence between positions in the two images, "markers" are chosen from which the correspondence can be computed. The markers are matched feature-point pairs, i.e., pairs of points that depict the same object in different pictures. Matching first detects feature points, representative points with distinctive features in the image; many detection algorithms are available, such as Harris, SIFT, SURF, and ORB, and they yield the candidate points for matching between the two images. Matching then pairs points whose feature descriptors are close; common pairing algorithms include brute-force matching and FLANN matching, and the result is a series of feature-point pairs.
2. Homography computation: in computer vision, a plane homography is a projective mapping from one plane to another. Computing the homography matrix between the image to be registered and the projection plane determines the positional correspondence of the same content across the two images.
3. Image projection: every pixel of the auxiliary sensor's image is projected to its target position through the homography matrix, and the registered image is obtained by interpolation. The overlapping region of the two images is then cropped by comparing their areas, giving the final region of the fused image, as shown in fig. 4.
4. Image fusion: fusion extracts, combines, and displays the limited information of the two images; the fusion algorithm is chosen according to the specific fusion requirements.
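A compact sketch of steps 1 through 4 using OpenCV. ORB features, Hamming brute-force matching, RANSAC, and the 50/50 blend are illustrative choices, not the algorithms the patent mandates:

```python
import cv2
import numpy as np

def register_and_fuse(main_img, aux_img):
    # 1. Feature point detection and matching (ORB + brute-force Hamming).
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(main_img, None)
    kp2, des2 = orb.detectAndCompute(aux_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)

    # 2. Homography estimation from the matched point pairs (RANSAC).
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 3. Project the auxiliary image onto the main image's plane
    #    (warpPerspective interpolates bilinearly by default).
    h, w = main_img.shape[:2]
    aux_reg = cv2.warpPerspective(aux_img, H, (w, h))

    # 4. Fuse: a plain 50/50 blend stands in for the application-specific
    #    fusion rule (luminance/color, depth, super-resolution, ...).
    return cv2.addWeighted(main_img, 0.5, aux_reg, 0.5, 0)
```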
Embodiment one:
The main image sensor is a color image sensor and the auxiliary image sensor is a black-and-white image sensor (the two positions may be exchanged), with optical filters chosen to match each sensor. The system enhances the camera's illumination in night-vision environments through infrared fill light, typically an 850 nm or 940 nm infrared source; panoramic capture requires fill light covering the full field of view. The color sensor uses an infrared cut filter so that it captures the scene's true colors without a red cast, but in low-illumination environments the camera gathers limited visible light, so the color image is dark. Under the same imaging conditions, the black-and-white sensor (with an all-pass filter) images the infrared-lit scene and obtains a much better luminance image; fusing the black-and-white and color images then yields a better color image under low illumination.
Fusing the infrared-lit black-and-white image with the color image mainly combines the luminance of the black-and-white image with the color information of the color image. Under low illumination the color image suffers heavy noise interference, so noise suppression must be considered during fusion. This scheme proposes the following solution.
1. Image color-space conversion: luminance must be extracted from the black-and-white image and color information from the color image, but the usual image formats, RGB or YUV (a linear equivalent of RGB), do not expose luminance and color cleanly across their three channels. The HSL color format is therefore chosen; the images are converted and the required channels extracted:
h = 60° × ((g - b)/(max - min) mod 6),  if max = r
h = 60° × ((b - r)/(max - min) + 2),    if max = g
h = 60° × ((r - g)/(max - min) + 4),    if max = b

s = (max - min) / (1 - |2l - 1|)   (s = 0 when max = min)

l = (max + min) / 2
where h is the hue angle, s the saturation, and l the brightness; r, g, and b are the red, green, and blue components of the RGB three channels; max is the maximum of r, g, b, and min is the minimum of r, g, b.
2. From the HSL images, the L channel is taken from the black-and-white image and the H and S channels from the color image. Because the original color image is noisy, its color information needs noise suppression. Mean or Gaussian filtering would work, but blurs object edges and loses part of the picture information. Guided filtering is therefore used to suppress noise in the color picture, with the registered black-and-white luminance map as the guide, giving an edge-preserving denoising operation.
3. After noise suppression, the luminance and color channels are merged into the fused image's HSL representation, which is then converted back to RGB for the final result, using the following calculation (a code sketch of this embodiment follows the symbol definitions below):
C=(1-|2L-1|)×S
X=C×(1-|(H/60°)mod 2-1|)
m=L-C/2
(R', G', B') = (C, X, 0)  if 0° ≤ H < 60°
               (X, C, 0)  if 60° ≤ H < 120°
               (0, C, X)  if 120° ≤ H < 180°
               (0, X, C)  if 180° ≤ H < 240°
               (X, 0, C)  if 240° ≤ H < 300°
               (C, 0, X)  if 300° ≤ H < 360°
(R,G,B)=((R'+m)×255,(G'+m)×255,(B'+m)×255)
Wherein H is hue angle, S is saturation, and L is brightness; r denotes a red component, G denotes a green component, and B denotes a blue component.
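A minimal sketch of this embodiment's fusion, assuming the black-and-white and color frames are already registered. OpenCV's HLS conversion stands in for the hand-written HSL formulas; cv2.ximgproc.guidedFilter requires the opencv-contrib-python package, and the radius and eps values are illustrative:

```python
import cv2
import numpy as np

def fuse_mono_color(mono_gray, color_bgr):
    # Step 2: edge-preserving denoise of the noisy low-light color frame,
    # guided by the cleaner black-and-white luminance map.
    denoised = cv2.ximgproc.guidedFilter(
        guide=mono_gray, src=color_bgr, radius=8, eps=0.02 * 255 ** 2)

    # Split into H, L, S (note OpenCV's HLS channel order) and swap in
    # the black-and-white sensor's luminance for the L channel.
    h, l, s = cv2.split(cv2.cvtColor(denoised, cv2.COLOR_BGR2HLS))
    fused_hls = cv2.merge([h, mono_gray, s])

    # Step 3: inverse conversion back to an RGB-type color space.
    return cv2.cvtColor(fused_hls, cv2.COLOR_HLS2BGR)
```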
Embodiment two:
the main image sensor is a color image sensor, the auxiliary image sensor is a Time of flight (TOF) sensor, and the TOF sensor is used for imaging by matching with a modulated light source. The modulated light source is used for supplementing light to the surrounding environment, the TOF is used for analyzing the obtained modulated light source signal, so that a depth map of the surrounding environment is obtained, and the depth map is fused with visible light imaging content, so that depth information of panoramic imaging is obtained.
In this scheme, TOF imaging is an active imaging mode: the camera system emits laser light at the target and computes the distance from the time at which the sensor receives the reflected light. The light source is usually a laser, the photoelectric response uses a CMOS photosensitive array, and TOF, as one way of acquiring image depth information, is generally realized with modulated-exposure ranging. One complete measurement cycle includes two laser emissions and pixel exposures. The first laser emission is a pulse of width T_pulse; the receiver pixels begin exposing at the same moment, with exposure time equal to the pulse width, i.e., light emission is synchronized with the pixel exposure window. A pixel collects reflected photons only during the effective exposure time ΔT in which laser light reflected from the target arrives, so the pixel's output voltage V_F1 is proportional to ΔT. In the second exposure, the laser pulse width is unchanged but the pixel exposure is lengthened to cover the entire reflected pulse, so the output voltage V_F2 is proportional to the full pulse width T_pulse. Comparing the two gives:
ΔT = T_pulse × V_F1 / V_F2
The light's round-trip delay can thus be obtained, and the distance between the target and the TOF sensor is:
d = c × (T_pulse - ΔT) / 2 = (c × T_pulse / 2) × (1 - V_F1 / V_F2)
where c is the speed of light, 299 792 458 m/s. With a CMOS photosensitive array sensor, the distance can be resolved for every pixel, yielding depth information for all points on the imaging surface; once acquired, the depth of each corresponding pixel in the color picture can be determined accurately through registration.
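A numeric sketch of the two-exposure relation reconstructed above, applied per pixel; the array and function names are assumptions, not part of the patent:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(v_f1, v_f2, t_pulse):
    """v_f1, v_f2: per-pixel voltage maps; t_pulse: pulse width in seconds."""
    delta_t = t_pulse * v_f1 / np.maximum(v_f2, 1e-12)  # avoid div-by-zero
    return C * (t_pulse - delta_t) / 2.0

# Example: a 30 ns pulse with V_F1/V_F2 = 0.8 gives delta_t = 24 ns,
# a 6 ns round trip, hence a distance of about 0.9 m.
print(tof_depth(np.array([0.8]), np.array([1.0]), 30e-9))  # ~[0.8994]
```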
Fusing depth information with the color image serves general applications and the understanding of image content. Depth plus color enables accurate segmentation of the picture: for example, a K-means clustering algorithm can quickly and accurately segment the color picture's content based on the depth map, yielding precise edges, contours, and surfaces of targets in the picture, which in turn supports real-time analysis of image content, such as segmenting and recognizing a person's limbs in visual interaction (see the sketch below).
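One possible form of that depth-assisted segmentation, clustering joint color and depth features with OpenCV's K-means; the cluster count and depth weight are illustrative assumptions:

```python
import cv2
import numpy as np

def segment_rgbd(color_bgr, depth, k=4, depth_weight=2.0):
    h, w = depth.shape
    # Stack per-pixel features: B, G, R plus a weighted depth channel, so
    # pixels close in depth cluster together even across color edges.
    feats = np.hstack([
        color_bgr.reshape(-1, 3).astype(np.float32),
        depth_weight * depth.reshape(-1, 1).astype(np.float32),
    ])
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(feats, k, None, criteria, 5,
                              cv2.KMEANS_PP_CENTERS)
    return labels.reshape(h, w)  # per-pixel cluster index map
```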
Embodiment three:
the main image sensor is a color image sensor, the auxiliary image sensor is a CMOS image sensor for analyzing structured light, and the structured light CMOS image sensor is used by being matched with a structured light coding projection light source. The scene coding calibration is carried out on the structured light coding projection light source before the use, so that coding patterns at different distances are obtained. In the practical application process, each structured light sensor obtains the coding pattern corresponding to each pixel, and the depth information represented by the final pixel is confirmed through comparison with the calibration pattern. The structured light technology core lies in designing and coding patterns projected by a projection light source, corresponding distances can be directly analyzed from projection light spots shot by objects with different distances through the coding design, and then the coding mode can be realized in different modes according to application requirements in specific scenes.
Analyzing the coded image within the captured picture yields the image's depth information; after registration, the depth of each corresponding pixel in the color picture can be determined accurately. As in the previous embodiment, depth and color information together allow accurate segmentation of the picture into precise target edges, contours, and surfaces, enabling further real-time analysis of the image content.
Embodiment four:
Both the main image sensor and the auxiliary image sensor are color image sensors; the two may be configured with the same or different resolutions. The captured pictures are two independently exposed images; because the sensors' exposure settings differ in practice and each image carries some random noise, super-resolution fusion of the two images yields a super-resolved result, i.e., a sharper panoramic image.
Although the two sensors image through the same groups of lenses, their exposures are not inherently simultaneous, so a synchronization circuit must be designed into the imaging device to keep the sensors' imaging moments synchronized to a sufficient degree. The image data acquired after synchronization then undergo registration and fusion, following the flow of fig. 5 (a code sketch follows the list):
1. Motion detection: because time synchronization between the two registered low-resolution images may have residual error, picture content may be shifted by motion, i.e., moving parts of the scene do not coincide against the same background. Moving objects are detected with the pyramidal Lucas-Kanade optical-flow estimation method.
2. High-definition grid construction: sub-pixel optical-flow accuracy is obtained from the optical-flow method, and the resulting sub-pixel flow offsets are used to estimate the pixel coordinates corresponding to the high-resolution grid data.
3. Image interpolation: once the corresponding pixel coordinates of the high-resolution grid are known, kernel regression interpolation (KRI) is used to eliminate the image's geometric distortion.
4. Image reconstruction: to eliminate color and lighting inconsistencies between images from different sources, the image is convolved with an unsharp-mask operation, balancing the color and brightness of the reconstructed high-definition image.
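A loose sketch of the four steps, assuming two synchronized grayscale frames. Dense Farneback flow substitutes for the pyramidal Lucas-Kanade estimate, bicubic remapping for kernel regression interpolation, and an unsharp mask for the reconstruction step:

```python
import cv2
import numpy as np

def super_resolve_pair(ref_gray, aux_gray, scale=2.0):
    # 1. Motion detection: sub-pixel dense flow from ref to aux.
    flow = cv2.calcOpticalFlowFarneback(ref_gray, aux_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # 2./3. Warp aux onto ref's grid using the flow, then interpolate both
    # onto the high-resolution grid (bicubic as a stand-in for KRI).
    h, w = ref_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    aux_warp = cv2.remap(aux_gray, map_x, map_y, cv2.INTER_CUBIC)
    size = (int(w * scale), int(h * scale))
    hi_ref = cv2.resize(ref_gray, size, interpolation=cv2.INTER_CUBIC)
    hi_aux = cv2.resize(aux_warp, size, interpolation=cv2.INTER_CUBIC)
    hi = 0.5 * (hi_ref.astype(np.float32) + hi_aux.astype(np.float32))
    # 4. Reconstruction: unsharp masking restores edge contrast.
    blur = cv2.GaussianBlur(hi, (0, 0), sigmaX=1.5)
    return np.clip(hi + 1.0 * (hi - blur), 0, 255).astype(np.uint8)
```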

Claims (5)

1. A double-fisheye panoramic image acquisition device, comprising fisheye lens modules, a main image sensor, an auxiliary image sensor, optical filters, a beam-splitting prism, and a signal transmission module, characterized in that:
the device comprises two groups of fisheye lens modules whose optical axes coincide, mounted facing opposite directions at the two ends of the same optical axis; a prism is arranged on the line joining the optical centers of the two fisheye lens modules and contains a beam splitter; light entering through the two fisheye lenses reaches the beam splitter through the prism's antireflection face, and light reflected by the splitting surface is projected onto the main image sensor above the prism, forming a dual circular fisheye picture; light transmitted through the splitting surface is conveyed, after internal reflection within the prism, to the auxiliary image sensor below the prism, forming another dual circular fisheye picture; the sensors are connected to external processor components through the signal transmission module;
after the pictures from the two image sensors are acquired and synchronized in time, the synchronized pictures undergo the appropriate image-information fusion, yielding a multi-information-fused panoramic video or panoramic image; during fusion, the main image sensor's image serves as the reference, and the picture acquired by the auxiliary image sensor is fused onto it.
2. The double-fisheye panoramic image acquisition device according to claim 1, wherein the main image sensor is a color image sensor and the auxiliary image sensor is a black-and-white image sensor; infrared fill light raises the black-and-white image's imaging brightness in a low-illumination environment, and fusing the two images yields a color panoramic image under low illumination.
3. The double-fisheye panoramic image acquisition device according to claim 1, wherein the main image sensor is a color image sensor and the auxiliary image sensor is a TOF sensor; a modulated light source illuminates the environment, the TOF sensor captures the returned modulated light and demodulates it to obtain a panoramic environment depth map, which is combined with the color image acquired by the color image sensor to produce a panoramic image containing the depth information of each object.
4. The double-fisheye panoramic image acquisition device according to claim 1, wherein the main image sensor is a color image sensor and the auxiliary image sensor is a structured-light CMOS sensor; a structured-light coded projector illuminates the spatial scene, the structured-light CMOS sensor obtains the coding pattern at each pixel and analyzes it to obtain a panoramic environment depth map, which is combined with the color image acquired by the color image sensor to produce a panoramic image containing the depth information of each object.
5. The double-fisheye panoramic image acquisition device according to claim 1, wherein the main image sensor and the auxiliary image sensor are both color image sensors; each acquires its own image, and super-resolution fusion of the two acquired images yields a sharper panoramic image.
CN201920310777.6U, filed 2019-03-12 (priority 2019-03-12), Double-fisheye panoramic image acquisition device, granted as CN210201926U (Active)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201920310777.6U | 2019-03-12 | 2019-03-12 | Double-fisheye panoramic image acquisition device


Publications (1)

Publication Number | Publication Date
CN210201926U | 2020-03-27

Family ID: 69879748

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201920310777.6U (granted as CN210201926U, Active) | Double-fisheye panoramic image acquisition device | 2019-03-12 | 2019-03-12

Country Status (1)

CN: CN210201926U (en)


Legal Events

Code | Title
GR01 | Patent grant