CN215495425U - Compound eye imaging system and vehicle using same - Google Patents


Info

Publication number: CN215495425U
Authority: CN (China)
Prior art keywords: compound, eye imaging, imaging system, vehicle, lens
Prior art date: 2021-06-01
Legal status: Active
Application number: CN202121214038.0U
Other languages: Chinese (zh)
Inventor: 黄奇卿
Current Assignee: Individual
Original Assignee: Individual
Priority date / filing date: 2021-06-01
Publication date: 2022-01-11
Application filed by Individual
Priority to CN202121214038.0U
Application granted

Abstract

The utility model discloses a compound-eye imaging system and a vehicle using the same. The compound-eye imaging system comprises a first lens, at least four second lenses, a storage unit and an image processing controller. The storage unit stores a plurality of source image files captured by the first lens or the second lenses; a reference length is displayed in each source image file captured in the first shooting area, and the image processing controller identifies the image features of at least one object to be measured in the source image files. The compound-eye imaging system can thereby assist vehicle monitoring and enable automatic driving, giving the vehicle 3D spatial recognition and 3D multi-object monitoring and raising the level of unmanned monitoring.

Description

Compound eye imaging system and vehicle using same
Technical Field
The utility model relates to a compound-eye imaging system with multiple lenses, and in particular to a compound-eye imaging system and its image processing method that can be used in different industrial equipment such as vehicle monitoring, AI robots, automatic driving, floor-sweeping robots, aerial drones and multi-axis machining tools.
Background
In recent years, self-driving has become a hot topic; traditional carmakers such as GM, Volvo and Toyota, as well as newcomers such as Tesla, Uber, Waymo and Nuro, are all investing in it. As the "eyes" of a self-driving car, the vehicles tested on the road are equipped with various sensing systems such as image sensing, among which the LiDAR sensor holds the key position. LiDAR is used to measure the distance to surrounding vehicles and objects, build a 3D spatial image and recognize the spatial scene. Its main application fields currently include remote measurement, factory automation equipment, and disaster prevention and monitoring for social infrastructure such as railways and tunnels.
Compared with an image sensing device that works in the visible-light range, using near-infrared as the light source makes the sensor less susceptible to interference from ambient light. Such sensing offers several advantages, including recognition throughout the range where reflected light can be received (roughly 200 or even 300 meters for automotive products) regardless of the intensity of ambient light or shadows. Sensors such as infrared and millimeter-wave sensors are mainly used to measure distance, and building 3D vision from image sensing requires multiple image sensing devices; LiDAR, by contrast, can build a scan of the three-dimensional environment with a single sensor, so it better maintains measurement accuracy in long-range use. In addition, international players such as Google, Waymo and Uber are also developing sensor-fusion technologies that include radar, integrating the information returned by radar sensing with the information detected by other types of sensors and applying different error-correction logics to improve overall recognition accuracy, so as to serve as the base data for the artificial-intelligence deep-learning training and inference that future self-driving will require.
The main structure of a LiDAR consists of a light-emitting module that irradiates near-infrared laser toward the surroundings and a light-receiving module that receives the light reflected from objects; the principle for building a three-dimensional model of the environment is to calculate distance from the time difference between the emitted and the received light. However, LiDAR accuracy is easily affected by rain and fog, and LiDAR cannot identify the material of an object, so signs, billboards or object images cannot be accurately judged and read. In addition, the time-consuming adjustment and calibration of LiDAR make mass production difficult, so the cost is high and large-scale adoption is not easy. This is an important disadvantage.
Therefore, how to measure the surrounding scenery, clearly identify its material and build a 3D digital spatial perception system on the premise of controllable cost, so that the system can be used in different industrial equipment such as vehicle monitoring, automatic driving, AI robots, floor-sweeping robots, aerial drones and multi-axis machining tools, is the goal of those skilled in the art.
SUMMARY OF THE UTILITY MODEL
The main purpose of the utility model is to make the surrounding scenery measurable and its material clearly identifiable on the premise of controllable cost, so as to build a 3D digital spatial perception system.
Another purpose of the utility model is to assist different industrial equipment such as vehicle monitoring, AI robots, automatic driving, floor-sweeping robots, aerial drones and multi-axis machining tools, so that the equipment has 3D spatial recognition and 3D multi-object monitoring, thereby raising the level of industrial unmanned monitoring.
In order to solve the above and other problems, the utility model provides a compound-eye imaging system, which includes a first lens, at least four second lenses, a storage unit and an image processing controller. The first lens has a fan-shaped first shooting area; the second lenses are distributed around the first lens, and each second lens has a fan-shaped second shooting area. The central shooting direction of the first shooting area and the central shooting direction of each second shooting area form an angle, and each second shooting area partially overlaps the first shooting area. The storage unit stores a plurality of source image files captured by the first lens or the second lenses. The image processing controller identifies the image features of at least one object to be measured in the source image files, parses the source image files captured at the same time point to generate a corresponding 3D primitive, and then derives a portable image file with 3D spatial information from the 3D primitives generated at different time points.
In the compound-eye imaging system described above, a reference length is displayed in each source image file captured in the first shooting area, and the image processing controller uses the reference length as a scale to construct a 3D spatial digital model.
In the above compound-eye imaging system, the storage unit is coupled to the image processing controller, so that the 3D primitive or the portable image file is transmitted to and stored in the storage unit.
In the compound-eye imaging system described above, the storage unit stores at least one primitive template, and the primitive template is a two-dimensional image of all or part of the features of the object to be measured.
The compound-eye imaging system further comprises at least one warning light coupled to the image processing controller, so that the image processing controller can directly control the warning light after calculating and analyzing the object to be measured in the source image files.
In order to solve the above and other problems, the utility model further provides a vehicle using the compound-eye imaging system described above, wherein a plurality of compound-eye imaging systems are distributed on the roof, the front edge, the rear edge or the two sides of the vehicle.
Therefore, the compound-eye imaging system and the vehicle using it make the surrounding scenery measurable and its material clearly identifiable on the premise of controllable cost, building a 3D digital spatial perception system that in turn assists different industrial equipment such as vehicle monitoring, automatic driving, AI robots, floor-sweeping robots, aerial drones and multi-axis machining tools, so that the equipment has 3D spatial recognition and 3D multi-object monitoring and the level of industrial unmanned monitoring is raised.
For a better understanding of the nature and technical content of the utility model, reference should be made to the following detailed description taken in conjunction with the accompanying drawings, which are provided for illustration and description only and are not intended to limit the utility model.
Drawings
Fig. 1A is a schematic structural diagram of a compound-eye imaging system.
Fig. 1B is a diagram illustrating a usage state of the compound-eye imaging system applied to a vehicle.
Fig. 1C is a functional block diagram of a compound-eye imaging system.
Fig. 2A to 2E are flowcharts illustrating an image processing method of a compound-eye imaging system.
Fig. 3 is a schematic diagram illustrating the image processing controller identifying image features in the source image file.
Fig. 4A is a schematic diagram illustrating a vehicle equipped with the compound-eye imaging system calculating a distance by the reference length.
Fig. 4B is a schematic diagram of the image processing controller calculating distance using the vehicle's reference length in the source image file.
Fig. 5 is a schematic diagram illustrating triangulation application of the compound-eye imaging system.
Fig. 6A is a schematic view illustrating an azimuth angle of an object to be measured by a vehicle equipped with the compound-eye imaging system.
Fig. 6B is a schematic diagram illustrating the image processing controller calculating the azimuth angle of the object to be measured through the source image file.
Fig. 7A to 7C are schematic diagrams illustrating the situation perception of the surrounding scene of the vehicle equipped with the compound-eye imaging system in the 3D space digital model.
Fig. 8 is a schematic diagram of a scene in which the image processing controller must perform partial-image blind-spot compensation.
Fig. 9 is a functional block diagram of another embodiment of a compound-eye imaging system.
Description of reference numerals: 50-compound-eye imaging system; 51-first lens; 52-second lens; 53-first shooting area; 53A-central shooting direction; 54-second shooting area; 54A-central shooting direction; 55-storage unit; 56-image processing controller; 57-warning light; 91-vehicle; 92-object to be measured; 61-source image file; 62-3D spatial digital model; 63-image feature; 64-3D primitive; 65-portable image file; 66-primitive template; 71-reference length; 72-azimuth scale marking line; 74-multiple scale marking line; 75-spatial grid lines; h1-vertical distance; d-lens spacing; α, β, Θ-angles.
Detailed Description
Referring to fig. 1A to 1C: fig. 1A is a schematic structural diagram of the compound-eye imaging system, fig. 1B illustrates the compound-eye imaging system in use on a vehicle, and fig. 1C is a functional block diagram of the compound-eye imaging system. As shown, a compound-eye imaging system 50 includes a first lens 51, four second lenses 52, a storage unit 55 and an image processing controller 56. The first lens 51 has a fan-shaped first shooting area 53; the second lenses 52 are distributed around the first lens 51, and each second lens 52 has a fan-shaped second shooting area 54. An angle Θ is formed between the central shooting direction 53A of the first shooting area 53 and the central shooting direction 54A of each second shooting area 54, and each second shooting area 54 partially overlaps the first shooting area 53. The surfaces of the first lens 51 and the second lenses 52 of the compound-eye imaging system 50 are arranged on an arc, so that the central shooting direction 53A of the first shooting area 53 and the central shooting directions 54A of the second shooting areas 54 do not point in the same direction; the overall coverage of the first shooting area 53 and the second shooting areas 54 is therefore larger, leaving fewer shooting dead angles. The storage unit 55 stores a plurality of source image files 61 captured by the first lens 51 or the second lenses 52. The storage unit 55 is coupled to the image processing controller 56, so that the 3D primitives 64 and portable image files 65 described below can be transmitted to and stored in the storage unit 55. The image processing controller 56 parses the source image files 61 captured at the same time point and generates a corresponding 3D primitive 64, and then derives a portable image file 65 with 3D spatial information from the 3D primitives 64 generated at different time points. The storage unit 55 also stores a plurality of primitive templates 66, where a primitive template 66 is a two-dimensional image of some or all of the features of a certain object 92 to be measured in the source image files 61. A source image file 61 is a file in an image format captured by the first lens 51 or a second lens 52; the format includes, but is not limited to, JPG, JPEG, PSD, TIFF, PDF, BMP, EPS, PNG, GIF, PCX and the like. A 3D primitive 64 is a digitized file with three-dimensional multi-view visualization, vectorization and resolution. A portable image file 65 is a portable electronic file format carrying 3D spatial information, which can be transmitted over a network to the cloud or to other machine equipment for storage, analysis and application; the 3D spatial information includes position information (e.g., positioning by GPS, compass or another satellite system), directional velocity vector information, or acceleration vector information in 3D space.
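The lens geometry can be sketched in a few lines of Python; the Lens class, the 90-degree fan width and the offset angles below are illustrative assumptions, not parameters given in the patent:

```python
# Minimal sketch of the lens geometry described above; the class and all
# angle values are illustrative assumptions, not taken from the patent.
from dataclasses import dataclass

@dataclass
class Lens:
    azimuth_deg: float   # central shooting direction (degrees)
    fov_deg: float       # width of the fan-shaped shooting area

def regions_overlap(a: Lens, b: Lens) -> bool:
    """True if the two fan-shaped shooting areas partially overlap."""
    gap = abs(a.azimuth_deg - b.azimuth_deg)
    gap = min(gap, 360.0 - gap)               # wrap around the circle
    return gap < (a.fov_deg + b.fov_deg) / 2  # fans meet before their edges part

# One first lens plus four second lenses, each offset by an angle theta so
# that the combined coverage is wider and has fewer shooting dead angles.
first = Lens(azimuth_deg=0.0, fov_deg=90.0)
seconds = [Lens(azimuth_deg=a, fov_deg=90.0) for a in (-60.0, -30.0, 30.0, 60.0)]
assert all(regions_overlap(first, s) for s in seconds)
```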
The compound-eye imaging system 50 can be applied to different industrial equipment such as vehicle monitoring aids, automatic driving, floor-sweeping robots, AI robots, aerial drones and multi-axis machining tools, giving the equipment 3D spatial recognition and 3D multi-object monitoring and raising the level of industrial unmanned monitoring. In the following, the image processing method of the compound-eye imaging system 50 is described for the case where the system is applied to a vehicle 91 for monitoring assistance. As shown in fig. 1B, a plurality of compound-eye imaging systems 50 are distributed on the roof, the front edge, the rear edge or the two sides of the vehicle 91 to monitor the 3D spatial conditions around and above the vehicle 91. It should be particularly noted that installing compound-eye imaging systems 50 around the vehicle 91 allows the vehicle 91 to build 3D situational awareness of its surroundings and to know the size, appearance, shape, speed and acceleration of objects within approximately 200 meters, so that the vehicle 91 can respond to surrounding traffic conditions in advance and prevent accidents. In addition, a compound-eye imaging system 50 installed on the roof of the vehicle 91 can monitor objects above the vehicle 91: for example, if the vehicle 91 frequently travels through areas prone to rockfall or mountain debris flows, the compound-eye imaging system 50 can give early warning of rockfall, landslides, slope movement and debris flows so that the vehicle 91 can evade or stop. The value of the compound-eye imaging system 50 and its image processing method for the vehicle 91 is that the vehicle 91 gains situational awareness of the surrounding 3D scenery, improving the controllability and accuracy of automatic driving.
To achieve the above purpose of improving the controllability and accuracy of automatic driving, the utility model further provides an image processing method for the compound-eye imaging system 50. Referring to fig. 2A to 2E, which are flowcharts of the image processing method of the compound-eye imaging system 50: as shown in fig. 2A, source image files 61 at multiple time points are captured by the first lens 51 and the second lenses 52 (step A01), and the source image files 61 are identified and analyzed by the image processing controller 56 to generate the 3D primitive 64 corresponding to at least one object 92 to be measured (step A02). As shown in fig. 2B, the detailed implementation of step A02 includes the following sub-steps. First, an image feature 63 of an object 92 to be measured is extracted from the source image files 61 (step A021); as shown in fig. 3, the image processing controller 56 can identify the image features 63 of the objects 92 to be measured from the source image files 61 captured by the compound-eye imaging system 50. An object 92 to be measured may be a car, a truck, a motorcycle, a traffic sign, a utility pole, an overpass, a roadside tree and so on. Different objects 92 have different image features 63, where an image feature 63 is a planar image feature of the object 92, including but not limited to color features, texture features, gray-value features, shape features, spatial-correspondence features, local features or global features. Taking concrete objects as examples: the image features 63 of a roadside tree are its leaves and trunk; the image features 63 of a car are the body contour and the tires; the image features 63 of a truck are the container or the driver's cab above the tires. By recognizing the image features 63 of the object 92, the compound-eye imaging system 50 can therefore tell whether the object 92 in front of the vehicle 91 is a motorcycle, a car or a pedestrian. Next, as shown in fig. 2B, the image features 63 are compared with primitive templates 66 of different viewing angles in the storage unit 55 (step A022) to determine whether the image features 63 of the object 92 match a primitive template 66; if so, a 3D primitive 64 of the object 92 is generated (shown in fig. 1C; see step A023), so that both the 3D primitive 64 and the matching primitive templates 66 in the storage unit 55 correspond to that particular object 92. Here, a primitive template 66 is a file formed by combining two-dimensional images of an object 92 from different viewing angles (i.e., a set of complete images of the object 92 at different viewing angles), which may be a built-in template file or one built from captured big-data imagery; for example, a primitive template 66 may be a set of comparison images of a specific object (e.g., a car, a motorcycle, a truck, a traffic sign, a roadside tree, etc.) from different perspectives, its purpose being to give the compound-eye imaging system 50 reference image features 63 from multiple viewing angles. Thus the image processing controller 56 only needs to compare a few source image files 61 at specific angles for the compound-eye imaging system 50 to identify and confirm what the object 92 is, even down to the make and style of a car.
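The template comparison of step A022 resembles standard template matching. A hedged sketch using OpenCV follows; the library choice, the 0.8 threshold and the file names are assumptions for illustration, since the patent does not specify an implementation:

```python
# Hedged sketch of step A022 as OpenCV template matching; threshold and
# paths are illustrative assumptions, not values from the patent.
import cv2

def matches_template(source_path: str, template_path: str,
                     threshold: float = 0.8) -> bool:
    """Return True if a primitive-template view is found in the source
    image file with sufficiently high normalized correlation."""
    source = cv2.imread(source_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(source, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    return max_val >= threshold

# Usage idea: compare one captured frame against several viewing-angle
# templates of a "car" primitive; a hit on any view triggers step A023.
```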
Furthermore, a primitive template 66 can hold a local feature of a certain object 92 to be measured in the source image files 61, so that partial-image comparison and blind-spot compensation can be performed through the primitive template 66 with that local feature. Referring to fig. 8, which illustrates a scene in which the image processing controller must perform partial-image blind-spot compensation: as shown in fig. 8, in the first shooting area 53 of the compound-eye imaging system 50, the pedestrian object 92 is partially blocked by a box van in front, so that the image processing controller 56 cannot fully recognize the pedestrian. At this time, a comparison is made against the primitive template 66 holding the partial image (i.e., the image of the local features of the pedestrian object 92), so that the image processing controller 56 can identify and confirm the blocked object in the first shooting area 53 through feature comparison. In this way, the compound-eye imaging system 50 can know in advance what object is behind the blocked area, achieving advance prediction and early warning.
It should be added that the core technology for parsing the source image files 61 and for identifying and comparing the image features 63 is image matching. Image matching means identifying conjugate (same-name) features between two or more images through a matching algorithm. For example, in two-dimensional image matching, the correlation coefficients of equal-size windows in a target area and a search area are compared, and the window center point with the largest correlation coefficient in the search area is taken as the conjugate feature; that is, a statistical method is used to find the degree of correlation between the signals. The essence is to apply a matching criterion to achieve the best search result where the basic primitives are correlated and similar. In general, image matching can be divided into gray-scale-based matching and feature-based matching.
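The window-correlation matching described here can be written out directly. Below is a minimal NumPy sketch of gray-scale-based matching; the window size and the exhaustive scan are illustrative simplifications:

```python
# Gray-scale window matching as described above: the correlation
# coefficient of equal-size windows, scanned over a search area.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation coefficient of two equal-size gray windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(target: np.ndarray, search: np.ndarray) -> tuple[int, int]:
    """Slide the target window over the search area and return the window
    center with the highest correlation, i.e. the conjugate feature."""
    th, tw = target.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            score = ncc(target, search[y:y + th, x:x + tw])
            if score > best:
                best, best_pos = score, (y + th // 2, x + tw // 2)
    return best_pos
```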
Then, after the object 92 to be measured is confirmed and the corresponding 3D primitive 64 is generated, the distance of the object 92 is calculated (step A03). As shown in fig. 2C, the distance can be obtained by first capturing source image files 61 at the same time point through the first lens 51 or the second lenses 52 and measuring the distance of the object 92 (step A031), then measuring the azimuth angle and pitch angle of the object 92 from the source image files 61 (step A032), so that the spatial relationship of the object 92 can be calculated and confirmed (step A033). As shown in fig. 4A and 4B, the distance measurement of step A031 can compare, in the source image file 61, the relative distance between a truck or car object 92 and the vehicle 91 carrying the compound-eye imaging system 50 against the reference length 71 of the vehicle 91, measuring the relative distance in units of the reference length 71. That is, at one, two, three and four times the reference length 71 in fig. 4A, multiple scale marking lines 74 of the reference length 71 are displayed or marked in the source image file 61 of fig. 4B, so that the image processing controller 56 of the compound-eye imaging system 50 can compare and calculate the distance of the truck or car object 92. As shown in fig. 4A and 4B, the reference length 71 of the vehicle 91 is preferably the distance from the mounting point of the compound-eye imaging system 50 to the foremost end of the vehicle 91; in other embodiments, the reference length 71 and its multiples in the source image file 61 captured by the first lens 51 can be realized by a built-in software scale (a built-in standard fixed length) or by marking a physical scale on the outer surface of the first lens 51. In addition, as shown in fig. 4B, if the object 92 occupies a larger area in the source image file 61 (its actual volume may be larger, or it may be closer to the vehicle 91 carrying the compound-eye imaging system 50) and crosses multiple scale marking lines 74, the image processing controller 56 compares the scale marking lines 74 with the position of the shape center point of the object's outline to determine the distance of the object 92.
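As a toy illustration of the reference-length ranging, the distance follows from how many scale marking lines the object's shape center point sits past; the numeric values below are assumptions, not from the patent:

```python
# Sketch of the reference-length ranging in step A031; both values here
# (1.8 m reference length, third multiple line) are illustrative.
def distance_from_scale(reference_length_m: float, multiple: float) -> float:
    """Distance implied by the multiple scale marking line the object's
    shape center point has reached in the source image file."""
    return reference_length_m * multiple

# If the reference length is 1.8 m and the truck's shape center point lies
# at the third multiple scale marking line:
print(distance_from_scale(1.8, 3.0))  # -> 5.4 (meters)
```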
Besides using the reference length 71 of the vehicle 91, the distance to a truck or car object 92 can also be calculated by triangulation. As shown in fig. 5, with the lens spacing d between the first lens 51 and a second lens 52 of the compound-eye imaging system 50, the vertical distance h1 between a motorcycle object 92 and the compound-eye imaging system 50 can be obtained by trigonometry and triangulation as h1 = d·sin α·sin β / sin(α + β). That is, the lens spacing d between the first lens 51 and the second lens 52 is known, the angles α and β are observed and measured by the compound-eye imaging system 50, and the vertical distance h1 follows. The object 92 to be measured may equally be a car, a truck, a pedestrian, a roadside tree, a traffic sign or the like.
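The triangulation of fig. 5 transcribes directly into code; the lens spacing and angles in the example call are assumed values for illustration:

```python
# Direct transcription of h1 = d*sin(alpha)*sin(beta)/sin(alpha+beta),
# the triangulation formula from fig. 5.
import math

def vertical_distance(d: float, alpha_deg: float, beta_deg: float) -> float:
    """Perpendicular distance to the object from the lens baseline,
    given lens spacing d and the two observed base angles."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    return d * math.sin(a) * math.sin(b) / math.sin(a + b)

# e.g. lenses 0.12 m apart, both sighting the motorcycle at 85 degrees
# (assumed numbers): the result is roughly 0.69 m.
print(vertical_distance(0.12, 85.0, 85.0))
```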
The azimuth or pitch measurement of step A032 can be made through azimuth scale marking lines 72 or pitch scale marking lines in the source image file 61. For example, as shown in fig. 6A and 6B, looking forward from the compound-eye imaging system 50 over the front of the vehicle 91, the image of the source image file 61 can be divided into a plurality of regions by the azimuth scale marking lines 72, and the azimuth angle of the object 92 relative to the compound-eye imaging system 50 can be read from the position of the object 92. If the object 92 occupies a large area in the source image file 61 (its actual volume may be large, or it may be close to the vehicle 91 carrying the compound-eye imaging system 50, as shown in fig. 6B) and spans multiple azimuth scale marking lines 72 or pitch scale marking lines, the image processing controller 56 uses the shape center point of the object's outline to determine its azimuth or pitch. In the same way, the image processing controller 56 can divide the source image file 61 into a plurality of pitch-angle regions by pitch scale marking lines and determine the pitch position of the object 92. The image processing controller 56 thus obtains the distance, azimuth and pitch of the object 92 and, according to the spherical coordinate principle, knows and confirms the spatial relationship between the object 92 and the vehicle 91, completing step A033.
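A minimal sketch of how distance, azimuth and pitch combine into one spatial position under the spherical coordinate principle; the axis convention (x forward, y lateral, z up) is an assumption, since the patent states only that the three quantities fix the spatial relationship:

```python
# Step A033 as a spherical-to-Cartesian conversion; axis convention assumed.
import math

def spherical_to_cartesian(r: float, azimuth_deg: float,
                           pitch_deg: float) -> tuple[float, float, float]:
    """Convert distance r, azimuth and pitch into a 3D position relative
    to the compound-eye imaging system."""
    az = math.radians(azimuth_deg)
    el = math.radians(pitch_deg)
    x = r * math.cos(el) * math.cos(az)   # forward
    y = r * math.cos(el) * math.sin(az)   # lateral
    z = r * math.sin(el)                  # vertical
    return x, y, z

# A car 12 m away, 15 degrees off-axis, level with the camera (assumed):
print(spherical_to_cartesian(12.0, 15.0, 0.0))
```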
Next, the 3D motion vector of the object 92 is calculated (step A04): the positions of the object 92 at different time points are obtained through step A03 (step A041), the motion vector of the object 92 is then calculated (step A042), and a plurality of motion vectors at different time points can be displayed continuously (step A043). In step A03, the purpose of using source image files 61 obtained at the same time point from different lenses is to refine the position calculation of a distant object 92 through the spatial position differences between the first lens 51 and the second lenses 52; in essence, the object 92 is located multiple times from multiple lens positions, and accuracy is improved through repeated calculation. In step A04, the movement track and motion vector of a specific object 92 are obtained from source image files 61 captured at different time points (the position change across time points is the motion vector of the object 92).
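Step A042 amounts to finite differences over the per-frame positions produced by step A03; a NumPy sketch with assumed sample data (the time points and positions below are fabricated purely to show the arithmetic):

```python
# Motion vectors as first differences of position over time; a second
# difference gives acceleration. All sample values are assumed.
import numpy as np

t = np.array([0.0, 0.1, 0.2, 0.3])            # capture time points (s)
p = np.array([[0.0, 20.0, 0.0],               # object positions (m) from
              [0.5, 19.0, 0.0],               # step A03, one row per frame
              [1.0, 18.0, 0.0],
              [1.5, 17.0, 0.0]])

v = np.diff(p, axis=0) / np.diff(t)[:, None]      # motion vectors (m/s)
a = np.diff(v, axis=0) / np.diff(t[1:])[:, None]  # acceleration (m/s^2)
print(v[-1], a[-1])
```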
Then, the error of the 3D motion vector of the object 92 is selectively compensated (step A05). The error correction includes the following sub-steps. At least one steering feature of the object 92 is extracted from the source image files 61 (step A051), where a steering feature includes, but is not limited to, the turning of a car's wheels, the head turning of a pedestrian on the road, or the angle between a car body and the road lane. These steering features indicate that a car or pedestrian near the vehicle 91 carrying the compound-eye imaging system 50 has a strong intention to turn, and may greatly change direction, turn suddenly or change lanes suddenly, causing a collision with the vehicle 91. Therefore, if the compound-eye imaging system 50 can predict the steering intention of surrounding cars and pedestrians in advance, corresponding measures can be taken early, reducing the probability of a collision between the vehicle 91 and a surrounding object 92. When it is confirmed that a car or pedestrian around the vehicle 91 intends to turn, a compensation correction vector for the object 92 is calibrated and generated (step A052), and weights are redistributed to the compensation correction vector to correct the predicted moving path of the object 92 (step A053); that is, the motion vector generated in step A04 is corrected so that sudden turns and sudden lane changes by surrounding cars and pedestrians can be anticipated. Note that "selectively" executing step A05 means that it may or may not be executed. As shown in fig. 2A, if the compensation correction vector calculated in step A05 is too large, it can be fed back to step A04 to recompute the motion vector of the object 92.
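One plausible reading of the weight redistribution in step A053 is a confidence-weighted blend between the measured motion vector and the steering-implied one; the blend form, the function name and the weights below are assumptions for illustration only:

```python
# Hypothetical sketch of step A053's weight redistribution; the blend and
# all numbers are assumed, not specified by the patent.
import numpy as np

def corrected_vector(measured: np.ndarray, steering_hint: np.ndarray,
                     steering_confidence: float) -> np.ndarray:
    """Blend the measured motion vector with the vector implied by the
    detected steering feature; confidence in [0, 1] sets the weights."""
    w = float(np.clip(steering_confidence, 0.0, 1.0))
    return (1.0 - w) * measured + w * steering_hint

v_measured = np.array([0.0, -10.0, 0.0])   # straight ahead at 10 m/s
v_steering = np.array([3.0, -9.0, 0.0])    # implied by observed wheel turn
print(corrected_vector(v_measured, v_steering, 0.4))
```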
At this point the compound-eye imaging system 50 has finished calculating the distances and motion vectors of the surrounding cars, pedestrians and traffic signs relative to the vehicle 91. The image processing controller 56 then combines the 3D primitive 64 of each object 92 with its corresponding 3D motion information to produce the portable image file 65 with 3D spatial information (step A06). Next, the 3D spatial digital model 62 of the objects 92 is created and the portable image file 65 is overlaid onto the 3D spatial digital model 62 (step A07), so that the image processing controller 56 of the compound-eye imaging system 50 can overlay all the scenery around the vehicle 91, such as cars, people and traffic signs, onto the 3D spatial digital model 62. Referring to fig. 7A to 7C, which illustrate the situational awareness of the surrounding scenery in the 3D spatial digital model for a vehicle equipped with the compound-eye imaging system: as shown in fig. 7A and 7B, the compound-eye imaging system 50 can detect candidate objects 92 such as cars, people, roadside trees and traffic signs, obtain their corresponding 3D primitives 64, sense their 3D spatial positions, motion vectors and accelerations, and finally convert the 3D primitives 64 into portable image files 65 with 3D spatial information overlaid on the 3D spatial digital model 62. The compound-eye imaging system 50 thereby establishes 3D situational awareness and 3D depth estimation around the vehicle 91, detecting the size, speed and acceleration of objects within 200 meters and giving the vehicle 91 strong monitoring capability over its surroundings. As shown in fig. 7A, the vehicle 91 with the compound-eye imaging system 50 can detect and perceive the motorcycle object 92 at its left rear together with the lane marking lines on the road, and then decide whether to dodge or accelerate away. As shown in fig. 7B, the vehicle 91 can detect and perceive a plurality of car objects 92 around it, and the image processing controller 56 can establish the 3D spatial digital model 62 and spatial coordinates around the vehicle 91; the 3D spatial digital model 62 carries virtual spatial grid lines 75, so that the compound-eye imaging system 50 knows the relative coordinates of all objects 92 around the vehicle, allowing the image processing controller 56 to plan the best route for proceeding, avoiding or even detouring, and to decide whether to slow down, stop or overtake. Finally, referring to fig. 7C, the vehicle 91 can issue a deceleration warning signal, a brake warning signal, a steering prompt signal or a steering control signal according to the image monitoring and judgment of the compound-eye imaging system 50 or the image processing controller 56 (step A08), giving the vehicle 91 autonomous control and automatic driving functions. As shown in the left half of fig. 7C, the compound-eye imaging system 50 can further integrate a map system (e.g., Google Maps, Baidu Maps, Gaode Maps, etc.) to know the road layout for tens of kilometers around the vehicle 91, while displaying the spatial grid lines 75 generated by the image processing controller 56.
Moreover, after this integration, the compound-eye imaging system 50 can present the road layout and road plan of the map system together with the detected surrounding scenery and objects 92 in the 3D spatial digital model 62. As shown in the right half of fig. 7C, the compound-eye imaging system 50 of the utility model achieves sensing and prediction of the coordinates, relative distances and motion vectors of the objects 92, providing early warning of possible collisions.
Referring to fig. 9, a functional block diagram of another embodiment of the compound-eye imaging system: as shown in fig. 9, the compound-eye imaging system 50 can further include at least one warning light 57 coupled to the image processing controller 56, so that the image processing controller 56 can control the warning light 57 to switch on, switch off or flash. As shown in fig. 7A, when the motorcycle object 92 at the left rear approaches the vehicle 91 and its distance has been calculated and analyzed by the image processing controller 56, the image processing controller 56 can drive the warning light 57 to flash autonomously (i.e., without control by the driver of the vehicle 91) to remind the motorcycle rider to keep a safe distance. That is, after the image processing controller 56 of the compound-eye imaging system 50 has calculated and analyzed the objects 92 in the source image files 61, it can directly control the warning light 57 to emit a warning. The function of step A08 is thus that, when the image processing controller 56 judges that a surrounding object 92 is too close or too fast, it controls and drives the warning light 57 to issue a deceleration warning, a brake warning or a steering warning, achieving the purpose of collision prevention.
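Reduced to code, the warning-light decision is a threshold test on distance and closing speed; the specific limits below are illustrative assumptions, not values from the patent:

```python
# Sketch of the fig. 9 warning-light logic; the 5 m gap and 3 m/s closing
# speed thresholds are assumed for illustration.
def should_flash_warning(distance_m: float, closing_speed_mps: float,
                         min_gap_m: float = 5.0,
                         max_closing_mps: float = 3.0) -> bool:
    """Decide, without driver input, whether the image processing
    controller should drive the warning light for a trailing object."""
    return distance_m < min_gap_m or closing_speed_mps > max_closing_mps

# A motorcycle 4 m behind and closing at 2 m/s triggers the warning light.
print(should_flash_warning(4.0, 2.0))  # True
```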
In summary, the compound-eye imaging system 50, the vehicle 91 using it and the image processing method can, without expensive equipment such as laser radar, infrared radar or LiDAR and on the premise of controllable cost, make the surrounding scenery measurable and clearly identify its material, building a 3D digital spatial perception system that can then be used in different industrial equipment such as vehicle monitoring, AI robots, automatic driving, floor-sweeping robots, aerial drones and multi-axis machining tools. It therefore has huge commercial application potential.
The utility model has been described above by way of examples, which are not intended to limit its scope; the scope should be determined by the claims and their equivalents. It will be appreciated by those skilled in the art that changes may be made and equivalents substituted without departing from the true spirit and scope of the utility model, and the utility model includes all modifications and equivalents that fall within that spirit and scope.

Claims (6)

1. A compound-eye imaging system, comprising:
a first lens having a first photographing region fanned out;
at least four second lenses distributed around the first lens, each second lens having a second photographing region with a fan-shaped expansion, the central photographing direction of the first photographing region and the central photographing direction of the second photographing region having an angle therebetween, and the second photographing region being partially overlapped with the first photographing region;
a storage unit for storing a plurality of source image files shot by the first lens or the second lens; and
an image processing controller for parsing the source image files captured at the same time point and generating a corresponding 3D primitive, and then deriving a portable image file with 3D spatial information from the 3D primitives generated at different time points.
2. The compound-eye imaging system of claim 1, wherein a reference length is displayed in each source image file captured in the first photographing region, and the image processing controller uses the reference length as a scale to construct a 3D spatial digital model.
3. The compound-eye imaging system of claim 1, wherein the storage unit is coupled to the image processing controller, such that the 3D graphics primitives or the portable image files are transmitted to and stored in the storage unit.
4. The compound-eye imaging system of claim 1, wherein the storage unit stores at least one primitive template, and the primitive template is a two-dimensional image of all or part of the features of the object to be measured.
5. The compound-eye imaging system of claim 1, further comprising at least one warning light coupled to the image processing controller, such that the image processing controller is configured to control the warning light directly after calculating and analyzing the object to be measured in the source image files.
6. A vehicle using a plurality of compound eye imaging systems according to claim 1, wherein the plurality of compound eye imaging systems are distributed on a roof portion, a front edge, a rear edge or both sides of the vehicle.
CN202121214038.0U 2021-06-01 2021-06-01 Compound eye imaging system and vehicle using same Active CN215495425U (en)

Priority Applications (1)

Application Number: CN202121214038.0U · Priority/Filing Date: 2021-06-01 · Title: Compound eye imaging system and vehicle using same

Publications (1)

Publication Number: CN215495425U · Publication Date: 2022-01-11

Family ID: 79781987



Legal Events

GR01: Patent grant