CN115440067A - Compound eye imaging system, vehicle using compound eye imaging system, and image processing method thereof


Info

Publication number: CN115440067A
Application number: CN202110611056.0A
Authority: CN (China)
Prior art keywords: compound, image processing, imaging system, eye imaging, lens
Legal status: Pending (the status listed is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 黄奇卿
Current Assignee: Individual
Original Assignee: Individual
Application filed by Individual; priority to CN202110611056.0A (filed 2021-06-01)


Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/09 - Arrangements for giving variable traffic instructions
    • G08G 1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G 1/096708 - Systems involving transmission of highway information where the received information might be used to generate an automatic action on the vehicle control
    • G08G 1/096725 - Systems involving transmission of highway information where the received information generates an automatic action on the vehicle control
    • G08G 1/16 - Anti-collision systems
    • G08G 1/166 - Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 - Image signal generators
    • H04N 13/204 - Image signal generators using stereoscopic image cameras
    • H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors

Abstract

The invention discloses a compound-eye imaging system, a vehicle using the same, and an image processing method thereof. The compound-eye imaging system comprises a first lens, at least four second lenses, a storage unit, and an image processing controller. The storage unit stores a plurality of source image files captured by the first lens or the second lenses, and a reference length is displayed in each source image file captured in the first capture area. The image processing controller identifies the image features of at least one object to be detected in the source image files and uses the reference length as a scale to construct a 3D spatial digital model. The system can thus assist vehicle monitoring, AI robots, and autonomous driving with 3D spatial recognition and 3D multi-object monitoring, raising the level of unmanned monitoring.

Description

Compound eye imaging system, vehicle using compound eye imaging system, and image processing method thereof
Technical Field
The invention relates to a compound-eye imaging system with multiple lenses, and in particular to a compound-eye imaging system and an image processing method thereof that can be used in different industrial equipment such as vehicle monitoring, AI robots, autonomous driving, floor-sweeping robots, aerial drones, and multi-axis machine tools.
Background
In recent years, autonomous driving has become a hot topic: traditional car makers such as GM, Volvo, and Toyota, as well as newcomers such as Tesla, Uber, Waymo, and Nuro, have all invested heavily in it. As the "eyes" of a self-driving car, the vehicles under road testing are equipped with various sensing systems including image sensing, among which the LiDAR sensor occupies the key position. LiDAR measures the distance to surrounding vehicles and objects and is used to create and recognize 3D spatial images. Its main application fields currently include remote surveying, factory automation equipment, and disaster prevention and monitoring of social infrastructure such as railways and tunnels.
Compared with an image sensing device working in the visible-light range, LiDAR uses near-infrared light as its source and is therefore less susceptible to interference from ambient light. Its main advantages include recognition within the range over which reflected light can be received (about 200 or even 300 meters for automotive products), unaffected by the intensity of ambient light or by shadows. Sensors such as infrared and millimeter-wave sensors are mainly used for measuring distances, and building 3D vision from image sensing alone requires multiple image sensing devices; LiDAR, by contrast, can build a scan of the three-dimensional environment with a single sensor, so it maintains measurement accuracy better at long range. In addition, international players such as Google, Waymo, and Uber are developing sensor-fusion technologies that include radar, integrating the information returned by radar with the information detected by other types of sensors and applying different error-correction logic to improve overall recognition accuracy, as the basic input for the artificial-intelligence deep-learning training and inference that future self-driving will require.
A LiDAR unit mainly consists of a light-emitting module that projects near-infrared laser light at the surroundings and a light-receiving module that receives the light reflected from objects; it builds a three-dimensional model of the environment by computing distance from the time difference between emitted and received light. However, LiDAR accuracy is easily degraded by rain and fog, and LiDAR cannot identify the material of an object, so signs, billboards, and object images cannot be accurately judged and read. In addition, LiDAR requires time-consuming adjustment and calibration that does not lend itself to mass production, so the cost is high and large-scale adoption is difficult. This is an important disadvantage.
Therefore, how to measure surrounding scenery, clearly identify its material, and establish a 3D digital spatial sensing system at controllable cost, so that the system can serve different industrial equipment such as vehicle monitoring, autonomous driving, AI robots, floor-sweeping robots, aerial drones, and multi-axis machine tools, is a common goal of those skilled in the art.
Disclosure of Invention
The main object of the invention is to make surrounding scenery measurable and its material clearly identifiable at controllable cost, so as to establish a 3D digital spatial perception system.
Another object of the invention is to assist different industrial equipment such as vehicle monitoring, AI robots, autonomous driving, floor-sweeping robots, aerial drones, and multi-axis machine tools, giving that equipment 3D spatial recognition and 3D multi-object monitoring and thereby raising the level of industrial unmanned monitoring.
To solve the above and other problems, the present invention provides a compound-eye imaging system comprising a first lens, at least four second lenses, a storage unit, and an image processing controller. The first lens has a fan-shaped first capture area; the second lenses are distributed around the first lens, and each second lens has a fan-shaped second capture area. An angle is formed between the central capture direction of the first capture area and that of each second capture area, and each second capture area partially overlaps the first capture area. The storage unit stores a plurality of source image files captured by the first lens or the second lenses. The image processing controller identifies the image features of at least one object to be detected in the source image files, parses the source image files captured at the same time point to generate a corresponding 3D primitive, and then parses a portable image file with 3D spatial information from the 3D primitives generated at different time points.
In the above compound-eye imaging system, a reference length is displayed in the source image files captured in the first capture area, and the image processing controller uses the reference length as a scale to construct a 3D spatial digital model.
In the above compound-eye imaging system, the storage unit is coupled to the image processing controller so that the 3D primitives or portable image files are transmitted to and stored in the storage unit.
In the above compound-eye imaging system, the storage unit stores at least one primitive template, the primitive template being a two-dimensional image of all or part of the features of an object to be detected.
The above compound-eye imaging system further comprises at least one warning light coupled to the image processing controller, so that after calculating and analyzing the objects to be detected in the source image files, the image processing controller can directly control the warning light.
To solve the above and other problems, the present invention further provides a vehicle using a plurality of compound-eye imaging systems as described above, the compound-eye imaging systems being distributed on the roof, front edge, rear edge, or both side edges of the vehicle.
To solve the above and other problems, the present invention further provides an image processing method for a compound-eye imaging system, the compound-eye imaging system including a first lens having a first capture area and a second lens having a second capture area. The image processing method includes the following steps. Step A01: capturing source image files from multiple lenses at multiple time points. Step A02: identifying and analyzing the source image files to generate a 3D primitive corresponding to at least one object to be detected. Step A03: calculating the distance of the object to be detected. Step A04: calculating a 3D movement vector of the object to be detected. Step A05: selectively compensating and correcting the error of the 3D movement vector of the object. Step A06: combining the 3D primitive of the object with its corresponding 3D movement information to parse a portable image file with 3D spatial information. Step A07: establishing a 3D spatial digital model of the object's movement and overlaying the portable image file on the 3D spatial digital model.
The above image processing method further includes step A08: sending a deceleration warning signal, a brake warning signal, a steering prompt signal, or a steering control signal.
In the above image processing method, step A02 further includes the following sub-steps. Step A021: extracting image features of an object to be detected from the source image files. Step A022: comparing the image features with a plurality of primitive templates of different viewing angles in a storage unit. Step A023: generating a 3D primitive of the object to be detected. In a further embodiment, the primitive template of step A022 is a two-dimensional image of all or part of the features of the object to be detected.
In the above image processing method, step A03 further includes the following sub-steps. Step A031: measuring the distance of the object to be detected from source image files captured at the same time point through the first lens or the second lens. Step A032: measuring the azimuth angle and pitch angle of the object from the source image files. Step A033: calculating and confirming the spatial relationship of the object.
In the above image processing method, the distance measurement of step A031 is obtained by comparison against the reference length of the vehicle in the source image file, or measured by scale marking lines at multiples of the reference length in the source image file; in a further embodiment, the image processing controller compares the multiple-scale marking lines with the position of the shape center point of the contour presented by the object to be detected.
In the above image processing method, the distance measurement of step A031 may instead be obtained by triangulation from the observation angles of the first lens and the second lens.
In the above image processing method, the azimuth or pitch angle of step A032 is measured by the azimuth scale marking lines or pitch scale marking lines in the source image file; in a further embodiment, the image processing controller compares the azimuth or pitch scale marking lines with the position of the shape center point of the contour presented by the object to be detected.
In the above image processing method, step A04 further includes the following sub-steps. Step A041: obtaining the positions of the object to be detected at different time points. Step A042: calculating the movement vector of the object. Step A043: displaying the movement vectors of a plurality of time points in succession.
In the above image processing method, step A05 further includes the following sub-steps. Step A051: extracting a steering feature of at least one object to be detected from the source image files. Step A052: calibrating and generating a compensation correction vector for the object. Step A053: assigning a weight to the compensation correction vector so as to correct the predicted moving path of the object.
In this way, the compound-eye imaging system and its image processing method make surrounding scenery measurable and its material clearly identifiable at controllable cost, establishing a 3D digital spatial sensing system that assists different industrial equipment such as vehicle monitoring, autonomous driving, AI robots, floor-sweeping robots, aerial drones, and multi-axis machine tools, giving that equipment 3D spatial recognition and 3D multi-object monitoring and raising the level of industrial unmanned monitoring.
For a better understanding of the nature and technical content of the present invention, reference is made to the following detailed description and the accompanying drawings; the drawings are provided for illustration only and are not intended to limit the invention.
Drawings
Fig. 1A is a schematic structural diagram of the compound-eye imaging system.
Fig. 1B illustrates a usage state of the compound-eye imaging system applied to a vehicle.
Fig. 1C is a functional block diagram of the compound-eye imaging system.
Fig. 2A to fig. 2E are flowcharts of the image processing method of the compound-eye imaging system.
Fig. 3 is a schematic diagram illustrating the image processing controller identifying image features in a source image file.
Fig. 4A is a schematic diagram illustrating a vehicle equipped with the compound-eye imaging system calculating distance by the reference length.
Fig. 4B is a schematic diagram of the image processing controller calculating distance from the vehicle reference length in a source image file.
Fig. 5 is a schematic diagram illustrating the triangulation used by the compound-eye imaging system.
Fig. 6A is a schematic view illustrating measurement of the azimuth angle of an object to be detected by a vehicle equipped with the compound-eye imaging system.
Fig. 6B is a schematic diagram illustrating the image processing controller calculating the azimuth angle of an object to be detected from a source image file.
Fig. 7A to 7C are schematic diagrams illustrating the situational awareness of the surroundings of a vehicle equipped with the compound-eye imaging system in the 3D spatial digital model.
Fig. 8 is a schematic diagram illustrating a scenario in which the image processing controller must perform residual-image blind-spot compensation.
Fig. 9 is a functional block diagram of another embodiment of the compound-eye imaging system.
Description of the reference numerals: 50 - compound-eye imaging system; 51 - first lens; 52 - second lens; 53 - first capture area; 53A - central capture direction; 54 - second capture area; 54A - central capture direction; 55 - storage unit; 56 - image processing controller; 57 - warning light; 91 - vehicle; 92 - object to be detected; 61 - source image file; 62 - 3D spatial digital model; 63 - image feature; 64 - 3D primitive; 65 - portable image file; 66 - primitive template; 71 - reference length; 72 - azimuth scale marking line; 74 - multiple-scale marking line; 75 - spatial grid line; h1 - vertical distance; d - lens spacing; α, β, Θ - angles.
Detailed Description
Referring to fig. 1A to 1C: fig. 1A is a schematic structural diagram of the compound-eye imaging system, fig. 1B illustrates the compound-eye imaging system in use on a vehicle, and fig. 1C is its functional block diagram. As shown, a compound-eye imaging system 50 includes a first lens 51, four second lenses 52, a storage unit 55, and an image processing controller 56. The first lens 51 has a fan-shaped first capture area 53; the second lenses 52 are distributed around the first lens 51, and each second lens 52 has a fan-shaped second capture area 54. An angle Θ is formed between the central capture direction 53A of the first capture area 53 and the central capture direction 54A of each second capture area 54, and each second capture area 54 partially overlaps the first capture area 53. The surfaces of the first lens 51 and the second lenses 52 are designed as arc surfaces, so that the central capture directions 53A and 54A do not point in the same direction; the overall coverage of the first capture area 53 and the second capture areas 54 is therefore larger, avoiding capture dead angles. The storage unit 55 stores a plurality of source image files 61 captured by the first lens 51 or the second lenses 52. The storage unit 55 is coupled to the image processing controller 56, which parses the source image files 61 captured at the same time point to generate a corresponding 3D primitive 64 and then parses a portable image file 65 with 3D spatial information from the 3D primitives 64 generated at different time points; the 3D primitives 64 and portable image files 65 are transmitted to and stored in the storage unit 55. The storage unit 55 also stores a plurality of primitive templates 66, each being a two-dimensional image of some or all of the features of an object 92 to be detected in a source image file 61. A source image file 61 is a file in an image format captured by the first lens 51 or a second lens 52; the format includes, but is not limited to, JPG, JPEG, PSD, TIFF, PDF, BMP, EPS, PNG, GIF, and PCX. A 3D primitive 64 is a digitized file that is vectorized, has resolution, and can be visualized from multiple viewing angles in 3D space. A portable image file 65 is a portable electronic file with 3D spatial information, which can be transmitted over a network to the cloud or to other machines for storage, analysis, and application; the 3D spatial information includes position information (e.g., positioning by GPS, compass, or another satellite system), directional velocity vector information, and acceleration vector information in 3D space.
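For illustration only, the data objects just described can be modeled as simple structures. The following Python sketch is not part of the patent disclosure; every field name is an assumption, since the description specifies what a 3D primitive 64 and a portable image file 65 contain but not how they are laid out.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Primitive3D:
    """A 3D primitive (64): a vectorized, multi-view digital description
    of one detected object. Field names are illustrative assumptions."""
    label: str                                      # e.g. "car", "pedestrian"
    position: Tuple[float, float, float]            # vehicle-centered x, y, z in meters
    views: List[str] = field(default_factory=list)  # which template views matched

@dataclass
class PortableImageFile:
    """A portable image file (65): a 3D primitive plus its 3D spatial
    information (position, velocity, acceleration), ready for upload."""
    primitive: Primitive3D
    gps: Tuple[float, float]                  # latitude, longitude
    velocity: Tuple[float, float, float]      # directional velocity vector, m/s
    acceleration: Tuple[float, float, float]  # acceleration vector, m/s^2
```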
The compound-eye imaging system 50 can be applied to different industrial equipment such as vehicle monitoring aids, autonomous driving, floor-sweeping robots, AI robots, aerial drones, and multi-axis machine tools, giving that equipment 3D spatial recognition and 3D multi-object monitoring and raising the level of industrial unmanned monitoring. In the following, the image processing method of the compound-eye imaging system 50 is described taking monitoring assistance for a vehicle 91 as the example. As shown in fig. 1B, a plurality of compound-eye imaging systems 50 are distributed on the roof, front edge, rear edge, or both sides of the vehicle 91 to monitor the 3D spatial conditions around and above it. Installing the compound-eye imaging systems 50 around the vehicle 91 lets the vehicle build 3D situational awareness of its surroundings, learning the size, appearance, shape, speed, and acceleration of objects within roughly 200 meters, so that it can respond to surrounding traffic conditions in advance and prevent accidents. The compound-eye imaging system 50 installed on the roof monitors objects above the vehicle 91: for example, if the vehicle 91 frequently travels through areas where rockfall or debris flows occur often, the system can give early warning of falling rocks, landslides, and debris flows so that the vehicle 91 can detour or stop. The value of the compound-eye imaging system 50 and its image processing method to the vehicle 91 is that the vehicle gains situational awareness of the surrounding 3D scene, improving the controllability and accuracy of automatic driving.
To achieve the above purpose of improving the controllability and accuracy of automatic driving, the present invention further provides an image processing method for the compound-eye imaging system 50. Referring to fig. 2A to 2E, which are flowcharts of the image processing method: as shown in fig. 2A, a plurality of source image files 61 at a plurality of time points are captured by the first lens 51 and the second lenses 52 (step A01), and the source image files 61 are identified and analyzed by the image processing controller 56 to generate a 3D primitive 64 corresponding to at least one object 92 to be detected (step A02). As shown in fig. 2B, step A02 comprises the following sub-steps. First, an image feature 63 of an object 92 is extracted from the source image files 61 (step A021); as shown in fig. 3, the image processing controller 56 identifies the image features 63 of the objects 92 in the source image files 61 captured by the compound-eye imaging system 50. An object 92 may be a car, a truck, a motorcycle, a traffic sign, a utility pole, an overpass, a roadside tree, and so on. Different objects 92 have different image features 63; an image feature 63 is a planar image characteristic of the object 92, including but not limited to color features, texture features, gray-level features, shape features, spatial-correspondence features, local features, or global features. Taking concrete objects as examples: the image features 63 of a roadside tree are its leaves and trunk; those of a car are the body contour and tires; those of a truck are the container or the cab above the tires. By identifying the image features 63 of an object 92, the compound-eye imaging system 50 can therefore tell whether the object in front of the vehicle 91 is a motorcycle, a car, or a pedestrian. Next, as shown in fig. 2B, the image features 63 are compared with primitive templates 66 of different viewing angles in the storage unit 55 (step A022) to determine whether the image features 63 of the object 92 fit a primitive template 66; if so, a 3D primitive 64 of the object 92 is generated (shown in fig. 1C; see step A023), so that the 3D primitive 64 and the matching primitive templates 66 in the storage unit 55 both correspond to that specific object 92. A primitive template 66 here is a file combining two-dimensional images of an object 92 from different viewing angles (i.e., a set of complete images of the object at different angles), which may be built in or extracted from captured big-data imagery; for example, a primitive template 66 may be a set of reference image files of a particular object (a car, a motorcycle, a truck, a traffic sign, a roadside tree, and so on) from different perspectives, giving the compound-eye imaging system 50 references for the image features 63 at multiple viewing angles. The image processing controller 56 therefore needs to compare only a few source image files 61 at specific angles for the compound-eye imaging system 50 to identify and confirm what an object 92 is, even which make and model of car.
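A minimal sketch of the template comparison of steps A022 and A023, assuming a generic patch-similarity function supplied by the caller; the data layout, labels, and 0.8 threshold are illustrative assumptions rather than the patent's specification:

```python
def classify(feature_patch, primitive_templates, score_fn, threshold=0.8):
    """Compare an extracted image feature (63) against every stored view of
    every primitive template (66); return the best-matching object label,
    or None if no view clears the threshold. score_fn(a, b) is any
    similarity measure, e.g. the correlation matcher sketched later."""
    best_label, best_score = None, threshold
    for label, views in primitive_templates.items():
        for view in views:  # 2D images of the object from different viewing angles
            s = score_fn(feature_patch, view)
            if s > best_score:
                best_label, best_score = label, s
    return best_label

# Toy usage with stand-in string "patches" and an exact-match scorer:
templates = {"car": ["car_front", "car_side"], "pedestrian": ["ped_front"]}
print(classify("car_side", templates, lambda a, b: 1.0 if a == b else 0.0))  # car
```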
Furthermore, a primitive template 66 can hold a local feature of an object 92 in the source image file 61, so that such a template can be used for residual-image comparison and residual-image blind-spot compensation. Referring to fig. 8, which illustrates a scenario requiring the image processing controller to perform residual-image blind-spot compensation: in the first capture area 53 of the compound-eye imaging system 50, a pedestrian object 92 is partially blocked by a box truck in front, so the image processing controller 56 cannot recognize the pedestrian completely. By matching against a primitive template 66 holding the residual image (i.e., an image of a local feature of the pedestrian object 92), the image processing controller 56 can identify and confirm the blocked object in the first capture area 53 through feature comparison. In this way, the compound-eye imaging system 50 knows in advance what object lies behind the blocked area, achieving advance prediction and early warning.
It should be added that the core technique behind analyzing the source image files 61 and identifying and comparing the image features 63 is image matching. Image matching means identifying corresponding features between two or more images through a matching algorithm. In two-dimensional image matching, for example, the correlation coefficients of equal-sized windows in a target area and a search area are compared, and the center of the window with the highest correlation in the search area is taken as the corresponding feature; that is, a statistical method is used to find the degree of correlation between the signals. In essence, a matching criterion is applied to achieve the best search result under conditions where the basic primitives are correlated and similar. Generally, image matching can be divided into grayscale-based matching and feature-based matching.
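The window-correlation search described above can be sketched as a brute-force normalized cross-correlation in a few lines; this is a simplified stand-in under the grayscale-matching assumption, not the system's actual matcher:

```python
import numpy as np

def ncc(window, template):
    """Normalized cross-correlation of two equal-sized grayscale patches;
    returns a score in [-1, 1], where 1 is a perfect match."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w * w).sum() * (t * t).sum())
    return float((w * t).sum() / denom) if denom else 0.0

def best_match(search_area, template):
    """Slide the template over the search area and return the highest
    correlation together with the center of the winning window."""
    th, tw = template.shape
    best = (-2.0, (0, 0))
    for y in range(search_area.shape[0] - th + 1):
        for x in range(search_area.shape[1] - tw + 1):
            score = ncc(search_area[y:y + th, x:x + tw], template)
            if score > best[0]:
                best = (score, (y + th // 2, x + tw // 2))
    return best

# Toy usage: cut a patch out of a random scene and find it again.
rng = np.random.default_rng(0)
scene = rng.random((40, 40))
tmpl = scene[10:18, 20:28].copy()
print(best_match(scene, tmpl))  # score ~1.0 at window center (14, 24)
```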
Then, after the object 92 is confirmed and its corresponding 3D primitive 64 is generated, the distance of the object 92 can be calculated (step A03). As shown in fig. 2C, source image files 61 are first captured at the same time point through the first lens 51 or the second lenses 52 to measure the distance of the object 92 (step A031); then the azimuth angle and pitch angle of the object 92 are measured from the source image files 61 (step A032), so that the spatial relationship of the object 92 can be calculated and confirmed (step A033). As shown in figs. 4A and 4B, the distance measurement of step A031 may compare the relative distance between a truck or car object 92 in the source image file 61 and the vehicle 91 carrying the compound-eye imaging system 50 against the reference length 71 of the vehicle 91, measuring that relative distance in units of the reference length 71. That is, at one, two, three, and four times the reference length 71 in fig. 4A, multiple-scale marking lines 74 of the reference length 71 are displayed or marked in the source image file 61 of fig. 4B, allowing the image processing controller 56 to compare against them and calculate the distance of the truck or car object 92. As shown in figs. 4A and 4B, the reference length 71 of the vehicle 91 is preferably the distance from the mounting point of the compound-eye imaging system 50 to the foremost end of the vehicle 91. In other embodiments, the reference length 71 and its multiples in the source image file 61 captured by the first lens 51 can be realized by a built-in software scale (a built-in standard fixed length) or by marking a physical scale on the outer surface of the first lens 51. In addition, as shown in fig. 4B, if the object 92 occupies a large image area (because its actual volume is large, or because it is close to the vehicle 91 carrying the compound-eye imaging system 50) and crosses several multiple-scale marking lines 74, the image processing controller 56 compares the marking lines 74 with the position of the shape center point of the object's contour to determine the distance of the object 92.
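One way to realize the multiple-scale-line comparison is to bracket the image row of the object's shape center between two calibrated scale lines and interpolate; the row values and the linear interpolation in this sketch are assumptions for illustration only:

```python
def distance_in_reference_lengths(center_row, scale_line_rows, reference_length_m):
    """scale_line_rows: image rows of the 1x, 2x, 3x, ... multiple-scale
    marking lines (74), nearest line first; rows shrink as distance grows.
    Returns the estimated distance of the object's shape center in meters,
    or None if the center lies outside the calibrated band."""
    for i in range(len(scale_line_rows) - 1):
        near, far = scale_line_rows[i], scale_line_rows[i + 1]
        if far <= center_row <= near:
            frac = (near - center_row) / (near - far)  # position between the two lines
            return (i + 1 + frac) * reference_length_m
    return None

# e.g. scale lines at rows 600, 450, 375, 330 for 1x..4x of a 4.5 m reference length
print(distance_in_reference_lengths(500, [600, 450, 375, 330], 4.5))  # 7.5 m
```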
Besides using the reference length 71 of the vehicle 91, the distance to a truck or car object 92 can also be calculated by triangulation. As shown in fig. 5, with the lens spacing d between the first lens 51 and a second lens 52 of the compound-eye imaging system 50 known, the vertical distance h1 between a motorcycle object 92 and the compound-eye imaging system 50 follows from trigonometry: h1 = d·sin α·sin β / sin(α + β). That is, given the lens spacing d, the compound-eye imaging system 50 observes and measures the angles α and β and then obtains the vertical distance h1. The object 92 need not be a motorcycle; it may equally be a car, a truck, a pedestrian, a roadside tree, a traffic sign, and so on.
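The quoted formula translates directly into code; a minimal sketch:

```python
import math

def vertical_distance(d, alpha_deg, beta_deg):
    """h1 = d * sin(alpha) * sin(beta) / sin(alpha + beta): perpendicular
    distance of the target from the baseline of length d between the two
    lenses, given the observation angles measured at each lens."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    return d * math.sin(a) * math.sin(b) / math.sin(a + b)

# Lenses 0.4 m apart, both sighting the object at 80 degrees:
print(vertical_distance(0.4, 80.0, 80.0))  # about 1.13 m
```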
The azimuth or pitch measurement of step A032 can be made with the azimuth scale marking lines 72 or pitch scale marking lines in the source image file 61. For example, as shown in figs. 6A and 6B, looking forward from the compound-eye imaging system 50 over the front of the vehicle 91, the image of the source image file 61 can be divided into several regions by the azimuth scale marking lines 72, and the azimuth angle of an object 92 relative to the compound-eye imaging system 50 is read from the region in which the object lies. If the object 92 occupies a large area in the source image file 61 (because its actual volume is large, or because it is close to the vehicle 91, as shown in fig. 6B) and spans several azimuth or pitch scale marking lines, the image processing controller 56 takes the shape center point of the object's contour as the object's azimuth or pitch reading. Similarly, the image processing controller 56 can divide the source image file 61 into several pitch-angle regions with pitch scale marking lines and so determine the pitch position of the object 92. In this way the image processing controller 56 resolves the distance, azimuth, and pitch of the object 92 and, by the spherical-coordinate principle, knows and confirms the spatial relationship between the object 92 and the vehicle 91, completing step A033.
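With range, azimuth, and pitch in hand, the spherical-coordinate relation gives the object's position relative to the vehicle. A minimal sketch; the axis conventions (azimuth 0 straight ahead and positive to the right, pitch 0 horizontal) are assumptions:

```python
import math

def to_cartesian(distance_m, azimuth_deg, pitch_deg):
    """Convert measured range / azimuth / pitch into vehicle-centered
    x (lateral), y (forward), z (height) coordinates."""
    az = math.radians(azimuth_deg)
    el = math.radians(pitch_deg)
    x = distance_m * math.cos(el) * math.sin(az)
    y = distance_m * math.cos(el) * math.cos(az)
    z = distance_m * math.sin(el)
    return (x, y, z)

print(to_cartesian(20.0, 30.0, 5.0))  # object 20 m away, 30 deg right, 5 deg up
```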
Next, the 3D movement vector of the object 92 is calculated (step A04): the positions of the object 92 at different time points are obtained through step A03 (step A041), the movement vector of the object 92 is calculated from them (step A042), and the movement vectors of successive time points can then be displayed continuously (step A043). In step A03, the purpose of using source image files 61 from the same time point but different lenses is to locate the distant object 92 from the spatial offset between the first lens 51 and the second lenses 52 at their different positions; in essence, the object 92 is located several times from several lens positions, and accuracy improves through the repeated calculations. In step A04, by contrast, the source image files 61 taken at different time points yield the movement trajectory and movement vector of a specific object 92.
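Steps A041 to A043 amount to differencing the positions obtained at successive time points; a minimal sketch (the sampling interval and track layout are assumptions):

```python
def movement_vectors(track):
    """track: list of (t_seconds, (x, y, z)) positions of one object from
    step A041. Returns one velocity vector per interval (step A042), in
    order, ready to be displayed in succession (step A043)."""
    vectors = []
    for (t0, p0), (t1, p1) in zip(track, track[1:]):
        dt = t1 - t0
        vectors.append(tuple((b - a) / dt for a, b in zip(p0, p1)))
    return vectors

track = [(0.0, (0.0, 20.0, 0.0)), (0.5, (0.5, 18.0, 0.0)), (1.0, (1.2, 16.1, 0.0))]
print(movement_vectors(track))  # roughly [(1.0, -4.0, 0.0), (1.4, -3.8, 0.0)] m/s
```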
Then, the error of the 3D movement vector of the object 92 is selectively compensated and corrected (step A05). The error correction comprises the following sub-steps. At least one steering feature of the object 92 is extracted from the source image files 61 (step A051); steering features include, but are not limited to, the turning of a car's tires, the head turning of a pedestrian on the road, or the angle between a car body and the lane. Such steering features indicate that a car or pedestrian around the vehicle 91 has a strong intention to turn and may change its direction of travel sharply, producing a sudden turn or lane change that could lead to a collision with the vehicle 91. If the compound-eye imaging system 50 can anticipate the steering intentions of surrounding cars and pedestrians, it can react in advance and reduce the probability of the vehicle 91 colliding with a surrounding object 92. When it is confirmed that a surrounding car or pedestrian intends to turn, a compensation correction vector for the object 92 is calibrated and generated (step A052), and a weight is assigned to that compensation correction vector to correct the predicted moving path of the object 92 (step A053); that is, the movement vector produced in step A04 is corrected so that sudden turns and lane changes by surrounding cars and pedestrians are predicted in advance. Note that "selectively" executing step A05 means it may or may not be executed. As shown in fig. 2A, if the compensation correction vector calculated in step A05 is too large, it can be fed back to step A04 so that the movement vector of the object 92 is recalculated.
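A sketch of the weighting of steps A052 and A053, assuming a simple linear blend; the patent says a weight is assigned to the compensation correction vector but does not fix the blending rule, so this scheme is an assumption:

```python
def corrected_vector(measured_v, correction_v, weight):
    """Blend the movement vector measured in step A04 with the compensation
    correction vector generated from a detected steering feature.
    weight in [0, 1] is the share given to the correction."""
    return tuple((1.0 - weight) * m + weight * c
                 for m, c in zip(measured_v, correction_v))

# A car still tracking straight (v = (0, -4, 0) m/s) whose front tires have
# begun to turn: bias the predicted path toward the turn.
print(corrected_vector((0.0, -4.0, 0.0), (2.0, -3.5, 0.0), 0.3))  # ~ (0.6, -3.85, 0.0)
```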
At this point the compound-eye imaging system 50 has finished calculating the distances and movement vectors of the surrounding cars, pedestrians, and traffic signs relative to the vehicle 91. The image processing controller 56 then combines the 3D primitive 64 of each object 92 with its corresponding 3D movement information to parse a portable image file 65 with 3D spatial information (step A06). Next, the 3D spatial digital model 62 in which the objects 92 move is established, and the portable image files 65 are overlaid on it (step A07), so that the image processing controller 56 can overlay every scene element (the pedestrians and vehicles around the vehicle 91, the traffic signs, and so on) on the 3D spatial digital model 62. Referring to figs. 7A to 7C, which illustrate the situational awareness of the surroundings of a vehicle equipped with the compound-eye imaging system within the 3D spatial digital model: as shown in figs. 7A and 7B, the compound-eye imaging system 50 detects candidate objects 92 such as vehicles, people, roadside trees, and traffic signs, obtains their 3D primitives 64, senses their 3D positions, movement vectors, and accelerations, and finally converts the 3D primitives 64 into portable image files 65 with 3D spatial information overlaid on the 3D spatial digital model 62. The compound-eye imaging system 50 thereby establishes 3D situational awareness and 3D depth estimation around the vehicle 91, detecting the size, speed, and acceleration of objects within 200 meters and giving the vehicle strong monitoring of its surroundings. As shown in fig. 7A, the vehicle 91 can detect the motorcycle object 92 behind and to its left as well as the lane markings on the road, and decide whether to dodge or accelerate away. As shown in fig. 7B, the vehicle 91 can detect several car objects 92 around it; the image processing controller 56 establishes the 3D spatial digital model 62 and spatial coordinates around the vehicle 91, with virtual spatial grid lines 75, so that the compound-eye imaging system 50 knows the relative coordinates of all surrounding objects 92 and the image processing controller 56 can plan the best path for proceeding, avoiding, or detouring, and even decide whether to slow down, stop, or overtake. Finally, referring to fig. 7C, the vehicle 91 can issue a deceleration warning signal, a brake warning signal, a steering prompt signal, or a steering control signal based on the monitoring and judgment of the compound-eye imaging system 50 or image processing controller 56 (step A08), giving the vehicle autonomous control and automatic driving functions. As shown in the left half of fig. 7C, the compound-eye imaging system 50 may further integrate a map system (e.g., Google Maps, Baidu Maps, Gaode Maps) to know the road layout for tens of kilometers around the vehicle 91 while displaying the spatial grid lines 75 generated by the image processing controller 56.
After this integration, the compound-eye imaging system 50 can present the map system's road layout and planning together with the detected surrounding scenery and objects 92 in the 3D spatial digital model 62. As shown in the right half of fig. 7C, the compound-eye imaging system 50 of the present invention achieves sensing and prediction of the coordinates, relative distances, and movement vectors of the objects 92, and can give early warning of possible collisions.
Referring to fig. 9, a functional block diagram of another embodiment of the compound-eye imaging system: as shown, the compound-eye imaging system 50 may further include at least one warning light 57 coupled to the image processing controller 56, which controls the turning on, turning off, and flashing of the warning light 57. As shown in fig. 7A, when the motorcycle object 92 at the rear left approaches the vehicle 91 and the image processing controller 56 calculates and determines that its distance is too close, the image processing controller 56 can drive the warning light 57 to flash autonomously (i.e., without any action by the driver of the vehicle 91) to remind the motorcycle to keep its distance. That is, after the image processing controller 56 of the compound-eye imaging system 50 calculates and analyzes the objects 92 in the source image files 61, it can directly control the warning light 57 to emit a warning. The function of step A08 is thus that when the image processing controller 56 judges a surrounding object 92 to be too close or too fast, it controls and drives the warning light 57 to emit a deceleration warning, brake warning, or steering signal, achieving collision prevention.
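The decision of step A08 can be sketched as a simple threshold check that the image processing controller applies to each tracked object; the numeric limits below are illustrative assumptions, since the patent does not specify them:

```python
def warning_action(distance_m, closing_speed_mps, min_gap_m=5.0, ttc_threshold_s=2.0):
    """Pick an action for one surrounding object: flash the warning light
    (57) when the gap is too small, or raise a deceleration warning when
    the time-to-collision (distance / closing speed) is too short."""
    if distance_m < min_gap_m:
        return "flash_warning_light"
    if closing_speed_mps > 0 and distance_m / closing_speed_mps < ttc_threshold_s:
        return "deceleration_warning"
    return "none"

print(warning_action(4.0, 1.0))   # flash_warning_light
print(warning_action(10.0, 8.0))  # deceleration_warning (TTC = 1.25 s)
```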
Therefore, at controllable cost and without expensive equipment such as LiDAR or infrared radar, the compound-eye imaging system 50, the vehicle 91 using it, and its image processing method make the surrounding scenery measurable, clearly identify its material, and establish a 3D digital spatial perception system that can serve different industrial equipment such as vehicle monitoring, AI robots, autonomous driving, floor-sweeping robots, aerial drones, and multi-axis machine tools. It therefore has huge potential for commercial application.
The present invention has been described above by way of examples, which are not intended to limit its scope; the scope should be determined by the claims and their equivalents. It will be appreciated by those skilled in the art that changes may be made and equivalents substituted without departing from the true spirit and scope of the invention.

Claims (18)

1. A compound-eye imaging system, comprising:
a first lens having a fan-shaped first capture area;
at least four second lenses distributed around the first lens, each second lens having a fan-shaped second capture area, an angle being formed between the central capture direction of the first capture area and the central capture direction of the second capture area, and the second capture area partially overlapping the first capture area;
a storage unit for storing a plurality of source image files captured by the first lens or the second lenses; and
an image processing controller for parsing the source image files captured at the same time point and generating a corresponding 3D primitive, and then parsing a portable image file with 3D spatial information from the 3D primitives generated at different time points.
2. The compound-eye imaging system of claim 1, wherein the source image files captured in the first capture area each have a reference length, and the image processing controller uses the reference length as a scale to construct a 3D spatial digital model.
3. The compound-eye imaging system of claim 1, wherein the storage unit is coupled to the image processing controller, such that the 3D primitives or the portable image files are transmitted to and stored in the storage unit.
4. The compound-eye imaging system of claim 1, wherein the storage unit stores at least one primitive template, the primitive template being a two-dimensional image of all or part of the features of an object to be detected.
5. The compound-eye imaging system of claim 1, further comprising at least one warning light coupled to the image processing controller, so that the image processing controller can directly control the warning light after calculating and analyzing the objects to be detected in the source image files.
6. A vehicle using a plurality of compound-eye imaging systems according to claim 1, wherein the compound-eye imaging systems are distributed on the roof, front edge, rear edge, or both sides of the vehicle.
7. An image processing method of a compound-eye imaging system, characterized in that the compound-eye imaging system comprises a first lens with a first capture area and a second lens with a second capture area, the image processing method comprising the following steps:
step A01: capturing source image files from multiple lenses at multiple time points;
step A02: identifying and analyzing the source image files to generate a 3D primitive corresponding to at least one object to be detected;
step A03: calculating the distance of the object to be detected;
step A04: calculating a 3D movement vector of the object to be detected;
step A05: selectively compensating and correcting the error of the 3D movement vector of the object to be detected;
step A06: combining the 3D primitive of the object to be detected with its corresponding 3D movement information to parse a portable image file with 3D spatial information; and
step A07: establishing a 3D spatial digital model of the movement of the object to be detected, and overlaying the portable image file on the 3D spatial digital model.
8. The image processing method of the compound-eye imaging system of claim 7, further comprising step A08: sending a deceleration warning signal, a brake warning signal, a steering prompt signal, or a steering control signal.
9. The image processing method of the compound-eye imaging system of claim 7, wherein step A02 further comprises the following sub-steps:
step A021: extracting image features of an object to be detected from the source image files;
step A022: comparing the image features with a plurality of primitive templates of different viewing angles in a storage unit;
step A023: generating a 3D primitive of the object to be detected.
10. The image processing method of the compound-eye imaging system of claim 9, wherein the primitive template of step A022 is a two-dimensional image of all or part of the features of the object to be detected.
11. The image processing method of the compound-eye imaging system of claim 7, wherein step A03 further comprises the following sub-steps:
step A031: capturing source image files at the same time point through the first lens or the second lens to measure the distance of the object to be detected;
step A032: measuring the azimuth angle and pitch angle of the object to be detected from the source image files;
step A033: calculating and confirming the spatial relationship of the object to be detected.
12. The image processing method of the compound-eye imaging system of claim 11, wherein the distance measurement of step A031 is obtained by comparison against a reference length of the vehicle in the source image file, or measured by scale marking lines at multiples of the reference length in the source image file.
13. The image processing method of the compound-eye imaging system of claim 12, wherein the image processing controller compares the multiple-scale marking lines with the position of the shape center point of the contour presented by the object to be detected.
14. The image processing method of the compound-eye imaging system of claim 11, wherein the distance measurement of step A031 is obtained by triangulation from the observation angles of the first lens and the second lens.
15. The image processing method of the compound-eye imaging system of claim 11, wherein the azimuth or pitch angle of step A032 is measured by azimuth scale marking lines or pitch scale marking lines in the source image file.
16. The image processing method of the compound-eye imaging system of claim 15, wherein the image processing controller compares the azimuth scale marking line or pitch scale marking line with the position of the shape center point of the contour presented by the object to be detected.
17. The image processing method of the compound-eye imaging system of claim 7, wherein step A04 further comprises the following sub-steps:
step A041: obtaining the positions of the object to be detected at different time points;
step A042: calculating the movement vector of the object to be detected;
step A043: displaying the movement vectors of a plurality of time points in succession.
18. The image processing method of the compound-eye imaging system of claim 7, wherein step A05 further comprises the following sub-steps:
step A051: extracting a steering feature of at least one object to be detected from the source image files;
step A052: calibrating and generating a compensation correction vector for the object to be detected;
step A053: assigning a weight to the compensation correction vector so as to correct the predicted moving path of the object to be detected.
Filed 2021-06-01 as CN202110611056.0A; published as CN115440067A (status: pending). Title: Compound eye imaging system, vehicle using compound eye imaging system, and image processing method thereof.

Priority Applications (1)

Application Number: CN202110611056.0A
Priority Date / Filing Date: 2021-06-01
Title: Compound eye imaging system, vehicle using compound eye imaging system, and image processing method thereof

Publications (1)

Publication Number: CN115440067A
Publication Date: 2022-12-06

Family

ID: 84240146

Family Applications (1)

Application Number: CN202110611056.0A (pending)
Title: Compound eye imaging system, vehicle using compound eye imaging system, and image processing method thereof

Country Status (1)

CN: CN115440067A


Legal Events

Date Code Title Description
PB01 Publication