CN115798232A - Holographic intersection traffic management system based on combination of radar and video all-in-one machine and multi-view camera - Google Patents


Info

Publication number: CN115798232A
Application number: CN202211366486.1A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: target, camera, multi-view camera, radar, intersection
Inventors: Feng Shu (冯澍), Yan Hao (闫昊), Li Mengdi (李孟迪), Wang Peng (王鹏), Wang Wei (王伟)
Current Assignee: Smart Intercommunication Technology Co., Ltd.
Original Assignee: Smart Intercommunication Technology Co., Ltd.
Application filed by Smart Intercommunication Technology Co., Ltd.
Priority: CN202211366486.1A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses a holographic intersection traffic management system based on the combination of a radar-video all-in-one machine and a multi-view camera, relating to the field of intelligent intersection traffic management. The system comprises a radar-video all-in-one machine, a main control computer, and a multi-view camera. By combining the multiple cameras of the camera set in the all-in-one machine and configuring them for near-to-far image acquisition, targets are detected and identified from near to far and the results are then fused across cameras, which improves the accuracy of target detection and facilitates target tracking. Meanwhile, when vehicles wait in the queuing area of the intersection, the multi-view camera captures clear images of the vehicles below the pole, effectively solving the occlusion problem that arises when the all-in-one machine acquires data. In addition, the radar sensor acquires the motion information of each moving target from the intersection stop line to the end of the road, and this information is fused with the target data detected and identified by the camera set and the multi-view camera, further improving the accuracy of target identification and tracking.

Description

Holographic intersection traffic management system based on combination of radar and video all-in-one machine and multi-view camera
Technical Field
The invention relates to the field of intelligent traffic management, in particular to a holographic intersection traffic management system based on the combination of a radar and video all-in-one machine and a multi-view camera.
Background
Vehicle-road cooperation is an inevitable trend in the development of intelligent traffic, and urban intersections are its key nodes. With the development of the holographic intersection concept, major software and hardware vendors have made various deployments. A holographic intersection can provide high-precision, real-time traffic data and is an important means of bringing refined intelligent traffic management into practice and comprehensively empowering urban traffic management. Multi-view, multi-modal road condition sensing through various sensors accurately perceives the traffic state, digitally reconstructs the complex state of the intersection, and enables on-demand right-of-way scheduling and intelligent management. Among the modules of a holographic intersection, sensing is the first step, and the sensor configuration scheme directly affects the subsequent data processing and algorithm flow.
At present, the mainstream approach to holographic intersection traffic management is a multi-camera pure-vision scheme: cameras with different focal lengths and resolutions are configured, and computer-vision algorithms handle target detection, classification, tracking, re-identification, and similar tasks. However, in the existing pure-vision scheme, image acquisition is strongly affected by illumination and by factors such as night, glare, rain, snow, and smoke, so visibility is poor under adverse environmental conditions and the scheme lacks all-weather, around-the-clock capability. As a result, the efficiency and accuracy of existing holographic intersection traffic management are low.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a holographic intersection traffic management system based on the combination of a radar-video all-in-one machine and a multi-view camera, so as to solve the problem of low efficiency and accuracy in existing holographic intersection traffic management.
In order to achieve this purpose, the invention provides a holographic intersection traffic management system based on the combination of a radar-video all-in-one machine and a multi-view camera, comprising: a radar-video all-in-one machine, a multi-view camera, and a main control computer; the radar-video all-in-one machine is mounted on an electric police pole; the multi-view camera is mounted below the radar-video all-in-one machine and is connected with it;
the multi-view camera is used for acquiring images of a traffic light waiting area and an intersection queuing area corresponding to the electric police pole;
the radar and video integrated machine comprises a camera set and a radar sensor;
the camera set is used for acquiring the image of the whole intersection area corresponding to the electric police pole, the image of the vehicle waiting area of the opposite road, and the image of the road area extending from the opposite road to the far end;
the radar sensor is used for acquiring information of each moving target from a stop line of the intersection to the end of a far-end road;
the main control computer is used for carrying out cross-lens tracking identification and fusion on the images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole to obtain identification information and characteristic information of each target and pixel position information of the target in the picture;
the main control computer is further configured to perform target identification on images of different areas acquired by the camera set, establish track information of targets, and fuse the track information of each target acquired by the camera set and speed information and position information of each target acquired according to the radar sensor, where the track information includes identification information of the target, feature information, and real position information of the target, and the real position information of the target is acquired according to pixel position information of the target in a picture;
the main control computer is also used for fusing the cross-lens tracking identification and fusion data corresponding to the multi-view camera with the fusion data corresponding to the radar-vision all-in-one machine and carrying out intersection traffic management according to the fused data.
Further, the multi-view camera includes: a front view camera, a down view camera, and a rear view camera;
the front-view camera points in the same direction as the road, with the angle between its view-field center and the horizontal in the range of 10-40 degrees; the rear-view camera faces against the road, with the angle between its view-field center and the horizontal in the range of 10-40 degrees; and the view-field center of the down-view camera points vertically downward; together they sequentially collect images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole.
Further, when the images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole collected by the front-view camera, the downward-view camera and the rear-view camera are not enough to completely cover the areas on two sides of the road, the multi-view camera further comprises a left-side-view camera and a right-side-view camera;
the left-side-view camera points to the left, perpendicular to the road, with the angle between its view-field center and the horizontal in the range of 5-45 degrees; the right-side-view camera points to the right, perpendicular to the road, with the same angular range; the left-side-view and right-side-view cameras are used for collecting images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole;
the main control computer is also used for carrying out cross-lens tracking recognition and fusion on the images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole, which are acquired by the front-view camera, the downward-view camera, the rear-view camera, the left-side-view camera and the right-side-view camera, so as to obtain the identification information and the characteristic information of each target and the pixel position information of the target in the picture.
Further, the system further comprises: a second radar-video all-in-one machine mounted facing the direction opposite to the first;
the reverse-mounted machine is used for acquiring the radar speed and target images of a target as it drives toward the electric police pole from a distance;
and the main control computer is also used for identifying the target image to acquire the identification information and the characteristic information of the target and the pixel position information of the target in the picture, and performing data fusion and intersection traffic management according to the radar data, the identification information and the characteristic information of the target and the pixel position information of the target in the picture.
Further, the camera group includes: a global exposure camera, a close-up camera, a long-range camera;
the global exposure camera is used for acquiring the image of the whole intersection region corresponding to the electric police pole or the traffic light pole;
the close-range camera is used for acquiring images of the whole intersection area corresponding to the electric police pole or the traffic light pole and the vehicle waiting area of the opposite road;
the long-range camera is used for acquiring a vehicle waiting area image of an opposite road and a road area image extending to a far end, which correspond to an electric police pole or a traffic light pole;
the main control computer is specifically used for carrying out target identification on the whole intersection region image corresponding to the electric police pole or the traffic light pole acquired by the global exposure camera to obtain identification information and characteristic information of a target and pixel position information of the target in the image;
the main control computer is specifically used for carrying out target identification on the whole intersection area corresponding to the electric police pole or the traffic light pole acquired by the close-range camera and the vehicle waiting area image of the opposite road, matching the target detected by the global exposure camera through a preset image splicing and re-identification algorithm according to an identification result, and establishing track information, wherein the track information comprises identification information and characteristic information of the target and real position information of the target, and the real position information of the target is acquired according to the pixel position information of the target in the image;
the main control computer is specifically further used for detecting and identifying a far target by using a vehicle waiting area image of an opposite road corresponding to an electric police pole or a traffic light pole collected by a distant view camera and a road area image extending to a far end, matching an identification result with a target identification result corresponding to a near view camera through a preset image splicing algorithm, and updating the track information.
Further, the main control computer is specifically configured to obtain a distortion coefficient of each camera according to a compensation parameter of each camera in the camera group, and obtain a distortion coefficient of each camera according to a compensation parameter of each camera in the multi-view camera;
the main control computer is specifically used for acquiring perspective transformation coefficients of the cameras by selecting four non-collinear pixel points in the pictures acquired by the cameras and corresponding physical coordinates under a world coordinate system;
the main control computer is specifically further configured to obtain real position information of the target according to the distortion coefficient, the perspective transformation coefficient, and pixel position information of the target in the picture.
Further, the main control computer is specifically configured to perform point cloud segmentation and clustering on the data of each moving target from the intersection stop line to the end of the road acquired by the radar sensor, so as to obtain the speed information and position information of each target.
Furthermore, the line of sight of the radar-video all-in-one machine is installed at 3-15 degrees below the horizontal.
Further, the included angle between the visual field center of the global exposure camera and the close-range camera and the horizontal line ranges from 10 degrees to 50 degrees.
Further, the included angle between the center of the vision field of the long-range view camera and the horizontal line is about 5-45 degrees.
According to the holographic intersection traffic management system based on the combination of the radar-video all-in-one machine and the multi-view camera, the camera set inside the all-in-one machine combines multiple cameras configured for near-to-far image acquisition: targets in the captured pictures shrink from large to small, target detection and identification are performed separately on the near-view and far-view images, and the targets are then fused across cameras, which improves detection accuracy and facilitates tracking. Meanwhile, when vehicles wait in the queuing area of the intersection, small vehicles are easily occluded by large buses and coaches in front of them; by configuring the multi-view camera, images of the vehicles in front and behind can be clearly acquired, effectively solving the occlusion problem that arises when the all-in-one machine acquires data. Furthermore, the radar sensor acquires the speed and position information of each moving target from the intersection stop line to the end of the road, and this information is fused with the target data detected and identified by the camera set and the multi-view camera, further improving the accuracy of target identification and tracking.
Drawings
FIG. 1 is a first schematic view of an acquisition scene of the radar-video all-in-one machine and the multi-view camera provided by the invention;
FIG. 2 is a schematic structural diagram of the holographic intersection traffic management system based on the combination of the radar-video all-in-one machine and the multi-view camera provided by the invention;
FIG. 3 is a second schematic view of an acquisition scene of the radar-video all-in-one machine and the multi-view camera provided by the invention;
FIG. 4 is a third schematic view of an acquisition scene of the radar-video all-in-one machine and the multi-view camera provided by the invention;
FIG. 5 is a fourth schematic view of an acquisition scene of the radar-video all-in-one machine and the multi-view camera provided by the invention.
Detailed Description
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
The embodiment of the invention provides a holographic intersection traffic management system based on the combination of a radar-video all-in-one machine and a multi-view camera. As shown in fig. 2, the system comprises: a radar-video all-in-one machine 21, a main control computer 22, and a multi-view camera 23. The radar-video all-in-one machine 21 is mounted on an electric police pole; the multi-view camera 23 is mounted below the radar-video all-in-one machine 21 and connected with it; a system connection scene can be as shown in fig. 1. The multi-view camera 23 is used for acquiring images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole.
The multi-view camera is suspended beneath the radar-video all-in-one machine. The number of lenses, their focal lengths, and the installation angle of the multi-view camera depend on the height above the ground, the field of view to be covered, the width and number of lanes, and so on. For a narrower total lane width, a trinocular (three-view) camera configuration can be adopted; for a wider total lane width, a five-view camera configuration can be used. The focal length of the front and rear lenses of the multi-view camera is generally 6-20 mm; the focal length of the side-view lenses is generally 4-8 mm, with the angle between the view-field center and the horizontal in the range of 5-45 degrees; the down-view lens is generally 2-6 mm, with its view-field center pointing vertically downward. These parameters can be adjusted during installation according to specific road conditions.
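The lens-count choice described above can be sketched as a trivial selection rule. This is a hedged illustration only: the 20 m width threshold and all names below are assumptions, not values from the patent.

```python
def multiview_config(total_lane_width_m):
    """Pick a multi-view camera layout from the total lane width.

    A narrow road gets the trinocular (front/down/rear) layout; a wide
    road adds left and right side-view lenses for the five-view layout.
    The 20 m threshold is illustrative only.
    """
    lenses = ["front", "down", "rear"]
    if total_lane_width_m > 20.0:
        lenses += ["left_side", "right_side"]
    return lenses
```

In practice the choice would also weigh mounting height and the field of view to be covered, as the paragraph notes.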
The radar and video integrated machine 21 comprises a camera set and a radar sensor.
The radar-video all-in-one machine is usually mounted at a height of about 5-7 m on an electric police pole or traffic light pole (an electric police pole carries an electronic-police camera, or "electronic eye", and generally faces the traffic light pole). The angle between the line of sight of the all-in-one machine and the horizontal plane is usually 3-15 degrees, at which the intersection area can be observed well. The mainstream operating frequency of the radar sensor is 77-80 GHz, which satisfies motor-vehicle ranging in a scene of 8 bidirectional lanes over a longitudinal distance of 350 m. The lens focal lengths of the global exposure camera, long-range camera, and close-range camera are generally 6-20 mm. The angle between the view-field center of the global exposure camera and the close-range camera and the horizontal is about 10-50 degrees; for the long-range camera it is about 5-45 degrees. In the transverse direction, the camera set can cover 8 bidirectional lanes. In the longitudinal direction, the effective range of the global exposure camera is 20-200 m from the mounting pole; that of the close-range camera is about 20-300 m; and that of the long-range camera is about 100-350 m.
The cameras are respectively used for motor-vehicle information identification and target detection in the far and near views. The global exposure camera is responsible for collecting targets in the intersection area, 15-50 m from the installation position; the close-range camera is responsible for collecting targets in the intersection and the vehicle waiting area of the opposite road, 20-150 m from the installation position; and the long-range camera mainly collects vehicles from the vehicle waiting area to the end of the road, 150-250 m from the installation position. In this way, targets are detected, identified, and tracked from near to far.
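The near-to-far division of labour can be illustrated with the effective ranges quoted in this paragraph; the function and camera names below are illustrative assumptions, not part of the patent.

```python
# Effective coverage of each camera in the set, in metres from the
# mounting pole (ranges quoted from the paragraph above; names assumed).
COVERAGE_M = {
    "global_exposure": (15.0, 50.0),   # intersection area
    "close_range": (20.0, 150.0),      # intersection + opposite waiting area
    "long_range": (150.0, 250.0),      # waiting area to end of road
}

def cameras_covering(distance_m):
    """Return the cameras whose effective range contains distance_m."""
    return [name for name, (near, far) in COVERAGE_M.items()
            if near <= distance_m <= far]
```

The overlapping ranges mean a target handed from one camera to the next is briefly seen by both, which is what makes the cross-camera matching described later possible.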
The camera set is used for acquiring the whole intersection area image corresponding to the electric police rod, the vehicle waiting area image of the opposite road and the road area image extending from the opposite road to the far end corresponding to the electric police rod.
The radar sensor is used for acquiring information such as the speed, acceleration, position, and radar cross-section (RCS) of each moving target from the intersection stop line to the end of the far-end road.
The main control computer 22 is configured to perform cross-lens tracking identification and fusion on the images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole, so as to obtain identification information and feature information of each target and pixel position information of the target in the image; the system is also used for carrying out target identification on images of different areas acquired by the camera set, establishing track information of targets, and fusing the track information of each target acquired by the camera set and the speed information and the position information of each target acquired according to the radar sensor identification, wherein the track information comprises identification information and characteristic information of the target and real position information of the target, and the real position information of the target is acquired according to the pixel position information of the target in the picture; and the system is also used for fusing the cross-lens tracking identification and fusion data corresponding to the multi-view camera with the fusion data corresponding to the radar-vision all-in-one machine and carrying out intersection traffic management according to the fused data.
Further, the multi-view camera includes: a front-view camera, a down-view camera, and a rear-view camera. The front-view camera points in the same direction as the road, with the angle between its view-field center and the horizontal in the range of 5-45 degrees; the rear-view camera faces against the road, with the angle between its view-field center and the horizontal in the range of 5-45 degrees; and the view-field center of the down-view camera points vertically downward. Together they sequentially collect images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole.
Further, when the images of the traffic light waiting area and the intersection queuing area collected by the front-view, down-view, and rear-view cameras do not completely cover the areas on both sides of the road, the multi-view camera further comprises a left-side-view camera and a right-side-view camera. The left-side-view camera points to the left, perpendicular to the road, with the angle between its view-field center and the horizontal in the range of 5-45 degrees; the right-side-view camera points to the right, perpendicular to the road, with the same angular range. The two side-view cameras collect images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole. The main control computer is also used for performing cross-lens tracking, identification, and fusion on the images collected by the front-view, down-view, rear-view, left-side-view, and right-side-view cameras to obtain the identification information and feature information of each target and the pixel position of each target in the picture.
For example, as shown in fig. 3, at a common intersection a trinocular multi-view camera and a radar-video all-in-one machine are arranged on the electric police pole. When a target enters the coverage of the sensors, it is captured first by the rear-view camera of the multi-view camera, then by the down-view camera, and then by the front-view camera and the camera module of the all-in-one machine. Target detection is performed on the pictures taken by each camera of the multi-view camera and the all-in-one machine, and the results are fused with the targets captured by the radar sensor to obtain accurate target motion information.
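The final fusion step, associating camera detections with radar targets, could look like the following minimal nearest-neighbour sketch. The 3 m gate and all names are assumptions; a production system would use a tracking filter rather than frame-by-frame greedy matching.

```python
import math

def associate(camera_targets, radar_targets, gate_m=3.0):
    """Greedily pair camera detections with radar targets that lie
    within gate_m metres; both lists hold (id, x, y) tuples in
    road-plane coordinates.  Returns (camera_id, radar_id) pairs."""
    pairs, used = [], set()
    for cid, cx, cy in camera_targets:
        best_id, best_d = None, gate_m
        for rid, rx, ry in radar_targets:
            if rid in used:
                continue                     # each radar target pairs once
            d = math.hypot(cx - rx, cy - ry)
            if d < best_d:
                best_id, best_d = rid, d
        if best_id is not None:
            used.add(best_id)
            pairs.append((cid, best_id))
    return pairs
```

Once paired, the radar target contributes speed and range while the camera detection contributes identity and appearance, giving the "accurate target motion information" the paragraph describes.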
For another example, as shown in fig. 4, in a T-junction scene, for the road that exists in only one direction, placing the radar-video all-in-one machine on the electric police pole at the junction would let it illuminate only the central area of the junction, wasting much of its capability. Therefore, for the T-shaped road, the multi-view camera is placed at the electric police pole position to obtain the queuing information of vehicles in the road, and the radar-video all-in-one machine is placed at the traffic light pole position. In the other directions, the radar-video all-in-one machine and the multi-view camera are both placed at the electric police pole positions.
Further, in order to better observe vehicle information in both directions and manage vehicles globally, as shown in fig. 5, the system further includes a second radar-video all-in-one machine mounted facing the direction opposite to the first. The reverse-mounted machine is used for acquiring the radar speed and target images of a target as it drives toward the electric police pole from a distance. The main control computer is also used for identifying the target image to acquire the identification information and feature information of the target and the pixel position of the target in the picture, and for performing data fusion and intersection traffic management according to the radar data, the identification and feature information of the target, and the pixel position of the target in the picture.
Further, the camera group includes: a global exposure camera, a close-up camera, a long-range camera; the global exposure camera is used for acquiring the image of the whole intersection area corresponding to the electric police pole or the traffic light pole; the close-range camera is used for acquiring images of the whole intersection area corresponding to the electric police pole or the traffic light pole and the vehicle waiting area of the opposite road; the long-range camera is used for acquiring vehicle waiting area images of opposite roads corresponding to the electric police poles or the traffic light poles and road area images extending towards far ends.
The main control computer 22 is specifically configured to: perform target identification on the image of the whole intersection area corresponding to the electric police pole or traffic light pole collected by the global exposure camera, obtaining the identification information and feature information of each target and the pixel position of each target in the picture; perform target identification on the images of the intersection area and the vehicle waiting area of the opposite road collected by the close-range camera, match the results against the targets detected by the global exposure camera through preset image stitching and re-identification algorithms, and establish track information, where the track information includes the identification information and feature information of the target and the target's real position, the real position being obtained from the pixel position of the target in the picture; and detect and identify far targets using the images of the vehicle waiting area of the opposite road and the road area extending to the far end collected by the long-range camera, match the results with the target identification results of the close-range camera through a preset image stitching algorithm, and update the track information.
Specifically, target information such as license plate, body color, and vehicle type is first identified in the picture taken by the global exposure camera. Target detection is then performed on the picture acquired by the close-range camera and matched against the targets detected by the global exposure camera; each target is tracked, assigned an ID, and given track information. Using image stitching and re-identification algorithms, the vehicle IDs detected by the close-range camera are matched with the targets detected by the global exposure camera, and the license plate, color, vehicle type, and other related information are attached to the vehicle with the corresponding ID. Finally, the long-range camera detects distant targets, which are matched with the targets detected in the close-range camera's picture. In general, the picture acquired by the long-range camera is a local enlargement of the far area of the close-range camera's picture, so an image-stitching method can be used to transform target boxes obtained in the long-range camera into the close-range camera.
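A toy version of the attribute-based matching between cameras might score agreement on license plate, colour, and vehicle type. The field names, weights, and threshold below are invented for illustration; the patent does not specify its re-identification algorithm.

```python
def match_score(a, b):
    """Attribute-similarity score between two detections from different
    cameras; plate agreement dominates, colour and type refine."""
    score = 0.0
    if a.get("plate") and a.get("plate") == b.get("plate"):
        score += 0.6
    if a.get("color") == b.get("color"):
        score += 0.2
    if a.get("type") == b.get("type"):
        score += 0.2
    return score

def match_targets(near_tracks, far_detections, threshold=0.4):
    """Pair each far-camera detection with its best-scoring near-camera
    track, keeping only pairs above the threshold."""
    pairs = []
    for det in far_detections:
        best = max(near_tracks, key=lambda t: match_score(t, det), default=None)
        if best is not None and match_score(best, det) >= threshold:
            pairs.append((best["id"], det["id"]))
    return pairs
```

Real systems would combine such attribute cues with learned appearance embeddings and the geometric overlap obtained from image stitching.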
Further, the main control computer 22 is specifically configured to obtain a distortion coefficient of each camera according to a compensation parameter of each camera in the camera group, and obtain a distortion coefficient of each camera according to a compensation parameter of each camera in the multi-view camera; acquiring perspective transformation coefficients of the cameras by selecting four non-collinear pixel points in the pictures acquired by the cameras and corresponding physical coordinates under a world coordinate system; and acquiring the real position information of the target according to the distortion coefficient, the perspective transformation coefficient and the pixel position information of the target in the picture.
Specifically, the camera distortion correction parameters may be obtained, for example, by Zhang's calibration method or from the compensation parameters given by the manufacturer. The perspective transformation coefficient is solved by selecting four non-collinear pixel points in the acquired picture together with their corresponding physical coordinates in the world coordinate system, and the real position of the target is then obtained from the distortion coefficient, the perspective transformation coefficient, and the detected pixel position. In general, a deep-learning-based detection algorithm runs on the original picture acquired by the camera, without distortion correction, so its result is the pixel position of the target in that picture; therefore, after distortion compensation is applied using the correction coefficients, the target pixel is mapped onto the bird's-eye view using the perspective transformation coefficient.
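As a hedged illustration of the four-point solution described above (the point coordinates below are made-up values and the helper names are hypothetical), the perspective transformation matrix can be solved by the standard direct linear transform with the last element fixed to 1, after which a detected pixel is mapped to world-plane coordinates:

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Solve the 3x3 perspective-transform matrix from four non-collinear
    pixel points (src) and their physical world-plane coordinates (dst),
    using the direct linear transform with h33 fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def pixel_to_world(pt, H):
    """Map an (already distortion-compensated) pixel to world coordinates."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w
```

Note that the pixel fed to `pixel_to_world` is assumed to have been distortion-compensated first, matching the processing order described in the paragraph above.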
Further, the radar-vision all-in-one machine further comprises a configuration unit. The configuration unit is used for performing time calibration on the image data acquired by each camera and synchronizing the timestamps corresponding to the image data acquired by different cameras, and for performing spatial calibration of the installation position of the radar sensor, aligning it with the lane information of the monitored area according to a position error correction coefficient. It should be noted that, because of video encoding/decoding and transmission delay, the timestamps of the cameras in the camera group are not synchronized. The cameras are therefore first time-synchronized; after the acquired pictures are transmitted to the edge computer, the timestamps of the acquired data are aligned in time, ensuring that the data to be processed from each camera at each moment belong to the same frame.
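The timestamp alignment described above can be sketched as follows. This is a minimal illustration under the assumption that each camera stream is a time-ordered list of (timestamp, frame) pairs after timing; the function name `align_frames` and the tolerance value are illustrative, not taken from the disclosure:

```python
from bisect import bisect_left

def align_frames(streams, ref_cam, tol_ms=20):
    """For each frame of the reference camera, pick from every other camera
    the frame whose synchronized timestamp is nearest; drop the instant if
    any camera has no frame within tol_ms, so every group is one 'frame'."""
    aligned = []
    for t_ref, ref_frame in streams[ref_cam]:
        group = {ref_cam: ref_frame}
        for cam, frames in streams.items():
            if cam == ref_cam:
                continue
            ts = [t for t, _ in frames]
            i = bisect_left(ts, t_ref)
            candidates = [j for j in (i - 1, i) if 0 <= j < len(ts)]
            if not candidates:
                group = None; break
            j = min(candidates, key=lambda k: abs(ts[k] - t_ref))
            if abs(ts[j] - t_ref) > tol_ms:
                group = None; break
            group[cam] = frames[j][1]
        if group:
            aligned.append((t_ref, group))
    return aligned
```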
Further, the main control computer 22 is specifically configured to perform point cloud segmentation and clustering on the data of each moving target from the intersection stop line to the end of the road, acquired by the radar sensor, to obtain the speed information and position information of each target.
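A minimal sketch of such segmentation and clustering is given below. It uses a simple Euclidean flood-fill in place of whatever clustering algorithm the system actually employs, and the thresholds `eps` and `min_pts` are assumed example values:

```python
import math
from collections import deque

def cluster_radar_points(points, eps=1.5, min_pts=2):
    """Group radar detections (x, y, radial_speed) into targets: points
    closer than eps metres belong to the same cluster; clusters smaller
    than min_pts are discarded as noise. Returns one
    (centroid_x, centroid_y, mean_speed) tuple per target."""
    n = len(points)
    seen = [False] * n
    targets = []
    for s in range(n):
        if seen[s]:
            continue
        # Flood-fill the connected component around seed point s.
        comp, queue = [], deque([s])
        seen[s] = True
        while queue:
            i = queue.popleft()
            comp.append(i)
            xi, yi, _ = points[i]
            for j in range(n):
                if not seen[j] and math.hypot(points[j][0] - xi,
                                              points[j][1] - yi) <= eps:
                    seen[j] = True
                    queue.append(j)
        if len(comp) >= min_pts:
            cx = sum(points[i][0] for i in comp) / len(comp)
            cy = sum(points[i][1] for i in comp) / len(comp)
            cv = sum(points[i][2] for i in comp) / len(comp)
            targets.append((cx, cy, cv))
    return targets
```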
According to the holographic intersection traffic management system based on the combination of the radar-vision all-in-one machine and the multi-view camera, a combined configuration of multiple cameras in the camera group of the radar-vision all-in-one machine and a near-to-far view image acquisition method are adopted, so that targets in the acquired camera images change from large to small; target detection and identification are performed on the near-view and far-view images respectively, and the targets are then subjected to multi-camera fusion processing, which improves the accuracy of target detection and facilitates target tracking. Meanwhile, when vehicles queue at the intersection waiting area, small vehicles are easily occluded by large buses in front; by configuring the multi-view camera, clear images of the vehicles below the intersection can still be acquired, effectively solving the occlusion problem when the radar-vision all-in-one machine acquires data. In addition, the radar sensor acquires the speed information and position information of each moving target from the intersection stop line to the end of the road, and this information is fused with the target data detected and identified by the camera group and the multi-view camera, further improving the accuracy of target identification and tracking.
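As a hedged sketch of how the radar targets might be fused with camera-derived tracks (the disclosure does not specify the association method; a greedy nearest-neighbour association with a gating distance is assumed here purely for illustration, and all names are hypothetical):

```python
import math

def associate_radar_to_tracks(tracks, radar_targets, gate=3.0):
    """Greedily pair each radar target (x, y, speed) with the nearest
    camera track (track_id, x, y) within a gating distance in metres,
    so the radar speed can be attached to the visual track.
    Unmatched radar targets are returned separately."""
    pairs, used = {}, set()
    # Enumerate all candidate pairs, closest first, and take them greedily.
    cand = sorted(
        ((math.hypot(tx - rx, ty - ry), tid, ri)
         for tid, tx, ty in tracks
         for ri, (rx, ry, _) in enumerate(radar_targets)),
        key=lambda c: c[0])
    for d, tid, ri in cand:
        if d > gate:
            break                      # remaining pairs are all farther away
        if tid in pairs or ri in used:
            continue                   # both members still unassigned?
        pairs[tid] = radar_targets[ri]
        used.add(ri)
    unmatched = [r for i, r in enumerate(radar_targets) if i not in used]
    return pairs, unmatched
```

A production system would more likely use a Hungarian assignment with motion gating, but the greedy form keeps the idea of the fusion step visible.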
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a "non-exclusive or".
Those of skill in the art will also appreciate that the various illustrative logical blocks, elements, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate the interchangeability of hardware and software, various illustrative components, elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks or elements described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media that facilitate transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general purpose or special purpose computer. For example, such computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store program code in the form of instructions or data structures and which can be read by a general-purpose or special-purpose computer or processor. Also, any connection is properly termed a computer-readable medium; thus, if the software is transmitted from a website, server, or other remote source via coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, these media are included in the definition. Disk and disc, as used herein, include compact disc, laser disc, optical disc, DVD, floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included within computer-readable media.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A holographic intersection traffic management system based on the combination of a radar-vision all-in-one machine and a multi-view camera, characterized in that the system comprises: a radar-vision all-in-one machine, a multi-view camera and a main control machine; the radar-vision all-in-one machine is arranged on an electric police pole; the multi-view camera is arranged below the radar-vision all-in-one machine and is connected with the radar-vision all-in-one machine;
the multi-view camera is used for acquiring images of a traffic light waiting area and an intersection queuing area corresponding to the electric police pole;
the radar and video integrated machine comprises a camera set and a radar sensor;
the camera set is used for acquiring the whole intersection area image corresponding to the electric police rod, the vehicle waiting area image of the opposite road and the road area image extending to the far end from the opposite road corresponding to the electric police rod;
the radar sensor is used for acquiring information of each moving target from a stop line of the intersection to the end of a far-end road;
the main control computer is used for carrying out cross-lens tracking identification and fusion on the images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole to obtain identification information and characteristic information of each target and pixel position information of the target in the picture;
the main control computer is further configured to perform target identification on images of different areas acquired by the camera set, establish track information of targets, and fuse the track information of each target acquired by the camera set and speed information and position information of each target acquired according to the radar sensor, where the track information includes identification information of the target, feature information, and real position information of the target, and the real position information of the target is acquired according to pixel position information of the target in a picture;
the main control computer is also used for fusing the cross-lens tracking identification and fusion data corresponding to the multi-view camera with the fusion data corresponding to the radar-vision all-in-one machine and carrying out intersection traffic management according to the fused data.
2. The holographic intersection traffic management system based on the combination of the all-in-one radar vision machine and the multi-view camera, as claimed in claim 1, wherein the multi-view camera comprises: a front view camera, a down view camera, and a rear view camera;
the front-view camera is in the same direction with the road, the included angle range of the visual field center and the horizontal line is 5-45 degrees, the rear-view camera is opposite to the road, the included angle range of the visual field center and the horizontal line is 5-45 degrees, and the visual field center of the downward-view camera is vertically downward and is used for sequentially collecting images of a traffic light waiting area and an intersection queuing area corresponding to the electric warning rod.
3. The holographic intersection traffic management system based on the combination of the radar-vision all-in-one machine and the multi-view camera according to claim 2, characterized in that, when the road is wide and the images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole collected by the front-view camera, the down-view camera and the rear-view camera are not enough to completely cover the areas on both sides of the road, the multi-view camera further comprises a left-side-view camera and a right-side-view camera;
the left side view camera is perpendicular to the road to the left, the included angle range of the visual field center and the horizontal line is 5-45 degrees, the right side view camera is perpendicular to the road to the right, the included angle range of the visual field center and the horizontal line is 5-45 degrees, and the left side view camera and the right side view camera are used for collecting images of two sides of a traffic light waiting area and an intersection queuing area corresponding to the electric police pole;
the main control computer is also used for carrying out cross-lens tracking recognition and fusion on the images of the traffic light waiting area and the intersection queuing area corresponding to the electric police pole, which are acquired by the front-view camera, the downward-view camera, the rear-view camera, the left-side-view camera and the right-side-view camera, so as to obtain the identification information and the characteristic information of each target and the pixel position information of the target in the picture.
4. The holographic intersection traffic management system based on the combination of the radar-vision all-in-one machine and the multi-view camera according to claim 1, characterized in that the system further comprises: a reverse-mounted radar-vision all-in-one machine, which is mounted in the direction opposite to the radar-vision all-in-one machine;
the reverse-mounted radar-vision all-in-one machine is used for acquiring the radar speed and the target image of a target when the target drives towards the electric police pole from a distance;
and the main control computer is also used for identifying the target image to acquire the identification information and the characteristic information of the target and the pixel position information of the target in the picture, and performing data fusion and intersection traffic management according to the radar data, the identification information and the characteristic information of the target and the pixel position information of the target in the picture.
5. The holographic intersection traffic management system based on the combination of the all-in-one radar vision machine and the multi-view camera as claimed in any one of claims 1 to 4, wherein the camera group comprises: a global exposure camera, a close-up camera, a long-range camera;
the global exposure camera is used for acquiring the image of the whole intersection area corresponding to the electric police pole or the traffic light pole;
the close-range camera is used for acquiring images of the whole intersection area corresponding to the electric police pole or the traffic light pole and the vehicle waiting area of the opposite road;
the long-range camera is used for acquiring a vehicle waiting area image of an opposite road and a road area image extending to a far end, which correspond to an electric police pole or a traffic light pole;
the main control computer is specifically configured to perform target identification on the image of the whole intersection area corresponding to the electric police pole or traffic light pole collected by the global exposure camera, to obtain the identification information and feature information of each target and the pixel position information of the target in the picture;
the main control computer is specifically further configured to perform target identification on the image of the whole intersection area and the vehicle waiting area of the opposite road collected by the close-range camera, match the identified targets with the targets detected by the global exposure camera through a preset image stitching and re-identification algorithm, and establish track information, wherein the track information comprises the identification information and feature information of the target and the real position information of the target, the real position information being obtained from the pixel position information of the target in the picture;
the main control computer is specifically further configured to detect and identify distant targets using the image of the vehicle waiting area of the opposite road and the image of the road area extending to the far end collected by the long-range camera, match the identification result with the target identification result of the close-range camera through a preset image stitching algorithm, and update the track information.
6. The holographic intersection traffic management system based on the combination of the all-in-one radar vision machine and the multi-view cameras as claimed in any one of claims 1 to 4, wherein the master control machine is specifically configured to obtain distortion coefficients of the cameras according to compensation parameters of the cameras in the camera group, and obtain distortion coefficients of the cameras according to the compensation parameters of the cameras in the multi-view cameras;
the main control computer is specifically used for acquiring perspective transformation coefficients of the cameras by selecting four non-collinear pixel points in the pictures acquired by the cameras and corresponding physical coordinates under a world coordinate system;
and the main control computer is specifically used for acquiring the real position information of the target according to the distortion coefficient, the perspective transformation coefficient and the pixel position information of the target in the picture.
7. The holographic intersection traffic management system based on the combination of the all-in-one radar vision machine and the multi-view camera as claimed in claim 1,
the main control computer is specifically used for carrying out point cloud segmentation and clustering on data of each moving target from a crossing stop line to the end of a road, which are acquired by the radar sensor, so as to obtain speed information and position information of each target.
8. The holographic intersection traffic management system based on the combination of the radar and vision all-in-one machine and the multi-view camera as claimed in any one of claims 1 to 7, wherein the installation angle of the radar and vision all-in-one machine is 3 to 15 degrees downward from the horizontal direction.
9. The holographic intersection traffic management system based on the combination of the radar-vision all-in-one machine and the multi-view camera as claimed in any one of claims 1 to 7, wherein the included angle between the visual field center of the global exposure camera or the close-range camera and the horizontal line is in the range of 10 to 50 degrees.
10. The holographic intersection traffic management system based on the combination of the radar-vision all-in-one machine and the multi-view camera as claimed in any one of claims 1 to 7, wherein the included angle between the visual field center of the long-range camera and the horizontal line is in the range of 5 to 45 degrees.
CN202211366486.1A 2022-11-01 2022-11-01 Holographic intersection traffic management system based on combination of radar and video all-in-one machine and multi-view camera Pending CN115798232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211366486.1A CN115798232A (en) 2022-11-01 2022-11-01 Holographic intersection traffic management system based on combination of radar and video all-in-one machine and multi-view camera


Publications (1)

Publication Number Publication Date
CN115798232A true CN115798232A (en) 2023-03-14

Family

ID=85435083




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Feng Shu

Inventor after: Yan Jun

Inventor after: Li Mengdi

Inventor after: Wang Peng

Inventor after: Wang Wei

Inventor before: Feng Shu

Inventor before: Yan Hao

Inventor before: Li Mengdi

Inventor before: Wang Peng

Inventor before: Wang Wei
