CN117496452A - Method and system for associating intersection multi-camera with radar integrated machine detection target - Google Patents


Info

Publication number
CN117496452A
Authority
CN
China
Prior art keywords
camera
view
detection
radar
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311297898.9A
Other languages
Chinese (zh)
Inventor
闫军 (Yan Jun)
李孟迪 (Li Mengdi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Intercommunication Technology Co ltd
Original Assignee
Smart Intercommunication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Intercommunication Technology Co ltd filed Critical Smart Intercommunication Technology Co ltd
Priority to CN202311297898.9A priority Critical patent/CN117496452A/en
Publication of CN117496452A publication Critical patent/CN117496452A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30236 Traffic on road, railway or crossing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method and a system for associating the detection targets of a multi-view camera and a radar-video all-in-one machine at an intersection, relating to the field of intelligent traffic management. A homography matrix is obtained from pictures acquired by the multi-view camera and the camera of the radar-video all-in-one machine; line-pressing detection and lane-number marking are then performed on the targets detected by the multi-view camera; the detection results for the line-pressing targets of the multi-view camera are projected into the field of view of the radar-video all-in-one machine camera; and the detection results of the two devices are finally associated according to the intersection-over-union information and lane-number information of the detection frames. The observation relationship between the devices is thus fully considered, and the association of the detection targets of the two sensors is realized on the basis of a homography transformation. Compared with fusion methods based on deep learning, the algorithm complexity and the hardware configuration requirements are reduced; in addition, because large-scale data acquisition and model training are not required, the complexity and cost of implementation are lowered.

Description

Method and system for associating intersection multi-camera with radar integrated machine detection target
Technical Field
The invention relates to the field of intelligent traffic management, in particular to a method and a system for associating the detection targets of an intersection multi-view camera and a radar-video all-in-one machine.
Background
In an intelligent traffic system, intersections are important nodes of traffic flow and are also accident-prone areas. Monitoring and controlling traffic flow at intersections is therefore of great significance. At present, traffic flow monitoring systems based on sensing devices such as monocular cameras, electric-police (traffic-enforcement) cameras, multi-view cameras and radar-video all-in-one machines are widely applied, and can rapidly and accurately capture, detect and track vehicles and pedestrians on the road. In a holographic intersection scene, the sensor configuration scheme directly influences the subsequent data processing and algorithm flow. A single sensor can hardly meet the accurate sensing requirements of a complex scene, so multiple sensors usually need to be associated and fused.
A multi-view camera consists of a front-view lens, a side-view lens and a down-view lens, and is usually mounted at the lower end of the straight arm of an electric-police lamp post. These cameras are used to detect queuing at traffic intersections; their field of view is generally short, but they are free of occlusion problems. In contrast, the radar-video all-in-one machine is installed at the upper end of the straight arm of a traffic-light or electric-police pole and is used to monitor road sections and intersections; its field of view is wide, but it may be affected by occlusion. Because of these differences in field of view, each device has its own advantages and disadvantages: when not occluded, the multi-view camera can quickly and accurately capture queuing conditions, while the radar-video all-in-one machine is suitable for monitoring road sections and intersections over a large range.
At present, when sensors are associated and fused, a deep learning method is generally adopted to fuse the multi-view camera results with the radar detection results. However, such methods usually have high algorithm complexity, require a large amount of data to be collected for labelling and pre-training, and place high computing-power demands on the hardware.
Disclosure of Invention
To solve the above technical problems, the invention provides a method and a system for associating the detection targets of an intersection multi-view camera and a radar-video all-in-one machine, which can address the high complexity of existing association algorithms and their high hardware requirements.
To achieve the above object, in one aspect, the invention provides a method for associating the detection targets of a multi-view camera and a radar-video all-in-one machine at an intersection, the method comprising:
respectively acquiring detection frames of vehicle targets from the multi-view camera and the camera of the radar-video all-in-one machine through a preset target detection model;
performing joint calibration on the multi-camera and the radar video all-in-one machine to obtain a homography matrix of a vehicle target in a field of view of the multi-camera projected to a field of view of the radar video all-in-one machine;
performing line-pressing detection on the detection results of the multi-view camera at a stop-line area of the road waiting area in the pictures acquired by the multi-view camera, and determining the lane to which each vehicle target belongs;
and projecting the line-pressing detection targets of the multi-view camera into the field of view of the radar-video all-in-one machine camera according to the homography matrix, and associating and merging the multi-view camera detection results with the target detection results of the radar-video all-in-one machine through the intersection-over-union information of the detection frames of the vehicle targets of the two devices and the lane information.
Further, the step of performing joint calibration on the multi-view camera and the radar-video all-in-one machine according to the detection frames of their vehicle targets, and obtaining the homography matrix by which a vehicle target in the multi-view camera field of view is projected into the radar-video all-in-one machine camera field of view, comprises the following steps:
carrying out distortion correction on the pictures acquired by the multi-view camera according to the distortion parameters of the multi-view camera;
selecting a preset number of marker points in the overlapping area between the pictures acquired by the multi-view camera and the camera field of view of the radar-video all-in-one machine, and selecting the same number of marker points at the corresponding positions of the pictures acquired by the radar-video all-in-one machine;
and obtaining the homography matrix by which a vehicle target in the multi-view camera field of view is projected into the radar-video all-in-one machine camera field of view, according to the pixels of the preset number of marker points, the pixels of the corresponding marker points, the left singular matrix, the diagonal matrix and the right singular matrix.
Further, the step of obtaining the homography matrix by which a vehicle target in the multi-view camera field of view is projected into the radar-video all-in-one machine camera field of view, according to the pixels of the preset number of marker points, the pixels of the corresponding marker points, the left singular matrix, the diagonal matrix and the right singular matrix, comprises the following steps:
according to the formula[U S V]Calculation was performed with =svd (a), h=v (: 9), where (x) i ,y i ) For pixel positions of 4 marker points selected in the overlapping region of the multi-view camera picture, i=1, 2,3,4, (X) i ,Y i ) The pixel positions of the 4 selected mark points which are in one-to-one correspondence with the same positions of the picture of the all-in-one machine are (X) i ,Y i ) I=1, 2,3,4, u is a left singular matrix, S is a diagonal matrix, elements on the diagonal are singular values, the arrangement is from large to small, V is a right singular matrix, and H is a homography matrix.
Further, the step of projecting the line-pressing detection targets of the multi-view camera into the camera field of view of the radar-video all-in-one machine according to the homography matrix includes:
according to the formulaProjection is performed, wherein a ij (i=1, 2,3; j=1, 2, 3) is an element term of the homography matrix H; (x, y, z) is the homogeneous coordinates of the pixel points of the image of the multi-view camera, and the corresponding two-dimensional coordinates are (x, y, 1); (X, Y, Z) is the coordinate transformed to the view field of the radar integrated machine, the corresponding two-dimensional homogeneous coordinate is (X '=x/Z, Y' =y/Z), and the point (X ', Y') is the transformed two-dimensional plane coordinate of the pixel point corresponding to the original image.
Further, the method further comprises: and carrying out Kalman filtering and track management on the associated targets.
In another aspect, the invention provides a system for associating the detection targets of a multi-view camera and a radar-video all-in-one machine at an intersection, the system comprising: an acquisition module, used for respectively acquiring detection frames of vehicle targets from the multi-view camera and the camera of the radar-video all-in-one machine through a preset target detection model;
the acquisition module is also used for carrying out joint calibration on the multi-camera and the radar video all-in-one machine, and acquiring a homography matrix of a vehicle target in the field of view of the multi-camera projected to the field of view of the radar video all-in-one machine;
the determining module is used for detecting the line pressing of the detection result of the multi-camera in a stop line area of a road waiting area in which the multi-camera collects pictures and determining a lane to which a vehicle target belongs;
and an association module, used for projecting the line-pressing detection targets of the multi-view camera into the field of view of the radar-video all-in-one machine camera according to the homography matrix, and associating and merging the multi-view camera detection results with the target detection results of the radar-video all-in-one machine through the intersection-over-union information of the detection frames of the vehicle targets of the two devices and the lane information.
Further, the acquisition module is specifically used for carrying out distortion correction on the pictures acquired by the multi-view camera according to the distortion parameters of the multi-view camera; selecting a preset number of marker points in the overlapping area between the pictures acquired by the multi-view camera and the camera field of view of the radar-video all-in-one machine, and selecting the same number of marker points at the corresponding positions of the pictures acquired by the radar-video all-in-one machine; and obtaining the homography matrix by which a vehicle target in the multi-view camera field of view is projected into the radar-video all-in-one machine camera field of view, according to the pixels of the preset number of marker points, the pixels of the corresponding marker points, the left singular matrix, the diagonal matrix and the right singular matrix.
Further, the acquisition module is specifically also used for assembling the 8 × 9 coefficient matrix A from the 4 marker-point pairs, each pair contributing the two rows

[ x_i  y_i  1  0    0    0  -X_i·x_i  -X_i·y_i  -X_i ]
[ 0    0    0  x_i  y_i  1  -Y_i·x_i  -Y_i·y_i  -Y_i ],  i = 1, 2, 3, 4,

and performing the calculation [U S V] = svd(A), h = V(:, 9), where (x_i, y_i), i = 1, 2, 3, 4, are the pixel positions of the 4 marker points selected in the overlapping region of the multi-view camera picture; (X_i, Y_i), i = 1, 2, 3, 4, are the pixel positions of the 4 marker points selected in one-to-one correspondence at the same positions of the radar-video all-in-one machine picture; U is the left singular matrix; S is the diagonal matrix whose diagonal elements are the singular values arranged from large to small; V is the right singular matrix; and the 9 elements of h, reshaped into a 3 × 3 matrix, form the homography matrix H.
Further, the association module is specifically used for performing the projection according to the formula

[ X ]   [ a11  a12  a13 ] [ x ]
[ Y ] = [ a21  a22  a23 ] [ y ]
[ Z ]   [ a31  a32  a33 ] [ 1 ]

where a_ij (i = 1, 2, 3; j = 1, 2, 3) are the elements of the homography matrix H; (x, y, 1) are the homogeneous coordinates of a pixel point (x, y) in the multi-view camera image; (X, Y, Z) are the transformed homogeneous coordinates in the field of view of the radar-video all-in-one machine, the corresponding two-dimensional coordinates being (x' = X/Z, y' = Y/Z); and the point (x', y') is the transformed two-dimensional plane coordinate of the pixel point corresponding to the original image.
Further, the system further comprises: a filtering module;
and the filtering module is used for carrying out Kalman filtering and track management on the associated targets.
According to the method and system for associating the detection targets of the intersection multi-view camera and the radar-video all-in-one machine provided by the invention, a homography matrix is obtained from the pictures acquired by the multi-view camera and the camera of the radar-video all-in-one machine; line-pressing detection and lane-number marking are then performed on the targets detected by the multi-view camera; the detection results for the line-pressing targets of the multi-view camera are projected into the field of view of the radar-video all-in-one machine camera; and the detection results of the two devices are associated according to the intersection-over-union information and lane-number information of the detection frames. The observation relationship between the devices is thus fully considered, and the association of the detection targets of the two sensors is realized on the basis of a homography transformation; compared with fusion methods based on deep learning, the algorithm complexity and the hardware configuration requirements are reduced. In addition, because large-scale data acquisition and model training are not required, the complexity and cost of implementation are lowered.
Drawings
FIG. 1 is a flow chart of a method for associating a multi-view camera at an intersection with a radar integrated machine detection target;
fig. 2 is a schematic structural diagram of a correlation system of a multi-view camera and a radar integrated machine for detecting targets at an intersection.
Detailed Description
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
As shown in fig. 1, the method for associating the intersection multi-camera with the radar integrated machine detection target provided by the embodiment of the invention comprises the following steps:
101. Detection frames of the vehicle targets are respectively acquired from the multi-view camera and the camera of the radar-video all-in-one machine through a preset target detection model.
It should be noted that the multi-view camera adopted in the embodiment of the invention can be hung at the lower end of the straight arm of an electric-police lamp post, where it can well detect queuing at the traffic intersection. The radar-video all-in-one machine is arranged at the upper end of the straight arm of the traffic-light pole on the opposite side or the electric-police pole on the same side, and has a wider field of view. By associating the detection results of the intersection multi-view camera and the radar-video all-in-one machine, the invention improves the monitoring range and effect in a holographic intersection scene, realizes rapid and accurate capture, detection and tracking of vehicles and pedestrians on the road, and provides data support for subsequent services such as traffic event analysis and signal control strategies.
Specifically, pictures are first acquired from the multi-view camera and the radar-video all-in-one machine, and detection frames of the vehicle targets of the two devices are then obtained by a pre-trained target detection model.
102. And carrying out joint calibration on the multi-camera and the radar video all-in-one machine to obtain a homography matrix of the vehicle target in the field of view of the multi-camera projected to the field of view of the radar video all-in-one machine.
For the embodiment of the present invention, step 102 may specifically include: carrying out distortion correction on the pictures acquired by the multi-view camera according to the distortion parameters of the multi-view camera; selecting a preset number of marker points in the overlapping area between the pictures acquired by the multi-view camera and the camera field of view of the radar-video all-in-one machine, and selecting the same number of marker points at the corresponding positions of the pictures acquired by the radar-video all-in-one machine; and obtaining the homography matrix by which a vehicle target in the multi-view camera field of view is projected into the radar-video all-in-one machine camera field of view, according to the pixels of the preset number of marker points, the pixels of the corresponding marker points, the left singular matrix, the diagonal matrix and the right singular matrix.
The step of obtaining the homography matrix by which a vehicle target in the multi-view camera field of view is projected into the radar-video all-in-one machine camera field of view, according to the pixels of the preset number of marker points, the pixels of the corresponding marker points, the left singular matrix, the diagonal matrix and the right singular matrix, comprises: assembling the 8 × 9 coefficient matrix A from the 4 marker-point pairs, each pair contributing the two rows

[ x_i  y_i  1  0    0    0  -X_i·x_i  -X_i·y_i  -X_i ]
[ 0    0    0  x_i  y_i  1  -Y_i·x_i  -Y_i·y_i  -Y_i ],  i = 1, 2, 3, 4,

and performing the calculation [U S V] = svd(A), h = V(:, 9), where (x_i, y_i), i = 1, 2, 3, 4, are the pixel positions of the 4 marker points selected in the overlapping region of the multi-view camera picture; (X_i, Y_i), i = 1, 2, 3, 4, are the pixel positions of the 4 marker points selected in one-to-one correspondence at the same positions of the radar-video all-in-one machine picture; U is the left singular matrix; S is the diagonal matrix whose diagonal elements are the singular values arranged from large to small; V is the right singular matrix; and the 9 elements of h, reshaped into a 3 × 3 matrix, form the homography matrix H.
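The SVD-based calibration step above can be sketched as follows, a minimal illustration assuming 4 hand-picked marker-point pairs (function names and the sample coordinates are illustrative, not taken from the patent):

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the homography H mapping multi-view camera pixels to
    radar-video all-in-one machine pixels from 4 marker-point pairs."""
    A = []
    for (x, y), (X, Y) in zip(src_pts, dst_pts):
        # Each correspondence contributes two rows of the 8x9 matrix A
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    A = np.asarray(A, dtype=float)
    # [U S V] = svd(A); h = V(:,9) in MATLAB notation is the right-singular
    # vector of the smallest singular value, i.e. the last row of numpy's Vt
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]
    return (h / h[-1]).reshape(3, 3)  # normalise so that H[2,2] = 1

# Hypothetical marker points in the overlapping region of the two pictures
src = [(100.0, 200.0), (400.0, 200.0), (100.0, 500.0), (400.0, 500.0)]
dst = [(320.0, 110.0), (610.0, 120.0), (300.0, 380.0), (620.0, 400.0)]
H = estimate_homography(src, dst)
```

With exact correspondences, 4 point pairs determine H up to scale, so the nullspace of A is one-dimensional and the last right-singular vector recovers it.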
103. Line-pressing detection is performed on the detection results of the multi-view camera at a stop-line area of the road waiting area in the pictures acquired by the multi-view camera, and the lane to which each vehicle target belongs is determined.
Specifically, a detection line is defined near the stop line of the road waiting area in the pictures acquired by the multi-view camera, line-pressing detection is performed on the detection results of the multi-view camera, and the lane to which each vehicle target belongs is determined according to the line-pressing detection result.
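A minimal illustration of this step, assuming axis-aligned detection boxes (x1, y1, x2, y2) in the multi-view camera image, a horizontal detection line and lane boundaries given as x-intervals (all names, positions and widths here are hypothetical, not specified by the patent):

```python
def presses_line(box, line_y):
    """A target 'presses' the detection line if the line's y-coordinate
    falls inside the vertical extent of its detection box."""
    x1, y1, x2, y2 = box
    return y1 <= line_y <= y2

def lane_number(box, lane_bounds):
    """Assign a lane number by testing which lane's x-interval contains
    the horizontal centre of the detection box; returns None if outside."""
    cx = (box[0] + box[2]) / 2.0
    for lane_no, (x_lo, x_hi) in enumerate(lane_bounds, start=1):
        if x_lo <= cx < x_hi:
            return lane_no
    return None

# Hypothetical stop-line position and three lanes of 120 px width each
LINE_Y = 450
LANES = [(0, 120), (120, 240), (240, 360)]
detections = [(30, 400, 100, 480), (150, 200, 220, 300)]
pressing = [(d, lane_number(d, LANES)) for d in detections if presses_line(d, LINE_Y)]
```

Only the targets in `pressing`, together with their lane numbers, would be handed to the projection and association stage.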
104. The line-pressing detection targets of the multi-view camera are projected into the field of view of the radar-video all-in-one machine camera according to the homography matrix, and the multi-view camera detection results are associated and merged with the target detection results of the radar-video all-in-one machine through the intersection-over-union information of the detection frames of the vehicle targets of the two devices and the lane information.
For the embodiment of the invention, the step of projecting the line-pressing detection targets of the multi-view camera into the camera field of view of the radar-video all-in-one machine according to the homography matrix includes: performing the projection according to the formula

[ X ]   [ a11  a12  a13 ] [ x ]
[ Y ] = [ a21  a22  a23 ] [ y ]
[ Z ]   [ a31  a32  a33 ] [ 1 ]

where a_ij (i = 1, 2, 3; j = 1, 2, 3) are the elements of the homography matrix H; (x, y, 1) are the homogeneous coordinates of a pixel point (x, y) in the multi-view camera image; (X, Y, Z) are the transformed homogeneous coordinates in the field of view of the radar-video all-in-one machine, the corresponding two-dimensional coordinates being (x' = X/Z, y' = Y/Z); and the point (x', y') is the transformed two-dimensional plane coordinate of the pixel point corresponding to the original image. Expanding the homogeneous coordinates gives:

x' = (a11·x + a12·y + a13) / (a31·x + a32·y + a33)
y' = (a21·x + a22·y + a23) / (a31·x + a32·y + a33)
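Putting the projection formula and the association criterion of step 104 together, a sketch might look like the following (the greedy matching strategy and the IoU threshold are illustrative assumptions; the patent only specifies that intersection-over-union and lane information are used):

```python
import numpy as np

def project_point(H, x, y):
    """Apply the homography: (X, Y, Z)^T = H (x, y, 1)^T, then normalise by Z."""
    X, Y, Z = H @ np.array([x, y, 1.0])
    return X / Z, Y / Z

def project_box(H, box):
    """Project a box corner-wise into the radar-video all-in-one machine
    view and re-fit an axis-aligned box (an approximation)."""
    x1, y1, x2, y2 = box
    pts = [project_point(H, x, y) for (x, y) in
           [(x1, y1), (x2, y1), (x1, y2), (x2, y2)]]
    xs, ys = zip(*pts)
    return (min(xs), min(ys), max(xs), max(ys))

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def associate(multi_view_dets, radar_dets, H, iou_thresh=0.3):
    """Greedily pair each projected multi-view detection with the radar-video
    all-in-one machine detection of highest IoU in the same lane."""
    pairs = []
    for box_m, lane_m in multi_view_dets:
        proj = project_box(H, box_m)
        candidates = [(iou(proj, box_r), i) for i, (box_r, lane_r)
                      in enumerate(radar_dets) if lane_r == lane_m]
        if candidates:
            best_iou, best_i = max(candidates)
            if best_iou >= iou_thresh:
                pairs.append((box_m, radar_dets[best_i][0]))
    return pairs
```

With H from the calibration step, `associate` returns the matched detection pairs, which can then be merged and handed to the tracking stage.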
Further, in order to further improve the accuracy of the associated data, the method may also include: carrying out Kalman filtering and track management on the associated targets. Specifically, a Kalman filtering algorithm is adopted to carry out measurement update and prediction update on the target track, so as to realize filtering and tracking of the target. Track management, such as adding new targets and deleting lost targets, is then performed on the filtered targets, and the queuing information and track information are sent to an edge signal control device to complete subsequent algorithm processing and service analysis.
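The patent does not fix a particular filter design; the constant-velocity Kalman filter below, over a target's centre point, sketches the measurement-update / prediction-update cycle described above (the process- and measurement-noise values and the track-management rule are illustrative assumptions):

```python
import numpy as np

class TrackKF:
    """Constant-velocity Kalman filter for one associated target.
    State s = [x, y, vx, vy]; measurements are (x, y) centre positions."""

    def __init__(self, x, y, dt=0.1):
        self.s = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)                      # state covariance
        self.F = np.eye(4)                      # transition: x += vx*dt, y += vy*dt
        self.F[0, 2] = self.F[1, 3] = dt
        self.Hm = np.zeros((2, 4))              # measurement matrix picks (x, y)
        self.Hm[0, 0] = self.Hm[1, 1] = 1.0
        self.Q = np.eye(4) * 1e-2               # process noise (assumed value)
        self.R = np.eye(2) * 1e-1               # measurement noise (assumed value)

    def predict(self):
        """Prediction update of the target track."""
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, zx, zy):
        """Measurement update with an associated detection centre."""
        z = np.array([zx, zy])
        innov = z - self.Hm @ self.s
        S = self.Hm @ self.P @ self.Hm.T + self.R
        K = self.P @ self.Hm.T @ np.linalg.inv(S)
        self.s = self.s + K @ innov
        self.P = (np.eye(4) - K @ self.Hm) @ self.P
        return self.s[:2]

# Track-management sketch: a track could be deleted after, say,
# MAX_MISSES consecutive frames with no associated detection
MAX_MISSES = 5
```

Each frame, the tracker would run `predict`, associate detections to tracks, call `update` for matched tracks, create new tracks for unmatched detections, and delete tracks that have gone unmatched too long.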
According to the association method for the detection targets of the intersection multi-view camera and the radar-video all-in-one machine provided by the embodiment of the invention, a homography matrix is obtained from the pictures acquired by the multi-view camera and the camera of the radar-video all-in-one machine; line-pressing detection and lane-number marking are then performed on the targets detected by the multi-view camera; the detection results for the line-pressing targets of the multi-view camera are projected into the field of view of the radar-video all-in-one machine camera; and the detection results of the two devices are associated according to the intersection-over-union information and lane-number information of the detection frames. The observation relationship between the devices is thus fully considered, and the association of the detection targets of the two sensors is realized on the basis of a homography transformation; compared with fusion methods based on deep learning, the algorithm complexity and the hardware configuration requirements are reduced. In addition, because large-scale data acquisition and model training are not required, the complexity and cost of implementation are lowered.
In order to implement the method provided by the embodiment of the present invention, the embodiment of the present invention provides a system for associating a multi-view camera at an intersection with a detection target of a radar integrated machine, as shown in fig. 2, the system includes: the device comprises an acquisition module 21, a determination module 22, an association module 23 and a filtering module 24.
An acquisition module 21, configured to respectively acquire detection frames of the vehicle targets of the multi-view camera and the camera of the radar-video all-in-one machine through a preset target detection model;
the acquiring module 21 is further configured to perform joint calibration on the multi-view camera and the radar video all-in-one machine, and acquire a homography matrix of a vehicle target in a field of view of the multi-view camera projected to a field of view of the radar video all-in-one machine;
the determining module 22 is configured to perform line pressing detection on a detection result of the multiple cameras in a stop line area of a road waiting area where the multiple cameras collect pictures, and determine a lane to which a vehicle target belongs;
and the association module 23, configured to project the line-pressing detection targets of the multi-view camera into the field of view of the radar-video all-in-one machine camera according to the homography matrix, and to associate and merge the multi-view camera detection results with the target detection results of the radar-video all-in-one machine through the intersection-over-union information of the detection frames of the vehicle targets of the two devices and the lane information.
Further, the acquisition module 21 is specifically configured to carry out distortion correction on the pictures acquired by the multi-view camera according to the distortion parameters of the multi-view camera; to select a preset number of marker points in the overlapping area between the pictures acquired by the multi-view camera and the camera field of view of the radar-video all-in-one machine, and to select the same number of marker points at the corresponding positions of the pictures acquired by the radar-video all-in-one machine; and to obtain the homography matrix by which a vehicle target in the multi-view camera field of view is projected into the radar-video all-in-one machine camera field of view, according to the pixels of the preset number of marker points, the pixels of the corresponding marker points, the left singular matrix, the diagonal matrix and the right singular matrix.
Further, the acquisition module 21 is specifically also configured to assemble the 8 × 9 coefficient matrix A from the 4 marker-point pairs, each pair contributing the two rows

[ x_i  y_i  1  0    0    0  -X_i·x_i  -X_i·y_i  -X_i ]
[ 0    0    0  x_i  y_i  1  -Y_i·x_i  -Y_i·y_i  -Y_i ],  i = 1, 2, 3, 4,

and to perform the calculation [U S V] = svd(A), h = V(:, 9), where (x_i, y_i), i = 1, 2, 3, 4, are the pixel positions of the 4 marker points selected in the overlapping region of the multi-view camera picture; (X_i, Y_i), i = 1, 2, 3, 4, are the pixel positions of the 4 marker points selected in one-to-one correspondence at the same positions of the radar-video all-in-one machine picture; U is the left singular matrix; S is the diagonal matrix whose diagonal elements are the singular values arranged from large to small; V is the right singular matrix; and the 9 elements of h, reshaped into a 3 × 3 matrix, form the homography matrix H.
Further, the association module 23 is specifically configured to perform the projection according to the formula

[ X ]   [ a11  a12  a13 ] [ x ]
[ Y ] = [ a21  a22  a23 ] [ y ]
[ Z ]   [ a31  a32  a33 ] [ 1 ]

where a_ij (i = 1, 2, 3; j = 1, 2, 3) are the elements of the homography matrix H; (x, y, 1) are the homogeneous coordinates of a pixel point (x, y) in the multi-view camera image; (X, Y, Z) are the transformed homogeneous coordinates in the field of view of the radar-video all-in-one machine, the corresponding two-dimensional coordinates being (x' = X/Z, y' = Y/Z); and the point (x', y') is the transformed two-dimensional plane coordinate of the pixel point corresponding to the original image.
Further, the system further comprises: a filtering module 24;
the filtering module 24 is configured to perform kalman filtering and track management on the associated target.
According to the association system for the detection targets of the intersection multi-view camera and the radar-video all-in-one machine provided by the embodiment of the invention, a homography matrix is obtained from the pictures acquired by the multi-view camera and the camera of the radar-video all-in-one machine; line-pressing detection and lane-number marking are then performed on the targets detected by the multi-view camera; the detection results for the line-pressing targets of the multi-view camera are projected into the field of view of the radar-video all-in-one machine camera; and the detection results of the two devices are associated according to the intersection-over-union information and lane-number information of the detection frames. The observation relationship between the devices is thus fully considered, and the association of the detection targets of the two sensors is realized on the basis of a homography transformation; compared with fusion methods based on deep learning, the algorithm complexity and the hardware configuration requirements are reduced. In addition, because large-scale data acquisition and model training are not required, the complexity and cost of implementation are lowered.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "including" is intended to be inclusive in a manner similar to the term "comprising," as interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or claims is intended to mean a "non-exclusive or."
Those of skill in the art will further appreciate that the various illustrative logical blocks (illustrative logical block), units, and steps described in connection with the embodiments of the invention may be implemented by electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components (illustrative components), elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Those skilled in the art may implement the described functionality in varying ways for each particular application, but such implementation is not to be understood as beyond the scope of the embodiments of the present invention.
The various illustrative logical blocks or units described in the embodiments of the invention may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described. A general purpose processor may be a microprocessor, but in the alternative, the general purpose processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. In an example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may reside in a user terminal. In the alternative, the processor and the storage medium may reside as distinct components in a user terminal.
In one or more exemplary designs, the above-described functions of embodiments of the present invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on a computer-readable medium or transmitted as one or more instructions or code on the computer-readable medium. Computer-readable media includes both computer storage media and communication media that facilitate transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. For example, such computer-readable media may include, but is not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store program code in the form of instructions or data structures that may be read by a general purpose or special purpose computer, or a general purpose or special purpose processor. Further, any connection is properly termed a computer-readable medium; for example, if the software is transmitted from a website, server, or other remote source via a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technology such as infrared, radio, or microwave, that connection is also included in the definition of computer-readable medium. Disk and disc, as used herein, include compact discs, laser discs, optical discs, DVDs, floppy disks, and Blu-ray discs, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included within the scope of computer-readable media.
The foregoing description of the embodiments has been provided to illustrate the general principles of the invention and is not intended to limit the invention to the particular embodiments disclosed; any modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. An association method of a multi-view camera at an intersection and a radar integrated machine detection target is characterized by comprising the following steps:
respectively acquiring detection frames of the vehicle targets of the multi-camera and the radar integrated camera through a preset target detection model;
performing joint calibration on the multi-camera and the radar video all-in-one machine to obtain a homography matrix of a vehicle target in a field of view of the multi-camera projected to a field of view of the radar video all-in-one machine;
performing line-pressing detection on the detection result of the multi-view camera in the stop-line area of the road waiting area within the picture collected by the multi-view camera, and determining the lane to which the vehicle target belongs;
and projecting the line-pressing detection targets of the multi-view camera into the field of view of the camera of the radar-vision all-in-one machine according to the homography matrix, and associating and combining the multi-view camera detection target results with the target detection results of the radar-vision all-in-one machine through the intersection-over-union (IoU) information of the detection frames of the vehicle targets of the two devices and the lane information.
2. The method for associating a multi-view camera at an intersection with a radar integrated machine to detect targets according to claim 1, wherein the step of jointly calibrating the multi-view camera and the radar video integrated machine according to a detection frame of the multi-view camera and the radar integrated machine camera vehicle targets to obtain a homography matrix of a vehicle target in a multi-view camera field projected to the radar integrated machine camera field of view comprises:
carrying out distortion correction on pictures acquired by the multi-camera according to distortion parameters of the multi-camera;
selecting a preset number of marker points in the overlapping area of the picture of the multi-view camera according to the overlapping area between the picture acquired by the multi-view camera and the camera view of the radar-vision all-in-one machine, and selecting the same number of marker points at the corresponding positions in the picture acquired by the radar-vision all-in-one machine;
and acquiring the homography matrix for projecting the vehicle targets in the field of view of the multi-view camera to the field of view of the camera of the radar-vision all-in-one machine according to the pixels of the preset number of marker points, the pixels of the corresponding marker points, the left singular matrix, the diagonal matrix and the right singular matrix.
3. The method for associating a plurality of cameras at intersections with a radar integrated machine detection target according to claim 2, wherein the step of obtaining a homography matrix of a vehicle target in a plurality of camera views projected onto the radar integrated machine camera view according to the preset number of pixels of the marker points, the pixels corresponding to the marker points, the left singular matrix, the diagonal matrix, and the right singular matrix comprises:
according to the formula[U S V]Calculation was performed with =svd (a), h=v (: 9), where (x) i ,y i ) For pixel positions of 4 marker points selected in the overlapping region of the multi-view camera picture, i=1, 2,3,4, (X) i ,Y i ) The pixel positions of the 4 selected mark points which are in one-to-one correspondence with the same positions of the picture of the all-in-one machine are (X) i ,Y i ) I=1, 2,3,4, u is a left singular matrix, S is a diagonal matrix, elements on the diagonal are singular values, the arrangement is from large to small, V is a right singular matrix, and H is a homography matrix.
4. The method for associating a multi-view camera at an intersection with a target for detection by a radar integrated machine according to claim 1, wherein the step of projecting the line-pressing detection target of the multi-view camera to the field of view of the radar-vision all-in-one machine according to the homography matrix comprises:
according to the formulaProjection is performed, wherein a ij (i=1, 2,3; j=1, 2, 3) is an element term of the homography matrix H; (x, y, z) is the homogeneous coordinates of the pixel points of the image of the multi-view camera, and the corresponding two-dimensional coordinates are (x, y, 1); (X, Y, Z) is the coordinate transformed to the view field of the radar integrated machine, the corresponding two-dimensional homogeneous coordinate is (X '=x/Z, Y' =y/Z), and the point (X ', Y') is the transformed two-dimensional plane coordinate of the pixel point corresponding to the original image.
5. The method for associating an intersection multi-view camera with a radar integrated machine detection target according to claim 1, wherein the method further comprises:
and carrying out Kalman filtering and track management on the associated targets.
6. An association system of a multi-view camera at an intersection and a radar integrated machine detection target, characterized in that the system comprises:
the acquisition module is used for respectively acquiring detection frames of the vehicle targets of the multi-camera and the radar integrated machine camera through a preset target detection model;
the acquisition module is also used for carrying out joint calibration on the multi-camera and the radar video all-in-one machine, and acquiring a homography matrix of a vehicle target in the field of view of the multi-camera projected to the field of view of the radar video all-in-one machine;
the determining module is used for detecting the line pressing of the detection result of the multi-camera in a stop line area of a road waiting area in which the multi-camera collects pictures and determining a lane to which a vehicle target belongs;
and the association module is used for projecting the line-pressing detection targets of the multi-view camera into the field of view of the camera of the radar-vision all-in-one machine according to the homography matrix, and associating and combining the multi-view camera detection target results with the target detection results of the radar-vision all-in-one machine through the intersection-over-union (IoU) information of the detection frames of the vehicle targets of the two devices and the lane information.
7. The system for associating a plurality of cameras at an intersection with a target for detection by a radar integrated machine according to claim 6,
the acquisition module is specifically used for carrying out distortion correction on the pictures acquired by the multi-view camera according to the distortion parameters of the multi-view camera; selecting a preset number of marker points in the overlapping area of the picture of the multi-view camera according to the overlapping area between the picture acquired by the multi-view camera and the camera view of the radar-vision all-in-one machine, and selecting the same number of marker points at the corresponding positions in the picture acquired by the radar-vision all-in-one machine; and acquiring the homography matrix for projecting the vehicle targets in the field of view of the multi-view camera to the field of view of the camera of the radar-vision all-in-one machine according to the pixels of the preset number of marker points, the pixels of the corresponding marker points, the left singular matrix, the diagonal matrix and the right singular matrix.
8. The system for associating a plurality of cameras at an intersection with a radar integrated machine for detecting objects according to claim 7, wherein,
the acquisition module is specifically further configured to perform the calculation according to the formula [U, S, V] = svd(A), H = V(:, 9), where A is the coefficient matrix built from the marker-point correspondences; (x_i, y_i), i = 1, 2, 3, 4, are the pixel positions of the 4 marker points selected in the overlapping region of the multi-view camera picture; (X_i, Y_i), i = 1, 2, 3, 4, are the pixel positions of the 4 marker points selected one-to-one at the same positions in the picture of the all-in-one machine; U is the left singular matrix; S is a diagonal matrix whose diagonal elements are the singular values arranged from large to small; V is the right singular matrix; and H is the homography matrix, obtained by reshaping the 9th column of V into a 3×3 matrix.
9. The system for associating a plurality of cameras at an intersection with a target for detection by a radar integrated machine according to claim 6,
the association module is specifically configured to perform the projection according to the formula [X, Y, Z]^T = H·[x, y, 1]^T, where a_ij (i = 1, 2, 3; j = 1, 2, 3) are the elements of the homography matrix H; (x, y) are the pixel coordinates in the multi-view camera image, with homogeneous coordinates (x, y, 1); (X, Y, Z) are the homogeneous coordinates after transformation into the field of view of the radar-vision all-in-one machine; the corresponding two-dimensional coordinates are (X' = X/Z, Y' = Y/Z), and the point (X', Y') is the transformed two-dimensional plane coordinate corresponding to the original pixel point.
10. The system for associating an intersection multi-view camera with a radar integrated machine detection target of claim 6, further comprising: a filtering module;
and the filtering module is used for carrying out Kalman filtering and track management on the associated targets.
CN202311297898.9A 2023-10-09 2023-10-09 Method and system for associating intersection multi-camera with radar integrated machine detection target Pending CN117496452A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311297898.9A CN117496452A (en) 2023-10-09 2023-10-09 Method and system for associating intersection multi-camera with radar integrated machine detection target


Publications (1)

Publication Number Publication Date
CN117496452A true CN117496452A (en) 2024-02-02

Family

ID=89667953




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination