CN115762168A - Crossroad vehicle real-time tracking method based on multiple cameras - Google Patents

Crossroad vehicle real-time tracking method based on multiple cameras

Info

Publication number
CN115762168A
CN115762168A
Authority
CN
China
Prior art keywords
vehicle
camera
target vehicle
cameras
crossroad
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211589639.9A
Other languages
Chinese (zh)
Inventor
贾子彦
崔瑞
刘晓杰
诸一琦
丁兆明
张雷
陶为戈
薛波
俞洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Technology
Original Assignee
Jiangsu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Technology
Priority to CN202211589639.9A
Publication of CN115762168A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a crossroad vehicle real-time tracking method based on multiple cameras, which comprises: collecting surveillance video from the camera viewing angles in the four directions of a crossroad; extracting the features of target vehicles for each camera, solving the position of the target vehicle under each camera, and computing the homography matrices between the cameras, thereby determining the relative positions of target vehicles across all cameras' monitored areas; forming spatial-position constraints on the target vehicle across the monitored areas of the cameras in the four directions of the crossroad, associating the positions of the same target vehicle under different cameras, and extracting the target vehicle's features; and, when a camera loses track of the target vehicle, compensating information from the other cameras into that camera's surveillance image to complete continuous tracking of the target vehicle. The invention overcomes the shortcomings of recognition and tracking under a single camera and improves the method of recognizing and localizing vehicles under multiple cameras.

Description

Crossroad vehicle real-time tracking method based on multiple cameras
Technical Field
The invention relates to a crossroad vehicle real-time tracking method based on multiple cameras.
Background
Multi-camera vehicle recognition and tracking is a key research topic in intelligent transportation systems, but switching between cameras can change the apparent appearance of a target vehicle. At accident-prone locations such as crossroads, complex surveillance backgrounds and occlusion of target vehicles pose further challenges to the accuracy and false-detection rate of recognition and tracking.
Determining that the same target appears in the monitored areas of multiple cameras is what distinguishes multi-camera target recognition and localization from the single-camera case, and it is also the main difficulty of the multi-camera problem. Determining the same target includes matching the target across different camera viewing angles and judging whether the target is occluded; continuous recognition and tracking across viewing angles is achieved by analyzing the target's state and its occlusion. The invention provides a crossroad vehicle real-time tracking method based on multiple cameras that overcomes the shortcomings of single-camera recognition and tracking and improves the method of recognizing and localizing vehicles under multiple cameras.
Disclosure of Invention
To solve the above problems in the prior art, the invention provides a crossroad vehicle real-time tracking method based on multiple cameras.
The technical solution adopted by the invention is as follows:
A crossroad vehicle real-time tracking method based on multiple cameras, comprising:
S1: collecting surveillance video from the camera viewing angles in the four directions of a crossroad;
S2: extracting the features of target vehicles for each camera, solving the position of the target vehicle under each camera, and computing the homography matrices between the cameras, thereby determining the relative positions of target vehicles across all cameras' monitored areas;
S3: forming spatial-position constraints on the target vehicle across the monitored areas of the cameras in the four directions of the crossroad, associating the positions of the same target vehicle under different cameras, and extracting the target vehicle's features;
S4: when a camera loses track of the target vehicle, compensating information from the other cameras into that camera's surveillance image to complete continuous tracking of the target vehicle.
Further, in step S2, a fixed marker is selected at each approach of the crossroad according to the monitoring fields of view of the cameras in the four directions; the whole crossroad is framed into a rectangular area by the selected fixed markers, and the homography matrices between the cameras in the four directions are computed within the field of view of this rectangular area, thereby determining the relative positions of vehicles in the monitored areas of the cameras in the four directions of the crossroad.
Further, the crosswalk at each approach of the crossroad is selected as the fixed marker.
Further, the cameras in the four directions of the crossroad simultaneously capture the same checkerboard, from which the homography matrices between the cameras in the four directions are obtained.
Further, in step S3, the feature extraction of the target vehicle comprises color feature extraction as the global feature and SURF feature matching as the local feature;
when the target vehicle is not occluded, the pixel area of the whole target vehicle is selected for color feature extraction and matching via the global color feature;
when the target vehicle is occluded, the distances from the target vehicle and the occluding vehicle to the camera in the direction of travel are compared using the vehicles' positions; global feature extraction and local feature matching are performed directly on the occluding vehicle that is closer to that camera, while for the target vehicle that is farther from that camera, its bounding rectangle in the surveillance video is selected, the area overlapping the occluding vehicle is subtracted, and global feature extraction and local feature matching are then performed, completing the judgment of whether the target vehicle is occluded.
Further, when determining whether the target vehicle is occluded, a judgment is first made from the spatial positions of the target vehicle and the occluding vehicle: when the bounding rectangle of the target vehicle overlaps the bounding rectangle of the occluding vehicle in the surveillance video of a crossroad camera, occlusion may be occurring. At that moment, the current color feature and SIFT feature of the target vehicle are compared with its original ones, and the Euclidean distance between the color features is computed; when this Euclidean distance exceeds a set threshold, the target vehicle is judged to be occluded. Tracking compensation is then performed on the target vehicle through the cameras in the other directions of the crossroad, and the relative position of the target vehicle is marked in the occluded camera's view, completing recognition and localization of the target.
Further, the color feature extraction of the global feature is as follows: the color features of the vehicle are first obtained in the RGB color space and then converted into the HSV color space for quantization; suitable color names are selected to quantize the color features and reduce their dimensionality, a color-name quantization table is established, and the color name of the target vehicle is quantized, enabling fast matching of color features.
Further, the SURF feature matching of the local feature is as follows:
C1: construct the Hessian matrix: the scale and position of candidate feature points on the detected target vehicle are judged from the local maxima of the Hessian determinant; if the determinant's value is greater than zero, the pixel is an extreme point, otherwise it is a non-extreme point;
C2: construct the scale space of the image: to search for feature points at the same position across different scales, the SURF algorithm processes the initial picture of the detected target vehicle with box filters of different sizes to obtain the picture's scale space;
C3: locate the feature points: each candidate pixel is compared with its neighboring points in its three-dimensional scale-space neighborhood; if the pixel's value is larger than all surrounding values, the pixel is defined as a feature point.
Further, in step S4, a database of vehicle travel tracks is established for each lane of the same crossroad. When a vehicle is partially or completely occluded in the monitoring view of a camera in one direction of the crossroad so that tracking is lost, the camera that lost the track predicts the occluded vehicle's travel route from the established database; the other cameras then localize and track the occluded vehicle through spatial-position determination and feature matching, and the occluded vehicle's position information is compensated into the surveillance image of the camera that lost the track. Combined with the predicted travel track, the vehicle's position can still be accurately localized when it reappears in that camera's monitoring view, completing the continuous tracking of the vehicle.
The invention has the following beneficial effects:
the invention is mainly applied to crossroads, overcomes the defects of the identification and tracking technology under a single camera, and discusses and improves the identification and positioning method of vehicles under multiple cameras.
Compared with a single-camera monitoring system, the system can monitor a target scene from multiple angles, and realizes the confirmation of the same target vehicle by identifying and matching the spatial characteristic information and the color characteristic information of the shot vehicle, thereby completing the identification, positioning and continuous tracking of the same vehicle in a complex scene. The monitoring problem of single camera can be improved greatly to many cameras, for the discernment and the location of solving target vehicle under the complicated scene provide the possibility, solved single camera control observation scope little, target tracking unstability, sheltering from under the complex conditions scheduling problem.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a diagram of a multi-camera target monitoring cooperation strategy.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, the invention relates to a crossroad vehicle real-time tracking method based on multiple cameras, comprising the following steps:
S1: collecting surveillance video from the camera viewing angles in the four directions of a crossroad;
S2: extracting the features of target vehicles for each camera, solving the position of the target vehicle under each camera, and computing the homography matrices between the cameras, thereby determining the relative positions of target vehicles across all cameras' monitored areas;
S3: forming spatial-position constraints on the target vehicle across the monitored areas of the cameras in the four directions of the crossroad, associating the positions of the same target vehicle under different cameras, and extracting the target vehicle's features;
S4: when a camera loses track of the target vehicle, compensating information from the other cameras into that camera's surveillance image to complete continuous tracking of the target vehicle.
In step S2, the crosswalk at each approach is selected as the fixed marker according to the monitoring fields of view of the cameras in the four directions of the crossroad, so that the whole crossroad can be framed into a rectangular area by the crosswalks; the homography matrices between the cameras in the four directions are computed within the field of view of this rectangular area, determining the relative positions of vehicles in the monitored areas of the cameras in the four directions of the crossroad.
The homography matrix expresses the interrelation between multiple cameras well: by converting pixel coordinates between pictures shot from different viewing angles, the pixel position of the same target can be quickly transferred between viewing angles, localizing a vehicle under the different viewing angles of multiple cameras.
Alternatively, the cameras in the four directions of the crossroad can simultaneously capture the same checkerboard, from which the homography matrices between the cameras are obtained; or the homography matrices can be obtained by actually measuring the scene, such as the distances and positions between the crossroad's cameras, their heights above the ground, and the positions of the markers. Thus, even if a vehicle is occluded in one camera's field of view, its position can still be accurately localized through the spatial relations between the vehicle, the other cameras, and the markers.
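For points on the road plane, the mapping between two camera views is the planar homography s·[u', v', 1]^T = H·[u, v, 1]^T. By way of illustration only, a minimal OpenCV sketch of this step follows; the marker corner coordinates, the names pts_cam1, H_12, and transfer_point, and the use of RANSAC estimation are assumptions for demonstration, not values or choices taken from the patent.

```python
import cv2
import numpy as np

# Pixel coordinates of the same fixed marker's corners (e.g. a crosswalk)
# as seen by two of the intersection cameras; the numbers are illustrative.
pts_cam1 = np.float32([[320, 410], [560, 415], [575, 520], [300, 515]])
pts_cam2 = np.float32([[210, 380], [450, 370], [480, 470], [195, 480]])

# Homography mapping camera-1 pixels onto camera-2 pixels (RANSAC rejects
# mismatched correspondences when more than four points are supplied).
H_12, _ = cv2.findHomography(pts_cam1, pts_cam2, cv2.RANSAC)

def transfer_point(H, pt):
    """Transfer a pixel position (e.g. a vehicle's ground contact point)
    from one camera view into another through the homography H."""
    src = np.float32([[pt]])                  # shape (1, 1, 2)
    dst = cv2.perspectiveTransform(src, H)
    return tuple(dst[0, 0])

# A vehicle position measured in camera 1, expressed in camera 2's image.
print(transfer_point(H_12, (402.0, 455.0)))
```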
In step S3, the feature extraction of the target vehicle comprises color feature extraction as the global feature and SURF feature matching as the local feature.
When the target vehicle is not occluded, the pixel area of the whole target vehicle is selected for color feature extraction and matching via the global color feature.
When the target vehicle is occluded, the distances from the target vehicle and the occluding vehicle to the camera in the direction of travel are compared using the vehicles' positions; global feature extraction and local feature matching are performed directly on the occluding vehicle that is closer to that camera, while for the target vehicle that is farther from that camera, its bounding rectangle in the surveillance video is selected, the area overlapping the occluding vehicle is subtracted, and global feature extraction and local feature matching are then performed, completing the judgment of whether the target vehicle is occluded.
The color feature extraction of the global feature is as follows: the color features of the vehicle are first obtained in the RGB color space and then converted into the HSV color space for quantization; suitable color names are selected to quantize the color features and reduce their dimensionality, a color-name quantization table is established, and the color name of the target vehicle is quantized, enabling fast matching of color features.
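As a minimal sketch of this RGB-to-HSV quantization, the snippet below reduces a vehicle pixel region to one coarse color name. The six hue bins, the saturation and value thresholds, and the name quantize_color_name are assumptions for illustration; the patent does not specify its color-name quantization table.

```python
import cv2
import numpy as np

# An assumed six-entry color-name table over the OpenCV hue range 0..179.
COLOR_NAMES = ["red", "yellow", "green", "cyan", "blue", "magenta"]

def quantize_color_name(bgr_patch):
    """Map a vehicle pixel region (BGR image) to a dominant color name."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    chromatic = (s > 60) & (v > 60)     # drop washed-out and dark pixels
    if not chromatic.any():
        return "achromatic"             # white / gray / black vehicles
    hue_bins = np.clip(h[chromatic] // 30, 0, 5).astype(int)
    return COLOR_NAMES[np.bincount(hue_bins, minlength=6).argmax()]
```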
When determining whether the target vehicle is occluded, a judgment is first made from the spatial positions of the target vehicle and the occluding vehicle: when the bounding rectangle of the target vehicle overlaps the bounding rectangle of the occluding vehicle in the surveillance video of one of the crossroad's cameras, occlusion is likely occurring. The current color feature and SIFT feature of the target vehicle are then compared with its original ones, and the Euclidean distance between the color features is computed; when this Euclidean distance exceeds a set threshold, the target vehicle is judged to be occluded. Tracking compensation is then performed on the target vehicle through the cameras in the other directions of the crossroad, and the relative position of the target vehicle is marked in the occluded camera's view, completing recognition and localization of the target.
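The occlusion judgment just described can be sketched as follows; the rectangle format, the feature vectors, and the threshold value 0.35 are assumptions, since the patent only states that the Euclidean distance is compared against a set threshold.

```python
import numpy as np

def rects_overlap(a, b):
    """Axis-aligned bounding rectangles given as (x1, y1, x2, y2)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def is_occluded(rect_target, rect_other, feat_now, feat_orig, thresh=0.35):
    """Declare occlusion only if the two bounding rectangles overlap AND
    the Euclidean distance between the current and original color-feature
    vectors of the target vehicle exceeds the threshold."""
    if not rects_overlap(rect_target, rect_other):
        return False
    dist = np.linalg.norm(np.asarray(feat_now) - np.asarray(feat_orig))
    return float(dist) > thresh
```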
The SURF feature matching of the local feature proceeds as follows (a matching sketch follows these steps):
C1: construct the Hessian matrix: the scale and position of candidate feature points on the detected target vehicle are judged from the local maxima of the Hessian determinant; if the determinant's value is greater than zero, the pixel is an extreme point, otherwise it is a non-extreme point;
C2: construct the scale space of the image: to search for feature points at the same position across different scales, the SURF algorithm processes the initial picture of the detected target vehicle with box filters of different sizes to obtain the picture's scale space;
C3: locate the feature points: each candidate pixel is compared with its neighboring points in its three-dimensional scale-space neighborhood; if the pixel's value is larger than all surrounding values, the pixel is defined as a feature point.
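A minimal detect-and-match sketch for steps C1-C3 is given below. SURF is patented and only available in opencv-contrib builds compiled with the non-free modules, so the sketch falls back to SIFT, which shares the same workflow; the Hessian threshold 400 and the Lowe ratio 0.75 are conventional values, not ones given in the patent.

```python
import cv2

def match_local_features(img_a, img_b, ratio=0.75):
    """Detect keypoints in two vehicle images and keep distinctive matches."""
    try:
        # SURF: Hessian-determinant keypoints over a box-filter scale space.
        detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    except (AttributeError, cv2.error):
        detector = cv2.SIFT_create()    # free stand-in with the same API
    kp_a, des_a = detector.detectAndCompute(img_a, None)
    kp_b, des_b = detector.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher()
    # Lowe's ratio test: keep a match only if it is clearly better than
    # the second-best candidate for the same keypoint.
    return [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]
```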
Step S4 is further described with reference to Fig. 2. A database of the vehicle travel tracks of each lane at the same intersection is established by means of deep learning. When tracking of vehicle 1 is lost in the monitoring view of camera 1, i.e. the vehicle is partially or completely occluded, camera 1 predicts the occluded vehicle's travel route from the database; camera 2 then determines the occluded vehicle's spatial position, localizes and tracks it through feature matching, and compensates vehicle 1's position information into camera 1's surveillance image. Combining this information with the predicted positions, vehicle 1 can still be accurately localized when it reappears in camera 1's monitoring view, completing its continuous tracking.
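A hand-off sketch of this compensation step, under stated assumptions: transfer_point and the homography H_21 refer to the earlier homography sketch, predicted_pos stands in for the lane-database trajectory prediction, and the equal-weight fusion is an assumption, since the patent does not specify how the projected measurement and the prediction are combined.

```python
def compensate_lost_track(pos_cam2, H_21, predicted_pos):
    """Project camera 2's measurement of the occluded vehicle into
    camera 1's image via the homography H_21, then fuse it with the
    position predicted from the lane-track database."""
    measured = transfer_point(H_21, pos_cam2)   # camera-2 -> camera-1 pixels
    return tuple(0.5 * (m + p) for m, p in zip(measured, predicted_pos))
```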
The foregoing is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make modifications without departing from the principle of the invention, and such modifications shall also fall within the protection scope of the invention.

Claims (8)

1. A crossroad vehicle real-time tracking method based on multiple cameras, characterized by comprising:
S1: collecting surveillance video from the camera viewing angles in the four directions of a crossroad;
S2: extracting the features of target vehicles for each camera, solving the position of the target vehicle under each camera, and computing the homography matrices between the cameras, thereby determining the relative positions of target vehicles across all cameras' monitored areas;
S3: forming spatial-position constraints on the target vehicle across the monitored areas of the cameras in the four directions of the crossroad, associating the positions of the same target vehicle under different cameras, and extracting the target vehicle's features;
S4: when a camera loses track of the target vehicle, compensating information from the other cameras into that camera's surveillance image to complete continuous tracking of the target vehicle.
2. The multi-camera based crossroad vehicle real-time tracking method of claim 1, characterized in that: in step S2, a fixed marker is selected at each approach of the crossroad according to the monitoring fields of view of the cameras in the four directions; the whole crossroad is framed into a rectangular area by the selected fixed markers, and the homography matrices between the cameras in the four directions are computed within the field of view of this rectangular area, thereby determining the relative positions of vehicles in the monitored areas of the cameras in the four directions of the crossroad.
3. The multi-camera based crossroad vehicle real-time tracking method of claim 2, characterized in that: the crosswalk at each approach of the crossroad is selected as the fixed marker.
4. The multi-camera based crossroad vehicle real-time tracking method of claim 1, characterized in that: the cameras in the four directions of the crossroad simultaneously capture the same checkerboard, from which the homography matrices between the cameras in the four directions are obtained.
5. The multi-camera based crossroad vehicle real-time tracking method of claim 1, characterized in that: in step S3, the feature extraction of the target vehicle comprises color feature extraction as the global feature and SURF feature matching as the local feature;
when the target vehicle is not occluded, the pixel area of the whole target vehicle is selected for color feature extraction and matching via the global color feature;
when the target vehicle is occluded, the distances from the target vehicle and the occluding vehicle to the camera in the direction of travel are compared using the vehicles' positions; global feature extraction and local feature matching are performed directly on the occluding vehicle that is closer to that camera, while for the target vehicle that is farther from that camera, its bounding rectangle in the surveillance video is selected, the area overlapping the occluding vehicle is subtracted, and global feature extraction and local feature matching are then performed, completing the judgment of whether the target vehicle is occluded.
6. The multi-camera based crossroad vehicle real-time tracking method of claim 5, characterized in that: the color feature extraction of the global feature is as follows: the color features of the vehicle are first obtained in the RGB color space and then converted into the HSV color space for quantization; suitable color names are selected to quantize the color features and reduce their dimensionality, a color-name quantization table is established, and the color name of the target vehicle is quantized, enabling fast matching of color features.
7. The multi-camera based crossroad vehicle real-time tracking method of claim 5, characterized in that: the SURF feature matching of the local feature is as follows:
C1: construct the Hessian matrix: the scale and position of candidate feature points on the detected target vehicle are judged from the local maxima of the Hessian determinant; if the determinant's value is greater than zero, the pixel is an extreme point, otherwise it is a non-extreme point;
C2: construct the scale space of the image: to search for feature points at the same position across different scales, the SURF algorithm processes the initial picture of the detected target vehicle with box filters of different sizes to obtain the picture's scale space;
C3: locate the feature points: each candidate pixel is compared with its neighboring points in its three-dimensional scale-space neighborhood; if the pixel's value is larger than all surrounding values, the pixel is defined as a feature point.
8. The multi-camera based crossroad vehicle real-time tracking method of claim 1, characterized in that: in step S4, a database of vehicle travel tracks is established for each lane of the same crossroad; when a vehicle is partially or completely occluded in the monitoring view of a camera in one direction of the crossroad so that tracking is lost, the camera that lost the track predicts the occluded vehicle's travel route from the established database; the other cameras then localize and track the occluded vehicle through spatial-position determination and feature matching, and the occluded vehicle's position information is compensated into the surveillance image of the camera that lost the track; combined with the predicted travel track, the vehicle's position can still be accurately localized when it reappears in that camera's monitoring view, completing the continuous tracking of the vehicle.
CN202211589639.9A 2022-12-12 2022-12-12 Crossroad vehicle real-time tracking method based on multiple cameras Pending CN115762168A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211589639.9A CN115762168A (en) 2022-12-12 2022-12-12 Crossroad vehicle real-time tracking method based on multiple cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211589639.9A CN115762168A (en) 2022-12-12 2022-12-12 Crossroad vehicle real-time tracking method based on multiple cameras

Publications (1)

Publication Number Publication Date
CN115762168A true CN115762168A (en) 2023-03-07

Family

ID=85345454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211589639.9A Pending CN115762168A (en) 2022-12-12 2022-12-12 Crossroad vehicle real-time tracking method based on multiple cameras

Country Status (1)

Country Link
CN (1) CN115762168A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination