CN114118238B - Vehicle model analysis method based on laser and video technology time sequence and feature fusion

Info

Publication number
CN114118238B
Authority
CN (China)
Prior art keywords
laser, vehicle, video, time, representing
Legal status
Active (the legal status is an assumption and is not a legal conclusion)
Application number
CN202111335287.XA
Other languages
Chinese (zh)
Other versions
CN114118238A
Inventors
袁彬, 王军群, 杨东烨
Current Assignee
Cosco Shipping Technology Co Ltd
Original Assignee
Cosco Shipping Technology Co Ltd
Application filed by Cosco Shipping Technology Co Ltd (2021-11-11)
Priority to CN202111335287.XA (2021-11-11)
Publication of CN114118238A (2022-03-01)
Application granted; publication of CN114118238B (2024-03-22)


Classifications

    • G06F18/253 Pattern recognition; analysing; fusion techniques of extracted features
    • G06F18/22 Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06F18/241 Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G01B11/043 Optical measuring arrangements for measuring the length of objects while moving
    • G01B11/046 Optical measuring arrangements for measuring the width of objects while moving
    • G01B11/0608 Optical height gauges
    • G01B11/0691 Optical measuring arrangements for measuring the thickness of objects while moving

Abstract

The application discloses a vehicle type analysis method based on the fusion of laser and video time sequences and features. The laser-based part of the method requires two lasers: one is mounted perpendicular to the ground and scans passing vehicles to measure their length, width and height and build a three-dimensional vehicle model; the other is erected at an angle to the ground and is used to calculate the vehicle speed. To prevent the laser detection method from missing vehicles, the distance between the scan lines of the two lasers must not exceed the length of a car. The video-based part of the method trains detection weights on vehicles in the monitored scene and performs vehicle type analysis with a deep learning method. Because the two detectors trigger at different positions, the two detection results are fused according to the trigger distance and trigger time of the two methods, which finally yields the classification detection of the vehicle type.

Description

Vehicle model analysis method based on laser and video technology time sequence and feature fusion
Technical Field
The invention relates to the technical field of vehicle type classification, in particular to a vehicle type analysis method based on laser and video technology time sequence and feature fusion.
Background
As an important component of Intelligent Transportation Systems (ITS), vehicle classification algorithms are a current research hotspot and a recognized difficulty. Automatic vehicle identification enables vehicle type classification and provides data for traffic management, tolling, scheduling and statistics. Current vehicle type classification mainly relies on either video-based or laser-based detection, but each method has problems in its own domain and cannot adapt to all environments while classifying vehicle types in real time. Traditional video image processing classifies vehicle types by analysing image data, so the quality of the video images directly determines the detection result; in practical application scenes, weather such as rain, snow and haze, or the low visibility of some road sections at night, interferes heavily with video acquisition, making subsequent processing of the images impossible and defeating vehicle type classification. Laser-based vehicle type classification compensates for this shortcoming of video acquisition: a laser relies on the reflection and reception of actively emitted signals rather than passive signal reception, so it adapts well to measurement scenes with low visibility where cameras are difficult to operate. Laser imaging receives the returns of the actively emitted signal, digitally reconstructs the specific spatial information of the vehicle in a computer system, and then classifies the vehicle type from the obtained vehicle information. The laser-based algorithm thus makes up for part of the deficiencies of video methods, but under conditions such as rain the laser data itself may be missing, so the laser alone cannot achieve accurate classification either.
Therefore, a new vehicle model analysis method is needed.
Disclosure of Invention
In view of the above, the invention provides a vehicle model analysis method based on laser and video technology time sequence and feature fusion, comprising the following steps:
S1: construct an acquisition unit, wherein the acquisition unit comprises a laser acquisition unit I and a video acquisition unit II;
S2: based on the information acquired by the acquisition unit, determine a vehicle type recognition result by a vehicle type recognition method, wherein the vehicle type recognition method comprises laser recognition and video recognition;
the laser recognition comprises: acquiring data from the laser acquisition unit, determining a vehicle type classification recognition result I according to the data, taking recognition result I and its corresponding trigger time as unit data I, and storing the unit data I in order of trigger time to obtain a laser recognition result queue I;
the video recognition comprises: obtaining video information from the video acquisition unit, determining a vehicle type classification recognition result II according to the video information, taking recognition result II and its corresponding trigger time as unit data II, and storing the unit data II in order of trigger time to obtain a video recognition result queue II;
S3: judge whether it is raining; if so, take the video recognition result as the detection result; if not, enter the next step;
S4: match the laser recognition result queue I and the video recognition result queue II by the following method, and output a detection result according to the matching result;
the matching method comprises the following steps:
S41: acquire the vehicle type classification recognition result I of a certain vehicle in laser recognition result queue I, simultaneously acquire the vehicle type classification recognition results I within a preset time period after the current vehicle's trigger time, and determine the laser trigger time t_laser of the current vehicle;
S42: determine the video trigger time t_video from the time difference Δt between trigger lines and the laser trigger time t_laser;
S42: judge whether the vehicle type classification recognition results I within the preset time period are empty; if so, enter step S43, and if not, enter step S44;
S43: according to the video trigger time t_video, acquire the vehicle type classification recognition result II of the current vehicle from video recognition result queue II;
judge whether vehicle type classification recognition result I and vehicle type classification recognition result II of the current vehicle are consistent; if so, the match succeeds and the detection result is output; otherwise, enter step S45;
S44: according to the video trigger time t_video, acquire the vehicle type classification recognition result II of the current vehicle from video recognition result queue II, and simultaneously acquire the vehicle type classification recognition results II within the preset time period after the current vehicle's video trigger time t_video;
judge whether vehicle type classification recognition result I of the current vehicle is consistent with vehicle type classification recognition result II; if so, further judge whether the distribution of recognition results I within the preset time period is consistent with the distribution of recognition results II within the preset time period; if both are consistent, the match succeeds and the detection result is output; otherwise, enter step S45;
S45: determine the length of the current vehicle from the video speed measurement and the laser scanning duration, judge the vehicle type from the vehicle length, and take as the matching result the recognition result closest to that vehicle type.
Further, step S4 also comprises: if the vehicle type classification recognition result II of the current vehicle cannot be obtained from video recognition result queue II according to the video trigger time t_video, then vehicle type classification recognition result I of the current vehicle is the detection result, and the matching is exited.
Further, in the matching process of step S43 and step S44, if mismatches occur M times in succession, all data are cleared and the matching restarts.
Further, the time difference Δt between the trigger lines is determined by the following method:
Δt = D / ((v_video + v_laser) / 2)    (1)
wherein Δt represents the time difference between trigger lines, D represents the distance between trigger lines, v_video represents the video speed measurement, and v_laser represents the laser speed measurement;
D = (t_laser - t_video) * v_video    (2)
wherein D represents the distance between trigger lines, t_laser represents the laser trigger time, t_video represents the video trigger time, and v_video represents the video speed measurement.
Further, the video speed measurement v_video is determined by the following method:
v_video = sqrt(v_x^2 + v_y^2) + Δv    (3)
wherein v_video represents the video speed measurement, v_x represents the speed measurement in the X direction, v_y represents the speed measurement in the Y direction, and Δv represents the actual difference of the vehicle speed;
the speed measurement v_x in the X direction and the speed measurement v_y in the Y direction are determined by the following method:
v_x = (X_B - X_A) * f / (F_B - F_A),  v_y = (Y_B - Y_A) * f / (F_B - F_A)    (4)
wherein v_x represents the speed measurement in the X direction, v_y represents the speed measurement in the Y direction, f represents the sampling frequency, X_A and X_B represent the X-direction distances corresponding to points A and B in the actual scene, Y_A and Y_B represent the Y-direction distances corresponding to points A and B in the actual scene, F_A and F_B represent the frame numbers corresponding to points A and B, and t = (F_B - F_A) / f represents the tracking duration from point A to point B;
the laser speed measurement v_laser is determined by the following method:
v_laser = ((1/p_1) * Σ_{j=1..p_1} d_1j) / (t_2 - t_1)    (5)
wherein v_laser represents the laser speed measurement, p_1 represents the number of data points on the current cross-section scanned by laser I, t_1 represents the time when laser I first scans the current vehicle, t_2 represents the time when laser II first scans the current vehicle, and d_1j represents the distance the current vehicle travels between lasers I and II.
Further, the laser acquisition unit I comprises a laser I and a laser II erected at a preset height, wherein laser I is perpendicular to the ground and laser II is erected at a preset angle α to laser I; the preset angle must ensure that the distance between the two scan lines is smaller than the length of a car.
Further, the preset angle α is determined by the following method:
α = arctan(H / L)    (6)
wherein α represents the included angle between laser I and laser II, H represents the preset erection height of lasers I and II, and L represents the length of a car.
Further, the laser recognition determines the length, width and height of the vehicle by the following method;
the width of the vehicle is determined by the following method:
W_i = sqrt(l_1^2 + l_2^2 - 2 * l_1 * l_2 * cos θ_k)    (7)
wherein W_i represents the width of the vehicle, l_1 represents the distance corresponding to the first point scanned onto the vehicle on a certain cross-section, l_2 represents the distance corresponding to the last point scanned onto the vehicle, and θ_k represents the rotation angle of the laser between the two points;
the height of the vehicle is determined by the following method:
h_ij = {H - l_ij * cos θ_j},  i = 1, 2, 3, ..., m,  j = 1, 2, 3, ..., p_n    (8)
wherein h_ij represents the height of the vehicle, H represents the mounting height of laser I, l_ij represents a data point of the vehicle, and θ_j represents the included angle between the laser head of laser I and the central axis at a given moment;
the length of the vehicle is determined by the following method:
L = (m / f) * ((1/p_1) * Σ_{j=1..p_1} d_1j) / (t_2 - t_1)    (9)
wherein L represents the length of the vehicle, m represents the number of cross-sections scanned by laser I on the current vehicle, j represents a summation variable, f represents the laser scanning frequency, p_1 represents the number of data points on the current cross-section scanned by laser I, t_1 represents the time when laser I first scans the current vehicle, t_2 represents the time when laser II first scans the current vehicle, and d_1j represents the distance the current vehicle travels between lasers I and II.
The beneficial technical effects of the invention are as follows: the method combines a deep-learning-based video technology with a laser technology to classify vehicle types, solving the problems caused by using a single method; by introducing deep learning into the vehicle type classification method, the influence of light, weather, environment and other factors on traditional video image processing is overcome, achieving accurate detection of vehicle type classification.
Drawings
The invention is further described below with reference to the accompanying drawings and examples:
fig. 1 shows the vehicle type classification standard of the present application.
fig. 2 shows the coordinate system establishment method of the present application.
Detailed Description
The invention is further described below with reference to the accompanying drawings of the specification:
the invention provides a vehicle model analysis method based on laser and video technology time sequence and feature fusion, which is characterized by comprising the following steps of: the method comprises the following steps:
s1: the method comprises the steps of constructing an acquisition unit, wherein the acquisition unit comprises a laser acquisition unit I and a video acquisition unit II; the laser is installed at the position of the existing portal frame and street lamp, so the height of the laser is basically between 6 and 8 meters, wherein the laser I is installed perpendicular to the ground and used for determining the length, width and height of a vehicle by scanning the vehicle to construct a vehicle three-dimensional model, the laser II is erected at a certain angle with the ground and used for determining the speed of the vehicle, the laser counting is accurate and the occurrence of the vehicle leakage is prevented, and the included angle of the two lasers is required to be ensured to be smaller than the distance of a car according to the erection height of the lasers. The video acquisition unit comprises a camera, and the erection height of the camera is between 6 and 8 meters from the ground, namely, the camera is arranged at the existing portal frame and street lamp position.
S2: based on the information acquired by the acquisition unit, determining a vehicle type identification result by a vehicle type identification method, wherein the vehicle type identification method comprises laser identification and video identification;
the laser identification comprises the steps of acquiring the laser acquisition unit data, determining a vehicle type classification identification result I according to the data, taking the identification result I and the triggering time corresponding to the identification result I as unit data I, and storing the unit data I according to the sequence of the triggering time to obtain a laser identification result queue I;
the video identification comprises the steps of obtaining video information of the video acquisition unit, determining a vehicle type classification identification result II according to the video information, taking the identification result II and trigger time corresponding to the identification result II as unit data II, and storing the unit data II according to the sequence of the trigger time to obtain a video identification result queue II;
s3: judging whether a rainy day exists, if so, taking a video identification result as a detection result, and if not, entering the next step; the laser scanning principle is that a laser beam scans an object, reflected light beams are reflected back to form images, in rainy days, the laser beams are reflected back when encountering raindrops, vehicles cannot be scanned, errors are brought to detection results in the rainy days, therefore, weather conditions are judged through data change conditions of laser scanning results, and when irregular and disordered scanning results appear in laser data, the rainy days are judged. Because the camera is installed with a certain installation angle, one of the two lasers is vertically installed, in order to prevent the shielding, the laser detection result is taken as the main part in the rainy day, and the same detection data in the video result queue is searched in the laser detection result queue.
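The rainy-day judgement described above can be prototyped as a simple irregularity test on the laser returns. Below is a minimal sketch in Python; the height-profile input and both thresholds are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def looks_like_rain(scan_heights: np.ndarray,
                    jump_threshold: float = 0.5,
                    irregular_ratio: float = 0.3) -> bool:
    """Heuristic rain test on one laser scan of an empty-road region.

    On a dry road the returns sit near height 0; raindrops reflect the
    beam early and produce scattered spikes, so rain is flagged when too
    many neighbouring points jump by more than jump_threshold metres.
    """
    diffs = np.abs(np.diff(scan_heights))
    return float(np.mean(diffs > jump_threshold)) > irregular_ratio
```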
S4: match the laser recognition result queue I and the video recognition result queue II by the following method, and output a detection result according to the matching result;
the matching method comprises the following steps:
S41: acquire the vehicle type classification recognition result I of a certain vehicle in laser recognition result queue I, simultaneously acquire the vehicle type classification recognition results I within a preset time period after the current vehicle's trigger time, and determine the laser trigger time t_laser of the current vehicle;
S42: determine the video trigger time t_video from the time difference Δt between trigger lines and the laser trigger time t_laser;
S42: judge whether the vehicle type classification recognition results I within the preset time period are empty; if so, enter step S43, and if not, enter step S44;
S43: according to the video trigger time t_video, acquire the vehicle type classification recognition result II of the current vehicle from video recognition result queue II;
judge whether vehicle type classification recognition result I and vehicle type classification recognition result II of the current vehicle are consistent; if so, the match succeeds and the detection result is output; otherwise, enter step S45;
S44: according to the video trigger time t_video, acquire the vehicle type classification recognition result II of the current vehicle from video recognition result queue II, and simultaneously acquire the vehicle type classification recognition results II within the preset time period after the current vehicle's video trigger time t_video;
judge whether vehicle type classification recognition result I of the current vehicle is consistent with vehicle type classification recognition result II; if so, further judge whether the distribution of recognition results I within the preset time period is consistent with the distribution of recognition results II within the preset time period; if both are consistent, the match succeeds and the detection result is output; otherwise, enter step S45;
S45: determine the length of the current vehicle from the video speed measurement and the laser scanning duration, judge the vehicle type from the vehicle length, and take as the matching result the recognition result closest to that vehicle type. The method combines a deep-learning-based video technology with laser technology to classify vehicle types, solving the problems caused by a single method; introducing deep learning into the vehicle type classification method overcomes the influence of light, weather, environment and other factors on traditional video image processing, achieving accurate detection of vehicle type classification. The laser scanning duration is the time laser II takes from the start to the end of scanning one vehicle.
In this embodiment, step S4 further comprises: if the vehicle type classification recognition result II of the current vehicle cannot be obtained from video recognition result queue II according to the video trigger time t_video, then vehicle type classification recognition result I of the current vehicle is the detection result, and the matching is exited. If no matching result is found in the video queue, the video result has probably missed a detection, and the laser detection result is output at that moment; in this way a detection result is output even when a detection is missed, guaranteeing the stability of the method.
In this embodiment, during the matching of step S43 and step S44, if mismatches occur M times in succession, all data are cleared and the matching restarts. The number M is preset according to an empirical value, such as 3 or 5; those skilled in the art can set the value of M according to actual needs. If several successive matches fail, the method restarts, which effectively avoids repeated invalid matching and improves the accuracy of the method.
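As an illustration of the matching loop of steps S41 to S45 together with the M-mismatch reset, the following Python sketch pairs the two result queues by expected trigger time. The queue format, the matching window and the resolve_by_length fallback are assumptions for illustration, not the patent's implementation:

```python
from collections import deque

def resolve_by_length(t_laser):
    # Placeholder for step S45: classify by vehicle length obtained from
    # video speed x laser scanning duration (not implemented here).
    return "unknown"

def match_queues(laser_q: deque, video_q: deque,
                 dt: float, window: float = 2.0, M: int = 3):
    """laser_q / video_q hold (trigger_time, vehicle_class) tuples in
    trigger-time order; dt is the trigger-line time difference Δt."""
    results, mismatches = [], 0
    while laser_q:
        t_laser, cls_laser = laser_q.popleft()
        t_video = t_laser - dt              # video triggers first
        # nearest video record to the expected video trigger time
        cand = min(video_q, key=lambda r: abs(r[0] - t_video), default=None)
        if cand is None or abs(cand[0] - t_video) > window:
            results.append(cls_laser)       # missed video detection: use laser
            continue
        video_q.remove(cand)
        if cand[1] == cls_laser:            # recognition results agree
            results.append(cls_laser)
            mismatches = 0
        else:
            mismatches += 1
            if mismatches >= M:             # M consecutive mismatches:
                laser_q.clear()             # clear all data and restart
                video_q.clear()
                break
            results.append(resolve_by_length(t_laser))
    return results
```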
In this embodiment, the time difference Δt between the trigger lines is determined by the following method:
Δt = D / ((v_video + v_laser) / 2)    (1)
wherein Δt represents the time difference between trigger lines, D represents the distance between trigger lines, v_video represents the video speed measurement, and v_laser represents the laser speed measurement;
D = (t_laser - t_video) * v_video    (2)
wherein D represents the distance between trigger lines, t_laser represents the laser trigger time, t_video represents the video trigger time, and v_video represents the video speed measurement.
Video triggering occurs before laser triggering. The laser trigger time is the moment the vehicle has completely passed the laser, and the video trigger time is the moment tracking of the vehicle's trajectory ends; combining the time difference between the two triggers with the video speed measurement, formula (2) determines the distance between the trigger lines of the two methods. The following points need attention when determining this distance: because of the erection angles of the camera and the laser, the distance between the two trigger lines is calculated separately for each lane; and a period with few vehicles is chosen, i.e. the per-lane distance between the two trigger lines is calculated while the two trigger methods match without external interference. The specific steps are as follows. First, calibrate the lane ranges: the video method is calibrated from the lane line positions, converting the lane line positions in actual coordinates into pixel coordinates, and the lane of a vehicle is determined from the lane line positions when a result is obtained; for the laser method, the scanning range of each lane is determined from the lane width and the scan data, thereby determining the lane. The distance between the two trigger lines is then determined as in formula (2); to ensure its accuracy, a scheme of repeated measurement with continuous correction can be adopted to gradually reduce the measurement error. In this embodiment, the method adopted is: after the trigger-line distance is computed each time, the maximum and minimum results are discarded and the remaining results are averaged, and this average is used as the current trigger-line distance, improving the accuracy of the calculation.
After the distance between the two trigger lines is obtained, the time difference with which a vehicle crosses the two trigger lines is determined according to the speed requirements of the detected road section, and serves as the basis for the subsequent matching. Because a vehicle may accelerate or decelerate while travelling, the average of the video speed measurement and the laser speed measurement is used when calculating the time difference between the trigger lines, as in formula (1).
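A small sketch of this correction scheme follows, assuming the averaged-speed form of formula (1) reconstructed above; the function names are illustrative:

```python
def trigger_line_distance(samples):
    """samples: per-vehicle estimates D = (t_laser - t_video) * v_video,
    formula (2). Drop the largest and smallest estimate and average the
    rest (needs at least three samples for the trimming to take effect)."""
    s = sorted(samples)
    trimmed = s[1:-1] if len(s) > 2 else s
    return sum(trimmed) / len(trimmed)

def trigger_time_difference(D, v_video, v_laser):
    """Formula (1): Δt from the mean of the video and laser speeds."""
    return D / ((v_video + v_laser) / 2.0)
```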
In this embodiment, the video speed measurement v_video is determined by the following method:
v_video = sqrt(v_x^2 + v_y^2) + Δv    (3)
wherein v_video represents the video speed measurement, v_x represents the speed measurement in the X direction, v_y represents the speed measurement in the Y direction, and Δv represents the actual difference of the vehicle speed;
the speed measurement v_x in the X direction and the speed measurement v_y in the Y direction are determined by the following method:
v_x = (X_B - X_A) * f / (F_B - F_A),  v_y = (Y_B - Y_A) * f / (F_B - F_A)    (4)
wherein v_x represents the speed measurement in the X direction, v_y represents the speed measurement in the Y direction, f represents the sampling frequency, X_A and X_B represent the X-direction distances corresponding to points A and B in the actual scene, Y_A and Y_B represent the Y-direction distances corresponding to points A and B in the actual scene, F_A and F_B represent the frame numbers corresponding to points A and B, and t = (F_B - F_A) / f represents the tracking duration from point A to point B;
video vehicle type classification method based workflow
The video-based vehicle type classification method comprises the following basic processes: mapping point calibration, mapping table creation, target recognition, vehicle type classification, target tracking and vehicle speed determination. The specific implementation is as follows:
because the length and spacing of the lane dividing lines are known parameters, their start and end points are used as reference points for multi-point calibration;
the conversion relation between the image coordinate system and the actual coordinate system is derived from the camera's mapping relation, to facilitate calculation of the vehicle speed;
establishing the mapping relation comprises calibrating the mapping points and building the mapping table;
calibrating the mapping points determines the correspondence between an object in the world coordinate system and its image on the image plane, by determining the position and the internal and external parameters of the camera and establishing an imaging model;
in a practical application scene, since the spacing of the lane dividing lines is known, the required mapping relation can be obtained by calibrating known points and applying the imaging principle of the camera;
building the mapping table proceeds as follows:
let the coordinates of a point in the world coordinate system be W(X, Y, Z); by the proportionality of similar triangles, its projected point I(x, y) satisfies
x = f * X / Z,  y = f * Y / Z    (4-1)
where f is the focal length, i.e. the distance from the projection centre to the image plane;
the division by Z contained in formula (4-1) is a nonlinear transformation; introducing homogeneous coordinates converts it into a linear matrix computation, giving the homogeneous coordinate matrix of formula (4-2), where Z and K are scale factors;
in the vehicle speed calculation, the height information of the vehicle is not needed, so formula (4-2) is simplified to the transformation matrix of formula (4-3);
from formula (4-3), the coordinates of a point (X, Y) in the world coordinate system can be expressed as in formula (4-4);
substituting the pixel coordinates of points with known spacing in the IMAGE coordinate system and their actual distances in the world coordinate system into (4-4), the mapping between pixel distance and actual distance is solved, and a mapping table of the two coordinate systems is built, namely MapTable[IMAGE_SIZE], where IMAGE_SIZE is the product of the image width and the image height; in subsequent calculations, inputting the pixel coordinates of a query point yields the actual distance corresponding to that point, as sketched below.
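The mapping table itself is only a per-pixel lookup from pixel coordinates to real-world coordinates. A minimal sketch, assuming a calibrated back-projection pixel_to_world(x, y) obtained from the calibration steps above:

```python
def build_map_table(width, height, pixel_to_world):
    """MapTable[IMAGE_SIZE]: one real-world (X, Y) pair per pixel,
    indexed as y * width + x, with IMAGE_SIZE = width * height."""
    return [pixel_to_world(x, y)
            for y in range(height) for x in range(width)]

def lookup(map_table, width, px, py):
    """Mirrors Dis[i] = MapTable[Position[i].y * width + Position[i].x]."""
    return map_table[py * width + px]
```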
the speed calculation step is to substitute the space positions of each point in the moving vehicle target track into the mapping relation table to obtain the actual distance represented by each target centroid characteristic point in the track,
wherein Dis i [i].x、Dis i [i]Y is the actual distance corresponding to a certain point in the transverse direction and the longitudinal direction, mapTable is the established mapping table, position [ i ]]X is the X coordinate of a pixel point, position [ i ]]Y is the Y coordinate of a pixel point, and width is the image width;
in a rectangular coordinate system, describing the spatial position of a tracking point by (x, y), acquiring a target motion track through related information of a starting point and an end point of a target tracking track of a moving vehicle, and obtaining the following information by setting the starting point of a certain target as A and the end point as B:
in the formula (16), the values of X and Y are obtained by looking up a mapping table, F is a frame number corresponding to a certain point, F is a sampling frequency, namely 25 frames/s, XA is an X-direction distance corresponding to a point A in an actual scene, XB is an X-direction distance corresponding to a point B in the actual scene, YA is a Y-direction distance corresponding to a point A in the actual scene, and YB is a Y-direction distance corresponding to a point B in the actual scene. FA is the frame number corresponding to point a,
obtaining a moving speed of the moving vehicle target from (4-6)
The speed correction step is that if the calculated speed is generally larger or smaller in a certain scene (the speed of the expressway is specified as the highest speed which must not exceed 120 km per hour and the lowest speed which must not be lower than 60 km per hour, if the calculated speed is out of the range, the calculated speed is considered to be larger or smaller), the calculated speed is inaccurate due to the deviation of the mapping relation in the calculation process, and the measured speed can be corrected according to the actual situation, namely
Deltav is the actual difference in vehicle speed.
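Putting formulas (4-5) to (4-7) together, a sketch of the per-target speed computation; the coordinates are assumed to come from the mapping table, and the 25 frame/s rate follows the text:

```python
FPS = 25.0  # sampling frequency f, frames per second

def video_speed(xa, ya, fa, xb, yb, fb, dv=0.0):
    """Speed of a tracked target between trajectory start A and end B.

    (xa, ya), (xb, yb): real-world coordinates of A and B from the
    mapping table (metres); fa, fb: their frame numbers; dv: the
    empirical correction term Δv.
    """
    t = (fb - fa) / FPS                      # tracking duration A -> B
    vx = (xb - xa) / t                       # X-direction speed, formula (4-5)
    vy = (yb - ya) / t                       # Y-direction speed, formula (4-5)
    return (vx * vx + vy * vy) ** 0.5 + dv   # formulas (4-6) and (4-7)
```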
The laser speed measurement v_laser is determined by the following method:
v_laser = ((1/p_1) * Σ_{j=1..p_1} d_1j) / (t_2 - t_1)    (5)
wherein v_laser represents the laser speed measurement, p_1 represents the number of data points on the current cross-section scanned by laser I, t_1 represents the time when laser I first scans the current vehicle, t_2 represents the time when laser II first scans the current vehicle, and d_1j represents the distance the current vehicle travels between lasers I and II.
In this embodiment, the laser acquisition unit I comprises a laser I and a laser II erected at a preset height, wherein laser I is perpendicular to the ground and laser II is erected at a preset angle α to laser I; the preset angle must ensure that the distance between the two scan lines is smaller than the length of a car.
The preset angle α is determined by the following method:
α = arctan(H / L)    (6)
wherein α represents the included angle between laser I and laser II, H represents the preset erection height of lasers I and II, and L represents the length of a car.
In this embodiment, the laser recognition determines the length, width and height of the vehicle by the following method;
the width of the vehicle is determined by the following method:
W_i = sqrt(l_1^2 + l_2^2 - 2 * l_1 * l_2 * cos θ_k)    (7)
wherein W_i represents the width of the vehicle, l_1 represents the distance corresponding to the first point scanned onto the vehicle on a certain cross-section, l_2 represents the distance corresponding to the last point scanned onto the vehicle, and θ_k represents the rotation angle of the laser between the two points;
the height of the vehicle is determined by the following method:
h_ij = {H - l_ij * cos θ_j},  i = 1, 2, 3, ..., m,  j = 1, 2, 3, ..., p_n    (8)
wherein h_ij represents the height of the vehicle, H represents the mounting height of laser I, l_ij represents a data point of the vehicle, and θ_j represents the included angle between the laser head of laser I and the central axis at a given moment;
the length of the vehicle is determined by the following method:
L = (m / f) * ((1/p_1) * Σ_{j=1..p_1} d_1j) / (t_2 - t_1)    (9)
wherein L represents the length of the vehicle, m represents the number of cross-sections scanned by laser I on the current vehicle, j represents a summation variable, f represents the laser scanning frequency, p_1 represents the number of data points on the current cross-section scanned by laser I, t_1 represents the time when laser I first scans the current vehicle, t_2 represents the time when laser II first scans the current vehicle, and d_1j represents the distance the current vehicle travels between lasers I and II.
The vehicle length, width and height are calculated as follows.
According to the installation of the laser, a coordinate system is established with the bottom of the upright post carrying the laser sensor as the origin, the direction perpendicular to the driving direction of the vehicle as the x axis, and the upward direction along the post as the positive y axis; the coordinate system is illustrated in fig. 2.
Assume the road surface is a horizontal plane on which y = 0 holds identically; under this condition y = 0 whenever the laser scans the road surface, i.e. when no vehicle passes. The discrete points obtained by the laser head within one scanning period lie in a single plane parallel to the xoy plane of the laser installation; this plane is called the laser scanning section. While a vehicle travels, the section swept over the vehicle during one rotation period of the laser head forms an angle, depending on the vehicle speed, with the section formed when the vehicle is stationary. At a laser scanning frequency of 50 Hz, the rotation period is
T = 1 / f = 1 / 50 s = 0.02 s
Because this period is extremely short, the section is assumed to be parallel to the xoy plane.
When the height data of a scanned section is always 0, i.e.
h_j = 0,  j = 1, 2, ..., n
where n is the number of points obtained in one period, the laser reflection points can be considered to lie entirely on the road surface. When a section with y ≠ 0 appears, the sections are counted; when a section with y = 0 appears again, counting stops; this yields the number m of sections on the travelling vehicle.
At any time t, the rotation angle of the laser head is
θ = θ_0 + Δθ * t    (7-3)
where Δθ is the stepping angle of the laser rotation and θ_0 is the initial angle at which the laser head starts rotating. On a certain section, let the distance corresponding to the first point scanned onto the vehicle be l_1 and the distance corresponding to the last scanned point be l_2, with the laser rotating through the angle θ_k in between; the width of the vehicle is then given by formula (7).
For the vertically placed laser, the laser head scans m sections of a travelling vehicle in total, with p_n data points on a single section, so the data points of each vehicle form an m × p_n matrix. The height scanned from each l_ij is the height of the vehicle, and the vehicle height is determined as in formula (8).
Assume the times at which the two lasers first sweep the same vehicle are t_1 and t_2 respectively, the angle between the two lasers is α, the erection height of the lasers is H, and the laser scanning frequency is f. The distance the vehicle travels between the two lasers is
d_1j = {l_1j * cos θ_j * tan α},  j = 1, 2, ..., p_n    (7-4)
Taking the average of these distances as the travelled distance, the vehicle speed is obtained as in formula (5). Assuming the vehicle travels at constant speed, the vehicle length is then obtained as in formula (9).
Thus, the construction of the three-dimensional information of the vehicle is achieved.
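The scan-data processing above can be summarised in one routine. The following sketch implements formulas (5), (7-4), (8) and (9); the law-of-cosines width in place of formula (7) is a reconstruction from the variable definitions, since the patent's formula images are not available:

```python
import math

def vehicle_dimensions(sections, H, f, t1, t2, alpha):
    """Estimate width, height and length of one vehicle.

    sections: one list per scanned cross-section, each holding (l, theta)
    range/angle pairs for returns on the vehicle (metres, radians).
    H: mounting height of laser I; f: scanning frequency (Hz);
    t1, t2: first-hit times of lasers I and II; alpha: angle between them.
    """
    m = len(sections)
    widths, heights = [], []
    for scan in sections:
        (l1, th1), (l2, th2) = scan[0], scan[-1]
        theta_k = abs(th2 - th1)                 # rotation angle between points
        widths.append(math.sqrt(l1 * l1 + l2 * l2
                                - 2 * l1 * l2 * math.cos(theta_k)))   # (7)
        heights.extend(H - l * math.cos(th) for l, th in scan)        # (8)
    # distance travelled between the two scan lines, formula (7-4), averaged
    first = sections[0]
    d = sum(l * math.cos(th) * math.tan(alpha) for l, th in first) / len(first)
    v_laser = d / (t2 - t1)                      # formula (5)
    length = v_laser * m / f                     # formula (9)
    return max(widths), max(heights), length
```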
After the three-dimensional information of a passing vehicle is determined, the vehicle type is determined from this three-dimensional information using a preset vehicle type comparison table; the comparison table is shown in fig. 1.
In this embodiment, the video recognition adopts the yolov5 network architecture. The method requires a camera to acquire real-time video of the vehicles; a supplementary light can be added to prevent insufficient illumination at night from making clear video unobtainable. The basic principle of the algorithm is to label and train samples of the various vehicle types with the yolov5 network architecture to obtain deep learning weights, obtain targets and vehicle type classification results with the deep learning method, and track the detected targets with a video tracking algorithm to obtain the vehicle speed, facilitating the subsequent fusion with the laser detection method. To ensure the accuracy of the video speed measurement, the monitoring distance of the camera should be at least 100 meters.
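For the detection step, a typical yolov5 inference call is sketched below; the weight file name is an assumption, and torch.hub is one documented way to load custom ultralytics/yolov5 weights:

```python
import torch

# Load custom-trained vehicle-type weights (the file name is illustrative).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='vehicle_types.pt')

def classify_frame(frame):
    """Detect vehicles in one video frame (numpy image) and return a list
    of (class_name, [x1, y1, x2, y2], confidence) tuples."""
    results = model(frame)
    detections = []
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        detections.append((model.names[int(cls)], xyxy, float(conf)))
    return detections
```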
Finally, it is noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered by the scope of the claims of the present invention.

Claims (8)

1. A vehicle model analysis method based on laser and video technology time sequence and feature fusion, characterized by comprising the following steps:
S1: construct an acquisition unit, wherein the acquisition unit comprises a laser acquisition unit I and a video acquisition unit II;
S2: based on the information acquired by the acquisition unit, determine a vehicle type recognition result by a vehicle type recognition method, wherein the vehicle type recognition method comprises laser recognition and video recognition;
the laser recognition comprises: acquiring data from the laser acquisition unit, determining a vehicle type classification recognition result I according to the data, taking recognition result I and its corresponding trigger time as unit data I, and storing the unit data I in order of trigger time to obtain a laser recognition result queue I;
the video recognition comprises: obtaining video information from the video acquisition unit, determining a vehicle type classification recognition result II according to the video information, taking recognition result II and its corresponding trigger time as unit data II, and storing the unit data II in order of trigger time to obtain a video recognition result queue II;
S3: judge whether it is raining; if so, take the video recognition result as the detection result; if not, enter the next step;
S4: match the laser recognition result queue I and the video recognition result queue II by the following method, and output a detection result according to the matching result;
the matching method comprises the following steps:
S41: acquire the vehicle type classification recognition result I of a certain vehicle in laser recognition result queue I, simultaneously acquire the vehicle type classification recognition results I within a preset time period after the current vehicle's trigger time, and determine the laser trigger time t_laser of the current vehicle;
S42: determine the video trigger time t_video from the time difference Δt between trigger lines and the laser trigger time t_laser;
S42: judge whether the vehicle type classification recognition results I within the preset time period are empty; if so, enter step S43, and if not, enter step S44;
S43: according to the video trigger time t_video, acquire the vehicle type classification recognition result II of the current vehicle from video recognition result queue II;
judge whether vehicle type classification recognition result I and vehicle type classification recognition result II of the current vehicle are consistent; if so, the match succeeds and the detection result is output; otherwise, enter step S45;
S44: according to the video trigger time t_video, acquire the vehicle type classification recognition result II of the current vehicle from video recognition result queue II, and simultaneously acquire the vehicle type classification recognition results II within the preset time period after the current vehicle's video trigger time t_video;
judge whether vehicle type classification recognition result I of the current vehicle is consistent with vehicle type classification recognition result II; if so, further judge whether the distribution of recognition results I within the preset time period is consistent with the distribution of recognition results II within the preset time period; if both are consistent, the match succeeds and the detection result is output; otherwise, enter step S45;
S45: determine the length of the current vehicle from the video speed measurement and the laser scanning duration, judge the vehicle type from the vehicle length, and take as the matching result the recognition result closest to that vehicle type.
2. The vehicle model analysis method based on laser and video technology time sequence and feature fusion of claim 1, wherein step S4 further comprises: if the vehicle type classification recognition result II of the current vehicle cannot be obtained from video recognition result queue II according to the video trigger time t_video, then vehicle type classification recognition result I of the current vehicle is the detection result, and the matching is exited.
3. The vehicle model analysis method based on laser and video technology time sequence and feature fusion of claim 2, wherein in the matching process of step S43 and step S44, if mismatches occur M times in succession, all data are cleared and the matching restarts.
4. The vehicle model analysis method based on laser and video technology time sequence and feature fusion of claim 1, wherein the time difference Δt between the trigger lines is determined by the following method:
Δt = D / ((v_video + v_laser) / 2)    (1)
wherein Δt represents the time difference between trigger lines, D represents the distance between trigger lines, v_video represents the video speed measurement, and v_laser represents the laser speed measurement;
D = (t_laser - t_video) * v_video    (2)
wherein D represents the distance between trigger lines, t_laser represents the laser trigger time, t_video represents the video trigger time, and v_video represents the video speed measurement.
5. The vehicle model analysis method based on laser and video technology time sequence and feature fusion of claim 1, wherein the video speed measurement v_video is determined by the following method:
v_video = sqrt(v_x^2 + v_y^2) + Δv    (3)
wherein v_video represents the video speed measurement, v_x represents the speed measurement in the X direction, v_y represents the speed measurement in the Y direction, and Δv represents the actual difference of the vehicle speed;
the speed measurement v_x in the X direction and the speed measurement v_y in the Y direction are determined by the following method:
v_x = (X_B - X_A) * f / (F_B - F_A),  v_y = (Y_B - Y_A) * f / (F_B - F_A)    (4)
wherein v_x represents the speed measurement in the X direction, v_y represents the speed measurement in the Y direction, f represents the sampling frequency, X_A and X_B represent the X-direction distances corresponding to points A and B in the actual scene, Y_A and Y_B represent the Y-direction distances corresponding to points A and B in the actual scene, F_A and F_B represent the frame numbers corresponding to points A and B, and t = (F_B - F_A) / f represents the tracking duration from point A to point B;
the laser speed measurement v_laser is determined by the following method:
v_laser = ((1/p_1) * Σ_{j=1..p_1} d_1j) / (t_2 - t_1)    (5)
wherein v_laser represents the laser speed measurement, p_1 represents the number of data points on the current cross-section scanned by laser I, t_1 represents the time when laser I first scans the current vehicle, t_2 represents the time when laser II first scans the current vehicle, and d_1j represents the distance the current vehicle travels between lasers I and II.
6. The vehicle model analysis method based on laser and video technology time sequence and feature fusion of claim 1, wherein the laser acquisition unit I comprises a laser I and a laser II erected at a preset height, laser I being perpendicular to the ground and laser II being erected at a preset angle α to laser I; the preset angle must ensure that the distance between the two scan lines is smaller than the length of a car.
7. The vehicle model analysis method based on laser and video technology time sequence and feature fusion of claim 6, wherein the preset angle α is determined by the following method:
α = arctan(H / L)    (6)
wherein α represents the included angle between laser I and laser II, H represents the preset erection height of lasers I and II, and L represents the length of a car.
8. The vehicle model analysis method based on laser and video technology time sequence and feature fusion of claim 7, wherein the laser recognition determines the length, width and height of the vehicle by the following method;
the width of the vehicle is determined by the following method:
W_i = sqrt(l_1^2 + l_2^2 - 2 * l_1 * l_2 * cos θ_k)    (7)
wherein W_i represents the width of the vehicle, l_1 represents the distance corresponding to the first point scanned onto the vehicle on a certain cross-section, l_2 represents the distance corresponding to the last point scanned onto the vehicle, and θ_k represents the rotation angle of the laser between the two points;
the height of the vehicle is determined by the following method:
h_ij = {H - l_ij * cos θ_j},  i = 1, 2, 3, ..., m,  j = 1, 2, 3, ..., p_n    (8)
wherein h_ij represents the height of the vehicle, H represents the mounting height of laser I, l_ij represents a data point of the vehicle, and θ_j represents the included angle between the laser head of laser I and the central axis at a given moment;
the length of the vehicle is determined by the following method:
L = (m / f) * ((1/p_1) * Σ_{j=1..p_1} d_1j) / (t_2 - t_1)    (9)
wherein L represents the length of the vehicle, m represents the number of cross-sections scanned by laser I on the current vehicle, j represents a summation variable, f represents the laser scanning frequency, p_1 represents the number of data points on the current cross-section scanned by laser I, t_1 represents the time when laser I first scans the current vehicle, t_2 represents the time when laser II first scans the current vehicle, and d_1j represents the distance the current vehicle travels between lasers I and II.
CN202111335287.XA 2021-11-11 2021-11-11 Vehicle model analysis method based on laser and video technology time sequence and feature fusion Active CN114118238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111335287.XA CN114118238B (en) 2021-11-11 2021-11-11 Vehicle model analysis method based on laser and video technology time sequence and feature fusion


Publications (2)

Publication Number  Publication Date
CN114118238A (en)   2022-03-01
CN114118238B (en)   2024-03-22

Family

ID=80378633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111335287.XA Active CN114118238B (en) 2021-11-11 2021-11-11 Vehicle model analysis method based on laser and video technology time sequence and feature fusion

Country Status (1)

Country Link
CN (1) CN114118238B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11185195A (en) * 1997-12-19 1999-07-09 Hitachi Zosen Corp Vehicle type discrimination method and device
JPH11203588A (en) * 1998-01-20 1999-07-30 Denso Corp Vehicle type discriminating device
CN104183133A (en) * 2014-08-11 2014-12-03 广州普勒仕交通科技有限公司 Method for acquiring and transmitting road traffic flow dynamic information
CN104361752A (en) * 2014-10-27 2015-02-18 北京握奇智能科技有限公司 Laser scanning based vehicle type recognition method for free flow charging
JP2016164756A (en) * 2015-03-06 2016-09-08 三菱重工メカトロシステムズ株式会社 Vehicle model discrimination system, and vehicle model discrimination method and program
CN105427614A (en) * 2015-08-28 2016-03-23 北京动视元科技有限公司 Model classification system and method
CN107256636A (en) * 2017-06-29 2017-10-17 段晓辉 A kind of traffic flow acquisition methods for merging laser scanning and video technique
CN107945518A (en) * 2017-12-20 2018-04-20 武汉万集信息技术有限公司 Laser type vehicle type recognition device and recognition methods without queue

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
兰晓晨; 宋飞. Automatic detection and modeling method for vehicle type and quantity. 南方农机 (Southern Agricultural Machinery), 2018(14). *
刘伟铭; 王超; 梁雪; 刘一霄. Analysis of ETC lane systems based on laser scanners. 中国交通信息化 (China ITS), 2018(S1). *
段发阶; 吴冰颖; 梁春疆; 傅骁; 蒋佳佳. Online measurement method for the width projection profile of moving vehicles based on a laser array. 测控技术 (Measurement & Control Technology), 2020(08). *

Also Published As

Publication number Publication date
CN114118238A (en) 2022-03-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant