CN114489079A - Signal rule-based automatic driving front vehicle cut-in scene extraction method - Google Patents


Info

Publication number
CN114489079A
CN114489079A
Authority
CN
China
Prior art keywords
vehicle
data
signal
extracting
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210099105.1A
Other languages
Chinese (zh)
Inventor
陆思宇
李开兴
梁斯硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210099105.1A priority Critical patent/CN114489079A/en
Publication of CN114489079A publication Critical patent/CN114489079A/en
Withdrawn legal-status Critical Current


Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 — Control of position or course in two dimensions
    • G05D 1/021 — specially adapted to land vehicles
    • G05D 1/0231 — using optical position detecting means
    • G05D 1/0246 — using a video camera in combination with image processing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a signal-rule-based method for extracting preceding-vehicle cut-in scenes in automatic driving, comprising the following steps: S1, acquiring the driving data and position data of the host vehicle, the positional relationship data between a target vehicle and the host vehicle, and video data that is time-synchronized with the host vehicle's driving data; S2, extracting a signal segment within a preset first time range according to a preset signal rule, and extracting the video segment within a preset second time range that corresponds to the signal segment in the video data; and S3, scene-marking and storing the video segment. The method is based on signals from the automatic driving controller and uses rules to identify the target vehicle's trajectory within those signals, thereby judging whether the target vehicle has cut into the lane and extracting the video for the time period that satisfies the signal rule. This improves extraction efficiency, frees up human resources, and achieves high extraction accuracy.

Description

Signal rule-based automatic driving front vehicle cut-in scene extraction method
Technical Field
The invention belongs to the technical field of automatic driving algorithms, and in particular relates to a signal-rule-based method for extracting preceding-vehicle cut-in scenes in automatic driving.
Background
Automatic driving is steadily becoming an important field of future automotive research. As automation levels rise, computers intervene in driving more and more, and more safeguards are needed to guarantee the safety of automatic driving. Statistics show that a large share of traffic accidents occur while vehicles are changing lanes, so recognizing a preceding vehicle's cut-in during automatic driving is particularly important. Training a vision algorithm capable of recognizing preceding-vehicle cut-ins requires large amounts of video data of the corresponding scenes for training, testing, simulation, and so on. At present, the videos collected by data-acquisition vehicles are continuous, with no mass data labelled by specific scene; cut-in scenes make up a relatively small share, and extracting them manually consumes a great deal of manpower and time.
Chinese invention patent CN202010121033.7 proposes a method for extracting cut-in and cut-out scenes of a preceding vehicle in automatic driving, comprising the following steps: S1, installing data-acquisition equipment on the host vehicle and collecting data on the host vehicle, the target vehicle, and the lane lines; and S2, judging the target vehicle's cut-in and cut-out from the data collected in S1, and extracting the video or images at that point. That method can obtain qualifying scenes directly from a scene database, so lane-change scene data is extracted without manually reviewing videos, saving labour and time. However, it judges whether a vehicle cuts into or out of the driving lane based on a sudden change in the target vehicle's longitudinal distance, and it cannot rule out sudden longitudinal-distance changes caused by the target vehicle braking or accelerating sharply, so its extraction results are not accurate enough.
Disclosure of Invention
To solve these problems, the invention provides a signal-rule-based method for extracting preceding-vehicle cut-in scenes in automatic driving. Based on signals from the automatic driving controller, it uses rules to identify the target vehicle's trajectory within the signals, judges whether the target vehicle has cut into the lane, and extracts the video for the time period that satisfies the signal rule, thereby improving extraction efficiency, freeing up human resources, and achieving high extraction accuracy.
To solve the technical problem, the invention adopts the following technical solution. A signal-rule-based method for extracting preceding-vehicle cut-in scenes in automatic driving comprises the following steps:
S1, acquiring the driving data and position data of the host vehicle, the positional relationship data between the target vehicle and the host vehicle, and video data time-synchronized with the host vehicle's driving data;
S2, extracting a signal segment within a preset first time range according to a preset signal rule, and extracting the video segment within a preset second time range corresponding to the signal segment in the video data;
S3, scene-marking and storing the video segment.
As an optimization, the signal rule is preset as follows:
S201, presetting a first time range for extracting signal segments, and setting up a coordinate system;
S202, setting a start-frame trigger condition for extracting a signal segment; when the positional relationship data between the target vehicle and the host vehicle satisfies the trigger condition, recording the start frame and proceeding to the next step;
S203, setting an end-frame trigger condition for extracting the signal segment; when the positional relationship data between the target vehicle and the host vehicle satisfies the trigger condition, recording the end frame and calculating the time difference between the end frame and the start frame; if the time difference is greater than the first time range, returning to S202; if the time difference is less than the first time range, proceeding to the next step;
S204, judging the host vehicle's driving state within the time difference from its driving data; if the host vehicle is in the driving state, proceeding to the next step; otherwise, returning to S202;
S205, judging the host vehicle's lane-change state within the time difference from its position data; if the host vehicle is in the lane-change state, returning to S202; otherwise, recording the end frame as the target vehicle's successful cut-in frame and proceeding to the next step;
S206, taking the successful cut-in frame as the time node, extracting the video segment within the preset second time range of the video data as the cut-in scene video;
S207, traversing multiple target vehicles, executing S202–S206 for each, and extracting the cut-in scene videos corresponding to the target vehicles.
As an optimization, the driving data includes the host vehicle's speed; the position data includes the lateral distances between the host vehicle and the lane lines on both sides; and the positional relationship data between the target vehicle and the host vehicle includes their lateral and longitudinal distances.
As an optimization, the coordinate system takes the centre of the host vehicle's head as the origin, the centre of the target vehicle's tail as the target vehicle's coordinate point, the lane's width direction as the horizontal axis, and the lane's length direction as the vertical axis.
As an optimization, the start-frame trigger condition is that the longitudinal distance between the target vehicle and the host vehicle is less than a preset maximum longitudinal distance threshold and the lateral distance between them is greater than a preset maximum lateral distance threshold.
As an optimization, the end-frame trigger condition is that the longitudinal distance between the target vehicle and the host vehicle is less than the preset maximum longitudinal distance threshold, the lateral distance between them changes from greater than the preset maximum lateral distance threshold to less than a preset minimum lateral distance threshold, and the time difference is less than the first time range.
As an optimization, the driving state is that the host vehicle's speed is greater than zero.
As an optimization, the lane-change state is that the host vehicle's distance to the left or right lane line decreases from greater than zero through zero and then jumps back to greater than zero.
Compared with the prior art, the invention has the following advantages:
1. By collecting the host vehicle's state and its positional relationship with the target vehicle, and checking whether they satisfy the preset signal-rule conditions, the method conveniently and quickly distinguishes an active lane change by the host vehicle from a cut-in by a preceding vehicle. It then extracts the preceding-vehicle cut-in scene videos from the video data according to the judgment result, scene-marks them, and stores them in a scene library for later screening and application.
2. The judgment conditions are preset from positional and temporal relationships, so the process from the preceding vehicle's cut-in intention to its completed cut-in is judged accurately; combining the host vehicle's state further determines whether the host vehicle changed lanes actively. The conditions are simple and easy to implement, further improving extraction accuracy and efficiency.
3. The judgment conditions, trigger conditions, and parameters are all spatial or temporal relationships, so they can be evaluated with simple hardware. The judgment rule is simple and convenient to apply, further improving accuracy and extraction efficiency.
Drawings
FIG. 1 is a general flow diagram of the present invention;
FIG. 2 is a flow chart of signal rule determination according to the present invention;
FIG. 3 is a schematic view of a road vehicle according to the present invention.
Detailed Description
The invention will be further explained with reference to the drawings and the embodiments.
Embodiment: with reference to Figures 1–3, a signal-rule-based method for extracting preceding-vehicle cut-in scenes in automatic driving comprises the following steps.
S1: Acquire the driving data and position data of the host vehicle, the positional relationship data between the target vehicle and the host vehicle, and video data time-synchronized with the host vehicle's driving data.
First, the on-board controller data and the video data are acquired and aligned in time, yielding the host vehicle's driving data, its position data, and the synchronized video data. The driving data includes the host vehicle's speed; the position data includes the lateral distances between the host vehicle and the lane lines on both sides; the positional relationship data includes the lateral and longitudinal distances between the target vehicle and the host vehicle. The signals are named as follows: the host vehicle speed is Speed; the distance from the host vehicle to the left lane line is Left_line_distance and to the right lane line is Right_line_distance; the lateral distance from the centre of the target vehicle's tail to the centre of the host vehicle's head is Lat_distance, which is negative when the target vehicle is to the left of the host vehicle and positive when it is to the right; and the longitudinal distance between the same two points is Long_distance, which is positive when the centre of the target vehicle's tail is ahead of the centre of the host vehicle's head, and negative otherwise.
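For reference, one time-aligned frame of these signals can be held in a small record. The container, its field for the shared timestamp, and the sample values are illustrative assumptions; only the signal names and sign conventions come from the text above.

```python
from dataclasses import dataclass

@dataclass
class SignalFrame:
    timestamp: float            # seconds, shared with the video stream
    Speed: float                # host vehicle speed
    Left_line_distance: float   # host vehicle to left lane line, metres
    Right_line_distance: float  # host vehicle to right lane line, metres
    Lat_distance: float         # target tail centre to host head centre, lateral;
                                # negative: target on the left, positive: on the right
    Long_distance: float        # same two points, longitudinal; positive when the
                                # target tail centre is ahead of the host head centre

# A target vehicle 3.4 m to the host's left and 42 m ahead:
f = SignalFrame(12.5, 20.0, 1.6, 1.7, -3.4, 42.0)
```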
S2: Extract a signal segment within a preset first time range according to a preset signal rule, and extract the video segment within a preset second time range corresponding to that signal segment in the video data.
The signal rule is preset as follows:
s201, presetting a first time range for extracting the signal segment, and setting a coordinate system. The coordinate system is set by taking the central position of the head of the vehicle as a coordinate origin, taking the central position of the tail of the target vehicle as a coordinate point of the target vehicle, taking the width direction of the lane as a horizontal axis and taking the length direction of the lane as a vertical axis. According to the actual road condition survey, most of the complete cut-in events occur within 8 to 9 seconds, so in this embodiment, we set the time range from when the target vehicle has the cut-in intention to when the target vehicle successfully cuts in, i.e. the first time range for extracting the signal segment, as 8 seconds.
S202: Set the start-frame trigger condition for extracting a signal segment; when the positional relationship data between the target vehicle and the host vehicle satisfies it, record the start frame and proceed to the next step. The start-frame trigger condition is that the longitudinal distance between the target vehicle and the host vehicle is less than a preset maximum longitudinal distance threshold (here, 50 metres) and the lateral distance between them is greater than a preset maximum lateral distance threshold (here, 3 metres). Specifically, check whether Long_distance is less than 50 metres (50 metres is only an example in this embodiment and can be adjusted to the business requirement) and whether Lat_distance is greater than 3 metres; if so, treat that signal frame as the start frame, and if subsequent signal frames also satisfy the condition, update the start frame.
S203: Set the end-frame trigger condition for extracting the signal segment; when the positional relationship data between the target vehicle and the host vehicle satisfies it, record the end frame and calculate the time difference between the end frame and the start frame. If the time difference is greater than the first time range, return to S202; if it is less, proceed to the next step. The end-frame trigger condition is that the longitudinal distance between the target vehicle and the host vehicle is less than the preset maximum longitudinal distance threshold, the lateral distance between them changes from greater than the preset maximum lateral distance threshold to less than a preset minimum lateral distance threshold (here, 0.5 metres), and the time difference is less than the first time range. Specifically, after the start frame occurs, check whether a subsequent signal frame satisfies Lat_distance less than 0.5 metres, Long_distance less than 50 metres, and a time difference from the start frame within 8 seconds; if so, proceed to the next judgment, otherwise return to S202.
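The triggers of S202 and S203 reduce to two per-frame predicates. A sketch with this embodiment's thresholds (the function names are assumptions, and the lateral tests use the absolute value of Lat_distance on the assumption that its sign only encodes which side the target is on):

```python
MAX_LONG = 50.0     # preset maximum longitudinal distance threshold, metres
MAX_LAT = 3.0       # preset maximum lateral distance threshold, metres
MIN_LAT = 0.5       # preset minimum lateral distance threshold, metres
FIRST_RANGE = 8.0   # preset first time range, seconds

def is_start_frame(lat_distance, long_distance):
    # S202: target within longitudinal range but still clearly in an adjacent lane
    return long_distance < MAX_LONG and abs(lat_distance) > MAX_LAT

def is_end_frame(lat_distance, long_distance, t_start, t_now):
    # S203: target now laterally inside the host lane, within the first time range
    return (long_distance < MAX_LONG and abs(lat_distance) < MIN_LAT
            and t_now - t_start <= FIRST_RANGE)
```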
S204: Judge the host vehicle's driving state within the time difference from its driving data; if it is in the driving state, proceed to the next step, otherwise return to S202. The driving state is that the host vehicle's speed is greater than zero. Specifically, supposing the extracted time difference between the end frame and the start frame is 6 seconds, check whether the host vehicle's speed is greater than 0 throughout the 6 seconds before the end frame; if not, return to S202; if so, proceed to the next judgment.
S205: Judge the host vehicle's lane-change state within the time difference from its position data; if it is in the lane-change state, return to S202; otherwise record the end frame as the target vehicle's successful cut-in frame and proceed to the next step. The lane-change state is that the host vehicle's distance to the left or right lane line decreases from greater than zero through zero and then jumps back to greater than zero. Specifically, judge whether the host vehicle actively changed lanes within the 6-second time difference, as follows. If Lat_distance is negative within the 6 seconds, i.e. the target vehicle is to the left of the host vehicle, judge whether the host vehicle actively changed lanes to the left: check whether Left_line_distance changes from greater than 0 to less than 0 within the 6 seconds and then jumps back above 0 (the tracked left lane line changes). If so, the host vehicle actively changed lanes to the left; return to S202. Otherwise, record the end frame as the target vehicle's successful cut-in frame and proceed to the next step. If Lat_distance is positive within the 6 seconds, i.e. the target vehicle is to the right of the host vehicle, judge whether the host vehicle actively changed lanes to the right: check whether Right_line_distance changes from greater than 0 to less than 0 within the 6 seconds and then jumps back above 0 (the tracked right lane line changes). If so, the host vehicle actively changed lanes to the right; return to S202. Otherwise, record the end frame as the target vehicle's successful cut-in frame and proceed to the next step.
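The host lane-change test of S205 (the signed line distance drops from above zero to below zero, then jumps back above zero as the tracker re-targets the next lane line) can be sketched as a single scan over the window. The function name is an assumption:

```python
def host_changed_lane(line_distances):
    """True if the signed distance to one lane line goes below zero and later
    returns above zero within the window, i.e. the host crossed that line."""
    crossed = False
    for d in line_distances:
        if not crossed and d < 0:
            crossed = True          # line has passed under the host vehicle
        elif crossed and d > 0:
            return True             # signal jumped back: next lane line tracked
    return False
```

Per S205, the left-line distances would be scanned when Lat_distance is negative and the right-line distances when it is positive.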
and S206, extracting a video clip in a preset second time range in the video data as a cut-in scene video by taking the successful cut-in frame as a time node. In this embodiment, the second time range is set to a time range of 20 seconds in total, which is 10 seconds before and after the successful frame cut, and may be set to a time range of coincidence or non-coincidence before and after the frame cut, as required. And according to the successful cut-in frame, extracting signal segments 10 seconds before and after the successful cut-in frame as signal segments meeting the conditions, and extracting video segments corresponding to the signal segments as cut-in scene videos.
S207: Traverse the multiple target vehicles, execute S202–S206 for each, and extract the cut-in scene videos corresponding to the target vehicles.
S3: Scene-mark and store the video segments. The video segments (cut-in scene videos) are stored in a scene library and labelled for later screening and application.
By collecting the host vehicle's state and its positional relationship with the target vehicle, and checking whether they satisfy the preset signal-rule conditions, the invention conveniently and quickly distinguishes an active lane change by the host vehicle from a cut-in by a preceding vehicle, extracts the preceding-vehicle cut-in scene videos from the video data according to the judgment result, scene-marks them, and stores them in a scene library for later screening and application.
The judgment conditions are preset from positional and temporal relationships, so the process from the preceding vehicle's cut-in intention to its completed cut-in is judged accurately; combining the host vehicle's state further determines whether the host vehicle changed lanes actively. The conditions are simple and easy to implement, further improving extraction accuracy and efficiency.
The judgment conditions, trigger conditions, and parameters are all spatial or temporal relationships, so they can be evaluated with simple hardware. The judgment rule is simple and convenient to apply, further improving accuracy and extraction efficiency.
The method is based on signals from the automatic driving controller and uses rules to identify the target vehicle's trajectory within those signals, thereby judging whether the target vehicle has cut into the lane and extracting the video for the time period that satisfies the signal rule, improving extraction efficiency, freeing up human resources, and achieving high extraction accuracy.
Finally, it should be noted that the above embodiment only illustrates, and does not limit, the technical solution of the invention. Those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solution of the invention without departing from its spirit and scope, all of which should be covered by the claims of the invention.

Claims (8)

1. A signal-rule-based method for extracting preceding-vehicle cut-in scenes in automatic driving, characterized by comprising the following steps:
S1, acquiring the driving data and position data of the host vehicle, the positional relationship data between a target vehicle and the host vehicle, and video data time-synchronized with the host vehicle's driving data;
S2, extracting a signal segment within a preset first time range according to a preset signal rule, and extracting the video segment within a preset second time range corresponding to the signal segment in the video data;
S3, scene-marking and storing the video segment.
2. The method according to claim 1, characterized in that the signal rule is preset as follows:
S201, presetting a first time range for extracting signal segments, and setting up a coordinate system;
S202, setting a start-frame trigger condition for extracting a signal segment; when the positional relationship data between the target vehicle and the host vehicle satisfies the trigger condition, recording the start frame and proceeding to the next step;
S203, setting an end-frame trigger condition for extracting the signal segment; when the positional relationship data between the target vehicle and the host vehicle satisfies the trigger condition, recording the end frame and calculating the time difference between the end frame and the start frame; if the time difference is greater than the first time range, returning to S202; if the time difference is less than the first time range, proceeding to the next step;
S204, judging the host vehicle's driving state within the time difference from its driving data; if the host vehicle is in the driving state, proceeding to the next step; otherwise, returning to S202;
S205, judging the host vehicle's lane-change state within the time difference from its position data; if the host vehicle is in the lane-change state, returning to S202; otherwise, recording the end frame as the target vehicle's successful cut-in frame and proceeding to the next step;
S206, taking the successful cut-in frame as the time node, extracting the video segment within the preset second time range of the video data as the cut-in scene video;
S207, traversing multiple target vehicles, executing S202–S206 for each, and extracting the cut-in scene videos corresponding to the target vehicles.
3. The method according to claim 2, characterized in that the driving data includes the host vehicle's speed; the position data includes the lateral distances between the host vehicle and the lane lines on both sides; and the positional relationship data between the target vehicle and the host vehicle includes their lateral and longitudinal distances.
4. The method according to claim 3, characterized in that the coordinate system takes the centre of the host vehicle's head as the origin, the centre of the target vehicle's tail as the target vehicle's coordinate point, the lane's width direction as the horizontal axis, and the lane's length direction as the vertical axis.
5. The method according to claim 3 or 4, characterized in that the start-frame trigger condition is that the longitudinal distance between the target vehicle and the host vehicle is less than a preset maximum longitudinal distance threshold and the lateral distance between them is greater than a preset maximum lateral distance threshold.
6. The method according to claim 5, characterized in that the end-frame trigger condition is that the longitudinal distance between the target vehicle and the host vehicle is less than the preset maximum longitudinal distance threshold, the lateral distance between them changes from greater than the preset maximum lateral distance threshold to less than a preset minimum lateral distance threshold, and the time difference is less than the first time range.
7. The method according to claim 5, characterized in that the driving state is that the host vehicle's speed is greater than zero.
8. The method according to claim 5, characterized in that the lane-change state is that the host vehicle's distance to the left or right lane line decreases from greater than zero through zero and then jumps back to greater than zero.
CN202210099105.1A 2022-01-27 2022-01-27 Signal rule-based automatic driving front vehicle cut-in scene extraction method Withdrawn CN114489079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210099105.1A CN114489079A (en) 2022-01-27 2022-01-27 Signal rule-based automatic driving front vehicle cut-in scene extraction method


Publications (1)

Publication Number Publication Date
CN114489079A true CN114489079A (en) 2022-05-13

Family

ID=81475792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210099105.1A Withdrawn CN114489079A (en) 2022-01-27 2022-01-27 Signal rule-based automatic driving front vehicle cut-in scene extraction method

Country Status (1)

Country Link
CN (1) CN114489079A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984824A (en) * 2023-02-28 2023-04-18 安徽蔚来智驾科技有限公司 Scene information screening method based on track information, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US10573173B2 (en) Vehicle type identification method and device based on mobile phone data
CN109993969B (en) Road condition judgment information acquisition method, device and equipment
CN110400478A (en) A kind of road condition notification method and device
CN110136447A (en) Lane change of driving a vehicle detects and method for distinguishing is known in illegal lane change
CN107301776A (en) Track road conditions processing and dissemination method based on video detection technology
CN108460968A (en) A kind of method and device obtaining traffic information based on car networking
CN111599181A (en) Typical natural driving scene recognition and extraction method for intelligent driving system test
CN106408935B (en) Motor vehicle continuous lane change behavior monitoring system and method based on navigation
CN108986472A (en) One kind turns around vehicle monitoring method and device
CN113470371B (en) Method, system, and computer-readable storage medium for identifying an offending vehicle
CN108932849B (en) Method and device for recording low-speed running illegal behaviors of multiple motor vehicles
CN113127466B (en) Vehicle track data preprocessing method and computer storage medium
CN109523787A (en) A kind of fatigue driving analysis method based on vehicle pass-through track
WO2022213542A1 (en) Method and system for clearing information-controlled intersection on basis of lidar and trajectory prediction
CN107590999A (en) A kind of traffic state judging method based on bayonet socket data
CN114489079A (en) Signal rule-based automatic driving front vehicle cut-in scene extraction method
CN110827537B (en) Method, device and equipment for setting tidal lane
CN111105619A (en) Method and device for judging road side reverse parking
WO2018209470A1 (en) License plate identification method and system
CN107393311A (en) A kind of car plate tamper Detection device and method
CN116704750B (en) Traffic state identification method based on clustering algorithm, electronic equipment and medium
CN109658698A (en) A kind of detection of motor vehicle illegal running and grasp shoot method based on deep learning
CN115394089A (en) Vehicle information fusion display method, sensorless passing system and storage medium
CN109515478B (en) Vehicle positioning method applied to safety auxiliary protection system for shunting operation of rail vehicle
CN109118760B (en) Comprehensive test system and method for unmanned vehicle traffic sign visual detection and response

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220513