CN115222779A - Vehicle cut-in detection method and device and storage medium

Info

Publication number: CN115222779A
Authority: CN (China)
Prior art keywords: vehicle, target vehicle, frame, frame number, detection
Legal status: Granted
Application number: CN202111094997.8A
Other languages: Chinese (zh)
Other versions: CN115222779B
Inventors: 祁玉晓, 王振男, 蔡璐珑, 何俏君, 李梓龙
Current Assignee: Guangzhou Automobile Group Co Ltd
Original Assignee: Guangzhou Automobile Group Co Ltd
Application filed by Guangzhou Automobile Group Co Ltd
Priority to CN202111094997.8A
Publication of CN115222779A
Application granted
Publication of CN115222779B
Legal status: Active

Classifications

    • G06T7/292 Analysis of motion; multi-camera tracking
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T2207/10016 Image acquisition modality: video; image sequence
    • G06T2207/10028 Image acquisition modality: range image; depth image; 3D point clouds
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/30252 Subject of image: vehicle exterior; vicinity of vehicle
    • Y02T10/40 Engine management systems (climate change mitigation technologies related to transportation)

Abstract

The invention discloses a vehicle cut-in detection method, device and storage medium. The method comprises: performing real-time tracking detection on a target vehicle entering the area around the host vehicle; when the target vehicle is continuously in the early warning area of the host vehicle and has a tendency of continuously cutting into the lane where the host vehicle is located, determining whether a first frame number, for which the target vehicle is continuously in the early warning area, is greater than a first frame number threshold, and whether a second frame number, for which the target vehicle continuously tends to cut into the lane, is greater than a second frame number threshold, wherein the first frame number threshold and the second frame number threshold are frame number thresholds adjusted according to the lateral speed of the target vehicle; and if the first frame number is greater than the first frame number threshold and the second frame number is greater than the second frame number threshold, determining that the target vehicle cuts into the lane where the host vehicle is located. The invention makes the judgment thresholds adaptive, reduces the influence of the host vehicle's motion noise, and improves the accuracy of the vehicle cut-in detection result.

Description

Vehicle cut-in detection method and device and storage medium
Technical Field
The invention relates to the technical field of vehicle control, in particular to a vehicle cut-in detection method, a vehicle cut-in detection device and a storage medium.
Background
In the traditional driving mode, the driver judges the cut-in tendency of other vehicles and controls the vehicle accordingly to reduce the occurrence of safety accidents. In automatic driving, because no driver is present to judge the cut-in tendency of other vehicles, the automatic driving system itself needs to detect and judge the cut-in behaviour of other vehicles, so that corresponding operations can be executed according to the detection result.
In the conventional vehicle cut-in detection method, the curvature radius of the host vehicle's constant-curvature driving path is obtained from the motion information of the host vehicle, the relative motion information of other vehicles with respect to this path is then obtained from the curvature radius, and whether another vehicle has a cut-in tendency is determined from the relative motion information. However, the judgment basis of this method is relative motion information, which is easily affected by the motion noise of the host vehicle (for example, the relative motion information changes when the host vehicle's motion information changes); the relative motion information therefore contains errors, and the vehicle cut-in detection result is not accurate enough.
Disclosure of Invention
The invention provides a vehicle cut-in detection method, device and storage medium, aiming to solve the problem that the conventional vehicle cut-in detection method is easily affected by the motion noise of the host vehicle, so that the vehicle cut-in detection result is not accurate enough.
Provided is a vehicle cut-in detection method, including:
carrying out real-time tracking detection on target vehicles entering the surrounding area of the vehicle;
when the target vehicle is continuously in the early warning area of the vehicle and the target vehicle has a trend of continuously cutting into the lane where the vehicle is located, determining whether a first frame number of the target vehicle continuously in the early warning area is larger than a first frame number threshold value, and determining whether a second frame number of the target vehicle having a trend of continuously cutting into the lane where the vehicle is located is larger than a second frame number threshold value, wherein the first frame number threshold value and the second frame number threshold value are frame number threshold values adjusted according to the transverse speed of the target vehicle;
and if the first frame number is greater than the first frame number threshold value and the second frame number is greater than the second frame number threshold value, determining that the target vehicle cuts into the lane where the vehicle is located.
Provided is a vehicle cut-in detection device including:
the detection module is used for carrying out real-time tracking detection on the target vehicles entering the surrounding area of the vehicle;
the first determining module is used for, when the target vehicle is continuously in the early warning area of the vehicle and the target vehicle has a trend of continuously cutting into the lane where the vehicle is located, determining whether a first frame number of the target vehicle continuously in the early warning area is larger than a first frame number threshold value, and determining whether a second frame number of the target vehicle having a trend of continuously cutting into the lane where the vehicle is located is larger than a second frame number threshold value, wherein the first frame number threshold value and the second frame number threshold value are frame number threshold values adjusted according to the transverse speed of the target vehicle;
and the second determining module is used for determining that the target vehicle cuts into the lane where the vehicle is located if the first frame number is greater than the first frame number threshold value and the second frame number is greater than the second frame number threshold value.
Further provided is a vehicle cut-in detection device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above vehicle cut-in detection method when executing the computer program.
A readable storage medium is provided, which stores a computer program that, when executed by a processor, carries out the steps of the above-mentioned vehicle cut-in detection method.
The vehicle cut-in detection method, device, computer device and storage medium perform real-time tracking detection on a target vehicle entering the area around the host vehicle. When the target vehicle is continuously in the early warning area of the host vehicle and has a tendency of continuously cutting into the lane where the host vehicle is located, whether a first frame number, for which the target vehicle is continuously in the early warning area, is greater than a first frame number threshold, and whether a second frame number, for which the target vehicle continuously tends to cut into the lane, is greater than a second frame number threshold, are determined, the two thresholds being frame number thresholds adjusted according to the lateral speed of the target vehicle. If the first frame number is greater than the first frame number threshold and the second frame number is greater than the second frame number threshold, it is determined that the target vehicle cuts into the lane where the host vehicle is located. By adjusting the frame number thresholds according to the lateral speed of the target vehicle, the judgment thresholds are adaptive, the lane cut-in tendency can be judged flexibly according to the actual situation of the target vehicle, the influence of the host vehicle's motion noise is reduced, and both the accuracy of the vehicle cut-in detection algorithm and the accuracy of the vehicle cut-in detection result are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive labor.
FIG. 1 is a schematic flow chart of a vehicle cut-in detection method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of different coordinate systems of a top view of a vehicle according to an embodiment of the present invention;
FIG. 3 is a schematic view of a side view of a vehicle showing different coordinate systems according to an embodiment of the present invention;
FIG. 4 is a schematic view of different coordinate systems of a front view of a vehicle according to an embodiment of the present invention;
FIG. 5 is a schematic view of a lane line and early warning area in an embodiment of the present invention;
FIG. 6 is a schematic diagram of a vehicle cut-in detection apparatus according to an embodiment of the present invention;
fig. 7 is another schematic structural diagram of the vehicle cut-in detection device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The vehicle cut-in detection method provided by the embodiments of the invention can be applied to a vehicle cut-in detection system comprising a host vehicle, a target vehicle (there may be a plurality of target vehicles) and a vehicle cut-in detection device. The host vehicle and the vehicle cut-in detection device communicate via a vehicle bus. The vehicle cut-in detection device performs real-time tracking detection, through sensors, on a target vehicle entering the area around the host vehicle. When the target vehicle is continuously in the early warning area of the host vehicle and has a tendency of continuously cutting into the lane where the host vehicle is located, the device determines whether a first frame number, for which the target vehicle is continuously in the early warning area, is greater than a first frame number threshold, and whether a second frame number, for which the target vehicle continuously tends to cut into the lane, is greater than a second frame number threshold, the two thresholds being frame number thresholds adjusted according to the lateral speed of the target vehicle. If the first frame number is greater than the first frame number threshold and the second frame number is greater than the second frame number threshold, the device determines that the target vehicle cuts into the lane where the host vehicle is located. By adjusting the frame number thresholds according to the lateral speed of the target vehicle, the judgment thresholds are adaptive, the lane cut-in tendency can be judged flexibly according to the actual situation of the target vehicle, the influence of the host vehicle's motion noise is reduced, and the accuracy of the vehicle cut-in detection algorithm and of the detection result is improved.
The vehicle cut-in detection system including the vehicle, the target vehicle, and the vehicle cut-in detection device is only an exemplary illustration, and in other embodiments, the vehicle cut-in detection system may further include other devices, which are not described herein again.
In an embodiment, as shown in fig. 1, a vehicle cut-in detection method is provided, described taking its application to the above vehicle cut-in detection device as an example, and includes the following steps:
s10: and carrying out real-time tracking detection on the target vehicles entering the surrounding area of the vehicle.
In the present embodiment, a sensor is mounted on the body of the host vehicle. During driving, the sensor detects other vehicles around the host vehicle to determine whether they enter the area around the host vehicle. If another vehicle enters this area, it is taken as a target vehicle, and the sensor performs real-time tracking detection on the target vehicle at a certain detection frame rate, obtaining one frame of detection data for the target vehicle per detection cycle. If another vehicle does not enter the area around the host vehicle, it is not tracked or detected; this reduces the data processing load of the vehicle cut-in detection device, improves computational efficiency, and also reduces, to a certain extent, the noise that detection data from overly distant vehicles would introduce into the vehicle cut-in detection algorithm, thereby preserving the algorithm's precision. Each frame of detection data obtained by the sensor while tracking the target vehicle is accumulated and recorded, so that multiple frames of detection data of the target vehicle are obtained.
The area around the host vehicle may be a three-dimensional region defined in the host vehicle coordinate system. As shown in Figs. 2, 3 and 4, the coordinate origin o_v of the host vehicle coordinate system is the center of the hood emblem of the host vehicle; the heading direction of the host vehicle is the positive x-axis, the left side of the vehicle body (the driver's side) is the positive y-axis, and the upward direction perpendicular to the roof is the positive z-axis. The x-axis, y-axis and z-axis coordinates of the target vehicle in the host vehicle coordinate system are x_v, y_v and z_v, respectively. The area around the host vehicle can be set in this coordinate system as (x_vmax, x_vmin, y_vmax, y_vmin, z_vmax, z_vmin); for example, the area may be (100, -50, 51, -51, 3.8, -0.2).
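As an illustration of this region check, a minimal sketch in Python, assuming the example bounds above (the function and variable names are illustrative, not from the patent):

```python
# Minimal sketch of the surrounding-area check in the host vehicle
# coordinate system; bounds follow the (x_vmax, x_vmin, y_vmax, y_vmin,
# z_vmax, z_vmin) example above. Names are illustrative assumptions.

SURROUNDING_AREA = (100.0, -50.0, 51.0, -51.0, 3.8, -0.2)

def in_surrounding_area(x_v, y_v, z_v, area=SURROUNDING_AREA):
    """Return True if a target at (x_v, y_v, z_v) lies inside the area."""
    x_max, x_min, y_max, y_min, z_max, z_min = area
    return (x_min <= x_v <= x_max
            and y_min <= y_v <= y_max
            and z_min <= z_v <= z_max)

# Example: a target 20 m ahead and 3 m to the left of the host vehicle.
print(in_surrounding_area(20.0, 3.0, 0.5))  # True
```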
S20: and determining whether the target vehicle is continuously in the early warning area or not, and determining whether the target vehicle has a tendency of continuously cutting into the lane where the vehicle is located or not.
After real-time tracking detection is performed on a target vehicle entering the area around the host vehicle, whether the target vehicle is continuously in the early warning area of the host vehicle and whether it has a tendency of continuously cutting into the lane where the host vehicle is located need to be determined according to the detection data from the real-time tracking detection.
During driving, the vehicle cut-in detection device performs real-time tracking detection on a target vehicle entering the area around the host vehicle to obtain multiple frames of detection data of the target vehicle, and then judges the coordinate position and the cut-in tendency of the target vehicle according to the received multi-frame detection data. For each received frame of detection data, the coordinate position of the target vehicle is judged to determine whether the target vehicle is in the early warning area of the host vehicle: if the target vehicle is in the early warning area in consecutive multi-frame detection data, the target vehicle is continuously in the early warning area of the host vehicle; conversely, if the frames in which the target vehicle is in the early warning area are not consecutive, the target vehicle is not continuously in the early warning area. Meanwhile, for each received frame of detection data, the cut-in tendency of the target vehicle is judged to determine whether it continuously approaches the lane where the host vehicle is located: if the target vehicle approaches the lane in consecutive multi-frame detection data, it has a tendency of continuously cutting into the lane; conversely, if the frames in which the target vehicle approaches the lane are not consecutive, it does not have such a tendency.
The early warning area of the host vehicle is a preset area around the host vehicle. When the target vehicle is in the early warning area, it is likely to cut into the lane where the host vehicle is located, and a cut-in warning for the target vehicle is required.
S30: if the target vehicle is continuously in the early warning area of the vehicle and the target vehicle has the trend of continuously cutting into the lane where the vehicle is located, determining a first frame number of the target vehicle continuously in the early warning area and determining a second frame number of the target vehicle having the trend of continuously cutting into the lane where the vehicle is located.
After determining whether the target vehicle is continuously in the early warning area of the vehicle and determining whether the target vehicle has a trend of continuously cutting into the lane where the vehicle is located, if the target vehicle is continuously in the early warning area of the vehicle and the target vehicle has a trend of continuously cutting into the lane where the vehicle is located, determining a first frame number of the target vehicle which is continuously in the early warning area, and determining a second frame number of the target vehicle which has a trend of continuously cutting into the lane where the vehicle is located.
The first frame number is the number of consecutive frames, ending with the latest frame, in which the target vehicle in the multi-frame detection data is in the early warning area of the host vehicle. Specifically, the frames of detection data in which the target vehicle is in the early warning area are first identified among the multiple frames; it is then determined whether these frames include a run of consecutive frames ending with the latest frame, and if so, the number of frames in that run is taken as the first frame number.
For example, suppose that after the sensor performs real-time tracking detection on the target vehicle, N frames of detection data are obtained, and according to the latest frame it is determined that the target vehicle is in the early warning area of the host vehicle. The frames of detection data in which the target vehicle is in the early warning area are taken as the first target data; suppose there are 10 such frames among the N frames, namely the 1st, 2nd, 3rd, 4th, 7th, 8th, (N-3)th, (N-2)th, (N-1)th and Nth frames, where N-3 ≠ 9 and the Nth frame is the latest frame. Then, among these 10 frames of first target data, the (N-3)th, (N-2)th, (N-1)th and Nth frames are consecutive frames ending with the latest frame, so the first frame number is determined to be 4.
In this example, the 10 frames of first target data among the N frames of detection data, the (N-3)th to Nth frames as the consecutive frames ending with the latest frame, and the first frame number of 4 are all exemplary; other values are possible in other embodiments and are not repeated here.
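The counting rule in this example can be sketched as follows (a minimal illustration with a hypothetical helper; the frame indices follow the example above):

```python
def trailing_consecutive_count(hit_frames, latest_frame):
    """Length of the run of consecutive frame indices ending at latest_frame.

    hit_frames: indices of frames in which the condition holds (e.g. the
    target vehicle is in the early warning area). Returns 0 if the latest
    frame itself does not satisfy the condition.
    """
    hits = set(hit_frames)
    count = 0
    frame = latest_frame
    while frame in hits:
        count += 1
        frame -= 1
    return count

# The example above with N = 20: hits at frames 1,2,3,4,7,8 and N-3..N.
n = 20
hits = [1, 2, 3, 4, 7, 8, n - 3, n - 2, n - 1, n]
print(trailing_consecutive_count(hits, n))  # 4
```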
The second frame number is the number of consecutive frames, ending with the latest frame, in which the target vehicle in the multi-frame detection data has a tendency of cutting into the lane where the host vehicle is located. Specifically, the frames of detection data in which the target vehicle tends to cut into the lane are first identified among the multiple frames; it is then determined whether these frames include a run of consecutive frames ending with the latest frame, and if so, the number of frames in that run is taken as the second frame number.
There are various ways to judge that the target vehicle has a tendency to cut into the lane where the host vehicle is located. For example, if it is determined from the multi-frame detection data that the relative distance between the geometric center of the target vehicle and the road center line of the lane where the host vehicle is located decreases, the target vehicle has a tendency to cut into the lane. Compared with the prior art, which uses the relative distance between the two vehicles as the judgment index, using the relative distance between the geometric center of the target vehicle and the road center line of the host lane as the judgment index is more accurate.
For example, suppose 6 frames among the N frames of detection data satisfy the second preset condition, namely the 2nd, 3rd, 6th, 7th, (N-1)th and Nth frames, where N-1 ≠ 8 and the Nth frame is the latest frame. Then the consecutive frames ending with the latest frame are the (N-1)th and Nth frames, so the second frame number is determined to be 2.
In this example, the 6 frames of detection data satisfying the second preset condition, the particular frames listed, and the second frame number of 2 are all exemplary.
In this embodiment, judging that the target vehicle has a tendency to cut into the lane where the host vehicle is located by determining that the relative distance between the geometric center of the target vehicle and the road center line of the host lane decreases is only an example; in other embodiments, this tendency may also be judged in other ways, which are not repeated here.
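A minimal sketch of the decreasing-distance judgment described above, assuming it is evaluated over a window of per-frame distances between the target's geometric center and the host lane's road center line (the helper and its inputs are illustrative):

```python
def has_cut_in_trend(distances):
    """True if the relative distance between the target vehicle's geometric
    center and the road center line of the host lane strictly decreases
    frame over frame.

    distances: per-frame distances in meters, oldest first.
    """
    return all(d1 > d2 for d1, d2 in zip(distances, distances[1:]))

print(has_cut_in_trend([2.9, 2.6, 2.2, 1.9]))  # True: steadily approaching
print(has_cut_in_trend([2.9, 3.0, 2.2, 1.9]))  # False: moved away once
```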
S40: determining whether the first frame number is greater than a first frame number threshold, and determining whether the second frame number is greater than a second frame number threshold.
After the first frame number and the second frame number are determined, whether the first frame number is greater than a first frame number threshold and whether the second frame number is greater than a second frame number threshold are determined. The first frame number threshold and the second frame number threshold are frame number thresholds adjusted according to the lateral speed of the target vehicle.
In this embodiment, each frame of detection data includes information such as the length, width and height of the target vehicle, its heading angle, longitudinal speed, lateral speed and speed variance. After multiple frames of detection data of the target vehicle are obtained, the first frame number threshold and the second frame number threshold are determined according to the lateral speed of the target vehicle in the latest frame of detection data.
S50: and if the first frame number is greater than the first frame number threshold value and the second frame number is greater than the second frame number threshold value, determining that the target vehicle cuts into the lane where the vehicle is located.
After determining whether the first frame number is greater than the first frame number threshold and whether the second frame number is greater than the second frame number threshold, if both conditions hold, the target vehicle has been continuously in the early warning area and has a continuous tendency to cut into the lane where the host vehicle is located, so it is determined that the target vehicle cuts into the lane. Judging the cut-in tendency with two indexes, namely whether the target vehicle is continuously in the early warning area and whether it continuously approaches the road center line of the host lane, ensures the accuracy of the cut-in judgment result. Meanwhile, the adaptively adjusted frame number thresholds, determined from the actual lateral speed of the target vehicle, have better adaptability than traditional fixed thresholds, improving the accuracy of the vehicle cut-in detection algorithm and of the vehicle cut-in detection result.
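Putting the two indexes together, the final decision of S50 can be sketched as follows (a minimal illustration; the function name and example values are assumptions, not from the patent):

```python
def cut_in_detected(first_frame_count, second_frame_count, f_th_1, f_th_2):
    """Cut-in is declared only when both adaptive frame number thresholds
    are exceeded (illustrative composition of the quantities above)."""
    return first_frame_count > f_th_1 and second_frame_count > f_th_2

# E.g. with thresholds F_th_1 = F_th_2 = 15 frames:
print(cut_in_detected(18, 16, 15, 15))  # True
print(cut_in_detected(18, 12, 15, 15))  # False: trend not sustained
```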
In this embodiment, real-time tracking detection is performed on a target vehicle entering the area around the host vehicle; when the target vehicle is continuously in the early warning area of the host vehicle and has a tendency of continuously cutting into the lane where the host vehicle is located, whether the first frame number, for which the target vehicle is continuously in the early warning area, is greater than the first frame number threshold, and whether the second frame number, for which the target vehicle continuously tends to cut into the lane, is greater than the second frame number threshold, are determined, the two thresholds being frame number thresholds adjusted according to the lateral speed of the target vehicle; if both are greater, it is determined that the target vehicle cuts into the lane where the host vehicle is located. By adjusting the frame number thresholds according to the lateral speed of the target vehicle, the judgment thresholds are adaptive, the lane cut-in tendency can be judged flexibly according to the actual situation of the target vehicle, the influence of the host vehicle's motion noise is reduced, and the accuracy of the vehicle cut-in detection algorithm and of the detection result is improved.
In one embodiment, in step S50, the first frame number threshold and the second frame number threshold are determined by:
s51: and acquiring detection data for real-time tracking detection of the target vehicle.
After real-time tracking detection is performed on a target vehicle entering the area around the host vehicle, the detection data from the real-time tracking detection need to be acquired, so that the first frame number threshold and the second frame number threshold can be determined according to the lateral speed of the target vehicle in the latest frame of detection data.
S52: and determining the transverse speed of the target vehicle in the latest frame of detection data, and determining the detection frame rate for carrying out real-time tracking detection on the target vehicle.
After the multi-frame detection data of the target vehicle are acquired, the lateral speed of the target vehicle in the latest frame of detection data needs to be determined, as does the detection frame rate at which the sensor performs real-time tracking detection on the target vehicle.
S53: a predetermined lateral velocity threshold, a first time threshold and a second time threshold are determined.
Meanwhile, a preset lateral velocity threshold, a first time threshold and a second time threshold also need to be determined. The first time threshold is the time threshold for the target vehicle to be continuously in the early warning area, and the second time threshold is the time threshold for the target vehicle to continuously approach the lane where the host vehicle is located; for example, the second time threshold may be the time threshold during which the relative distance between the geometric center of the target vehicle and the road center line of the lane where the host vehicle is located continuously decreases.
The first time threshold is the time threshold for the target vehicle to be continuously in the early warning area at a preset standard speed, and may be, for example, 3 s. The second time threshold is the time threshold for the target vehicle to continuously approach the road center line of the lane where the host vehicle is located at the preset standard speed, i.e., the time threshold during which the relative distance between the geometric center of the target vehicle and the road center line continuously decreases; it may also be 3 s. The lateral velocity threshold is a pre-calibrated standard lateral velocity, and may be, for example, 0.6 m/s.
In this embodiment, the first time threshold of 3 s, the second time threshold of 3 s and the lateral velocity threshold of 0.6 m/s are only examples; in other embodiments they may take other values, which are not repeated here.
S54: and calculating to obtain a first frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and the first time threshold of the target vehicle.
The first frame number threshold can be calculated by the following formula:

F_th_1 = f * th_1_std * (v_lateral_std / v_lateral)

where F_th_1 is the first frame number threshold, f is the detection frame rate of the sensor for the target vehicle, th_1_std is the first time threshold, v_lateral_std is the lateral velocity threshold, and v_lateral is the lateral velocity of the target vehicle.
S55: and calculating to obtain a second frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and the second time threshold of the target vehicle.
The second frame number threshold can be calculated by the following formula:

F_th_2 = f * th_2_std * (v_lateral_std / v_lateral)

where F_th_2 is the second frame number threshold, f is the detection frame rate of the sensor for the target vehicle, th_2_std is the second time threshold, v_lateral_std is the lateral velocity threshold, and v_lateral is the lateral velocity of the target vehicle.
In an embodiment, to ensure that the subsequent frame number judgment uses valid thresholds, the first frame number threshold and the second frame number threshold need to be positive integers; in this embodiment, the first frame number threshold and the second frame number threshold are therefore rounded.
The first frame number threshold is then calculated by the following formula:

F_th_1 = round(f * th_1_std * (v_lateral_std / v_lateral))

where F_th_1 is the first frame number threshold, round denotes rounding, f is the detection frame rate, th_1_std is the first time threshold, v_lateral_std is the lateral velocity threshold, and v_lateral is the lateral velocity of the target vehicle.
The second frame number threshold is then calculated by the following formula:

F_th_2 = round(f * th_2_std * (v_lateral_std / v_lateral))

where F_th_2 is the second frame number threshold, round denotes rounding, f is the detection frame rate, th_2_std is the second time threshold, v_lateral_std is the lateral velocity threshold, and v_lateral is the lateral velocity of the target vehicle.
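The two rounded formulas can be sketched together as follows; th_1_std = th_2_std = 3 s and v_lateral_std = 0.6 m/s follow the text, while the 10 Hz detection frame rate is an assumed example:

```python
def frame_number_thresholds(f, th_1_std, th_2_std, v_lateral_std, v_lateral):
    """Adaptive frame number thresholds per the formulas above.

    f: detection frame rate (frames/s); th_1_std, th_2_std: time
    thresholds (s); v_lateral_std: lateral velocity threshold (m/s);
    v_lateral: measured lateral velocity of the target vehicle (m/s).
    """
    f_th_1 = round(f * th_1_std * (v_lateral_std / v_lateral))
    f_th_2 = round(f * th_2_std * (v_lateral_std / v_lateral))
    return f_th_1, f_th_2

# A target cutting in faster (1.2 m/s vs. the 0.6 m/s standard) gets
# smaller thresholds, i.e. an earlier cut-in decision.
print(frame_number_thresholds(10, 3.0, 3.0, 0.6, 0.6))  # (30, 30)
print(frame_number_thresholds(10, 3.0, 3.0, 0.6, 1.2))  # (15, 15)
```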
In this embodiment, the detection data from real-time tracking detection of the target vehicle are acquired; the lateral speed of the target vehicle in the latest frame of detection data and the detection frame rate of the real-time tracking detection are determined; the preset lateral velocity threshold, first time threshold and second time threshold are determined; the first frame number threshold is calculated from the lateral speed of the target vehicle, the detection frame rate, the lateral velocity threshold and the first time threshold; and the second frame number threshold is calculated from the lateral speed of the target vehicle, the detection frame rate, the lateral velocity threshold and the second time threshold. This defines the specific process for determining the first frame number threshold and the second frame number threshold, and provides an accurate basis for the subsequent vehicle cut-in judgment based on adaptive frame number thresholds.
In an embodiment, after real-time tracking detection is performed on a target vehicle entering the preset area around the host vehicle, step S51, namely acquiring the detection data from real-time tracking detection of the target vehicle, specifically includes the following steps:
s511: first tracking data of the first sensor for real-time tracking detection of the target vehicle are obtained.
In this embodiment, two different types of sensors, a first sensor and a second sensor, are mounted on the body of the host vehicle; the detection frame rates of the first sensor and the second sensor are the same.
In the driving process of the vehicle, first tracking data of the first sensor for carrying out real-time tracking detection on the target vehicle needs to be acquired. The first sensor performs real-time tracking detection on the target vehicle to obtain tracking data of the target vehicle in a first sensor coordinate system (such as a camera coordinate system), and then converts each frame of the obtained tracking data into the vehicle coordinate system to obtain first tracking data of the target vehicle in the vehicle coordinate system.
The first tracking data includes the coordinate information of the target vehicle in the host vehicle coordinate system, as well as attribute information of the target vehicle such as its length, width and height, tracking ID, heading angle, longitudinal speed, lateral speed and speed variance.
Attribute information such as the length, width and height of the target vehicle and its tracking ID is independent of the coordinate system and does not change between coordinate systems. In addition, by the definitions of the first sensor coordinate system and the host vehicle coordinate system, the corresponding coordinate axes of the two systems point in the same directions and only the positions of the coordinate origins differ, so attributes such as the heading angle, longitudinal speed, lateral speed and speed variance of the target also do not change between the two systems. Converting each frame of the obtained tracking data into the host vehicle coordinate system therefore mainly means converting the coordinate information of the target vehicle in the first sensor coordinate system into coordinate information in the host vehicle coordinate system.
As shown in Figs. 2 to 4, taking the first sensor as a smart camera as an example, the camera coordinate system takes the center of the camera's mounting position on the host vehicle as the coordinate origin o_c, the heading direction of the host vehicle as the positive x-axis, the left side of the vehicle body as the positive y-axis, and the upward direction perpendicular to the roof as the positive z-axis. The x-axis, y-axis and z-axis coordinates of the target vehicle in the camera coordinate system are x_c, y_c and z_c respectively, i.e., the coordinate information of the target vehicle in the camera coordinate system is (x_c, y_c, z_c). This is converted into the coordinate information (x_v, y_v, z_v) of the target vehicle in the host vehicle coordinate system by the conversion formula:

[x_v, y_v, z_v]^T = R_v * [x_c, y_c, z_c]^T + T_v

where x_v, y_v and z_v are the x-axis, y-axis and z-axis coordinates of the target vehicle in the host vehicle coordinate system; R_v is a rotation matrix (a 3*3 matrix); and T_v is a translation matrix (a 3*1 matrix). R_v and T_v are calibrated in advance according to the relative position between the first sensor (e.g., the smart camera) and the coordinate origin of the host vehicle coordinate system (e.g., the center of the hood emblem of the host vehicle).
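A minimal numeric sketch of this conversion (the lidar conversion in step S512 below is analogous); the calibration values R_v and T_v used here are illustrative placeholders, not real calibration results:

```python
import numpy as np

# Sketch of the camera-to-host-vehicle coordinate conversion above.
# Assumed placeholders: identity rotation, camera mounted 1.5 m behind
# and 1.3 m above the hood-emblem origin of the host vehicle frame.
R_v = np.eye(3)
T_v = np.array([[-1.5], [0.0], [1.3]])

def camera_to_vehicle(p_c, R=R_v, T=T_v):
    """Map a point (x_c, y_c, z_c) in the camera frame to (x_v, y_v, z_v)."""
    p_c = np.asarray(p_c, dtype=float).reshape(3, 1)
    return (R @ p_c + T).ravel()

# A target 20 m ahead of the camera is 18.5 m ahead of the hood emblem.
print(camera_to_vehicle([20.0, 3.0, 0.5]))  # [18.5  3.   1.8]
```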
S512: and acquiring second tracking data of the second sensor for tracking and detecting the target vehicle in real time.
In the driving process of the vehicle, second tracking data of the second sensor for real-time tracking detection of the target vehicle needs to be acquired. The second sensor performs real-time tracking detection on the target vehicle to obtain tracking data of the target vehicle in a second sensor coordinate system (such as a laser radar coordinate system), and then converts each frame of the obtained tracking data into the vehicle coordinate system to obtain second tracking data of the target vehicle in the vehicle coordinate system.
The second tracking data likewise includes the coordinate information of the target vehicle in the host vehicle coordinate system, as well as the length, width and height of the target vehicle and attribute information such as its tracking ID, heading angle, longitudinal speed, lateral speed and speed variance.
As shown in Figs. 2 to 4, taking the second sensor as a lidar as an example, the lidar coordinate system takes the center of the lidar's mounting position on the host vehicle as the coordinate origin o_i, the heading direction of the host vehicle as the positive x-axis, the left side of the vehicle body as the positive y-axis, and the upward direction perpendicular to the roof as the positive z-axis. The x-axis, y-axis and z-axis coordinates of the target vehicle in the lidar coordinate system are x_i, y_i and z_i respectively, i.e., the coordinate information of the target vehicle in the lidar coordinate system is (x_i, y_i, z_i). This is converted into the coordinate information (x_v, y_v, z_v) of the target vehicle in the host vehicle coordinate system by the conversion formula:

[x_v, y_v, z_v]^T = R_v * [x_i, y_i, z_i]^T + T_v

where x_v, y_v and z_v are the x-axis, y-axis and z-axis coordinates of the target vehicle in the host vehicle coordinate system; R_v is a rotation matrix (a 3*3 matrix); and T_v is a translation matrix (a 3*1 matrix). R_v and T_v are calibrated in advance according to the relative position between the second sensor (e.g., the lidar) and the coordinate origin of the host vehicle coordinate system (e.g., the center of the hood emblem of the host vehicle).
S513: and matching and fusing the first tracking data and the corresponding second tracking data of each frame based on a Hungarian matching algorithm and a Kalman filtering algorithm to obtain multi-frame detection data.
Each time the first sensor and the second sensor perform real-time tracking detection on the target vehicle, one frame of first tracking data and one frame of second tracking data are obtained respectively, and each frame of first tracking data and second tracking data is sent to the vehicle cut-in detection device. The device then matches and fuses each frame of first tracking data, the corresponding second tracking data and the fused data of the previous frame based on the Hungarian matching algorithm and the Kalman filtering algorithm, obtaining each frame of detection data and thereby more accurate multi-frame detection data.
In one embodiment, the first sensor may be a smart camera and the second sensor may be a lidar. To facilitate data collection, the camera may be installed at the center of the boundary between the front windshield and the roof of the host vehicle, and the lidar may be installed at the center of the roof, as shown in Figs. 2 to 4. The smart camera and the lidar each include a detection and tracking algorithm: during driving they can directly acquire the image information and point cloud information of the target vehicle, track and detect the target vehicle through the detection and tracking algorithm, and directly output the coordinate information, the length, width and height, and attribute information such as the heading angle, longitudinal speed, lateral speed and speed variance of the target vehicle.
Taking the first sensor being a smart camera and the second sensor being a lidar as an example, the first tracking data and the second tracking data each include the tracking ID of the target vehicle. The vehicle cut-in detection device matches target vehicles with the Hungarian matching algorithm based on these tracking IDs, so as to determine the first tracking data and second tracking data in the current frame that correspond to the fused data of the previous frame. Because the lidar detects position and size more accurately and its data are more accurate and intuitive, the first frame of second tracking data obtained by the lidar is taken as the first frame of detection data. The Kalman filtering algorithm is then used to fuse (i.e., predict and update) each subsequently obtained frame of first tracking data with its corresponding second tracking data, specifically as follows: the Kalman filtering algorithm predicts from the first frame of detection data to obtain the predicted data of the second frame; target vehicles are matched among the first tracking data and corresponding second tracking data of the second frame and the predicted data of the second frame, based on the tracking IDs and the Hungarian matching algorithm; after matching, the Kalman filtering algorithm fuses the first tracking data of the matched target vehicle, its corresponding second tracking data and the predicted data of the second frame, so as to update the predicted data of the second frame and obtain the detection data of the second frame. These steps are repeated to match and fuse each frame of first tracking data, the corresponding second tracking data and the detection data of the previous frame, thereby obtaining more accurate multi-frame detection data.
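A highly simplified sketch of the association step is given below. The patent matches targets by tracking ID; here, positional distance is used as the assignment cost purely for illustration, and the Kalman prediction/update is omitted. scipy's linear_sum_assignment solves the underlying assignment problem:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks_a, tracks_b):
    """Hungarian-style association of two detection lists by (x, y) distance.

    tracks_a, tracks_b: arrays of shape (n, 2) and (m, 2). Returns the
    (row, col) index pairs of matched tracks.
    """
    cost = np.linalg.norm(tracks_a[:, None, :] - tracks_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Illustrative frame: two camera tracks vs. two lidar tracks.
cam = np.array([[20.1, 3.0], [35.4, -2.9]])
lidar = np.array([[35.0, -3.1], [20.0, 3.2]])
print(associate(cam, lidar))  # [(0, 1), (1, 0)]
```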
In this embodiment, the first sensor is an intelligent camera, the second sensor is a lidar, and the positions of the camera and the lidar are only exemplary descriptions, in other embodiments, the first sensor and the second sensor may also be two other types of different sensors, and the installation positions of the first sensor and the second sensor may also be other positions convenient for data acquisition, which is not described herein again.
In this embodiment, target vehicle matching with the Hungarian matching algorithm ensures the consistency of the target vehicle data, and denoising the tracking data with the Kalman filtering algorithm ensures the accuracy of the resulting detection data. Because the detection data are fused from the tracking data of two different types of sensors, the problems of missed targets and low accuracy that easily arise when a conventional vehicle cut-in detection algorithm relies on a vehicle-mounted camera or another single sensor are avoided; the accuracy of the detection data is effectively improved, and the accuracy of the subsequent vehicle cut-in detection result is guaranteed.
In this embodiment, the first tracking data from the first sensor's real-time tracking detection of the target vehicle and the second tracking data from the second sensor's real-time tracking detection are acquired, the first sensor and the second sensor being two different types of sensors; each frame of first tracking data and the corresponding second tracking data are then matched and fused based on the Hungarian matching algorithm and the Kalman filtering algorithm to obtain multi-frame detection data. This defines the specific steps of performing real-time tracking detection on a target vehicle entering the area around the host vehicle to obtain multi-frame detection data, guarantees the accuracy of the detection data, and provides an accurate data basis for subsequent calculation.
In an embodiment, after performing real-time tracking detection on a target vehicle entering an area around the vehicle to obtain detection data of multiple frames of target vehicles, step S30, namely determining a first frame number of the target vehicles continuously located in an early warning area, specifically includes the following steps:
SA31: and determining whether the target vehicle is in the early warning area or not according to the coordinate information of the target vehicle in the detection data.
During the driving process of the vehicle, the vehicle cut-in detection device needs to perform real-time tracking detection on the target vehicle entering the surrounding area of the vehicle so as to obtain each frame of detection data of the target vehicle. After the detection data of the target vehicle is obtained, judging the coordinate position of the target vehicle according to the coordinate information of the target vehicle in the latest frame of detection data to determine whether the target vehicle is in the early warning area of the vehicle, and if the target vehicle is in the early warning area of the vehicle, executing a cut-in judgment strategy of the subsequent steps; and if the target vehicle is not in the early warning area of the vehicle, the cut-in judgment strategy of the subsequent steps is not required to be executed, the detection data of the next frame is continuously obtained and judged until the target vehicle is determined to be in the early warning area of the vehicle according to the detection data of a certain frame.
The latest frame of detection data is the detection data acquired by the vehicle cut-in detection device at the most recent time point. For example, when the vehicle cut-in detection device obtains the first frame of detection data, the first frame is the latest frame; when it obtains the second frame, the second frame is the latest frame; ...; and when it obtains the nth frame, the nth frame is the latest frame. The multi-frame detection data consists of the latest frame of detection data and the historical detection data before it. Whether the target vehicle satisfies the first preset condition is determined from the latest frame of detection data, and every frame of detection data can be judged to determine whether the target vehicle is in the early warning area of the host vehicle, so that cut-in behaviour can be judged in time according to the judgment result, the possibility of missed or delayed judgment is reduced, and the driving experience and driving safety are improved.
SA32: and if the target vehicle is in the early warning area, determining the detection data as first target data.
After determining whether the target vehicle is in the early warning area of the host vehicle according to the detection data, if the target vehicle is in the early warning area, it may cut into the lane where the host vehicle is located; the latest frame of detection data is then recorded as first target data. Judging each received frame of detection data in turn yields multiple frames of first target data.
SA33: and taking the number of continuous frames taking the latest frame as an end frame in the multi-frame first target data as the first frame number.
After the multiple frames of first target data are obtained, the number of consecutive frames ending with the latest frame among them is determined and taken as the first frame number. That is, it is determined from the detection data whether the target vehicle has been in the early warning area of the host vehicle from some historical frame up to the currently detected latest frame, so as to increase the accuracy of the vehicle cut-in algorithm and reduce misjudgments caused by changes in vehicle control.
For example, suppose that after the sensor performs real-time tracking detection on the target vehicle, N frames of detection data are obtained, and according to the latest frame it is determined that the target vehicle is in the early warning area of the host vehicle. The frames of detection data in which the target vehicle is in the early warning area are taken as the first target data; suppose there are 10 such frames among the N frames, namely the 1st, 2nd, 3rd, 4th, 7th, 8th, (N-3)th, (N-2)th, (N-1)th and Nth frames, where N-3 ≠ 9 and the Nth frame is the latest frame. Then, among these 10 frames of first target data, the (N-3)th, (N-2)th, (N-1)th and Nth frames are consecutive frames ending with the latest frame, so the first frame number is determined to be 4.
In this embodiment, the 10 frames of first target data among the N frames of detection data, the (N-3)th to Nth frames as the consecutive frames taking the latest frame as the end frame, and the first frame number of 4 are all exemplary. In other embodiments, the number of first target data frames, the consecutive detection data frames, and the first frame number may take other values, which are not described herein again.
In this embodiment, whether the target vehicle is in the early warning region is determined according to the coordinate information of the target vehicle in the detection data; if the target vehicle is in the early warning region, the detection data is determined to be first target data, and the number of consecutive frames taking the latest frame as the end frame in the multiple frames of first target data is taken as the first frame number. This specifies the concrete steps for determining that the target vehicle is continuously in the early warning region, which increases the accuracy of the vehicle cut-in algorithm and reduces misjudgment caused by vehicle control changes.
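As an illustration of this counting rule, the following Python sketch (not part of the patent; the function and variable names are our own) counts the consecutive run of first-target-data frames ending at the latest frame:

def trailing_consecutive_count(target_frames, latest_frame):
    # target_frames: frame indices flagged as first target data
    frames = set(target_frames)
    count = 0
    frame = latest_frame
    while frame in frames:  # walk backwards while the frames stay consecutive
        count += 1
        frame -= 1
    return count

# Example from the description, taking N = 20 (so N-3 = 17 > 9):
# frames 1, 2, 3, 4, 7, 8, 17, 18, 19, 20 give a first frame number of 4.
print(trailing_consecutive_count([1, 2, 3, 4, 7, 8, 17, 18, 19, 20], 20))  # 4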
In an embodiment, in step S30, that is, determining that the target vehicle has the second frame number of the trend of continuously cutting into the lane where the vehicle is located includes the following steps:
SB31: and determining the y-axis coordinate value of the target vehicle in the detection data according to the coordinate information of the target vehicle in the detection data.
After the latest frame of detection data is acquired, the y-axis coordinate value of the target vehicle in each frame of detection data is determined according to the coordinate information of the target vehicle in that frame. The coordinate information of the target vehicle uses the host vehicle coordinate system as the reference coordinate system; that is, the y-axis coordinate value of the target vehicle in each frame of detection data is denoted y_v.
SB32: and performing lane allocation on the target vehicle according to the y-axis coordinate value of the target vehicle.
After determining the y-axis coordinate value y_v of the target vehicle, lane assignment is performed on the target vehicle according to y_v. Since the left side of the host vehicle body is the positive y-axis direction in the host vehicle coordinate system, the target vehicle is determined to be on the right side of the host vehicle if its y-axis coordinate value is negative, and on the left side if it is positive. After the target vehicle is determined to be on the left or right side of the host vehicle, the lane where the target vehicle is located is assigned according to the coordinate information of the target vehicle and the lane line information. For example, after the target vehicle is determined to be on the left side of the host vehicle, if the absolute value of its y-axis coordinate value is smaller than 1.5 lane widths (the lane width being the distance between the left and right lane lines of the host vehicle), the target vehicle can be determined to be in the left lane of the host vehicle; if the absolute value is larger than 1.5 lane widths, no assignment is made. Likewise, after the target vehicle is determined to be on the right side of the host vehicle, if the absolute value of its y-axis coordinate value is smaller than 1.5 lane widths, the target vehicle can be determined to be in the right lane of the host vehicle; if it is larger than 1.5 lane widths, no assignment is made.
In other embodiments, lane assignment may be performed on the target vehicle in other manners. For example, a plurality of lane lines around the host vehicle may be determined (including a left-left lane line, a left lane line, a right lane line, and a right-right lane line), and the position of the target vehicle determined according to its coordinate information. If the target vehicle is located between the left lane line and the right lane line, it is assigned to the lane where the host vehicle is located (in this case the target vehicle is already in the host lane, and no cut-in judgment is performed on it); if the target vehicle is located between the left-left lane line and the left lane line, it is assigned to the left lane of the host vehicle; and if the target vehicle is located between the right lane line and the right-right lane line, it is assigned to the right lane of the host vehicle. Target vehicles beyond the left-left or right-right lane line are given no lane assignment and no cut-in judgment.
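As an illustration of the sign-based assignment in this embodiment, the following is a minimal Python sketch assuming the 1.5-lane-width rule described above; the function name and return values are our own and not the patent's implementation:

def assign_lane(y_v, lane_width):
    # y_v: target y coordinate in the host vehicle frame (left side positive)
    if abs(y_v) > 1.5 * lane_width:
        return None              # beyond the adjacent lanes: no assignment
    return "left" if y_v > 0 else "right"

For example, with lane_width = 3.5, a target at y_v = 3.0 is assigned to the left lane, while a target at y_v = -6.0 receives no assignment.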
SB33: if the target vehicle is assigned to the left lane of the host vehicle, determining whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of target data.
After lane assignment is performed on the target vehicle according to its y-axis coordinate value, if the target vehicle is assigned to the left lane of the host vehicle, it is determined whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of target data.
SB34: and if the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of detection data, determining the detection data as second target data.
After determining whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of target data, if the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of detection data, determining that the relative distance between the geometric center of the target vehicle and the road centerline of the lane where the vehicle is located is reduced, and determining that the detection data is the second target data.
Since the left side of the host vehicle body is the positive y-axis direction in the host vehicle coordinate system, when the target vehicle is in the left lane of the host vehicle, the y-axis coordinate value of the target vehicle is positive, that is, y_v > 0. If the y-axis coordinate value in a frame of detection data is smaller than the y-axis coordinate value in the previous frame of target data, the lateral distance between the target vehicle and the host vehicle is decreasing and the target vehicle is approaching the host vehicle, so that frame of detection data is determined to reflect a tendency to cut into the lane where the host vehicle is located and is taken as second target data. Otherwise, the lateral distance between the target vehicle and the host vehicle is not decreasing and the target vehicle is not approaching the host vehicle, and that frame of detection data is determined not to reflect a cut-in into the lane where the host vehicle is located.
SB35: and taking the number of consecutive frames taking the latest frame as the end frame in the multiple frames of second target data as the second frame number.
According to the judgment manner of steps SB33 to SB34, each frame of the multi-frame detection data is judged in turn to determine the a frames of second target data that have a tendency to cut into the lane where the host vehicle is located, and among these a frames of second target data, the number of consecutive historical frames taking the latest frame as the end frame is determined as the second frame number.
In this embodiment, the y-axis coordinate value of the target vehicle in each frame of detection data is determined according to the coordinate information of the target vehicle in that frame, the coordinate information of the target vehicle being coordinate information with the host vehicle coordinate system as the reference coordinate system; lane assignment is then performed on the target vehicle according to its y-axis coordinate value. If the target vehicle is assigned to the left lane of the host vehicle, it is determined whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of target data; if so, the detection data is determined as second target data. Finally, the number of consecutive frames taking the latest frame as the end frame in the multiple frames of second target data is taken as the second frame number.
In an embodiment, after step SB32, that is, after performing lane assignment on the target vehicle according to the y-axis coordinate value of the target vehicle, the method specifically includes the following steps:
SB36: and if the target vehicle is assigned to the right lane of the host vehicle, judging whether the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the previous frame of detection data.
After lane assignment is performed on the target vehicle according to its y-axis coordinate value, if the target vehicle is assigned to the right lane of the host vehicle, it is determined whether the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the previous frame of detection data.
SB37: and if the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the previous frame of detection data, determining the detection data as second target data.
Since the left side of the host vehicle body is the positive y-axis direction in the host vehicle coordinate system, when the target vehicle is in the right lane of the host vehicle, the y-axis coordinate value of the target vehicle is negative, that is, y_v < 0. If the y-axis coordinate value in a frame of detection data is larger than the y-axis coordinate value in the previous frame of target data, the lateral distance between the target vehicle and the host vehicle is decreasing and the target vehicle is approaching the host vehicle, so that frame of detection data is determined to satisfy the second preset condition. Otherwise, the lateral distance between the target vehicle and the host vehicle is not decreasing and the target vehicle is not approaching the host vehicle, and that frame of detection data is determined not to satisfy the second preset condition.
SB38: and taking the number of continuous frames taking the latest frame as an end frame in the multi-frame second target data as the second frame number.
According to the judgment manner of steps SB36 to SB37, each frame of the multi-frame detection data is judged in turn to determine the b frames of detection data satisfying the second preset condition, and among these b frames of detection data, the number of consecutive historical frames taking the latest frame as the end frame is determined as the second frame number.
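Steps SB33 to SB37 (the left-lane and right-lane cases) can be summarized in a short hedged sketch; the names below are our own assumptions and the description above remains authoritative:

def is_second_target(lane, y_now, y_prev):
    if lane == "left":    # y_v > 0: approaching means y decreases
        return y_now < y_prev
    if lane == "right":   # y_v < 0: approaching means y increases
        return y_now > y_prev
    return False          # no lane assignment: no cut-in judgment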
In an embodiment, after lane assignment is performed on the target vehicle according to its y-axis coordinate value, the y-axis coordinate value in each frame of detection data also needs to be smoothed with a preset window size (for example, a preset window size of 3), and the smoothed y-axis coordinate value is used as the y-axis coordinate value in the detection data when the subsequent steps determine whether it is smaller (or larger) than the y-axis coordinate value in the previous frame of target data. Smoothing the y-axis coordinate value in each frame of detection data yields more accurate y-axis coordinate values, reduces subsequent judgment errors caused by sensor data acquisition errors in practice, and further improves the accuracy of the second frame number.
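The description does not fix the smoothing kernel, so the following sketch assumes a plain trailing moving average with the preset window size of 3:

def smooth_y(values, window=3):
    # trailing moving average, so only past frames are used
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        out.append(sum(values[lo:i + 1]) / (i + 1 - lo))
    return out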
In this embodiment, after lane assignment is performed on the target vehicle according to its y-axis coordinate value, if the target vehicle is assigned to the right lane of the host vehicle, it is determined whether the y-axis coordinate value in the detection data is greater than the y-axis coordinate value in the previous frame of detection data; if it is greater, the detection data is determined as second target data, and the number of consecutive frames taking the latest frame as the end frame in the multiple frames of second target data is taken as the second frame number.
In an embodiment, before step S20, that is, before determining whether the target vehicle is continuously located in the early warning area of the host vehicle, the method specifically includes the following steps:
S01: and acquiring lane line detection data of the left side and the right side of the vehicle.
Before determining whether the target vehicle is continuously in the early warning area of the host vehicle, lane line detection data on the left and right sides of the host vehicle need to be acquired through a sensor, the lane line detection data being described in the form of pixel points.
S02: and converting the lane detection data into coordinate point data under the coordinate system of the vehicle, and fitting the coordinate point data to obtain a plurality of lane lines on the left side and the right side of the vehicle.
After lane line detection data on the left side and the right side of the vehicle are obtained, the lane detection data are converted into coordinate point data under a coordinate system of the vehicle, and curve fitting is carried out on the coordinate point data by adopting a least square method to obtain a plurality of lane lines on the left side and the right side of the vehicle. The plurality of lane lines at least comprise a first lane line and a second lane line which are respectively positioned on the left side and the right side of the vehicle.
For example, lane line detection data (images) on the left and right sides of the host vehicle may be acquired by an intelligent camera; the lane line detection data include detection data of four lane lines, namely a left-left lane line, a left lane line, a right lane line, and a right-right lane line. The lane line detection data is a bird's-eye view image described in pixel form with the image coordinate system as the reference coordinate system. The image coordinate system is defined as follows: the image is 512 x 512 pixels, the coordinate origin (0, 0) is the upper left corner of the image, the x-axis is positive to the right, the y-axis is positive downward, and one pixel represents a 20 cm grid.
The coordinates of the host vehicle in the image (lane line detection data) are (256, 411), and the pixel values of the left-left lane line, the left lane line, the right lane line, and the right-right lane line in the image are 100, 150, 200, and 250, respectively. The four lane lines can then be represented as four point sets according to the different pixel values of the different lane lines:
R_i = {a_i1, a_i2, …, a_i n_i}, i = 1, 2, 3, 4;
wherein R_1, R_3, R_2, and R_4 are respectively the point sets of the left-left lane line, the left lane line, the right lane line, and the right-right lane line; each point a_ij = (x_ij, y_ij) in a point set represents a point in the image coordinate system; i = 1, 2, 3, 4 and j = 1, 2, …, n_i; and n_1, n_2, n_3, and n_4 respectively represent the numbers of points in the point sets R_1, R_3, R_2, and R_4.
A point a_ij = (x_ij, y_ij) in the image coordinate system is converted into a point in the host vehicle coordinate system, and the point coordinates (x_v, y_v, z_v) in the host vehicle coordinate system are calculated by the following formulas:
x_v = (O_vy - y_ij) * 0.2;
y_v = (O_vx - x_ij) * 0.2;
z_v = 0;
wherein x_ij is the x-axis coordinate value of the point a_ij in the image coordinate system, and y_ij is the y-axis coordinate value of the point a_ij in the image coordinate system; x_v, y_v, and z_v are respectively the x-axis, y-axis, and z-axis coordinates in the host vehicle coordinate system; and (O_vx, O_vy) are the coordinates of the host vehicle in the image (lane line detection data).
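As a sketch of this conversion together with the least-squares fitting of step S02, the following Python fragment uses numpy; polyfit stands in for the least square method, and the polynomial order of 3 is an assumption since the patent does not state one:

import numpy as np

O_vx, O_vy = 256, 411  # host vehicle pixel coordinates from this example

def image_to_vehicle(points):
    # points: iterable of (x_ij, y_ij) pixel coordinates; 0.2 m per pixel
    return [((O_vy - y) * 0.2, (O_vx - x) * 0.2, 0.0) for x, y in points]

def fit_lane_line(points):
    xs, ys, _zs = zip(*image_to_vehicle(points))
    return np.polyfit(xs, ys, 3)  # y_v as a cubic polynomial of x_v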
In this embodiment, the host vehicle coordinates (256, 411) in the image and the lane line pixel values 100, 150, 200, and 250 are only exemplary. In other embodiments, the coordinates of the host vehicle in the image may be other coordinates determined according to the actual situation, and the pixel values of the four lane lines may be other actual pixel values, which are not described herein again.
S03: and taking the areas in the preset width ranges on the left and right sides of the first lane line as first early warning areas, and taking the areas in the preset width ranges on the left and right sides of the second lane line as second early warning areas.
After a first lane line and a second lane line which are respectively positioned on the left side and the right side of the vehicle are obtained, areas within the preset width range on the left side and the right side of the first lane line are used as first early warning areas, and areas within the preset width range on the left side and the right side of the second lane line are used as second early warning areas.
The preset width may be one third of the lane width, namely lane_width/3. The lane width (lane_width) may be a fixed width preset according to a typical lane; for example, the lane width may be 3.5 m. The lane width may also be the width of the lane where the host vehicle is located, i.e., the distance between the left and right lane lines of the host vehicle.
As shown in fig. 5, the lower vehicle in the figure is the host vehicle; the figure shows the left-left lane line, the left lane line (first lane line), the right lane line (second lane line), and the right-right lane line of the host vehicle, as well as the first early warning region (left early warning region) and the second early warning region (right early warning region).
In this embodiment, before determining whether the target vehicle is continuously in the early warning region of the host vehicle, lane line detection data on the left and right sides of the host vehicle are obtained and described in pixel form; the data are converted into coordinate point data in the host vehicle coordinate system, and the coordinate point data are fitted to obtain a plurality of lane lines on the left and right sides of the host vehicle, the plurality of lane lines at least including a first lane line and a second lane line respectively located on the left and right sides of the host vehicle. The regions within the preset width range on the left and right sides of the first lane line are taken as the first early warning region, and the regions within the preset width range on the left and right sides of the second lane line are taken as the second early warning region. This specifies the concrete process of determining the lane lines around the host vehicle and determining the early warning regions of the host vehicle according to the lane lines, and provides a basis for subsequently determining whether the target vehicle satisfies the first preset condition according to the latest frame of detection data. Meanwhile, determining the lane lines around the host vehicle enables subsequent lane assignment of the target vehicle, providing a basis for judging whether the target vehicle satisfies the second preset condition.
In an embodiment, in step SA31, determining whether the target vehicle is in the early warning area according to the coordinate information of the target vehicle in each frame of detection data, specifically includes the following steps:
SA311: and determining whether the target vehicle is in the first early warning area or the second early warning area according to the coordinate information of the target vehicle in the detection data.
After multiple frames of detection data of the target vehicle are obtained, whether the target vehicle is in the first early warning area or the second early warning area is determined according to the coordinate information of the target vehicle in the latest frame of detection data.
The first early warning area and the second early warning area are areas within the preset width range of the left and right lane lines of the host vehicle, and the preset width is lane_width/3. Therefore, the farthest distance between the boundary of an early warning area and the center of the lane where the host vehicle is located is L = 5 * lane_width / 6, where L indicates the distance between the farthest boundary line of the early warning area and the host vehicle. By determining the y-axis coordinate value of the target vehicle and comparing it with L, whether the target vehicle is in the first early warning area or the second early warning area can be determined.
For example, if the absolute value of the y-axis coordinate value of the target vehicle is less than or equal to L, it indicates that the geometric center of the target vehicle is located in the early warning region (the first early warning region or the second early warning region), and it is determined that the target vehicle is located in the first early warning region or the second early warning region; if the absolute value of the y-axis coordinate value of the target vehicle is greater than L, the geometric center of the target vehicle is not located in the early warning area (the first early warning area or the second early warning area) of the vehicle, and the target vehicle is determined not to be located in the first early warning area or the second early warning area.
In other embodiments, the coordinates of the first early warning region and the second early warning region may be determined according to a preset width, whether the coordinates of the target vehicle fall within the coordinates of the first early warning region or the second early warning region is determined, if the coordinates of the target vehicle fall within the coordinates of the first early warning region or the second early warning region, it indicates that the target vehicle is correspondingly located in the first early warning region or the second early warning region, otherwise, it indicates that the target vehicle is not located in the first early warning region or the second early warning region, and the specific process is not described in detail.
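As a hedged sketch of the geometric-center test in step SA311, assuming the example lane width of 3.5 m given earlier in this description (the function name is our own):

def in_warning_region(y_v, lane_width=3.5):
    L = 5 * lane_width / 6       # farthest warning-region boundary
    return abs(y_v) <= L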
SA312: and if the target vehicle is in the first early warning area or the second early warning area, determining that the target vehicle is in the early warning area.
After determining whether the target vehicle is in the first early warning area or the second early warning area, if the target vehicle is in either the first or the second early warning area, the geometric center of the target vehicle is located in an early warning area of the host vehicle, and the target vehicle is determined to be in the early warning area.
SA313: and if the target vehicle is not in the first early warning area and the target vehicle is not in the second early warning area, determining that the target vehicle is not in the early warning area.
After determining whether the target vehicle is in the first early warning area or the second early warning area, if the target vehicle is in neither the first nor the second early warning area, the geometric center of the target vehicle is not located in an early warning area of the host vehicle, and it is determined that the target vehicle is not in the early warning area.
In this embodiment, whether the target vehicle is in the first warning area or the second warning area is determined according to the coordinate information of the target vehicle in the latest frame of detection data, if the target vehicle is in the first warning area or the second warning area, it is determined that the target vehicle is in the warning area, and if the target vehicle is not in the first warning area and the target vehicle is not in the second warning area, it is determined that the target vehicle is not in the warning area.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In one embodiment, a vehicle cut-in detection device is provided, which corresponds to the vehicle cut-in detection method in the above embodiments one to one. As shown in fig. 6, the vehicle cut-in detection apparatus includes a detection module 601, a first determination module 602, and a second determination module 603. The detailed description of each functional module is as follows:
the detection module 601 is used for performing real-time tracking detection on a target vehicle entering a surrounding area of the vehicle;
a first determining module 602, configured to determine, when a target vehicle is continuously in an early warning region of the vehicle and the target vehicle has a trend of continuously cutting into a lane where the vehicle is located, whether a first frame number of the target vehicle continuously in the early warning region is greater than a first frame number threshold, and determine whether a second frame number of the target vehicle having a trend of continuously cutting into the lane where the vehicle is located is greater than a second frame number threshold, where the first frame number threshold and the second frame number threshold are frame number thresholds that are adjusted according to a lateral speed of the target vehicle;
the second determining module 603 is configured to determine that the target vehicle cuts into the lane where the vehicle is located if the first frame number is greater than the first frame number threshold and the second frame number is greater than the second frame number threshold.
Further, the vehicle cut-in detection apparatus further includes a third determining module 604, where the third determining module 604 is specifically configured to determine the first frame number threshold and the second frame number threshold by:
acquiring detection data for real-time tracking detection of the target vehicle;
determining the transverse speed of the target vehicle in the latest frame of detection data, and determining a detection frame rate for performing real-time tracking detection on the target vehicle;
determining a preset transverse speed threshold, a first time threshold and a second time threshold;
calculating to obtain a first frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and the first time threshold of the target vehicle;
and calculating to obtain a second frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and the second time threshold of the target vehicle.
Further, the third determining module 604 is specifically configured to calculate the first frame number threshold by using the following formula:
F_th_1 = round(f * th_1_std * (v_lateral_std / v_lateral));

wherein F_th_1 is the first frame number threshold, round is the rounding function, f is the detection frame rate, th_1_std is the first time threshold, v_lateral_std is the lateral velocity threshold, and v_lateral is the lateral velocity of the target vehicle.
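A direct transcription of this formula as a Python sketch (F_th_2 is computed the same way with th_2_std); the zero-velocity guard and the absolute value of the lateral velocity are our own additions, since the formula divides by the lateral velocity:

def frame_threshold(f, th_std, v_lateral_std, v_lateral):
    if v_lateral == 0:
        return float("inf")      # no lateral motion: threshold never reached
    return round(f * th_std * (v_lateral_std / abs(v_lateral)))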
Further, the third determining module 604 is specifically configured to:
acquiring first tracking data of a first sensor for real-time tracking detection of a target vehicle;
acquiring second tracking data of a second sensor for tracking and detecting the target vehicle in real time, wherein the first sensor and the second sensor are two different types of sensors;
and matching and fusing the first tracking data and the corresponding second tracking data of each frame based on a Hungarian matching algorithm and a Kalman filtering algorithm to obtain multi-frame detection data.
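The Hungarian matching step can be sketched with scipy's linear_sum_assignment, which implements the Hungarian algorithm; the Euclidean-distance cost and the omission of the Kalman filtering fusion step are simplifications of ours:

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks_a, tracks_b):
    # tracks_a, tracks_b: (m, 2) and (n, 2) arrays of track positions
    cost = np.linalg.norm(tracks_a[:, None, :] - tracks_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))  # matched (first-sensor, second-sensor) pairs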
Further, the second determining module 603 is specifically configured to determine the first frame number by:
determining whether the target vehicle is in an early warning area or not according to the coordinate information of the target vehicle in each frame of detection data;
if the target vehicle is in the early warning area, determining the detection data as first target data;
and taking the number of continuous frames taking the latest frame as an end frame in the multi-frame first target data as the first frame number.
Further, the second determining module 603 is specifically configured to determine the second frame number by:
determining a y-axis coordinate value of the target vehicle in each frame of detection data according to the coordinate information of the target vehicle in each frame of detection data, wherein the coordinate information of the target vehicle is coordinate information taking the coordinate system of the vehicle as a reference coordinate;
performing lane allocation on the target vehicle according to the y-axis coordinate value of the target vehicle;
if the target vehicle is assigned to the left lane of the vehicle, determining whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of target data;
if the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of detection data, determining the detection data as second target data;
and taking the number of continuous frames taking the latest frame as an end frame in the multi-frame second target data as the second frame number.
Further, after the lane assignment is performed on the target vehicle according to the y-axis coordinate value of the target vehicle, the second determining module 603 is further specifically configured to:
if the target vehicle is assigned to the right lane of the vehicle, judging whether the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the previous frame of detection data;
if the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the previous frame of detection data, determining the detection data as second target data;
and taking the number of continuous frames taking the latest frame as an end frame in the multi-frame second target data as the second frame number.
Further, before determining whether a first frame number of the target vehicle continuously located in the early warning area is greater than a first frame number threshold, the detecting module 601 is further specifically configured to:
acquiring lane line detection data of the left side and the right side of the vehicle, wherein the lane detection data are described in a pixel point form;
converting the lane detection data into coordinate point data under the coordinate system of the vehicle, and fitting the coordinate point data to obtain a plurality of lane lines on the left side and the right side of the vehicle, wherein the lane lines at least comprise a first lane line and a second lane line which are respectively positioned on the left side and the right side of the vehicle;
and taking the areas in the preset width ranges on the left and right sides of the first lane line as first early warning areas, and taking the areas in the preset width ranges on the left and right sides of the second lane line as second early warning areas.
Further, the third determining module 604 is specifically configured to calculate the second frame number threshold by using the following formula:
F_th_2 = round(f * th_2_std * (v_lateral_std / v_lateral));

wherein F_th_2 is the second frame number threshold, round is the rounding function, f is the detection frame rate, th_2_std is the second time threshold, v_lateral_std is the lateral velocity threshold, and v_lateral is the lateral velocity of the target vehicle.
For specific limitations of the vehicle cut-in detection device, reference may be made to the above limitations of the vehicle cut-in detection method, which are not described herein again. The various modules in the vehicle cut-in detection apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a vehicle cut-in detection device is provided that includes a processor, a memory, and a database connected by a bus. Wherein the processor of the vehicle cut-in detection device is configured to provide computational and control capabilities. The memory of the vehicle cut-in detection device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer equipment is used for storing related data such as multi-frame detection data, multi-frame target data, a first frame number, a second frame number, a threshold value of the first frame number, a threshold value of the second frame number and the like. The computer program is executed by a processor to implement a vehicle cut-in detection method.
In one embodiment, as shown in fig. 7, a vehicle cut-in detection device is provided, which includes a memory, a processor and a computer program stored on the memory and operable on the processor, wherein the processor implements the steps of the vehicle cut-in detection method when executing the computer program.
In one embodiment, a readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the above-mentioned vehicle cut-in detection method.
Those skilled in the art will appreciate that all or part of the processes in the methods of the embodiments described above can be implemented by instructing the relevant hardware by a computer program, and the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, the computer program can include the processes of the embodiments of the methods described above. Any reference to memory, processors, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory, among others.
It should be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is only used for illustration, and in practical applications, the above function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the device is divided into different functional units or modules, so as to perform all or part of the above described functions.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A vehicle cut-in detection method, comprising:
carrying out real-time tracking detection on target vehicles entering the surrounding area of the vehicle;
when the target vehicle is continuously in the early warning area of the vehicle and the target vehicle has a trend of continuously cutting into the lane where the vehicle is located, determining whether a first frame number of the target vehicle continuously in the early warning area is greater than a first frame number threshold value, and determining whether a second frame number of the target vehicle having a trend of continuously cutting into the lane where the vehicle is located is greater than a second frame number threshold value, wherein the first frame number threshold value and the second frame number threshold value are frame number threshold values adjusted according to the transverse speed of the target vehicle;
and if the first frame number is greater than the first frame number threshold value and the second frame number is greater than the second frame number threshold value, determining that the target vehicle cuts into the lane where the vehicle is located.
2. The vehicle cut-in detection method of claim 1, wherein the first frame number threshold and the second frame number threshold are determined by:
acquiring detection data for real-time tracking detection of the target vehicle;
determining the transverse speed of the target vehicle in the latest frame of detection data, and determining a detection frame rate for real-time tracking detection of the target vehicle;
determining a preset transverse speed threshold, a first time threshold and a second time threshold;
calculating to obtain a first frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and a first time threshold of the target vehicle;
and calculating to obtain a second frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and a second time threshold of the target vehicle.
3. The vehicle cut-in detection method of claim 2, wherein the obtaining detection data for real-time tracking detection of the target vehicle comprises:
acquiring first tracking data of a first sensor for real-time tracking detection of the target vehicle;
acquiring second tracking data of a second sensor for real-time tracking detection of the target vehicle, wherein the first sensor and the second sensor are two different types of sensors;
and matching and fusing the first tracking data and the corresponding second tracking data of each frame based on a Hungarian matching algorithm and a Kalman filtering algorithm to obtain multiple frames of detection data.
4. The vehicle cut-in detection method of claim 2, wherein the first number of frames is determined by:
determining whether the target vehicle is in the early warning area or not according to the coordinate information of the target vehicle in each frame of the detection data;
if the target vehicle is in the early warning area, determining that the detection data is first target data;
and taking the number of continuous frames taking the latest frame as an end frame in the multiple frames of the first target data as the first frame number.
5. The vehicle cut-in detection method of claim 2, wherein the second number of frames is determined by:
determining a y-axis coordinate value of the target vehicle in each frame of the detection data according to the coordinate information of the target vehicle in each frame of the detection data, wherein the coordinate information of the target vehicle is coordinate information taking the coordinate system of the vehicle as a reference coordinate;
performing lane allocation on the target vehicle according to the y-axis coordinate value of the target vehicle;
if the target vehicle is assigned to a left lane of the vehicle, determining whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the target data of the previous frame;
if the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the detection data of the previous frame, determining that the detection data is second target data;
and taking the number of continuous frames taking the latest frame as an end frame in the second target data of the plurality of frames as the second frame number.
6. The vehicle cut-in detection method of claim 5, wherein after the assigning the lane to the target vehicle according to the y-axis coordinate value of the target vehicle, the method further comprises:
if the target vehicle is assigned to the right lane of the vehicle, judging whether the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the detection data of the previous frame;
if the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the detection data of the previous frame, determining that the detection data is the second target data;
and taking the number of continuous frames taking the latest frame as an end frame in the multiple frames of the second target data as the second frame number.
7. The vehicle cut-in detection method of any one of claims 1-6, wherein before the determining whether the first frame number of the target vehicle continuously in the early warning region is greater than the first frame number threshold, the method further comprises:
acquiring lane detection data of the left side and the right side of the vehicle, wherein the lane detection data are described in a pixel point form;
converting the lane detection data into coordinate point data under a coordinate system of the vehicle, and fitting the coordinate point data to obtain a plurality of lane lines on the left side and the right side of the vehicle, wherein the lane lines at least comprise a first lane line and a second lane line which are respectively positioned on the left side and the right side of the vehicle;
and taking the areas in the preset width ranges on the left side and the right side of the first lane line as first early warning areas, and taking the areas in the preset width ranges on the left side and the right side of the second lane line as second early warning areas.
8. The vehicle cut-in detection method of any one of claims 1-6, wherein the first frame number threshold is calculated by the following equation:
F_th_1 = round(f * th_1_std * (v_lateral_std / v_lateral));

wherein F_th_1 is the first frame number threshold, round is the rounding function, f is the detection frame rate, th_1_std is the first time threshold, v_lateral_std is the lateral velocity threshold, and v_lateral is the lateral velocity of the target vehicle.
9. A vehicle cut-in detection device, comprising:
the detection module is used for carrying out real-time tracking detection on the target vehicles entering the surrounding area of the vehicle;
a first determining module, configured to determine, when the target vehicle is continuously in the early warning region of the host vehicle and the target vehicle has a trend of continuously cutting into the lane of the host vehicle, whether a first frame number of the target vehicle continuously in the early warning region is greater than a first frame number threshold, and determine whether a second frame number of the target vehicle having a trend of continuously cutting into the lane of the host vehicle is greater than a second frame number threshold, where the first frame number threshold and the second frame number threshold are frame number thresholds that are adjusted according to a lateral speed of the target vehicle;
and the second determining module is used for determining that the target vehicle cuts into the lane where the vehicle is located if the first frame number is greater than the first frame number threshold value and the second frame number is greater than the second frame number threshold value.
10. A readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, performs the steps of the vehicle cut-in detection method according to any one of claims 1 to 8.
CN202111094997.8A 2021-09-17 2021-09-17 Vehicle cut-in detection method and device and storage medium Active CN115222779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111094997.8A CN115222779B (en) 2021-09-17 2021-09-17 Vehicle cut-in detection method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111094997.8A CN115222779B (en) 2021-09-17 2021-09-17 Vehicle cut-in detection method and device and storage medium

Publications (2)

Publication Number Publication Date
CN115222779A true CN115222779A (en) 2022-10-21
CN115222779B CN115222779B (en) 2023-09-22

Family

ID=83606012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111094997.8A Active CN115222779B (en) 2021-09-17 2021-09-17 Vehicle cut-in detection method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115222779B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170305422A1 (en) * 2016-04-26 2017-10-26 Toyota Jidosha Kabushiki Kaisha Vehicle travel control apparatus
US20200272835A1 (en) * 2018-08-22 2020-08-27 Beijing Sensetime Technology Development Co., Ltd. Intelligent driving control method, electronic device, and medium
CN109720345A (en) * 2018-12-29 2019-05-07 北京经纬恒润科技有限公司 A kind of incision vehicle monitoring method and system
CN110458050A (en) * 2019-07-25 2019-11-15 清华大学苏州汽车研究院(吴江) Vehicle based on Vehicular video cuts detection method and device
CN111619564A (en) * 2020-05-29 2020-09-04 重庆长安汽车股份有限公司 Vehicle self-adaptive cruise speed control method, device, processor, automobile and computer readable storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984824A (en) * 2023-02-28 2023-04-18 安徽蔚来智驾科技有限公司 Scene information screening method based on track information, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115222779B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
US20190188498A1 (en) Image Processing Method For Recognizing Ground Marking And System For Detecting Ground Marking
EP3070928B1 (en) Surrounding environment recognition device
US11023744B2 (en) Road parameter calculator
US7970178B2 (en) Visibility range estimation method and system
CN112084810B (en) Obstacle detection method and device, electronic equipment and storage medium
US20220373353A1 (en) Map Updating Method and Apparatus, and Device
CN107209998B (en) Lane line recognition device and lane line recognition method
US11842545B2 (en) Object collision prediction method and apparatus
KR101103526B1 (en) Collision Avoidance Method Using Stereo Camera
WO2023201904A1 (en) Abnormal vehicle traveling detection method, and electronic device and storage medium
CN110738081B (en) Abnormal road condition detection method and device
US10386849B2 (en) ECU, autonomous vehicle including ECU, and method of recognizing nearby vehicle for the same
US20160314359A1 (en) Lane detection device and method thereof, and lane display device and method thereof
JP2008117073A (en) Interruption vehicle detection device
JP4296287B2 (en) Vehicle recognition device
CN110843786A (en) Method and system for determining and displaying a water-engaging condition and vehicle having such a system
CN110843775B (en) Obstacle identification method based on pressure sensor
CN115222779B (en) Vehicle cut-in detection method and device and storage medium
CN115243932A (en) Method and device for calibrating camera distance of vehicle and method and device for continuously learning vanishing point estimation model
CN111104824A (en) Method for detecting lane departure, electronic device, and computer-readable storage medium
CN112215214A (en) Method and system for adjusting camera offset of intelligent vehicle-mounted terminal
CN110834626B (en) Driving obstacle early warning method and device, vehicle and storage medium
CN115352436A (en) Automatic parking method and device for vehicle, vehicle and storage medium
CN115457506A (en) Target detection method, device and storage medium
CN111881245B (en) Method, device, equipment and storage medium for generating visibility dynamic map

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant