CN114627409A - Method and device for detecting abnormal lane change of vehicle


Info

Publication number
CN114627409A
Authority
CN
China
Prior art keywords
vehicle
data
coordinate system
attribute information
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210178725.4A
Other languages
Chinese (zh)
Inventor
程云飞
张希
吴风炎
衣佳政
Current Assignee
Hisense Group Holding Co Ltd
Original Assignee
Hisense Group Holding Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Group Holding Co Ltd filed Critical Hisense Group Holding Co Ltd
Priority to CN202210178725.4A priority Critical patent/CN114627409A/en
Publication of CN114627409A publication Critical patent/CN114627409A/en
Pending legal-status Critical Current


Abstract

The application provides a method and a device for detecting abnormal lane changes of a vehicle. For a first road monitoring area, first data collected by a video image acquisition device and second data collected by a radar device are acquired separately; the number of vehicles in the second data is greater than or equal to the number of vehicles in the first data, and the second data includes motion attribute information of the vehicles. Video attribute information of the vehicles in the first road monitoring area is determined from the second data and the first data, and the abnormal lane-change condition of a vehicle in the area is then determined from the vehicle's video attribute information, its motion attribute information, and the lane information of the area. Because the second data collected by the radar device can correct the first data from the video image acquisition device, abnormal lane changes can be detected at a longer distance.

Description

Method and device for detecting abnormal lane change of vehicle
Technical Field
The application relates to the technical field of the Internet of Vehicles, and in particular to a method and a device for detecting abnormal lane changes of a vehicle.
Background
Vehicle-road cooperation technology is driving innovation in the automotive industry; detecting abnormal lane-change behavior of vehicles in real time is of great significance for improving traffic efficiency and road safety. Pure-video methods are usually adopted to detect abnormal lane-change behavior, but because the visual range of video acquisition equipment is limited and easily affected by the natural environment (heavy fog, heavy rain, and the like), the detection distance is short, and whether a vehicle has changed lanes abnormally cannot be reliably determined.
Based on this, a new method for detecting an abnormal lane change of a vehicle is needed to solve the above problems.
Disclosure of Invention
The application provides a method and a device for detecting abnormal lane changes of a vehicle, which are used to increase the distance over which abnormal lane changes can be detected and thereby improve detection accuracy.
In a first aspect, the present application provides a method for detecting an abnormal lane change of a vehicle, which may be performed by an electronic device, where the electronic device may be a computing terminal or the like, and the method includes:
for a first road monitoring area, respectively acquiring first data collected by a video image acquisition device and second data collected by a radar device; the number of vehicles in the second data is greater than or equal to the number of vehicles in the first data; the second data includes motion attribute information of the vehicle; determining video attribute information of the vehicles in the first road monitoring area according to the second data and the first data; and determining the abnormal lane-change condition of a vehicle in the first road monitoring area according to the vehicle's video attribute information, its motion attribute information, and the lane information in the first road monitoring area.
In this application, for the first road monitoring area, the electronic device can collect the first data via the video image acquisition device and the second data via the radar device. The video image acquisition device is usually limited by its line of sight and easily affected by the environment, so its coverage may be short, whereas the radar device locates targets (vehicles) by emitting electromagnetic waves, is not affected by the environment, and can detect more distant targets. The method adjusts the first data collected by the video image acquisition device using the second data collected by the radar device; correcting the first data in this way lets the electronic device obtain more target information and monitor abnormal lane changes of vehicles at a longer distance. The electronic device can then determine the abnormal lane-change condition of each vehicle in the first road monitoring area based on the vehicle's video attribute information, its motion attribute information, and the lane information of the area, which improves detection accuracy.
In an optional mode, within the line-of-sight range of the video image acquisition device, a first target vehicle is determined according to the video attribute information of the vehicle; whether the reference line of the first target vehicle intersects a solid lane line is determined according to the motion attribute information of the first target vehicle and the lane information in the first road monitoring area; and if so, it is determined that the first target vehicle has changed lanes abnormally.
In this application, within the line-of-sight range of the video image acquisition device, the abnormal lane-change condition of the first target vehicle is determined from whether its reference line intersects a solid lane line; this improves both the accuracy and the efficiency of abnormal lane-change detection.
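Within the line-of-sight range, the intersection check described above reduces to a two-dimensional segment-intersection test between the vehicle's reference line and each segment of the solid lane line. A minimal sketch, assuming planar coordinates; the cross-product test and helper names are illustrative, not from the patent:

```python
def _cross(a, b, c):
    """Cross product sign of (b - a) x (c - a); > 0 means a left turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    return (_cross(q1, q2, p1) * _cross(q1, q2, p2) < 0 and
            _cross(p1, p2, q1) * _cross(p1, p2, q2) < 0)

def crosses_solid_line(reference_line, solid_line):
    """Check the vehicle's reference line against each lane-line segment."""
    p1, p2 = reference_line
    return any(segments_intersect(p1, p2, q1, q2)
               for q1, q2 in zip(solid_line, solid_line[1:]))
```

For example, a reference line from (0, 0) to (2, 2) crosses a solid line through (0, 2) and (2, 0), while one parallel to the lane line does not.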
In an alternative mode, within the line-of-sight range of the video image acquisition device, a second target vehicle is determined according to the video attribute information of the vehicle; outside the line-of-sight range, the current motion attribute information of the second target vehicle is determined; a first position area where the second target vehicle will be located in the next radar period is predicted based on a Kalman filtering algorithm; a second position area of the second target vehicle in the next radar period is acquired; and if the first position area differs from the second position area, and the distance between the second position area and the solid lane line is smaller than the width threshold of the second target vehicle, it is determined that the second target vehicle has changed lanes abnormally.
In this application, outside the line-of-sight range of the video image acquisition device, the abnormal lane-change condition of the second target vehicle is determined from the relation between its predicted position area in the next radar period and its actual position area; this improves both the accuracy and the efficiency of abnormal lane-change detection.
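The prediction step can be sketched with a constant-velocity Kalman filter, where the state is [x, y, vx, vy] and dt is the radar period. Only the predict step is shown; the update step and the comparison of position areas are omitted, and the process-noise model and all numbers are illustrative assumptions, not values from the patent:

```python
def kalman_predict(x, P, dt, q=0.1):
    """Return the state and covariance predicted one radar period ahead."""
    F = [[1.0, 0.0, dt, 0.0],      # constant-velocity transition matrix
         [0.0, 1.0, 0.0, dt],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]]

    def mul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    x_pred = [sum(f * s for f, s in zip(row, x)) for row in F]
    Ft = [list(r) for r in zip(*F)]
    P_pred = mul(mul(F, P), Ft)
    for i in range(4):
        P_pred[i][i] += q          # simplified process noise Q = q * I
    return x_pred, P_pred

# Vehicle at (10 m, 2 m) moving 20 m/s along x, radar period 0.1 s:
eye4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
x1, P1 = kalman_predict([10.0, 2.0, 20.0, 0.0], eye4, dt=0.1)
# x1[:2] is the predicted position centre: (12.0, 2.0)
```

The predicted centre (here (12.0, 2.0)) would then be compared against the radar's actually measured position area in the next period.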
In an optional mode, according to the video attribute information of the vehicle, determining that the abnormal lane-changing vehicle is a third target vehicle; the third target vehicle is one or more of: police cars, ambulances, fire trucks and construction vehicles; and determining the longitude and latitude coordinates, the vehicle type and the license plate information of the third target vehicle.
The method takes into account that the vehicle changing lanes abnormally may be a special vehicle such as a police car or an ambulance, and can mark it accordingly, so that intelligent vehicles are reminded that the lane-changing vehicle is a special vehicle and give way while driving.
In an optional mode, the coordinate values of the vehicle in the radar device's coordinate system in the second data are converted into coordinate values in a pixel coordinate system, the pixel coordinate system being the coordinate system of the first data; the second data and the first data are fused to determine the number of vehicles in the first road monitoring area and the areas in the first data where vehicles are missing; the data of these missing-vehicle areas in the first data are processed based on deep learning to determine the color information, vehicle type information, license plate information, and the like of the missing vehicles; and the video attribute information of the vehicles in the first road monitoring area is determined from the first data together with the color, vehicle type, and license plate information of the missing vehicles. The video attribute information includes color information, vehicle type information, license plate information, quantity information, and the like.
In this application, the radar device's coordinate system is converted into the pixel coordinate system and the second data and the first data are then fused, so that more vehicles can be detected in the first road monitoring area.
In an alternative mode, the lane information in the first road monitoring area is determined according to a high-precision map of the first road monitoring area, or is determined by performing data analysis on image data of the first road monitoring area through a video image algorithm.
The lane information is acquired to accurately determine whether the vehicle has abnormal lane change behavior, so that the data processing efficiency can be improved.
In an optional mode, the road side device is informed of the abnormal lane change condition of the vehicle in the first road monitoring area.
By means of the method, the roadside device can inform the intelligent vehicle of the abnormal lane changing condition of the vehicle so as to remind the intelligent vehicle to avoid, and safe driving is guaranteed.
In an alternative mode, the coordinate values of the radar coordinate system where the vehicle is located in the second data are converted into the coordinate values of the world coordinate system through the first conversion matrix; the first conversion matrix is determined based on the position relation of the radar equipment and the video image acquisition equipment; converting the coordinate value of the world coordinate system into the coordinate value of the coordinate system of the video image acquisition equipment through a second conversion matrix; the second transformation matrix is different from the first transformation matrix; and converting the coordinate value of the coordinate system where the video image acquisition equipment is positioned into the coordinate value of the pixel coordinate system by adopting a preset rule.
The coordinate value of the second data in the pixel coordinate system determined by the method is more accurate and reliable.
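Under the stated assumption that the radar and the camera lie on the same horizontal plane with no rotation between them, both conversion matrices reduce to homogeneous translations. A minimal sketch; the offsets are illustrative, not values from the patent:

```python
def make_translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def transform(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# First conversion matrix (radar -> world): pure translation, since no
# rotation is needed when radar and camera share a horizontal plane.
T1 = make_translation(0.5, 0.0, 0.0)
# Second, different conversion matrix (world -> camera coordinates).
T2 = make_translation(-1.0, 0.0, 0.0)

p_world = transform(T1, (30.0, 1.2, 0.0))   # (30.5, 1.2, 0.0)
p_cam = transform(T2, p_world)              # (29.5, 1.2, 0.0)
```

The subsequent conversion into the pixel coordinate system follows the preset rule described in the next optional mode.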
In an optional mode, the coordinate values of the coordinate system of the video image acquisition device are projected, based on the focal length of the video image acquisition device, to obtain coordinate values in the image plane coordinate system of the video image acquisition device; and the coordinate values of the second data in the pixel coordinate system are determined after discretization, based on the pixel size on the photosensitive chip of the video image acquisition device, the centre of the image plane, and the coordinate values of the second data in the image plane coordinate system.
The coordinate value of the second data in the pixel coordinate system determined by the method is more accurate and reliable.
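The projection and discretization described above correspond to the standard pinhole model: a camera-frame point (Xc, Yc, Zc) projects to image-plane coordinates x = f·Xc/Zc, y = f·Yc/Zc, which are then discretized by the pixel size into pixel coordinates around the image-plane centre. A sketch with illustrative intrinsics (4 mm focal length, 2 µm pixels, a 1920×1080 image), assuming the camera Z axis points forward:

```python
def camera_to_pixel(p_cam, f, du, dv, u0, v0):
    """Project a camera-frame point to pixel coordinates.

    f: focal length (m); du, dv: pixel size (m) on the sensor chip;
    (u0, v0): principal point, i.e. the image-plane centre in pixels.
    """
    Xc, Yc, Zc = p_cam
    x = f * Xc / Zc            # projection onto the image plane
    y = f * Yc / Zc
    u = x / du + u0            # discretization into pixel coordinates
    v = y / dv + v0
    return u, v

u, v = camera_to_pixel((1.0, 0.5, 20.0),
                       f=0.004, du=2e-6, dv=2e-6, u0=960.0, v0=540.0)
# roughly (1060.0, 590.0) with these illustrative values
```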
In a second aspect, the present application provides a device for detecting an abnormal lane change of a vehicle, including an acquisition unit, a first determination unit, and a second determination unit.
The acquisition unit is configured to respectively acquire, for a first road monitoring area, first data collected by the video image acquisition device and second data collected by the radar device; the number of vehicles in the second data is greater than or equal to the number of vehicles in the first data; the second data includes motion attribute information of the vehicle. The first determining unit is configured to determine video attribute information of the vehicles in the first road monitoring area according to the second data and the first data. The second determining unit is configured to determine the abnormal lane-change condition of a vehicle in the first road monitoring area according to the vehicle's video attribute information, its motion attribute information, and the lane information in the first road monitoring area.
In this application, for the first road monitoring area, the electronic device can collect the first data via the video image acquisition device and the second data via the radar device. The video image acquisition device is usually limited by its line of sight and easily affected by the environment, so its coverage may be short, whereas the radar device locates targets (vehicles) by emitting electromagnetic waves, is not affected by the environment, and can detect more distant targets. The device adjusts the first data collected by the video image acquisition device using the second data collected by the radar device; correcting the first data in this way lets the electronic device obtain more target information and monitor abnormal lane changes of vehicles at a longer distance. The electronic device can then determine the abnormal lane-change condition of each vehicle in the first road monitoring area based on the vehicle's video attribute information, its motion attribute information, and the lane information of the area, which improves detection accuracy.
In an optional manner, the second determining unit is specifically configured to determine, within a range of a line of sight of the video image capturing device, a first target vehicle according to video attribute information of the vehicle; determining whether the reference line of the first target vehicle intersects with the lane solid line or not according to the motion attribute information of the first target vehicle and the lane information in the first road monitoring area; and if so, determining that the first target vehicle has an abnormal lane change.
In an optional manner, the second determining unit is specifically configured to determine, within a range of a line of sight of the video image capturing device, a second target vehicle according to the video attribute information of the vehicle; determining the current motion attribute information of the second target vehicle outside the sight distance range of the video image acquisition equipment; predicting a first position area where the second target vehicle is located in the next radar period based on a Kalman filtering algorithm; acquiring a second position area of the second target vehicle in the next radar period; and if the first position area is different from the second position area, and the distance between the second position area and the lane solid line is smaller than the width threshold of the second target vehicle, determining that the second target vehicle changes lanes abnormally.
In an optional mode, the second determining unit is further configured to determine, according to the video attribute information of the vehicle, that the abnormal lane change vehicle is a third target vehicle; the third target vehicle is one or more of: police cars, ambulances, fire trucks and construction vehicles; and determining the longitude and latitude coordinates, the vehicle type and the license plate information of the third target vehicle.
In an optional manner, the first determining unit is specifically configured to convert coordinate values of a coordinate system of the radar device where the vehicle is located in the second data into coordinate values of a pixel coordinate system; the pixel coordinate system is a coordinate system where the first data is located; performing video fusion on the second data and the first data, and determining the number of vehicles in the first road monitoring area and the area where the missing vehicle is in the first data; performing data processing on the area where the missing vehicle is located in the first data based on deep learning, and determining color information, vehicle type information and license plate number information of the missing vehicle; determining video attribute information of the vehicles in the first road monitoring area according to the first data, the color information, the vehicle type information and the license plate number information of the missing vehicles; the video attribute information includes: color information, vehicle type information, quantity information, license plate number information and the like.
In an alternative mode, the lane information in the first road monitoring area is determined according to a high-precision map of the first road monitoring area, or is determined by performing data analysis on image data of the first road monitoring area through a video image algorithm.
In an optional manner, the apparatus further includes a notification unit, configured to notify the roadside device of an abnormal lane change condition of the vehicle in the first road monitoring area.
In an optional manner, the first determining unit is specifically configured to convert, by using a first conversion matrix, coordinate values of a radar coordinate system in which the vehicle is located in the second data into coordinate values of a world coordinate system; the first conversion matrix is determined based on the position relation of the radar equipment and the video image acquisition equipment; converting the coordinate values of the world coordinate system into coordinate values of a coordinate system in which the video image acquisition equipment is located through a second conversion matrix; the second transformation matrix is different from the first transformation matrix; and converting the coordinate value of the coordinate system where the video image acquisition equipment is positioned into the coordinate value of the pixel coordinate system by adopting a preset rule.
In an optional manner, the first determining unit is specifically configured to project the coordinate values of the coordinate system of the video image acquisition device, based on the focal length of the video image acquisition device, to obtain coordinate values in the image plane coordinate system of the video image acquisition device; and to determine the coordinate values of the second data in the pixel coordinate system after discretization, based on the pixel size on the photosensitive chip of the video image acquisition device, the centre of the image plane, and the coordinate values of the second data in the image plane coordinate system.
In a third aspect, the present application provides a computing device comprising: a memory and a processor; a memory for storing program instructions; a processor for calling the program instructions stored in the memory and executing the method of the first aspect according to the obtained program.
In a fourth aspect, the present application provides a computer storage medium storing computer-executable instructions for performing the method of the first aspect.
For the technical effects achievable by the second to fourth aspects, refer to the description of the corresponding designs in the first aspect; the details are not repeated here.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a layout of a device;
FIG. 2 is a schematic flowchart of a method for detecting an abnormal lane change of a vehicle according to an embodiment of the present application;
fig. 3 is a schematic diagram of a coordinate transformation process provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of coordinate transformation provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a transformation between a camera coordinate system and an image plane coordinate system according to an embodiment of the present application;
FIG. 6 is a schematic view of a vehicle detection provided by an embodiment of the present application;
FIG. 7 is a schematic view of vehicle detection provided by an embodiment of the present application;
FIG. 8-1 is a schematic view of a vehicle detection provided by an embodiment of the present application;
FIG. 8-2 is a schematic view of vehicle detection provided by an embodiment of the present application;
FIG. 9 is a schematic flowchart of another method for detecting an abnormal lane change of a vehicle according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a device for detecting an abnormal lane change of a vehicle according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
It should be noted that the terms "first," "second," and the like in this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or described herein. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
As described in the background, conventional schemes mainly adopt video algorithms to detect abnormal lane changes of vehicles, and face a series of practical difficulties and defects:
1) Environmental factors such as illumination and weather strongly affect the detection accuracy of video algorithms: the image features of the same vehicle target can change greatly under different illumination and weather, so video detection algorithms often miss targets or produce false detections, which lowers the accuracy of subsequent abnormal lane-change detection.
2) The effective detection distance of a pure-video scheme is limited by algorithm design: the optimal video detection distance is only 50-70 meters, and vehicle targets occupying few pixels in the image are hard for a video algorithm to detect, which greatly limits its usable range.
3) Video algorithms measure speed and distance poorly, cannot accurately acquire the motion information of an abnormally lane-changing vehicle, and cannot provide accurate positioning coordinates of the traffic incident to other road traffic participants in a vehicle-road cooperation scenario.
In order to solve the above problems, the present application proposes a new method for detecting an abnormal lane change of a vehicle.
The following describes the detection process of the abnormal lane change of the vehicle. In the following embodiments of the present application, "and/or" describes an association relationship of associated objects, indicating that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple. The singular forms "a", "an", "the" and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. And, unless stated to the contrary, the embodiments of the present application refer to the ordinal numbers "first", "second", etc., for distinguishing between a plurality of objects, and do not limit the sequence, timing, priority or importance of the plurality of objects. For example, the first task execution device and the second task execution device are only for distinguishing different task execution devices, and do not indicate a difference in priority, degree of importance, or the like between the two task execution devices.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
As shown in fig. 1, millimeter-wave radars and video cameras are arranged on urban roads, and an edge computing terminal (MEC) acquires in real time the video images collected by the cameras and the data collected by the millimeter-wave radars. A millimeter-wave radar operates in the millimeter-wave frequency band: it actively transmits electromagnetic wave signals, receives the echoes, and obtains the relative distance, relative speed, and relative direction of a vehicle target from the time difference between transmitting and receiving. The millimeter-wave radar and the video camera may be arranged on the same horizontal plane as shown in fig. 1, or at other positions, which is not specifically limited here. The MEC analyzes the video images collected by the cameras and the data collected by the millimeter-wave radars to determine the abnormal lane-change condition of vehicles.
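The ranging principle stated above — relative distance from the round-trip time of the echo — can be written out directly; the one-microsecond example is illustrative:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_echo(round_trip_seconds):
    """Relative distance of a target from the echo's round-trip time.

    The electromagnetic wave travels to the target and back, hence the
    division by two.
    """
    return C * round_trip_seconds / 2.0

d = range_from_echo(1e-6)   # a 1 us round trip is roughly 150 m
```

Relative speed would similarly be obtained from the Doppler shift between consecutive measurements, which is omitted here.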
As shown in fig. 2, the present application provides a method for detecting an abnormal lane change of a vehicle, which may be executed by an electronic device or a server, such as the MEC described above; the present application is not specifically limited in this respect. Taking the electronic device as an example, the following steps can be executed:
step 201, respectively acquiring first data acquired by video image acquisition equipment and second data acquired by radar equipment aiming at a first road monitoring area; the number of vehicles in the second data is greater than or equal to the number of vehicles in the first data; the second data includes motion attribute information of the vehicle.
It should be noted that the visual range of the video image acquisition device is limited: with a line of sight of, for example, 50-70 meters, it can only capture video image information within that range, and under poor environmental conditions (heavy fog, heavy rain, and the like) the usable range may shrink to 10-20 meters. The radar device, by contrast, determines whether a vehicle is present by emitting electromagnetic wave signals, so it is not affected by the environment and may collect data over 100-200 meters or more; it can therefore detect vehicles at greater range than the video image acquisition device, but it cannot obtain the color, vehicle type, license plate, and similar information of a vehicle. Consequently, in this application, when the video image acquisition device and the radar device are deployed together over the first road monitoring area, the number of vehicles in the first data collected by the video image acquisition device is smaller than or equal to the number of vehicles in the second data collected by the radar device.
Step 202, determining video attribute information of the vehicles in the first road monitoring area according to the second data and the first data.
It should be noted that the number of vehicles included in the second data may be greater than the number of vehicles included in the first data, and the vehicles in the second data are fused with the vehicles in the first data, so that more vehicle information can be obtained, and based on the image information acquired by the video image acquisition device, video attribute information such as colors, vehicle types, license plates, and the like of the vehicles can be determined.
Optionally, the electronic device may convert the coordinate values of the vehicle in the radar device coordinate system in the second data into coordinate values in a pixel coordinate system; the pixel coordinate system is the coordinate system in which the first data is located. Specifically, the second data detected by the radar device is mapped into the pixel coordinate system through coordinate transformation, so as to achieve spatial synchronization of the radar target and the video target; the flow of the coordinate transformation is shown in fig. 3. The radar device coordinate system is converted into a world coordinate system through rotation and translation, the world coordinate system is converted into a camera coordinate system through rotation and translation, the camera coordinate system is converted into a phase plane coordinate system through projection imaging, and the phase plane coordinate system is converted into a pixel coordinate system through discretization.
Optionally, the electronic device may convert the coordinate values of the vehicle in the radar coordinate system in the second data into coordinate values in the world coordinate system through a first conversion matrix; the first conversion matrix is determined based on the positional relationship between the radar device and the video image capturing device (i.e., formula 1 below; because the video camera and the radar device lie in the same horizontal plane and are not rotated relative to each other, this conversion matrix only comprises a translation matrix and does not comprise a rotation matrix). The coordinate values in the world coordinate system are then converted into coordinate values in the coordinate system of the video image capturing device through a second conversion matrix (i.e., formula 2 below, which can be understood with reference to the description that follows and is not set forth here); the second conversion matrix is different from the first conversion matrix. Finally, the coordinate values in the coordinate system of the video image capturing device are converted into coordinate values in the pixel coordinate system by a preset rule (the preset rule may include a conversion rule between phase plane coordinates and pixel plane coordinates, a preset logic rule, and the like, and is not specifically limited).
There are many layout ways for the radar device and the video image capturing device; taking the layout in fig. 1 as an example, the positional relationship between the corresponding radar device coordinate system and video camera coordinate system is shown in fig. 4. Suppose O_l is the coordinate origin of the radar device coordinate system, and O_w is the origin of a world coordinate system centered on the video camera (i.e., the video image capturing device). First, coordinates in the radar device coordinate system are converted into the world coordinate system. The radar device can obtain the x-axis and y-axis coordinate information of a target (i.e., a vehicle), but cannot obtain its z-axis coordinate information. Thus, the conversion from the radar device coordinate system O_l to the world coordinate system O_w can be regarded as a transformation of a two-dimensional x-y coordinate system, and the relationship between O_l and O_w is realized by a transformation matrix (i.e., the first conversion matrix). The transformation matrix consists of two parts: a rotation matrix caused by the mounting angle, and a translation matrix produced by the offset between the two origins. In this application, the radar device and the video camera are installed in the same horizontal plane, so the rotation angle α of the radar coordinate system relative to the world coordinate system in the rotation matrix is 0, as shown in formula 1 below.
where R_2 represents the rotation matrix from the radar coordinate system to the world coordinate system, and t_2 represents the translation matrix from the radar coordinate system to the world coordinate system; (x_l, y_l) represents the coordinate position of the second data in the radar coordinate system, and (x_w, y_w) represents the coordinate position of the second data in the world coordinate system after coordinate conversion through the rotation matrix R_2 and the translation matrix t_2.
$$\begin{bmatrix} x_w \\ y_w \end{bmatrix} = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} x_l \\ y_l \end{bmatrix} + t_2, \qquad \alpha = 0 \qquad \text{(Formula 1)}$$
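Formula 1 can be sketched as a plain 2-D rigid transform; the translation values below are hypothetical calibration offsets for illustration, not values taken from this application.

```python
import numpy as np

def radar_to_world(x_l, y_l, alpha=0.0, t=(0.0, 0.0)):
    """Formula 1: rotate by alpha in the x-y plane, then translate."""
    R2 = np.array([[np.cos(alpha), -np.sin(alpha)],
                   [np.sin(alpha),  np.cos(alpha)]])
    return R2 @ np.array([x_l, y_l]) + np.array(t)

# Radar and camera share a horizontal plane, so alpha = 0 and only the
# translation between the two origins remains (assumed 0.5 m, 0.0 m here).
xw, yw = radar_to_world(10.0, 2.0, alpha=0.0, t=(0.5, 0.0))
```

With α = 0 the rotation matrix is the identity, which is why the first conversion matrix reduces to a pure translation.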
There are four coordinate systems in the camera model: the pixel plane coordinate system (u, v), the phase plane coordinate system (i.e., the image physical coordinate system) (x, y), the camera coordinate system (i.e., the coordinate system of the video image capturing device) (x_c, y_c, z_c), and the world coordinate system (x_w, y_w, z_w). Because the radar device cannot obtain the z-axis coordinate information of the target, the world-coordinate position obtained after the conversion from the radar coordinate system has no z_w value, so z_w is assumed using prior knowledge of vehicle height: assuming the height H of the vehicle target is 1.8 m, then z_w = 1.8. Thereby, the coordinates (x_w, y_w, z_w) of the second data in the world coordinate system can be obtained.
The relation between the world coordinate system and the camera coordinate system may also be obtained by means of a transformation matrix, i.e., the second conversion matrix: the conversion from the world coordinate system to the camera coordinate system is realized through a rotation matrix R_1 and a translation matrix t_1, as shown in formula 2 below, where R_1 represents the rotation matrix from the world coordinate system to the camera coordinate system and t_1 the corresponding translation matrix; (x_w, y_w, z_w) represents the coordinate position of the second data in the world coordinate system, and (x_c, y_c, z_c) represents the coordinate position of the second data in the camera coordinate system after coordinate conversion through the rotation matrix R_1 and the translation matrix t_1. The world coordinate system is rotated by angle α_1 about the x-axis, β_1 about the y-axis, and θ_1 about the z-axis, and translated by t_x along the x-axis, t_y along the y-axis, and t_z along the z-axis.
$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R_1 \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + t_1, \qquad t_1 = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} \qquad \text{(Formula 2)}$$

$$R_1 = \begin{bmatrix} \cos\theta_1 & -\sin\theta_1 & 0 \\ \sin\theta_1 & \cos\theta_1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\beta_1 & 0 & \sin\beta_1 \\ 0 & 1 & 0 \\ -\sin\beta_1 & 0 & \cos\beta_1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha_1 & -\sin\alpha_1 \\ 0 & \sin\alpha_1 & \cos\alpha_1 \end{bmatrix}$$
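Formula 2 amounts to composing three axis rotations with a translation. In the sketch below the angles and offsets are placeholder calibration values, and the rotation order (z, then y, then x applied to the point) is one common convention rather than one fixed by this application.

```python
import numpy as np

def world_to_camera(p_w, a1, b1, th1, t1):
    """Formula 2: p_c = R1 @ p_w + t1, with R1 = Rz(th1) @ Ry(b1) @ Rx(a1)."""
    ca, sa = np.cos(a1), np.sin(a1)
    cb, sb = np.cos(b1), np.sin(b1)
    ct, st = np.cos(th1), np.sin(th1)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx @ np.asarray(p_w, dtype=float) + np.asarray(t1, dtype=float)

# z_w fixed at 1.8 m from the vehicle-height prior; camera mounted with no
# rotation and a 1 m vertical offset (hypothetical values).
p_c = world_to_camera([12.0, 3.0, 1.8], a1=0.0, b1=0.0, th1=0.0,
                      t1=[0.0, -1.0, 0.0])
```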
Optionally, the electronic device may perform imaging projection on the coordinate values in the coordinate system of the video image capturing device based on the focal length of that device, to obtain the coordinate values in its phase plane coordinate system (that is, the phase plane coordinate position is obtained by projecting in the manner provided by formula 3, according to the focal length of the video camera and the coordinate position of the second data in the camera coordinate system); then, based on the pixel size on the photosensitive chip of the video image capturing device, the plane center of the phase plane coordinate system, and the coordinate values of the second data in the phase plane coordinate system, the discretized coordinate values of the second data in the pixel coordinate system are determined (i.e., discretization is performed in the manner provided by formula 4, according to the actual pixel size on the camera photosensitive chip, the plane center of the phase plane coordinate system, and the coordinate values of the second data in the phase plane coordinate system, to obtain the coordinate position in the pixel coordinate system).
As shown in fig. 5 below, the camera coordinate system and the phase plane coordinate system are related by an imaging projection that depends on the focal length f of the camera, as shown in formula 3 below: f represents the focal length of the video camera; (x_c, y_c, z_c) represents the coordinate position of the second data in the camera coordinate system, and (x, y) the coordinate position of the second data in the phase plane coordinate system after imaging projection.
$$x = f\,\frac{x_c}{z_c}, \qquad y = f\,\frac{y_c}{z_c} \qquad \text{(Formula 3)}$$
The phase plane coordinate system is converted into the pixel coordinate system through discretization, as shown in formula 4 below. dx and dy represent the actual size of a pixel on the video camera photosensitive chip; u_0 and v_0 represent the plane center of the phase plane coordinate system; (x, y) represents the coordinate position of the second data in the phase plane coordinate system, and (u, v) the discretized coordinate position of the second data in the pixel coordinate system.
$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0 \qquad \text{(Formula 4)}$$
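Formulas 3 and 4 together map a camera-space point to a pixel. A minimal sketch follows; the focal length, pixel pitch, and principal point are hypothetical values chosen for illustration.

```python
def camera_to_pixel(xc, yc, zc, f, dx, dy, u0, v0):
    # Formula 3: pinhole projection onto the phase plane.
    x = f * xc / zc
    y = f * yc / zc
    # Formula 4: discretize phase-plane coordinates into pixel coordinates.
    u = x / dx + u0
    v = y / dy + v0
    return u, v

# Assumed 8 mm lens, 10 micron pixels, principal point at (960, 540).
u, v = camera_to_pixel(xc=1.0, yc=0.5, zc=20.0, f=0.008,
                       dx=1e-5, dy=1e-5, u0=960, v0=540)
```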
In this application, the parameters of the coordinate conversion can be obtained through calibration of the radar device and the video camera, and with these parameters the radar coordinates of the second data collected by the radar device can be converted into the pixel coordinate system.
The electronic device may fuse the second data with the first data to determine the number of vehicles in the first road monitoring area and the areas where vehicles missing from the first data are located; process the data of the areas where the missing vehicles are located based on deep learning, determining the color information, vehicle type information, license plate number information, and the like of the missing vehicles; and determine the video attribute information of the vehicles in the first road monitoring area according to the first data and the color, vehicle type, and license plate information of the missing vehicles. The video attribute information includes color information, vehicle type information, quantity information, license plate number information, and the like.
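One simple way to find vehicles present in the radar data but missing from the video data is nearest-neighbor matching in the shared pixel coordinate system. The distance gate below is an assumed rule for illustration, not the matching criterion specified by this application.

```python
def find_missing(radar_pts, video_boxes, max_dist=50.0):
    """Radar targets (pixel coords) with no video detection center within
    max_dist pixels are treated as vehicles missed by the video algorithm."""
    missing = []
    for rp in radar_pts:
        dists = [((rp[0] - b[0])**2 + (rp[1] - b[1])**2) ** 0.5
                 for b in video_boxes]  # boxes given as (u, v, w, h), center first
        if not dists or min(dists) > max_dist:
            missing.append(rp)
    return missing

# Three radar targets, one video detection: two targets are missing.
missing = find_missing([(100, 200), (400, 210), (700, 220)],
                       [(402, 208, 90, 36)])
```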
Taking fig. 6 as an example, in 6-1 of fig. 6 there are 6 target vehicles Car1, Car2 … Car6 within the effective range of the radar device; in 6-2 of fig. 6, the millimeter wave radar detects, through actively emitted electromagnetic waves, the 6 radar target vehicles Car1, Car2 … Car6 present on the road; due to illumination, weather, background pixels, camera resolution, and so on, within the effective range of the video camera in 6-3 of fig. 6 the video algorithm only detects 2 video target vehicles, Car5 and Car6; through the coordinate transformation above, the radar target vehicles and the video target vehicles are mapped into the same pixel coordinate system in 6-4 of fig. 6. The video processing unit can perform multi-target detection and positioning on the video image based on a deep learning target detection algorithm. The deep learning target detection algorithm mentioned in this application may be a single-stage target detection algorithm such as the You Only Look Once (YOLO) series or the Single Shot MultiBox Detector (SSD), or a two-stage target detection algorithm such as the Faster Region-based Convolutional Neural Network (Faster R-CNN). This application may use the YOLOv5 target detection algorithm to complete vehicle target detection in the video image; the YOLOv5 algorithm can return the type of each vehicle detected in the video image, the position coordinates of the vehicle target in the video image, and the pixel length and width of the vehicle target.
The radar device comprises a radio frequency module and an array antenna module: the radio frequency module emits electromagnetic beams outward, the array antenna module receives the returned beams, and the millimeter wave radar point cloud data can be obtained from the returns. The point cloud data of the same target vehicle are clustered, and the midpoint of all the point cloud data of the target vehicle is taken as its centroid, thereby determining the x coordinate (x_pos) and y coordinate (y_pos) of the target vehicle in the radar device coordinate system; the minimum and maximum y coordinates in the point cloud data give the vehicle length (length) of the target vehicle. As shown in 6-4 of fig. 6, the target vehicle Car4 in the dashed box (i.e., the area where the missing vehicle is located) is not detected by the video algorithm. In order to identify the type of this target vehicle, this application converts its coordinates (x_pos, y_pos) in the radar device coordinate system into the pixel coordinate system to obtain (x'_pos, y'_pos), and, taking (x'_pos, y'_pos) as the center point and the pixel length (length') of the vehicle in the pixel coordinate system as the side length, intercepts an image area in the pixel coordinate system. The type of the target vehicle in this area is then identified by a deep learning vehicle type detection network, thereby ensuring that the algorithm can acquire all target vehicle types within the effective video range on the road.
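The centroid-and-length extraction for one clustered radar target can be sketched as follows; the point values are made up, and the "midpoint" rule is interpreted here as the midpoint of the cluster extents.

```python
def target_from_cluster(points):
    """points: list of (x, y) radar returns for one clustered vehicle.
    Centroid = midpoint of the cluster extents; length = y extent."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_pos = (min(xs) + max(xs)) / 2
    y_pos = (min(ys) + max(ys)) / 2
    length = max(ys) - min(ys)
    return x_pos, y_pos, length

x_pos, y_pos, length = target_from_cluster(
    [(3.1, 40.0), (3.3, 41.2), (2.9, 44.3), (3.0, 42.5)])
```

The centroid (x_pos, y_pos) is what gets converted to pixel coordinates, and length is what gets scaled to the pixel length (length') used to crop the image area.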
In addition, in order to obtain richer video attribute information of the vehicles, this application intercepts the image areas where Car4, Car5, and Car6 are located and identifies attribute information such as vehicle color, vehicle logo, and license plate number through a target vehicle attribute detection network. The vehicle type detection network and the vehicle attribute detection network can be lightweight neural networks such as MobileNet and ShuffleNet, or medium-to-large networks such as ResNet. To achieve a better detection effect, ResNet50 can be used as the backbone of both networks to obtain visual attribute information such as vehicle type, vehicle color, and vehicle logo. For example, a ResNet50 network model can be trained in advance on a vehicle picture sample set containing different colors (for example black, white, blue, and brown). The image area of the missing vehicle Car4 is input into the trained ResNet50 network model, which performs feature extraction on the area and outputs the probabilities that it belongs to black, white, blue, and brown respectively; when the probability of black is highest, Car4 is determined to be black.
In addition, in a good environment, the license plate number of the target vehicle may be recognized by using an Optical Character Recognition (OCR) technology, which is not specifically limited herein.
And step 203, determining the abnormal lane changing condition of the vehicle in the first road monitoring area according to the video attribute information of the vehicle, the motion attribute information of the vehicle and the lane information in the first road monitoring area.
Optionally, the lane information in the first road monitoring area is determined according to a high-precision map of the first road monitoring area, or is determined by performing data analysis on image data of the first road monitoring area through a video image algorithm.
In practical application, a high-precision map recording in detail the lane line types in the road and the high-precision coordinates of the start points of the lane solid line regions can be loaded into the electronic device in advance. If no high-precision map is pre-loaded, lane information can be determined by detecting lane lines in the video image in real time based on a deep learning algorithm. The lane line types can be those that prohibit lane changing, such as a white solid line, a single yellow solid line, double yellow solid lines, a yellow dashed-solid line, and a white dashed-solid line; meanwhile, the coordinate positions of the lane solid line regions in the pixel coordinate system can be obtained by the video algorithm.
Within the sight distance range of the video image capturing device, the pixel width and pixel length of a target vehicle in the pixel coordinate system can be detected with the target detection algorithm. For example, if the detected pixel length of the target vehicle is 90 pixels and the pixel width is 36 pixels, and the length of the target vehicle detected by the radar device is 4.3 meters, then the real vehicle width of the target vehicle is about 4.3 × 36 / 90 = 1.72 m.
If the target detection algorithm does not detect the pixel width and pixel length of a target vehicle in the pixel coordinate system within the sight distance range of the video image capturing device, as for Car4 in 6-4 of fig. 6, the vehicle type of Car4 has already been obtained from the vehicle type detection network. Assume Car4 is detected as a car: the length of a car is generally between 3.8 and 4.3 meters and its width generally between 1.6 and 1.8 meters, so if the radar device detects that the length of target vehicle Car4 is 4.0 meters, the real vehicle width of Car4 can be estimated from the car size range to be about 1.68 meters. Since the pixel length (length') of Car4 in the pixel coordinate system was obtained in the coordinate conversion process above, assuming length' = 88, the pixel width of Car4 in the pixel coordinate system can be estimated to be about 88 × 1.68 / 4.0 ≈ 37. The real vehicle width of a target vehicle and its pixel width in the pixel coordinate system can thus be obtained from the first data and the second data.
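Both width estimates above reduce to the same proportion between metric length and pixel length. A minimal sketch reproducing the two worked numbers:

```python
def real_width_m(radar_length_m, pixel_length, pixel_width):
    """Real width from the metres-per-pixel scale along the vehicle."""
    return radar_length_m * pixel_width / pixel_length

def pixel_width_px(radar_length_m, pixel_length, real_width_m):
    """Inverse use: pixel width when only the radar length is measured
    and the real width comes from the vehicle-type size prior."""
    return pixel_length * real_width_m / radar_length_m

w1 = real_width_m(4.3, 90, 36)       # detected vehicle: 1.72 m
w2 = pixel_width_px(4.0, 88, 1.68)   # missed Car4: about 37 pixels
```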
Optionally, the electronic device determines a first target vehicle according to the video attribute information of the vehicle within the range of sight distance of the video image acquisition device; determining whether a reference line of the first target vehicle intersects with a lane solid line or not according to the motion attribute information of the first target vehicle and the lane information in the first road monitoring area; and if so, determining that the first target vehicle abnormally changes the lane.
As shown in fig. 7 below, the coordinate points in the figure are the pixel-coordinate points of the lane solid line region detected from the high-precision map or by the video algorithm. In the video image of frame1, the radar-video fusion algorithm detects that the lane of the target vehicle CarA is the 2nd lane from the left and acquires the coordinate points of the target vehicle in the pixel coordinate system. In the video image of frame2, the reference line of the target vehicle CarA is detected pressing onto a lane dashed line, so no abnormal lane change occurs. In the video image of frame3, the lane of the target vehicle CarA is detected to be the 3rd lane from the left; when CarA changes lanes, it is calculated whether the line connecting the pixel-width positioning coordinates on the two sides of the vehicle body intersects the line connecting the lane solid line positioning coordinates, and when the two do not intersect, no abnormal lane change alarm is triggered. In the video image of frame4, the reference line of the target vehicle CarA is detected pressing onto a lane solid line: the line connecting the pixel-width positioning coordinates on the two sides of the vehicle body intersects the line connecting the lane solid line positioning coordinates, and an abnormal lane change alarm is raised for the target vehicle.
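The intersection test between the vehicle-body line and a lane solid line segment can be done with the standard orientation (cross-product) test for line segments; this is a generic sketch, not the exact test used in this application.

```python
def cross(o, a, b):
    """Z component of (a - o) x (b - o); sign gives the turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """True when segment p1-p2 properly crosses segment q1-q2:
    each segment's endpoints lie on opposite sides of the other segment."""
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

# Body-width line of the vehicle crossing a vertical lane solid line -> alarm.
alarm = segments_intersect((80, 300), (140, 300), (100, 0), (100, 600))
```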
In vehicle abnormal lane change detection, whether the abnormal lane-changing vehicles in the video frames frame1, frame2, frame3, and frame4 are the same vehicle (same ID) can be determined from the video attribute information of the target vehicle. When the same license plate number of the target vehicle is detected across the video frames, it is determined that the abnormal lane change behavior belongs to the vehicle with that ID. When the license plate number cannot be detected in the video frames, a vehicle re-identification algorithm determines, from the appearance features of the target vehicle image areas, whether the lane-changing vehicles are the same vehicle. When abnormal lane change behavior is determined, the electronic device records the high-precision positioning, i.e., the longitude and latitude coordinates at which the abnormal lane change occurred, uploads attribute information such as vehicle category, body color, vehicle type, license plate number, and vehicle speed to the vehicle-road cooperative cloud platform, and synchronously sends the information to a Road Side Unit (RSU), which broadcasts the traffic event to intelligent vehicles in the vehicle network.
In this application, within the sight distance range of the video image capturing device, the abnormal lane change condition of the first target vehicle is determined from whether its reference line intersects a lane solid line; this improves the accuracy and efficiency of abnormal lane change detection.
Optionally, the electronic device determines a second target vehicle according to the video attribute information of the vehicle within the range of sight distance of the video image acquisition device; determining the motion attribute information of the current second target vehicle outside the sight distance range of the video image acquisition equipment (simultaneously within the detection range of the radar equipment); predicting a first position area where a second target vehicle is located in the next radar period based on a Kalman filtering algorithm; acquiring a second position area where a second target vehicle is located in the next radar period; and if the first position area is different from the second position area, and the distance between the second position area and the lane solid line is smaller than the width threshold of the second target vehicle, determining that the second target vehicle has an abnormal lane change.
As shown in fig. 8-1 below, based on the Kalman filter algorithm and motion attributes such as the speed of the target vehicle in the current radar cycle, the electronic device predicts the position area the target vehicle may reach in the radar coordinate system in the next radar cycle. In fig. 8-1, the solid line boxes are the actual position areas of target vehicles CarA and CarB in the current radar cycle, and the dashed line boxes are the position areas CarA and CarB may reach in the next radar cycle as predicted by the electronic device with the Kalman filter algorithm. In the next radar cycle, when the radar device detects that target vehicles CarA and CarB appear at the positions predicted in the previous cycle, it is determined that no abnormal lane change behavior has occurred.
As shown in fig. 8-2 below, the solid line boxes are the actual position areas of target vehicles CarA, CarB, and CarC in the current radar cycle, and the dashed line boxes are the position areas in the radar coordinate system that CarA, CarB, and CarC may reach in the next radar cycle as predicted by the electronic device with the Kalman filter algorithm. In the next radar cycle, the radar device detects that CarB and CarC appear at the predicted positions, but CarA does not appear at its predicted position (shown by the hatched dashed box in fig. 8-2); instead, it appears between the adjacent 1st and 2nd lanes from the left, and its distance to the lane line of the 2nd lane from the left is less than 1/2 of the vehicle width, so it is determined that target vehicle CarA has exhibited abnormal lane change behavior.
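Beyond the video sight distance, the test reduces to: predict the next-cycle position from a constant-velocity model (the prediction step of a Kalman filter, with the gain and covariance bookkeeping omitted), then check whether the measured position falls inside the predicted area and, if not, whether it sits closer than half a vehicle width to a lane solid line. A simplified sketch under those assumptions, with made-up numbers:

```python
def predict_next(pos, vel, dt):
    """Constant-velocity prediction for one radar cycle of length dt."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def abnormal_change(pred, meas, gate, lane_x, vehicle_width):
    """Abnormal when the measurement leaves the predicted gate AND
    hugs the lane solid line at x = lane_x closer than width / 2."""
    inside = (abs(meas[0] - pred[0]) <= gate and
              abs(meas[1] - pred[1]) <= gate)
    near_line = abs(meas[0] - lane_x) < vehicle_width / 2
    return (not inside) and near_line

pred = predict_next(pos=(3.0, 120.0), vel=(0.0, 20.0), dt=0.1)
flag = abnormal_change(pred, meas=(6.2, 122.1), gate=1.0,
                       lane_x=6.5, vehicle_width=1.8)
```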
Similarly, when the abnormal lane changing behavior of the vehicle is judged, the electronic equipment records high-precision positioning, namely longitude and latitude coordinates when the target vehicle is subjected to the abnormal lane changing, uploads attribute information such as vehicle type, vehicle body color, vehicle type, license plate number, vehicle speed and the like detected in the effective range of the video to the vehicle-road cooperative cloud platform together, and synchronously sends the attribute information to road side equipment (RSU), and the RSU sends the traffic event to intelligent vehicles in the vehicle network through broadcasting.
In this application, outside the sight distance range of the video image capturing device, the abnormal lane change condition of the second target vehicle is determined from the relation between its predicted position area in the next radar cycle and its actual position area; this improves the accuracy and efficiency of abnormal lane change detection.
In addition, the electronic equipment can also determine that the abnormal lane-changing vehicle is a third target vehicle (namely a special vehicle) according to the video attribute information of the vehicle; the third target vehicle is one or more of: police cars, ambulances, fire trucks and construction vehicles; and determining the longitude and latitude coordinates, the vehicle type and the license plate information of the third target vehicle.
When a vehicle target is detected turning on its double flashing lights within the effective video range and an abnormal lane change occurs, the high-precision positioning, i.e., the longitude and latitude coordinates of the lane change behavior, and information such as vehicle category, body color, vehicle type, license plate number, vehicle speed, and double-flash state are recorded. Special vehicles and faulty vehicles are special road traffic participants; when abnormal lane change behavior occurs, the roadside device urgently broadcasts it to intelligent vehicles through C-V2X technology, reminding them to take emergency avoidance and thus guaranteeing their driving safety.
In this application, the roadside device is informed of the abnormal lane change condition of vehicles in the first road monitoring area, so that it can notify intelligent vehicles of the condition, reminding them to take avoiding action and guaranteeing safe driving.
This application monitors the abnormal lane change behavior of vehicles in real time based on radar-video fusion; the technical scheme is shown in fig. 9 below.
The edge computing terminal realizes access to the video signals, identifies and positions vehicle targets in the video images based on a deep learning algorithm, identifies and positions vehicle targets on urban roads based on the millimeter wave radar, and determines lane information based on the video images. Then the coordinates of the radar targets and the video targets are fused, for which reference may be made to the description above, not repeated here. The video targets are corrected based on the radar targets, and the attribute information of the video targets is acquired: with the radar targets as reference, the image areas of video targets missed by detection are extracted, and then the visual attributes of all corrected vehicle targets are extracted based on a deep learning algorithm. Next, the lane line types in the video image are detected based on a deep learning algorithm, and high-precision positioning of the lane solid line points and the vehicle targets is completed based on a high-precision positioning module. Within the effective video range, the position information of the same target vehicle is monitored in real time based on the high-precision positioning information and the visual and motion attributes of the vehicle target; when the same target vehicle crosses a lane solid line, its visual attribute information and high-precision positioning information are recorded, completing the detection of the abnormal lane change behavior. Outside the effective video range, the position information of the same target vehicle is monitored in real time by tracking and predicting the radar target, thereby detecting abnormal lane change behavior outside the effective video range.
The type of a special vehicle, the on state of its warning lamp, and the on state of the double flashing lights of a faulty vehicle are detected in real time through a deep learning algorithm. By monitoring special vehicles such as police cars, ambulances, fire trucks, and engineering vehicles as well as faulty vehicles in real time, comprehensive monitoring of abnormal vehicle lane change behavior is realized and the running safety of road vehicles is guaranteed.
In this application, for the first road monitoring area, the electronic device can collect the first data through the video image capturing device and the second data through the radar device. The video image capturing device is usually affected by the environment and limited by its own sight distance, so its collection range may be short, while the radar device locates targets (vehicles) by emitting electromagnetic waves, is not affected by the environment, and can collect more distant targets. In this application, the first data collected by the video image capturing device is adjusted with the second data collected by the radar device; correcting the first data in this way lets the electronic device obtain more target information and monitor abnormal lane changes of vehicles at longer distances. The electronic device can then determine the abnormal lane change condition of each vehicle in the first road monitoring area based on the video attribute information of the vehicle, its motion attribute information, and the lane information in the first road monitoring area, which improves the detection accuracy of abnormal lane changes.
Based on the same concept, the embodiment of the present application provides a device for detecting an abnormal lane change of a vehicle, as shown in fig. 10, including: an acquisition unit 1001, a first determination unit 1002, and a second determination unit 1003.
The acquiring unit 1001 is configured to acquire, for a first road monitoring area, first data acquired by video image acquisition equipment and second data acquired by radar equipment, respectively; the number of vehicles in the second data is greater than or equal to the number of vehicles in the first data; the second data includes motion attribute information of the vehicle; a first determining unit 1002, configured to determine video attribute information of a vehicle in the first road monitoring area according to the second data and the first data; a second determining unit 1003, configured to determine an abnormal lane change condition of the vehicle in the first road monitoring area according to the video attribute information of the vehicle, the motion attribute information of the vehicle, and the lane information in the first road monitoring area.
In this application, for a first road monitoring area, the electronic device acquires first data from a video image acquisition device and second data from a radar device. The video image acquisition device is easily affected by the environment and limited by its own line of sight, so its coverage may be short, whereas the radar device locates targets (vehicles) by emitting electromagnetic waves and is therefore less affected by the environment and able to detect more distant targets. The second data collected by the radar device are used to adjust and correct the first data collected by the video image acquisition device, so that the electronic device obtains information on more targets and can monitor abnormal lane changes of vehicles at longer distances. The electronic device then determines the abnormal lane-change condition of each vehicle in the first road monitoring area based on the vehicle's video attribute information, its motion attribute information, and the lane information in the first road monitoring area, which improves the detection accuracy of abnormal lane changes.
In an optional manner, the second determining unit 1003 is specifically configured to determine, within a range of a line of sight of the video image capturing device, a first target vehicle according to video attribute information of the vehicle; determining whether the reference line of the first target vehicle intersects with the lane solid line or not according to the motion attribute information of the first target vehicle and the lane information in the first road monitoring area; and if so, determining that the first target vehicle abnormally changes the lane.
In this application, within the sight distance range of the video image acquisition device, the abnormal lane-change condition of the first target vehicle is determined from whether its reference line intersects a solid lane line; this improves both the accuracy and the efficiency of abnormal lane-change detection.
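As an illustrative sketch only (the function names, the segment geometry, and the polyline representation are assumptions, not details disclosed in the application), the reference-line/solid-line intersection test may be implemented as a standard 2D segment-intersection check:

```python
# Sketch: flag an abnormal lane change when a vehicle's reference line
# (a 2D segment in road-plane coordinates) crosses a solid lane line
# (a polyline). All names here are hypothetical.

def _cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly intersects segment q1-q2."""
    d1 = _cross(q1, q2, p1)
    d2 = _cross(q1, q2, p2)
    d3 = _cross(p1, p2, q1)
    d4 = _cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def is_abnormal_lane_change(reference_line, solid_line_polyline):
    """True if the reference line crosses any segment of the solid lane line."""
    p1, p2 = reference_line
    return any(
        segments_intersect(p1, p2,
                           solid_line_polyline[i], solid_line_polyline[i + 1])
        for i in range(len(solid_line_polyline) - 1)
    )
```

A reference line that crosses the solid line is flagged; one that stays within its lane is not.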
In an optional manner, the second determining unit 1003 is specifically configured to determine, within a range of a line of sight of the video image capturing device, a second target vehicle according to video attribute information of the vehicle; determining the current motion attribute information of the second target vehicle outside the sight distance range of the video image acquisition equipment; predicting a first position area where the second target vehicle is located in the next radar period based on a Kalman filtering algorithm; acquiring a second position area of the second target vehicle in the next radar period; and if the first position area is different from the second position area, and the distance between the second position area and the lane solid line is smaller than the width threshold of the second target vehicle, determining that the second target vehicle changes lanes abnormally.
In this application, outside the sight distance range of the video image acquisition device, the abnormal lane-change condition of the second target vehicle is determined from the relationship between its predicted position area in the next radar cycle and its actual position area; this improves both the accuracy and the efficiency of abnormal lane-change detection.
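The prediction step above can be sketched with a constant-velocity Kalman prediction; the state layout, the deviation tolerance, and the decision rule below are assumptions for illustration, not the application's exact model:

```python
import numpy as np

# Sketch: predict where a tracked vehicle should be in the next radar cycle
# with a constant-velocity state-transition model; a measurement that deviates
# from the prediction while lying closer to a solid lane line than the
# vehicle's width suggests an abnormal lane change.

def predict_next_position(state, dt):
    """state = [x, y, vx, vy]; returns the predicted [x, y] after dt seconds."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    return (F @ state)[:2]

def abnormal_by_prediction(state, dt, measured_xy, dist_to_solid_line, vehicle_width):
    predicted_xy = predict_next_position(state, dt)
    # 0.5 m tolerance is an assumed value, not from the application
    deviated = not np.allclose(predicted_xy, measured_xy, atol=0.5)
    return deviated and dist_to_solid_line < vehicle_width
```

In a full tracker the prediction would also propagate the covariance and be corrected with each radar measurement; only the position-prediction part needed here is shown.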
In an optional manner, the second determining unit 1003 is further configured to determine, according to the video attribute information of the vehicle, that the abnormal lane change vehicle is a third target vehicle; the third target vehicle is one or more of: police cars, ambulances, fire trucks, and construction trucks; and determining the longitude and latitude coordinates, the vehicle type and the license plate information of the third target vehicle.
This application takes into account that a vehicle changing lanes abnormally may be a special vehicle such as a police car or an ambulance, and can mark it accordingly, so that intelligent vehicles are alerted that the abnormally lane-changing vehicle is a special vehicle and can yield to it while driving.
In an optional manner, the first determining unit 1002 is specifically configured to convert the coordinate values, in the coordinate system of the radar device, of the vehicles in the second data into coordinate values in a pixel coordinate system, the pixel coordinate system being the coordinate system in which the first data is located; perform video fusion of the second data and the first data, determining the number of vehicles in the first road monitoring area and the areas in the first data where missing vehicles are located; perform deep-learning-based data processing on those areas of the first data to determine the color information, vehicle type information and license plate number information of the missing vehicles; and determine the video attribute information of the vehicles in the first road monitoring area from the first data and the color, vehicle type and license plate number information of the missing vehicles. The video attribute information includes color information, vehicle type information, quantity information, license plate number information, and the like.
By converting between the radar device's coordinate system and the pixel coordinate system and then fusing the second data with the first data, this application enables more vehicles to be detected in the first road monitoring area.
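The fusion step can be sketched as follows; this is an illustrative sketch only, and the matching rule, distance threshold, and function names are assumptions rather than details given in the application:

```python
# Sketch: radar targets already projected to pixel coordinates are matched to
# camera detection boxes; radar targets with no nearby camera detection mark
# regions where the video data misses a vehicle.

def fuse_detections(radar_pixels, camera_boxes, max_dist=40.0):
    """radar_pixels: list of (u, v) points; camera_boxes: list of
    (u_min, v_min, u_max, v_max) boxes. Returns (vehicle_count, missing_points)."""
    def near_box(pt, box):
        u, v = pt
        cu, cv = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        return ((u - cu) ** 2 + (v - cv) ** 2) ** 0.5 <= max_dist

    # radar targets with no matching camera detection are "missing" vehicles
    missing = [pt for pt in radar_pixels
               if not any(near_box(pt, b) for b in camera_boxes)]
    return len(camera_boxes) + len(missing), missing
```

Each missing point would then define an image region to crop and pass to the deep-learning stage that recovers color, vehicle type and license plate information.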
In an alternative mode, the lane information in the first road monitoring area is determined either from a high-precision map of the first road monitoring area or by analyzing image data of the area with a video image algorithm.
Acquiring the lane information makes it possible to determine accurately whether a vehicle exhibits abnormal lane-change behavior, which also improves data processing efficiency.
In an optional manner, the apparatus further includes a notification unit, configured to notify the roadside device of an abnormal lane change condition of the vehicle in the first road monitoring area.
In this way, the roadside device can inform intelligent vehicles of a vehicle's abnormal lane change, reminding them to take avoiding action and ensuring safe driving.
In an alternative manner, the first determining unit 1002 is specifically configured to convert, by using a first conversion matrix, coordinate values of a radar coordinate system in which the vehicle is located in the second data into coordinate values of a world coordinate system; the first conversion matrix is determined based on the position relation of the radar equipment and the video image acquisition equipment; converting the coordinate values of the world coordinate system into coordinate values of a coordinate system in which the video image acquisition equipment is located through a second conversion matrix; the second transformation matrix is different from the first transformation matrix; and converting the coordinate value of the coordinate system where the video image acquisition equipment is positioned into the coordinate value of the pixel coordinate system by adopting a preset rule.
The coordinate values of the second data in the pixel coordinate system determined in this way are more accurate and reliable.
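The chain of transforms above (radar coordinates to world coordinates via the first conversion matrix, then world to camera coordinates via the second) can be sketched with homogeneous 4x4 matrices; the matrices themselves come from extrinsic calibration and are assumed here, not given in the application:

```python
import numpy as np

# Sketch: apply two extrinsic transforms in sequence. T_radar_to_world plays
# the role of the "first conversion matrix" and T_world_to_camera the
# "second conversion matrix"; both are assumed to be calibrated 4x4
# homogeneous matrices.

def to_homogeneous(xyz):
    return np.append(np.asarray(xyz, dtype=float), 1.0)

def radar_to_camera(p_radar, T_radar_to_world, T_world_to_camera):
    """Returns the camera-frame coordinates (x, y, z) of a radar-frame point."""
    p_world = T_radar_to_world @ to_homogeneous(p_radar)
    p_camera = T_world_to_camera @ p_world
    return p_camera[:3]
```

Using homogeneous matrices lets the rotation and translation of each extrinsic relationship be applied in a single multiplication, so the two stages compose cleanly.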
In an alternative manner, the first determining unit 1002 is specifically configured to perform imaging projection of the coordinate values in the coordinate system of the video image acquisition device based on the focal length of the video image acquisition device, obtaining coordinate values in the image plane coordinate system of the video image acquisition device; and to determine the discretized coordinate values of the second data in the pixel coordinate system based on the pixel size of the photosensitive chip of the video image acquisition device, the center of the image plane coordinate system, and the coordinate values of the second data in the image plane coordinate system.
The coordinate values of the second data in the pixel coordinate system determined in this way are more accurate and reliable.
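The projection and discretization steps follow the usual pinhole-camera model; the following sketch uses the standard quantities (focal length f, pixel sizes du and dv, principal point (cu, cv)), which are assumed here rather than specified by the application:

```python
# Sketch: project a camera-frame point onto the image plane via the focal
# length, then discretize with the sensor's pixel size and shift to the
# principal point to obtain pixel coordinates.

def camera_to_pixel(p_camera, f, du, dv, cu, cv):
    """p_camera: (x, y, z) in the camera frame, metres; f: focal length (m);
    du, dv: pixel size (m); (cu, cv): principal point (pixels)."""
    x, y, z = p_camera
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    # perspective projection onto the image plane (metric coordinates)
    x_img = f * x / z
    y_img = f * y / z
    # discretization: metres -> pixels, offset by the principal point
    u = x_img / du + cu
    v = y_img / dv + cv
    return u, v
```

For example, with f = 8 mm, 10 µm pixels, and a principal point at (960, 540), a point at (0.1, 0.2, 2.0) m projects to roughly pixel (1000, 620).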
Having described the method and apparatus for detecting an abnormal lane change of a vehicle in an exemplary embodiment of the present application, a computing device in another exemplary embodiment of the present application is described next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, a computing device according to the present application may include at least one processor, and at least one memory. The memory stores therein a computer program that, when executed by the processor, causes the processor to perform the steps of the method for detecting an abnormal lane change of a vehicle according to various exemplary embodiments of the present application described above in the present specification. For example, the processor may perform steps 201-203 as shown in fig. 2.
The computing device 130 according to this embodiment of the present application is described below with reference to fig. 11. The computing device 130 shown in fig. 11 is only an example and should not bring any limitations to the functionality or scope of use of the embodiments of the present application. As shown in fig. 11, the computing device 130 is embodied in the form of a general purpose smart terminal. Components of computing device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures. The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323. Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Computing device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.) and/or any device (e.g., router, modem, etc.) that enables computing device 130 to communicate with one or more other intelligent terminals. Such communication may occur via input/output (I/O) interfaces 135. Also, computing device 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via network adapter 136. As shown, network adapter 136 communicates with other modules for computing device 130 over bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computing device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, various aspects of the method for detecting a vehicle abnormal lane change provided by the present application may also be implemented in the form of a program product including a computer program for causing a computer device to perform the steps of the method for detecting a vehicle abnormal lane change according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device. For example, the processor may perform steps 201-203 as shown in FIG. 2.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for detection of abnormal lane changes of a vehicle according to an embodiment of the present application may employ a portable compact disc read only memory (CD-ROM) and include a computer program, and may be run on a smart terminal. The program product of the present application is not so limited, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with a readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for detecting an abnormal lane change of a vehicle, comprising:
respectively acquiring first data acquired by video image acquisition equipment and second data acquired by radar equipment aiming at a first road monitoring area; the number of vehicles in the second data is greater than or equal to the number of vehicles in the first data; the second data includes motion attribute information of the vehicle;
determining video attribute information of vehicles in the first road monitoring area according to the second data and the first data;
and determining the abnormal lane changing condition of the vehicle in the first road monitoring area according to the video attribute information of the vehicle, the motion attribute information of the vehicle and the lane information in the first road monitoring area.
2. The method of claim 1, wherein determining an abnormal lane change condition of a vehicle in the first road monitoring area based on the video attribute information of the vehicle, the motion attribute information of the vehicle, and the lane information in the first road monitoring area comprises:
determining a first target vehicle according to the video attribute information of the vehicle within the sight distance range of the video image acquisition equipment;
determining whether a reference line of the first target vehicle intersects with a lane solid line or not according to the motion attribute information of the first target vehicle and the lane information in the first road monitoring area;
and if so, determining that the first target vehicle abnormally changes the lane.
3. The method of claim 1, wherein determining an abnormal lane change condition of a vehicle in the first road monitoring area based on the video attribute information of the vehicle, the motion attribute information of the vehicle, and the lane information in the first road monitoring area comprises:
determining a second target vehicle according to the video attribute information of the vehicle within the sight distance range of the video image acquisition equipment;
determining current motion attribute information of the second target vehicle outside the sight distance range of the video image acquisition equipment;
predicting a first position area where the second target vehicle is located in a next radar cycle based on a Kalman filtering algorithm;
acquiring a second position area of the second target vehicle in the next radar period;
and if the first position area is different from the second position area, and the distance between the second position area and the lane solid line is smaller than the width threshold of the second target vehicle, determining that the second target vehicle changes lanes abnormally.
4. The method of claim 2 or 3, further comprising:
determining an abnormal lane-changing vehicle as a third target vehicle according to the video attribute information of the vehicle; the third target vehicle is one or more of: police cars, ambulances, fire trucks and construction vehicles;
and determining the longitude and latitude coordinates, the vehicle type and the license plate information of the third target vehicle.
5. The method of any of claims 1-3, wherein determining video attribute information for vehicles in the first road-monitoring area based on the second data and the first data comprises:
converting the coordinate value of the coordinate system of the radar equipment where the vehicle is located in the second data into the coordinate value of a pixel coordinate system; the pixel coordinate system is a coordinate system where the first data is located;
performing video fusion on the second data and the first data, and determining the number of vehicles in the first road monitoring area and the area where the missing vehicle is in the first data;
performing data processing on the region where the missing vehicle is located in the first data based on deep learning, and determining color information, vehicle type information and license plate information of the missing vehicle;
determining video attribute information of the vehicles in the first road monitoring area according to the first data, the color information, the vehicle type information and the license plate information of the missing vehicles; the video attribute information includes: color information, vehicle type information, license plate information, and quantity information.
6. The method according to claim 5, wherein the converting the coordinate values of the coordinate system of the radar device in which the vehicle is located in the second data into coordinate values of a pixel coordinate system comprises:
converting the coordinate value of the radar coordinate system in which the vehicle is located in the second data into the coordinate value of a world coordinate system through a first conversion matrix; the first conversion matrix is determined based on the position relation of the radar equipment and the video image acquisition equipment;
converting the coordinate value of the world coordinate system into a coordinate value of a coordinate system in which the video image acquisition equipment is located through a second conversion matrix; the second transformation matrix is different from the first transformation matrix;
and converting the coordinate value of the coordinate system where the video image acquisition equipment is positioned into the coordinate value of the pixel coordinate system by adopting a preset rule.
7. The method according to claim 6, wherein the converting, by using a preset rule, the coordinate values of the coordinate system in which the video image acquisition equipment is located into coordinate values of the pixel coordinate system after discretization comprises:
performing imaging projection on the coordinate values of the coordinate system in which the video image acquisition equipment is located based on the focal length of the video image acquisition equipment, to obtain coordinate values in the image plane coordinate system of the video image acquisition equipment;
and determining the coordinate values of the second data in the pixel coordinate system after discretization based on the size of a pixel on the photosensitive chip of the video image acquisition equipment, the plane center of the image plane coordinate system, and the coordinate values of the second data in the image plane coordinate system.
8. A detection device for an abnormal lane change of a vehicle, characterized by comprising:
the acquisition unit is used for respectively acquiring first data acquired by video image acquisition equipment and second data acquired by radar equipment aiming at a first road monitoring area; the number of vehicles in the second data is greater than or equal to the number of vehicles in the first data; the second data includes motion attribute information of the vehicle;
the first determining unit is used for determining video attribute information of the vehicles in the first road monitoring area according to the second data and the first data;
and the second determining unit is used for determining the abnormal lane changing condition of the vehicle in the first road monitoring area according to the video attribute information of the vehicle, the motion attribute information of the vehicle and the lane information in the first road monitoring area.
9. A computing device, comprising: a memory and a processor;
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to execute the method of any one of claims 1 to 7 in accordance with the obtained program.
10. A computer storage medium storing computer-executable instructions for performing the method of any one of claims 1-7.
CN202210178725.4A 2022-02-25 2022-02-25 Method and device for detecting abnormal lane change of vehicle Pending CN114627409A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210178725.4A CN114627409A (en) 2022-02-25 2022-02-25 Method and device for detecting abnormal lane change of vehicle

Publications (1)

Publication Number Publication Date
CN114627409A true CN114627409A (en) 2022-06-14

Family

ID=81899876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210178725.4A Pending CN114627409A (en) 2022-02-25 2022-02-25 Method and device for detecting abnormal lane change of vehicle

Country Status (1)

Country Link
CN (1) CN114627409A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019511A (en) * 2022-06-29 2022-09-06 九识(苏州)智能科技有限公司 Method and device for identifying illegal lane change of motor vehicle based on automatic driving vehicle
CN115908838A (en) * 2022-12-12 2023-04-04 南京慧尔视智能科技有限公司 Vehicle existence detection method, device, equipment and medium based on radar vision fusion

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106652468A (en) * 2016-12-09 2017-05-10 武汉极目智能技术有限公司 Device and method for detection of violation of front vehicle and early warning of violation of vehicle on road
CN108091142A (en) * 2017-12-12 2018-05-29 公安部交通管理科学研究所 For vehicle illegal activities Tracking Recognition under highway large scene and the method captured automatically
CN110178167A (en) * 2018-06-27 2019-08-27 潍坊学院 Crossing video frequency identifying method violating the regulations based on video camera collaboration relay
CN110189523A (en) * 2019-06-13 2019-08-30 智慧互通科技有限公司 A kind of method and device based on Roadside Parking identification vehicle violation behavior
CN110570664A (en) * 2019-09-23 2019-12-13 山东科技大学 automatic detection system for highway traffic incident
CN111291676A (en) * 2020-02-05 2020-06-16 清华大学 Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
CN112946628A (en) * 2021-02-08 2021-06-11 江苏中路工程技术研究院有限公司 Road running state detection method and system based on radar and video fusion
CN112967501A (en) * 2021-02-23 2021-06-15 长安大学 Early warning system and method for dangerous driving-off behavior of vehicles on ramp
CN113643534A (en) * 2021-07-29 2021-11-12 北京万集科技股份有限公司 Traffic control method and equipment
CN113688662A (en) * 2021-07-05 2021-11-23 浙江大华技术股份有限公司 Motor vehicle passing warning method and device, electronic device and computer equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019511A (en) * 2022-06-29 2022-09-06 九识(苏州)智能科技有限公司 Method and device for identifying illegal lane change of motor vehicle based on automatic driving vehicle
CN115908838A (en) * 2022-12-12 2023-04-04 南京慧尔视智能科技有限公司 Vehicle existence detection method, device, equipment and medium based on radar vision fusion
CN115908838B (en) * 2022-12-12 2023-11-07 南京慧尔视智能科技有限公司 Vehicle presence detection method, device, equipment and medium based on radar fusion

Similar Documents

Publication Publication Date Title
US10643472B2 (en) Monitor apparatus and monitor system
CN114627409A (en) Method and device for detecting abnormal lane change of vehicle
JP2021165080A (en) Vehicle control device, vehicle control method, and computer program for vehicle control
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN113435237B (en) Object state recognition device, recognition method, and computer-readable recording medium, and control device
US11379995B2 (en) System and method for 3D object detection and tracking with monocular surveillance cameras
CN111382735B (en) Night vehicle detection method, device, equipment and storage medium
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN112597807B (en) Violation detection system, method and device, image acquisition equipment and medium
CN113257003A (en) Traffic lane-level traffic flow counting system, method, device and medium thereof
CN111323038B (en) Method and system for positioning unmanned vehicle in tunnel and electronic equipment
Qu et al. Improving maritime traffic surveillance in inland waterways using the robust fusion of AIS and visual data
JP2020067896A (en) Travelable direction detector and travelable direction detection method
Bai et al. Cyber mobility mirror: Deep learning-based real-time 3d object perception and reconstruction using roadside lidar
CN115294169A (en) Vehicle tracking method and device, electronic equipment and storage medium
CN114495512A (en) Vehicle information detection method and system, electronic device and readable storage medium
KR20210064492A (en) License plate recognition method and apparatus for roads
CN115601738B (en) Parking information acquisition method, device, equipment, storage medium and program product
Qiu et al. Intelligent Highway Lane Center Identification from Surveillance Camera Video
JP2021128705A (en) Object state identification device
Kanhere Vision-based detection, tracking and classification of vehicles using stable features with automatic camera calibration
JP2021163432A (en) Signal light state identification apparatus, signal light state identification method, signal light state-identifying program, and control apparatus
CN115761668A (en) Camera stain recognition method and device, vehicle and storage medium
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
Suganuma et al. Current status and issues of traffic light recognition technology in Autonomous Driving System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination