CN115123237A - Intelligent lane changing auxiliary system easy to install - Google Patents


Info

Publication number
CN115123237A
Authority
CN
China
Prior art keywords
vehicle
camera
data
data acquired
early warning
Prior art date
Legal status
Pending
Application number
CN202210727466.6A
Other languages
Chinese (zh)
Inventor
窦智超
朱淑娟
潘正祥
隋晓
庄仲杰
Current Assignee
Shandong University of Science and Technology
Original Assignee
Shandong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Shandong University of Science and Technology filed Critical Shandong University of Science and Technology
Priority to CN202210727466.6A priority Critical patent/CN115123237A/en
Publication of CN115123237A publication Critical patent/CN115123237A/en
Pending legal-status Critical Current


Classifications

    • B60W30/18163: Lane change; overtaking manoeuvres
    • B60R1/25: Real-time viewing arrangements using optical image capturing systems, for viewing an area outside the vehicle with a predetermined field of view to the sides of the vehicle
    • B60R1/26: Real-time viewing arrangements using optical image capturing systems, for viewing an area outside the vehicle with a predetermined field of view to the rear of the vehicle
    • B60W30/0953: Predicting travel path or likelihood of collision, the prediction being responsive to vehicle dynamic parameters
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60R2300/105: Viewing arrangements using cameras and displays, characterised by the use of multiple cameras
    • B60R2300/207: Viewing arrangements using multi-purpose displays, e.g. camera image and navigation or video on the same display
    • B60R2300/802: Viewing arrangements for monitoring and displaying vehicle exterior blind spot views
    • B60R2300/8046: Viewing arrangements for replacing a rear-view mirror system
    • B60W2050/143: Alarm means
    • B60W2050/146: Display means
    • B60W2420/403: Image sensing, e.g. optical camera
    • B60W2552/53: Road markings, e.g. lane marker or crosswalk
    • B60W2554/402: Dynamic objects, type
    • B60W2554/4023: Dynamic objects, type, large-size vehicles, e.g. trucks
    • B60W2554/404: Dynamic objects, characteristics
    • B60W2554/804: Relative longitudinal speed

Abstract

The invention discloses an easy-to-install intelligent lane-changing auxiliary system, belonging to the technical field of intelligent detection. The system comprises a data acquisition module, a video display module, an intelligent monitoring module and an early-warning module. The data acquisition module acquires video image data and depth data; the video display module displays the video image data and early-warning signals; the intelligent monitoring module monitors whether a steering operation of the vehicle would be dangerous; and the early-warning module issues early-warning signals. According to the invention, a TOF camera acquires depth data in place of radar data, so that a user can upgrade a vehicle without depending on factory-fitted automotive sensors.

Description

Intelligent lane changing auxiliary system easy to install
Technical Field
The invention belongs to the technical field of intelligent detection, and particularly relates to an easy-to-install intelligent lane-changing auxiliary system.
Background
The rear-view mirror is the tool with which a driver, seated in the cab, directly obtains information about the areas behind and beside the vehicle. Early rear-view mirror systems on the market could only display image data from different directions around the vehicle.
In the prior art, the automobile rear-view mirror serves merely as a device through which the driver observes surrounding vehicles; its viewing angle is small and fixed, which easily leads to traffic accidents. Mainstream lane-changing auxiliary systems usually rely on millimetre-wave radar, yet such radar cannot monitor lane-line data in real time; moreover, the radar must be installed by the factory before the car leaves the production line, so retrofitting it later is difficult and costly. A lane-changing auxiliary system that is easy to install, low in cost and high in accuracy is therefore needed.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides an easy-to-install intelligent lane-changing auxiliary system which is reasonably designed, overcomes the defects of the prior art, and achieves a good effect.
In order to achieve the purpose, the invention adopts the following technical scheme:
An easy-to-install intelligent lane-changing auxiliary system comprises a data acquisition module, a video display module, an intelligent monitoring module and an early-warning module;
the data acquisition module is configured to acquire, through the cameras, video image data and depth data of the areas behind and diagonally behind the vehicle; the video image data and the depth data are transmitted to the intelligent monitoring module, and the image data are transmitted to the video display module;
the video display module is configured to display the video image data acquired by the cameras and the early-warning signals sent by the early-warning module;
the intelligent monitoring module is configured to extract lane-line data and vehicle data from the video images, calculate the relative speed, acceleration and distance between the driven vehicle and the surrounding vehicles from the depth data, use these data to judge whether the vehicle body deviates, and monitor whether a steering operation of the vehicle would be dangerous;
the early-warning module is configured to issue early-warning signals.
Preferably, the cameras comprise a left camera, a right camera and a rear camera, each of which is a combined RGB and TOF binocular camera. The left RGB-TOF binocular camera is mounted below the left exterior rear-view mirror of the car, the right RGB-TOF binocular camera below the right exterior rear-view mirror, and the rear RGB-TOF binocular camera above the rear licence plate.
Preferably, the video display module displays the video image data acquired by the RGB cameras and the early-warning signals sent by the early-warning module; it uses the automobile rear-view mirror as the display and switches between the image feeds of the RGB cameras at different positions according to the steering direction of the car.
Preferably, the intelligent monitoring module comprises a lane line extraction unit, a vehicle extraction unit, a distance calculation unit, a relative speed calculation unit, an acceleration calculation unit, a steering analysis unit, a road deviation prediction unit and a collision prediction unit;
a lane-line extraction unit configured to extract lane-line images from the video images through LaneNet and to calculate a vehicle steering angle from the lane-line images;
a vehicle extraction unit configured to identify vehicle data in the video image through a YOLOv5 network and classify different vehicles;
a distance calculation unit configured to calculate a distance between the driven vehicle and the surrounding vehicle by combining the depth data acquired by the TOF camera and the vehicle data acquired by the vehicle extraction unit;
a relative speed calculation unit configured to calculate a relative speed of the driven vehicle and the surrounding vehicle by combining the depth data acquired by the TOF camera and the vehicle data acquired by the vehicle extraction unit;
an acceleration calculation unit configured to calculate an acceleration of the surrounding vehicle by combining the depth data acquired by the TOF camera with the vehicle data acquired by the vehicle extraction unit;
a steering analysis unit configured to calculate the vehicle yaw angle and yaw rate from the lane-line data extracted by the lane-line extraction unit, and to determine that the vehicle is about to perform a steering operation if the yaw angle or the yaw rate exceeds a set threshold;
a road deviation prediction unit configured to determine whether the driven vehicle deviates from its lane of travel, using the lane-line data extracted by the lane-line extraction unit and the yaw-angle data acquired by the steering analysis unit;
a collision prediction unit configured to calculate the probability of collision between the driven vehicle and a surrounding vehicle from the lane-line data acquired by the lane-line extraction unit, the distance data acquired by the distance calculation unit, the relative-speed data acquired by the relative speed calculation unit, the acceleration data of the surrounding vehicle acquired by the acceleration calculation unit, and the yaw-angle and yaw-rate data acquired by the steering analysis unit.
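The patent does not disclose how the collision probability is computed from these inputs. As a stand-in, a minimal sketch of one common approach, a time-to-collision test on the distance, relative speed and acceleration that the units above produce (all function names and thresholds here are hypothetical, not from the patent), might look like:

```python
import math

def time_to_collision(distance_m, rel_speed_mps, rel_accel_mps2):
    """Smallest positive t solving distance + v*t + 0.5*a*t^2 = 0.
    Negative rel_speed means the gap to the other vehicle is closing;
    returns inf if the vehicles never meet under constant acceleration."""
    a, v, d = 0.5 * rel_accel_mps2, rel_speed_mps, distance_m
    if abs(a) < 1e-9:                       # constant relative speed
        return d / -v if v < 0 else math.inf
    disc = v * v - 4 * a * d
    if disc < 0:
        return math.inf
    roots = [(-v - math.sqrt(disc)) / (2 * a),
             (-v + math.sqrt(disc)) / (2 * a)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else math.inf

def collision_warning(distance_m, rel_speed_mps, rel_accel_mps2,
                      ttc_threshold_s=3.0):
    """Issue a warning when the predicted time-to-collision drops below a
    threshold (per vehicle class in the patent; fixed here for brevity)."""
    return time_to_collision(distance_m, rel_speed_mps, rel_accel_mps2) < ttc_threshold_s
```

For example, a vehicle 20 m behind and closing at 10 m/s yields a time-to-collision of 2 s and triggers the warning under a 3 s threshold.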
Preferably, the early-warning module evaluates the data obtained by the road deviation prediction unit and the collision prediction unit and issues separate warnings for road deviation and for collision;
the collision judgment, based on the data acquired by the collision prediction unit, considers the different types of vehicles around the driven vehicle by class; if the predicted collision probability for a vehicle of a given type exceeds the maximum collision-probability threshold set for that type, a collision early-warning signal is issued;
the road deviation warning is presented as a flashing yellow reminder on the video display module;
the collision warning is presented as a flashing red reminder on the video display module, accompanied by an audible alert from the voice device.
Preferably, vehicles are classified into six categories: sedan, SUV, MPV, midsize bus, large bus and large truck.
The invention has the following beneficial technical effects:
The invention improves the traditional automobile rear-view mirror: a TOF camera supplies depth data in place of radar data, so that a user can upgrade a car without depending on factory-fitted automotive sensors.
Drawings
FIG. 1 is a schematic structural view of the easy-to-install intelligent lane-changing auxiliary system;
S101: data acquisition module; S102: video display module; S103: intelligent monitoring module; S104: early-warning module;
S201: right RGB and TOF binocular camera; S202: left RGB and TOF binocular camera; S203: rear RGB and TOF binocular camera; S204: automobile interior rear-view mirror;
FIG. 2 is a schematic view of a vehicle camera mounting position and a vehicle interior rearview mirror mounting position;
FIG. 3 is a flow chart of data processing for an intelligent monitoring module;
FIG. 4 is a diagram of the LaneNet network structure;
FIG. 5 is a diagram of a standard convolution block structure;
FIG. 6 is a diagram of a deep convolution block structure;
FIG. 7 is a schematic diagram of a point-by-point convolution block structure;
FIG. 8 is a view showing the structure of AF-FPN.
Detailed Description
The invention is described in further detail below with reference to the following figures and detailed description:
as shown in fig. 1, an easy-to-install intelligent lane-changing auxiliary system includes a data acquisition module S101, a video display module S102, an intelligent monitoring module S103, and an early warning module S104;
the data acquisition module S101 acquires video image data and depth data of the areas behind and beside the vehicle;
the video display module S102 uses the automobile interior rear-view mirror to display video images and early-warning signals;
the intelligent monitoring module S103 extracts lane-line data and vehicle data from the video images, calculates the relative speed, acceleration and distance between the driven vehicle and the surrounding vehicles from the depth data, uses these data to judge whether the vehicle body deviates, and monitors whether a steering operation of the vehicle would be dangerous;
the early warning module S104 is used for sending an early warning signal;
As shown in fig. 2, the data acquisition module S101 uses RGB and TOF binocular cameras, namely a right RGB and TOF binocular camera S201, a left RGB and TOF binocular camera S202, and a rear RGB and TOF binocular camera S203. The data acquisition module acquires video image data and depth data through these three cameras, transmits the data to the intelligent monitoring module S103, and outputs the image data to the automobile interior rear-view mirror S204.
The video display module S102 displays the video image data acquired by the RGB cameras and the early-warning signals sent by the early-warning module, using the automobile interior rear-view mirror S204 as the display and switching between the image feeds of the RGB cameras at different positions according to the steering direction of the car.
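As a minimal illustration of the feed-switching logic just described (the direction-to-camera mapping and the function name are hypothetical sketches; only the reference signs come from fig. 2):

```python
# Hypothetical mapping from the detected steering direction to the camera
# whose feed the interior rear-view mirror display (S204) should show.
CAMERA_BY_DIRECTION = {
    "left": "S202",      # left RGB and TOF binocular camera
    "right": "S201",     # right RGB and TOF binocular camera
    "straight": "S203",  # rear RGB and TOF binocular camera
}

def select_display_feed(steering_direction):
    """Return the reference sign of the camera whose feed is displayed,
    falling back to the rear camera for any unrecognised direction."""
    return CAMERA_BY_DIRECTION.get(steering_direction, "S203")
```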
As shown in fig. 3, the intelligent monitoring module S103 includes a lane line extraction unit, a vehicle extraction unit, a distance calculation unit, a relative speed calculation unit, an acceleration calculation unit, a steering analysis unit, a road deviation prediction unit, and a collision prediction unit;
The lane-line extraction unit extracts lane-line images from the video images through an improved LaneNet and calculates the vehicle steering angle from the lane-line images:
As shown in fig. 4, the LaneNet network is split into two branches: one performs semantic segmentation, the other produces a vector (embedding) representation of each pixel; finally the results of the two branches are clustered to obtain the instance segmentation result.
Embed branch: the vector-representation branch; Segmentation branch: the semantic-segmentation branch.
It should be noted that semantic segmentation performs a binary classification of pixels, judging whether each pixel belongs to a lane line or to the background; the vector representation embeds each pixel, representing image features in an embedding space and mapping the relationships between features in that space; and clustering combines the results of the two branches to obtain the instance segmentation result.
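The clustering step is not specified further in the patent. As a stand-in, a minimal sketch that groups lane-pixel embeddings by a simple distance threshold (a greedy simplification of the clustering LaneNet actually uses; all names are hypothetical) could be:

```python
import numpy as np

def cluster_lane_embeddings(embeddings, lane_mask, dist_thresh=0.5):
    """Greedy stand-in for LaneNet's clustering step: assign each lane
    pixel's embedding to the nearest existing cluster centre, opening a
    new cluster (lane instance) when no centre is within dist_thresh.

    embeddings: (H, W, D) float array from the embedding branch.
    lane_mask:  (H, W) bool array from the semantic-segmentation branch.
    Returns a dict mapping (row, col) -> lane-instance id."""
    centres, labels = [], {}
    for r, c in zip(*np.nonzero(lane_mask)):
        e = embeddings[r, c]
        dists = [np.linalg.norm(e - ctr) for ctr in centres]
        if dists and min(dists) < dist_thresh:
            labels[(r, c)] = int(np.argmin(dists))
        else:
            labels[(r, c)] = len(centres)   # open a new lane instance
            centres.append(e)
    return labels
```

A trained embedding branch is pushed to make same-lane pixels close and different-lane pixels far apart, which is exactly what this threshold test exploits.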
As shown in fig. 5, 6 and 7, D k -convolution kernel size; m-input channel number; n-number of output channels;
in order to enhance the performance of the algorithm, reduce the operation time of the algorithm and improve the response speed of the equipment, the example decomposes the standard convolution blocks in Lane Net into deep convolution and point-by-point convolution, and the corresponding convolution formula is as follows:
Standard convolution:

G_{k,l,n} = Σ_{i,j,m} K_{i,j,m,n} · F_{k+i-p, l+j-p, m}

where G_{k,l,n} is the feature map generated by the standard convolution, with k the row index, l the column index and n the output channel; K_{i,j,m,n} is the convolution kernel applied to the feature map F, with i and j indexing the i-th row and j-th column of the kernel, m the input channel and n the output channel; F_{k+i-p, l+j-p, m} is the input feature map, with p the number of padding pixels. The sum accumulates the contributions of all kernel positions and input channels into the feature at row k, column l of output channel n.

Depthwise convolution:

Ĝ_{k,l,m} = Σ_{i,j} K̂_{i,j,m} · F_{k+i-p, l+j-p, m}

where Ĝ_{k,l,m} is the feature map generated by the depthwise convolution, with k the row index and l the column index; K̂_{i,j,m} is the depthwise kernel whose m-th filter is applied only to the m-th channel of F; F_{k+i-p, l+j-p, m} is the input feature map, with p the number of padding pixels.

After the improvement, the computation of the network is reduced in the ratio

(D_K · D_K · M · D_F · D_F + M · N · D_F · D_F) / (D_K · D_K · M · N · D_F · D_F) = 1/N + 1/D_K²

where D_K is the convolution kernel size, M the number of input feature-map channels, D_F the input feature-map size and N the number of output feature-map channels.
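The reduction ratio can be checked numerically by counting multiply-accumulate operations for each factorization; a small sketch (function names are illustrative):

```python
def standard_conv_macs(dk, m, n, df):
    """Multiply-accumulates of a standard convolution: D_K^2 * M * N * D_F^2."""
    return dk * dk * m * n * df * df

def separable_conv_macs(dk, m, n, df):
    """Depthwise (D_K^2 * M * D_F^2) plus pointwise (M * N * D_F^2) MACs."""
    return dk * dk * m * df * df + m * n * df * df

def reduction_ratio(dk, m, n, df):
    """Cost of the separable form relative to the standard form: 1/N + 1/D_K^2."""
    return separable_conv_macs(dk, m, n, df) / standard_conv_macs(dk, m, n, df)
```

For a 3x3 kernel with 64 output channels the separable form needs only about 1/64 + 1/9, roughly 12.7 %, of the original computation, independent of M and D_F.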
The vehicle extraction unit is used for identifying vehicle data in the video image through a modified YOLOv5 network and classifying different vehicles;
the vehicle classification classifies vehicles into six classifications of sedan, SUV, MPV, midbus, large bus, and large truck.
As shown in fig. 8, the improved YOLOv5 is improved by using an AF-FPN architecture, which adds an Adaptive Attention Module (AAM) and a Feature Enhancement Module (FEM) on the basis of a conventional feature pyramid network. The former reduces the feature channels and reduces the loss of context information in the high-level feature map. The latter part enhances the representation of the feature pyramid, improves the reasoning speed and realizes better performance at the same time.
In FIG. 8, {C1, C2, C3, C4, C5}, {M3, M4, M5, M6} and {P3, P4, P5} are feature maps; AAM is the adaptive attention module.
The distance calculation unit is used for calculating the distance between a driving vehicle and surrounding vehicles by combining depth data acquired by the TOF camera and vehicle data acquired by the vehicle extraction unit;
the method for combining the depth data acquired by the TOF camera with the vehicle data acquired by the vehicle extraction unit comprises the following steps of firstly aligning the depth data acquired by the TOF camera with the image data acquired by the RGB camera, then calculating the vehicle distance data of the corresponding position according to the position data of the vehicle data acquired by the vehicle extraction unit, and aligning the depth data acquired by the TOF camera with the image data acquired by the RGB camera:
firstly, the pixel points of the depth map are compared
Figure BDA0003711421590000053
And when the depth coordinate system is restored, the formula is as follows:
Figure BDA0003711421590000054
in the formula (I), the compound is shown in the specification,
Figure BDA0003711421590000055
is a space point under a coordinate system, wherein d represents a depth coordinate position, and c represents a color coordinate position; z is a Z-axis coordinate of the depth coordinate; k d Is deepMeasuring camera internal parameters, wherein d represents a depth coordinate position;
Figure BDA0003711421590000056
is a pixel at a point below the depth map, where u, v are the pixels at the u-th row and the v-th column;
The depth point $P_d$ in the depth camera coordinate system is then converted into the color camera coordinate system according to the formula:

$$P_{cc} = T_{w2c} \, T_{w2d}^{-1} \, P_d$$

where $P_{cc}$ is the spatial point in the color camera coordinate system (the subscript $cc$ denotes a depth point of the world coordinate system converted into the color camera coordinate system); $T_{w2c}$ is the extrinsic matrix of the color camera in the same scene ($w2c$ denotes the transformation from world coordinates to color coordinates); and $T_{w2d}$ is the extrinsic matrix of the depth camera in the same scene ($w2d$ denotes the transformation from world coordinates to depth coordinates).
Finally, the depth point $P_{cc}$ in the color camera coordinate system is mapped onto the color plane $Z = 1$ according to the formula:

$$p_c = \frac{1}{Z} \, K_c \, P_{cc}$$

where $p_c = (u_c, v_c, 1)^T$ is the pixel in the $u$-th row and $v$-th column of the color map (the subscript $c$ denotes the color coordinate); $K_c$ is the intrinsic matrix of the RGB camera; $P_{cc}$ is the spatial point in the color camera coordinate system; and $Z$ is the Z-axis coordinate of the depth point.
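The three-step alignment described above (back-project the depth pixel, transfer it into the color camera frame, project onto the color plane) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the intrinsic matrix and identity extrinsics in the example are hypothetical placeholder values, not calibration data of any actual camera.

```python
import numpy as np

def align_depth_to_color(u_d, v_d, Z, K_d, K_c, T_w2d, T_w2c):
    """Map a depth-map pixel (u_d, v_d) with depth Z to color-image pixel coords."""
    # Step 1: restore the depth pixel to the depth camera coordinate system:
    # P_d = Z * K_d^{-1} * (u_d, v_d, 1)^T
    p_d = np.array([u_d, v_d, 1.0])
    P_d = Z * np.linalg.inv(K_d) @ p_d
    # Step 2: convert into the color camera coordinate system via the shared
    # world frame: P_cc = T_w2c * T_w2d^{-1} * P_d (4x4 homogeneous transforms)
    P_d_h = np.append(P_d, 1.0)
    P_cc = (T_w2c @ np.linalg.inv(T_w2d) @ P_d_h)[:3]
    # Step 3: map onto the color plane Z = 1: p_c = (1/Z) * K_c * P_cc
    p_c = K_c @ (P_cc / P_cc[2])
    return p_c[0], p_c[1]

# Toy check: identical intrinsics and identity extrinsics map a pixel to itself.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
I4 = np.eye(4)
u_c, v_c = align_depth_to_color(100.0, 80.0, 2.0, K, K, I4, I4)
```

With real hardware the two intrinsic matrices and the two extrinsic transforms would come from a joint calibration of the TOF and RGB cameras.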
The relative speed calculation unit calculates the relative speed of the driven vehicle with respect to surrounding vehicles by combining the depth data acquired by the TOF camera with the vehicle data acquired by the vehicle extraction unit.
The acceleration calculation unit calculates the acceleration of surrounding vehicles by combining the depth data acquired by the TOF camera with the vehicle data acquired by the vehicle extraction unit.
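One straightforward way to realize these two units (a sketch under assumptions, not the patent's disclosed method) is to take finite differences of consecutive per-frame distance samples from the distance calculation unit; the distance values and the 1 s sampling interval below are hypothetical:

```python
def relative_speed(d_prev, d_curr, dt):
    """First finite difference of distance (m/s); negative means closing."""
    return (d_curr - d_prev) / dt

def relative_acceleration(d0, d1, d2, dt):
    """Second finite difference of three consecutive distance samples (m/s^2)."""
    return (d2 - 2.0 * d1 + d0) / (dt * dt)

# Hypothetical distances to a rear vehicle, sampled once per second:
v = relative_speed(30.0, 28.0, 1.0)               # -2.0 m/s (gap closing)
a = relative_acceleration(30.0, 28.0, 25.0, 1.0)  # -1.0 m/s^2 (closing faster)
```

In practice the samples would be smoothed (e.g. with a moving average or Kalman filter) before differencing, since depth noise is amplified by the second difference.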
The steering analysis unit calculates the vehicle yaw angle and yaw angular velocity from the lane line data extracted by the lane line extraction unit; if the yaw angle or the yaw angular velocity exceeds a set threshold, it is judged that the automobile is about to perform a steering operation.
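The threshold test can be sketched as below. The numeric thresholds are illustrative placeholders, since the patent does not disclose concrete values:

```python
def steering_intent(yaw_deg, yaw_rate_deg_s,
                    yaw_thresh=5.0, rate_thresh=3.0):
    """Judge that a steering operation is imminent when either the yaw angle
    or the yaw angular velocity exceeds its set threshold (magnitudes are
    used so that left and right turns are treated alike)."""
    return abs(yaw_deg) > yaw_thresh or abs(yaw_rate_deg_s) > rate_thresh
```

For example, `steering_intent(6.0, 0.0)` flags an intent from the angle alone, while a small angle with a small rate, such as `steering_intent(1.0, 1.0)`, does not.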
The road deviation prediction unit judges whether the driven vehicle deviates from its lane of travel using the lane line data extracted by the lane line extraction unit and the yaw angle data acquired by the steering analysis unit.
The collision prediction unit calculates the probability of a collision between the driven vehicle and surrounding vehicles from the lane line data acquired by the lane line extraction unit, the distance data acquired by the distance calculation unit, the relative speed data acquired by the relative speed calculation unit, the acceleration data of surrounding vehicles acquired by the acceleration calculation unit, and the yaw angle and yaw angular velocity data acquired by the steering analysis unit.
The early warning module makes its judgment from the data obtained by the road deviation prediction unit and the collision prediction unit, and issues separate warnings for the two conditions of road deviation and collision.
The judgment from the data obtained by the collision prediction unit classifies the different types of vehicles around the driven vehicle; the maximum collision probability thresholds for sedans, SUVs, MPVs, midbuses, large buses, and large trucks are set respectively as: 0.35, 0.25, 0.2. If the collision probability of the corresponding predicted vehicle category exceeds its set maximum collision probability threshold, a collision early warning signal is sent.
The road deviation warning is indicated by a flashing yellow display on the video display module;
the collision warning is indicated by a flashing red display on the video display module and by an audible alert through the voice device.
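The per-class thresholding can be sketched as follows. Note that the text lists six vehicle categories against three threshold values, so the full mapping below is an assumption for illustration only:

```python
# Assumed mapping of the listed thresholds onto the six vehicle categories;
# the text does not spell out the complete correspondence.
MAX_COLLISION_PROB = {
    "sedan": 0.35, "SUV": 0.35,
    "MPV": 0.25, "midbus": 0.25,
    "large_bus": 0.2, "large_truck": 0.2,
}

def collision_warning(vehicle_class, collision_prob):
    """Send a collision early warning when the predicted collision probability
    exceeds the class-specific maximum threshold."""
    return collision_prob > MAX_COLLISION_PROB[vehicle_class]
```

Larger vehicles get lower thresholds under this reading, so the system warns earlier about trucks and buses than about sedans at the same predicted probability.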
It is to be understood that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art may make modifications, alterations, additions or substitutions within the spirit and scope of the present invention.

Claims (6)

1. An easy-to-install intelligent lane-changing auxiliary system, characterized in that: it comprises a data acquisition module, a video display module, an intelligent monitoring module, and an early warning module;
the data acquisition module is configured to acquire video image data and depth data to the sides of and behind the vehicle through the cameras; the video image data and the depth data are transmitted to the intelligent monitoring module, and the image data are transmitted to the video display module;
the video display module is configured to display the video image data acquired by the camera and the early warning signal sent by the early warning module;
the intelligent monitoring module is configured to extract lane line data and vehicle data from the video images, calculate the relative speed, acceleration, and distance of the driven vehicle with respect to surrounding vehicles from the depth data, use these data to judge whether the vehicle body is deviating, and monitor whether danger exists when the vehicle performs a steering operation;
an early warning module configured to transmit an early warning signal.
2. The easy-to-install intelligent lane-changing auxiliary system of claim 1, wherein: the cameras comprise a left camera, a right camera, and a rear camera, each being a combined RGB and TOF binocular camera; the left RGB-TOF binocular camera is located below the left rear-view mirror of the automobile, the right RGB-TOF binocular camera is located below the right rear-view mirror, and the rear RGB-TOF binocular camera is located above the rear license plate.
3. The easy-to-install intelligent lane-changing auxiliary system of claim 1, wherein: the video display module displays the video image data acquired by the RGB cameras and the early warning signals sent by the early warning module; the video display module uses the automobile rear-view mirror as a display and switches among the image data acquired by the RGB cameras at different positions according to the steering direction of the automobile.
4. The easy-to-install intelligent lane-changing auxiliary system of claim 1, wherein: the intelligent monitoring module comprises a lane line extraction unit, a vehicle extraction unit, a distance calculation unit, a relative speed calculation unit, an acceleration calculation unit, a steering analysis unit, a road deviation prediction unit and a collision prediction unit;
a Lane line extraction unit configured to extract a Lane line image in the video image by Lane Net and calculate a vehicle steering angle from the Lane line image;
a vehicle extraction unit configured to identify vehicle data in the video image through a YOLOv5 network and classify different vehicles;
a distance calculation unit configured to calculate a distance between the driven vehicle and the surrounding vehicle by combining the depth data acquired by the TOF camera and the vehicle data acquired by the vehicle extraction unit;
a relative speed calculation unit configured to calculate a relative speed of the driven vehicle and the surrounding vehicle by combining the depth data acquired by the TOF camera and the vehicle data acquired by the vehicle extraction unit;
an acceleration calculation unit configured to calculate an acceleration of the surrounding vehicle by combining the depth data acquired by the TOF camera with the vehicle data acquired by the vehicle extraction unit;
a steering analysis unit configured to calculate a vehicle yaw angle and a yaw angular velocity from the lane line data extracted by the lane line extraction unit, and determine that the vehicle is about to perform a steering operation if the yaw angle or the yaw angular velocity is greater than a set threshold;
a road deviation prediction unit configured to determine whether the driven vehicle deviates from its lane of travel using the lane line data extracted by the lane line extraction unit and the yaw angle data acquired by the steering analysis unit;
a collision prediction unit configured to calculate the probability of a collision between the driven vehicle and surrounding vehicles from the lane line data acquired by the lane line extraction unit, the distance data acquired by the distance calculation unit, the relative speed data acquired by the relative speed calculation unit, the acceleration data of surrounding vehicles acquired by the acceleration calculation unit, and the yaw angle and yaw angular velocity data acquired by the steering analysis unit.
5. The easy-to-install intelligent lane-changing auxiliary system of claim 1, wherein: the early warning module makes its judgment from the data acquired by the road deviation prediction unit and the collision prediction unit, and issues separate warnings for the two conditions of road deviation and collision;
the judgment from the data acquired by the collision prediction unit classifies the different types of vehicles around the driven vehicle; if the collision probability of the corresponding predicted vehicle category exceeds the set maximum collision probability threshold, a collision early warning signal is sent;
the road deviation warning is indicated by a flashing yellow display on the video display module;
the collision warning is indicated by a flashing red display on the video display module and by an audible alert through the voice device.
6. The easy-to-install intelligent lane-changing auxiliary system of claim 1, wherein: the vehicles are classified into six categories: sedans, SUVs, MPVs, midbuses, large buses, and large trucks.
CN202210727466.6A 2022-06-24 2022-06-24 Intelligent lane changing auxiliary system easy to install Pending CN115123237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210727466.6A CN115123237A (en) 2022-06-24 2022-06-24 Intelligent lane changing auxiliary system easy to install

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210727466.6A CN115123237A (en) 2022-06-24 2022-06-24 Intelligent lane changing auxiliary system easy to install

Publications (1)

Publication Number Publication Date
CN115123237A true CN115123237A (en) 2022-09-30

Family

ID=83379030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210727466.6A Pending CN115123237A (en) 2022-06-24 2022-06-24 Intelligent lane changing auxiliary system easy to install

Country Status (1)

Country Link
CN (1) CN115123237A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116107320A (en) * 2023-04-12 2023-05-12 长沙市宽华通信科技有限公司 Vision-based automatic driving method, system, electronic equipment and storage medium
CN116107320B (en) * 2023-04-12 2023-08-25 长沙市宽华通信科技有限公司 Vision-based automatic driving method, system, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109190523B (en) Vehicle detection tracking early warning method based on vision
CN104115198B (en) Vehicle collaborates accessory system and method
CN108638999B (en) Anti-collision early warning system and method based on 360-degree look-around input
CN103448653A (en) Vehicle collision warning system and method
US11042996B2 (en) Recognition apparatus
CN108944668B (en) Auxiliary driving early warning method based on vehicle-mounted 360-degree look-around input
CN112382115B (en) Driving risk early warning device and method based on visual perception
CN113147733B (en) Intelligent speed limiting system and method for automobile in rain, fog and sand dust weather
CN112084232A (en) Vehicle driving risk assessment method and device based on visual field information of other target vehicles
CN102303563A (en) System and method for prewarning front vehicle collision
DE112021002953T5 (en) INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND PROGRAM
CN115123237A (en) Intelligent lane changing auxiliary system easy to install
Cualain et al. Multiple-camera lane departure warning system for the automotive environment
CN109552319B (en) Night intelligent auxiliary driving system and method
CN114932902A (en) Ghost probe early warning avoidance method and system based on Internet of vehicles technology
EP3703359A1 (en) Image processing device, image capturing device, driving support device, moving body, and image processing method
CN114043989A (en) Recursive graph and convolutional neural network-based driving style recognition model, lane change decision model and decision method
CN210760742U (en) Intelligent vehicle auxiliary driving system
CN112389454A (en) Error isolation of sensing systems in autonomous driving/active safety vehicles
Monwar et al. Vision-based potential collision detection for reversing vehicle
US11919451B2 (en) Vehicle data display system
CN113635845B (en) Integrated assistant driving system and working machine
DE102022123822A1 (en) INTELLIGENT VEHICLE SYSTEMS AND CONTROL LOGIC FOR THE EXPANSION OF THE ENVIRONMENTAL VIEW THROUGH RECOGNITION OF OBJECT MODELS
CN110889409B (en) Method, device, equipment and storage medium for monitoring and optimizing automobile radar
CN112926404B (en) Active interactive human-vehicle passing system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination