CN110386065B - Vehicle blind area monitoring method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110386065B
CN110386065B
Authority
CN
China
Prior art keywords
vehicle
lane
dynamic target
view image
lane line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810362009.5A
Other languages
Chinese (zh)
Other versions
CN110386065A (en)
Inventor
何敏政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BYD Co Ltd filed Critical BYD Co Ltd
Priority to CN201810362009.5A
Publication of CN110386065A
Application granted
Publication of CN110386065B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/802: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vehicle blind area monitoring method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring obstacle information around the vehicle and road information of the vehicle; constructing a position relation graph according to the road information; extracting a dynamic target from the obstacle information and marking the dynamic target in the position relation graph; and judging, according to the state information of the dynamic target and the state information of the vehicle, whether the current lane change of the vehicle carries a risk, and if so, controlling the vehicle to execute a risk control strategy. By this method, lane change risk assessment can be performed by fusing the road information with the position information of the dynamic target, which effectively reduces the false detection probability under complex road conditions, thereby improving the accuracy of lane change risk judgment and the performance of the blind area monitoring system.

Description

Vehicle blind area monitoring method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of vehicle control, in particular to a method and a device for monitoring a vehicle blind area, computer equipment and a storage medium.
Background
With the development of the social economy and the improvement of people's living standards, more and more families own automobiles, and the number of automobiles in China has increased rapidly. While automobiles bring convenience to people's lives, they have also caused an increase in traffic accidents, so users' requirements for various vehicle-mounted active safety systems have risen.
Conventional vehicle-mounted active safety systems include, for example, the Electronic Stability Program (ESP), Vehicle Stability Assist (VSA), and the lane change blind area monitoring system.
The lane change blind area monitoring system is an important part of the vehicle-mounted active safety system: it judges the risk of changing lanes. However, the existing lane change blind area monitoring system relies on a single judgment basis; under complex road conditions its risk judgment accuracy is low and its blind area detection performance is poor.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present invention is to provide a vehicle blind area monitoring method. Obstacle information around a vehicle and road information of the vehicle are obtained, a position relation graph is constructed according to the road information, a dynamic target is extracted from the obstacle information and marked in the position relation graph, and the lane change risk is predicted according to the state information of the dynamic target and the state information of the vehicle; when the lane change carries a risk, the vehicle is controlled to execute a risk control strategy. Lane change risk assessment can thus be performed by fusing the lane line information with the position information of the dynamic target, which effectively reduces the false detection probability under complex road conditions, thereby improving the accuracy of lane change risk judgment and the performance of the blind area monitoring system.
A second object of the present invention is to provide a vehicle blind area monitoring device.
A third object of the invention is to propose a computer device.
A fourth object of the invention is to propose a non-transitory computer-readable storage medium.
A fifth object of the invention is to propose a computer program product.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides a method for monitoring a vehicle blind area, including:
acquiring obstacle information around a vehicle and road information of the vehicle;
constructing a position relation graph according to the road information;
extracting a dynamic target from the obstacle information, and marking the dynamic target in the position relation graph;
and judging whether the current lane change of the vehicle has risks or not according to the state information of the dynamic target and the state information of the vehicle, and controlling the vehicle to execute a risk control strategy if the current lane change of the vehicle has risks.
According to the vehicle blind area monitoring method of the embodiment of the invention, obstacle information around the vehicle and road information of the vehicle are obtained, a position relation graph is constructed according to the obtained road information, a dynamic target is extracted from the obstacle information and marked in the position relation graph, and then whether the current lane change of the vehicle carries a risk is judged according to the state information of the dynamic target and the state information of the vehicle; when a risk exists, the vehicle is controlled to execute a risk control strategy. By extracting the dynamic target from the obstacle information, constructing the position relation graph from the road information, and marking the dynamic target in that graph, lane change risk assessment can be performed by fusing the road information with the position information of the dynamic target. This effectively reduces the false detection probability under complex road conditions, thereby improving the accuracy of lane change risk judgment and the performance of the blind area monitoring system.
In order to achieve the above object, an embodiment of a second aspect of the present invention provides a vehicle blind area monitoring device, including:
an acquisition module, configured to acquire obstacle information around a vehicle and road information of the vehicle;
the construction module is used for constructing a position relation graph according to the road information;
the marking module is used for extracting a dynamic target from the obstacle information and marking the dynamic target in the position relation graph;
and the control module is used for judging whether the current lane change of the vehicle has risks or not according to the state information of the dynamic target and the state information of the vehicle, and controlling the vehicle to execute a risk control strategy if the current lane change of the vehicle has risks.
According to the vehicle blind area monitoring device of the embodiment of the invention, obstacle information around the vehicle and road information of the vehicle are acquired, a position relation graph is constructed according to the acquired road information, a dynamic target is extracted from the obstacle information and marked in the position relation graph, and then whether the current lane change of the vehicle carries a risk is judged according to the state information of the dynamic target and the state information of the vehicle; when a risk exists, the vehicle is controlled to execute a risk control strategy. By extracting the dynamic target from the obstacle information, constructing the position relation graph from the road information, and marking the dynamic target in that graph, lane change risk assessment can be performed by fusing the road information with the position information of the dynamic target. This effectively reduces the false detection probability under complex road conditions, thereby improving the accuracy of lane change risk judgment and the performance of the blind area monitoring system.
In order to achieve the above object, an embodiment of a third aspect of the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method for monitoring a blind area of a vehicle according to the embodiment of the first aspect is implemented.
To achieve the above object, a fourth aspect of the present invention provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the method for monitoring a blind area of a vehicle according to the first aspect.
To achieve the above object, an embodiment of a fifth aspect of the present invention provides a computer program product, wherein when the instructions of the computer program product are executed by a processor, the method for monitoring the blind area of the vehicle according to the embodiment of the first aspect is implemented.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 illustrates an exemplary system architecture to which an embodiment of a vehicle blind zone monitoring method or apparatus of the present invention may be applied;
FIG. 2 is a schematic flow chart illustrating a method for monitoring a vehicle blind area according to an embodiment of the present invention;
FIG. 3 is a schematic plan view of a vehicle coordinate system XOY;
fig. 4 is a schematic flow chart of another method for monitoring a lane change blind area according to an embodiment of the present invention;
fig. 5 is a schematic flow chart of another method for monitoring a lane change blind area according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart illustrating the labeling of dynamic objects in a location relationship diagram;
FIG. 7(a) is a first lane schematic for marking a dynamic target;
FIG. 7(b) is a second lane schematic for marking a dynamic target;
FIG. 7(c) is a third lane schematic for marking a dynamic target;
FIG. 7(d) is a fourth lane schematic for marking a dynamic target;
fig. 8 is a schematic flowchart of a monitoring method for a lane change blind area according to another embodiment of the present invention;
fig. 9 is a schematic structural diagram of a monitoring device for a lane change blind area according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of another lane change blind area monitoring device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another lane change blind area monitoring device according to an embodiment of the present invention; and
fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
A method, an apparatus, a computer device, and a storage medium for monitoring a vehicle blind area according to embodiments of the present invention are described below with reference to the accompanying drawings.
The existing lane change blind area monitoring system judges whether a lane change carries a risk based on a single factor. In one scheme, a millimeter wave radar is installed on each of the left rear and right rear sides of the vehicle, and the radars detect whether a vehicle is present in the lane change direction. This scheme cannot identify lane lines or road traffic signs, and when the vehicle runs on a road with complex conditions, reflections from objects outside the lane easily cause false detections that affect the blind area detection result. In another scheme, a camera collects images for blind area monitoring, but the collected images are only displayed: the driver must judge the lane change blind area risk from the displayed images. This is highly subjective and does little to improve the performance of the lane change blind area monitoring system itself.
In view of the above problems, an embodiment of the present invention provides a method for monitoring a lane change blind area that reduces the false detection probability under complex road conditions, so as to improve the accuracy of lane change risk judgment and the performance of the blind area monitoring system.
Fig. 1 shows an exemplary system architecture to which an embodiment of a vehicle blind zone monitoring method or apparatus of the present invention may be applied.
As shown in fig. 1, the system architecture may include a left rear side blind area radar 110, a right rear side blind area radar 120, a camera 130, and a blind area monitoring main controller 140. The left rear side blind area radar 110 and the right rear side blind area radar 120 are connected through a Controller Area Network (CAN) interface to form a CAN subnet, and the CAN interface of one of the blind area radars is connected to the blind area monitoring main controller 140. Either the left rear blind area radar 110 or the right rear blind area radar 120 may be connected to the blind area monitoring main controller 140 through its CAN interface; fig. 1 illustrates the connection through the CAN interface of the left rear blind area radar 110, which does not limit the present invention.
The camera 130 is the rear-view camera of the vehicle, specifically a 720p high-definition camera. The camera 130 is connected to the blind spot monitoring main controller 140 through a Low-Voltage Differential Signaling (LVDS) line and transmits digital image data frames to the controller at 30 fps (30 frames per second); the transmitted frames may be in, for example, RGB or YUV format.
The blind area monitoring main controller 140 uses a Digital Signal Processor (DSP) as its core device; first-level and second-level on-chip memories are integrated on the DSP chip. The blind area monitoring main controller 140 further includes a CAN transceiver, an LVDS signal deserializer, a Double Data Rate (DDR) memory, a Flash program memory, a power chip, and the like. A software program implementing the lane change blind area monitoring method runs on the DSP, and the blind area monitoring main controller 140 realizes the method by executing this program.
Fig. 2 is a flowchart illustrating a method for monitoring a vehicle blind area according to an embodiment of the present invention, which may be executed by the blind area monitoring main controller 140 (hereinafter referred to as a controller) shown in fig. 1.
As shown in fig. 2, the monitoring method of the vehicle blind area may include the steps of:
step 101, obtaining obstacle information around the vehicle and road information of the vehicle.
In this embodiment, as shown in fig. 1, blind area radars are respectively installed on the left rear side and the right rear side of the vehicle, and the blind area radars on the left rear side and the right rear side collect information of obstacles around the vehicle and send the information to the controller through the CAN interface. Meanwhile, a rear-view camera mounted on the vehicle collects a rear-view image behind the vehicle and sends the rear-view image to the controller through the LVDS interface, and the controller can acquire road information of the vehicle according to the rear-view image collected by the rear-view camera. The road information of the vehicle may be the road information of the lane where the vehicle is located.
In a possible implementation manner of the embodiment of the present invention, after the controller acquires the obstacle information around the vehicle and the road information of the lane where the vehicle is located, the controller may cache the acquired obstacle information and road information, and synchronize the cached obstacle information and road information in time.
In a specific implementation, by executing an on-chip software program, the DSP processor may cache the detection data (obstacle information) from the left and right rear side blind area radars in a radar First In First Out (FIFO) data area, and cache the rear view images (road information) in an image FIFO data area. In general, the update rate of the blind area radar detection data is lower than that of the high-definition rear-view image, so separate radar and image FIFO data areas are set up to cache them. This enables time synchronization between the detection data and the rear-view image, and between data reception and data calculation. That is, while the DSP is processing a rear view image frame and its detection data, it can still receive subsequent blind area radar detection data through the CAN interface and rear view image frames through the LVDS interface, and the newly received data will not overwrite or corrupt the data being processed.
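The dual-FIFO buffering and time synchronization described above can be sketched as follows. This is an illustrative Python sketch, not the patent's DSP code: the class and field names are invented, and each sensor stream keeps its own timestamped ring buffer so the slow radar stream can be paired with the image frame closest in time.

```python
import bisect
from collections import deque

class SensorFifo:
    """Timestamped ring buffer for one sensor stream (a 'FIFO data area')."""
    def __init__(self, maxlen=64):
        # Bounded deque: old frames fall off the tail, so newly received
        # data never overwrites a frame that is still being processed.
        self.buf = deque(maxlen=maxlen)

    def push(self, ts, payload):
        self.buf.append((ts, payload))

    def nearest(self, ts):
        """Return the buffered (timestamp, payload) closest in time to ts."""
        if not self.buf:
            return None
        times = [t for t, _ in self.buf]           # already in arrival order
        i = bisect.bisect_left(times, ts)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        return self.buf[min(candidates, key=lambda k: abs(times[k] - ts))]

radar_fifo = SensorFifo()   # slow stream: blind area radar detections
image_fifo = SensorFifo()   # fast stream: 30 fps rear-view frames

radar_fifo.push(0.00, "radar-frame-A")
radar_fifo.push(0.10, "radar-frame-B")
for k in range(7):
    image_fifo.push(k / 30.0, f"image-frame-{k}")

# Pair the newest image frame with the radar sample closest in time.
ts, frame = image_fifo.buf[-1]
paired = radar_fifo.nearest(ts)    # (0.10, "radar-frame-B")
```

The bounded buffers mirror the property the text emphasizes: reception can continue while older entries are being processed, without overwriting in-flight data.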
And 102, constructing a position relation graph according to the road information.
The road information may be two lane lines of a lane in which the vehicle is located, and the position relationship diagram may be a position relationship diagram between the vehicle and the lane lines.
In this embodiment, after the controller acquires the rear view image from the rear-view camera, an efficient image detection algorithm can be run using the first-level or second-level on-chip memory of the DSP processor; the rear view image is processed by this algorithm to identify the two lane lines of the lane where the vehicle is located.
Furthermore, after the lane line of the lane where the vehicle is located is identified, the identified lane line may be projected into a pre-established XOY plane to construct a positional relationship diagram between the vehicle and the lane line.
The pre-established XOY plane is shown in fig. 3. When the XOY plane is established, the origin of the XOY coordinate system is located at the center of the rear axle of the vehicle; the X axis runs along the rear axle and points to the right side of the vehicle; the Y axis is perpendicular to the rear axle and points directly behind the vehicle.
Based on the established XOY plane, the identified lane lines can be mapped in the XOY plane, and a position relation graph between the vehicle and the lane lines is obtained.
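As a sketch of this projection step, the mapping from rear-view image pixels to the XOY ground plane can be written as a planar homography. The matrix values below are purely hypothetical placeholders; in practice they come from camera calibration, which the patent does not detail.

```python
import numpy as np

# Hypothetical 3x3 homography H mapping rear-view image pixels (u, v, 1)
# to ground-plane XOY coordinates in metres. Real values would come from
# calibration of the rear-view camera; these are illustrative only.
H = np.array([[0.01,  0.0,  -6.4],
              [0.0,  -0.02, 14.4],
              [0.0,   0.0,   1.0]])

def image_to_xoy(points_uv):
    """Project detected lane-line pixels into the vehicle XOY plane."""
    pts = np.hstack([points_uv, np.ones((len(points_uv), 1))])  # homogeneous
    ground = (H @ pts.T).T
    return ground[:, :2] / ground[:, 2:3]   # normalise by the w component

# A few pixels sampled along a detected lane line, as (u, v):
lane_pixels = np.array([[400.0, 700.0], [410.0, 600.0], [420.0, 500.0]])
lane_xoy = image_to_xoy(lane_pixels)
```

Mapping every detected lane-line pixel this way yields the positions used to draw the lane lines in the position relation graph.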
And 103, extracting the dynamic target from the obstacle information, and marking the dynamic target in the position relation graph.
In this embodiment, according to the acquired obstacle information, the controller may extract a dynamic target from the obstacle information, and mark the dynamic target in the position relationship diagram. The dynamic target may be, for example, a vehicle other than the current vehicle.
It should be noted that, the specific implementation process of marking the dynamic target in the position relationship diagram will be given in the following, and will not be described in detail here.
And 104, judging whether the current lane change of the vehicle has risks or not according to the state information of the dynamic target and the state information of the vehicle, and controlling the vehicle to execute a risk control strategy if the current lane change of the vehicle has risks.
The state information includes, but is not limited to, position information, velocity information, acceleration information, and the like.
In this embodiment, whether the current lane change of the vehicle is risky or not can be determined according to the state information of the dynamic target and the state information of the vehicle.
Specifically, the relative positional relationship between the dynamic target and the vehicle may be determined from their state information. For example, the relative distance and relative position between the dynamic target and the current vehicle may be determined from their position information; the relative velocity of the dynamic target with respect to the current vehicle from their velocity information; and the relative acceleration from their acceleration information. Then, according to one or more of the relative position, relative distance, relative velocity, and relative acceleration between the dynamic target and the current vehicle, whether a risk exists when the current vehicle changes lanes can be judged.
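The judgment described above can be illustrated with a minimal sketch. The gap and time-to-collision thresholds below are invented for illustration; the patent only states that one or more of the relative quantities are combined in the judgment.

```python
import math

def lane_change_risky(ego, target, gap_min=5.0, ttc_min=3.0):
    """Return True if changing lanes toward `target` looks risky.

    `ego` and `target` are dicts with 'pos' (x, y) in the XOY plane and
    'speed' in m/s. The thresholds gap_min and ttc_min are illustrative
    assumptions, not values from the patent.
    """
    dx = target["pos"][0] - ego["pos"][0]
    dy = target["pos"][1] - ego["pos"][1]
    gap = math.hypot(dx, dy)                      # relative distance
    closing = target["speed"] - ego["speed"]      # > 0: target catching up
    if gap < gap_min:
        return True                               # already too close
    if closing > 0 and gap / closing < ttc_min:   # closing too fast
        return True
    return False

ego = {"pos": (0.0, 0.0), "speed": 20.0}
rear_car = {"pos": (3.0, 12.0), "speed": 25.0}    # in the adjacent lane, behind
risky = lane_change_risky(ego, rear_car)          # True: closing at 5 m/s
```

A fuller version could also weigh relative acceleration, exactly as the text lists among the usable quantities.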
When it is determined that there is a risk of lane change, the vehicle may be controlled to execute a risk control strategy. For example, the relevant control instruction CAN be sent to the entire vehicle CAN network according to the risk assessment result, and the relevant execution device on the vehicle executes the control instruction, so that risk control is realized.
According to the vehicle blind area monitoring method of this embodiment, obstacle information around the vehicle and road information of the vehicle are obtained, a position relation graph is constructed according to the obtained road information, a dynamic target is extracted from the obstacle information and marked in the position relation graph, whether the current lane change of the vehicle carries a risk is judged according to the state information of the dynamic target and the state information of the vehicle, and the vehicle is controlled to execute a risk control strategy when a risk exists. By extracting the dynamic target from the obstacle information, constructing the position relation graph from the road information, and marking the dynamic target in that graph, lane change risk assessment can be performed by fusing the road information with the position information of the dynamic target. This effectively reduces the false detection probability under complex road conditions, thereby improving the accuracy of lane change risk judgment and the performance of the blind area monitoring system.
In order to more clearly describe a specific implementation process of constructing a position relationship diagram according to road information in the foregoing embodiment, an embodiment of the present invention provides another method for monitoring a lane change blind area, and fig. 4 is a schematic flow diagram of another method for monitoring a lane change blind area provided in an embodiment of the present invention. In this embodiment, the road information is a rear view image of a rear view camera.
As shown in fig. 4, based on the embodiment shown in fig. 2, step 102 may include the following steps:
step 201, graying the rear view image to obtain a grayscale rear view image.
In this embodiment, when the lane line of the lane where the vehicle is located is identified from the rear-view image, the rear-view image may be grayed first to obtain a grayscale rear-view image.
As a possible implementation manner, the initial position of the rear-view image may be determined first, and then, according to the format of the rear-view image, the luminance value of each pixel point in the rear-view image is extracted from the initial position as the gray value of the pixel point, so as to generate a gray rear-view image.
For example, for a rear view image whose data frame is in the YUV420SP format, one frame consists of 1280×720 Y values at the front, followed by 1280×180 U values and 1280×180 V values arranged alternately. Y represents luminance, i.e., the gray value; U and V represent chrominance, describing the color and saturation of the image. Therefore, the first address, which is the starting position of one frame of the rear-view image, is determined, and the 1280×720 Y values are extracted from that address to form the grayscale rear-view image, realizing the graying of the color rear-view image.
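Extracting the leading Y plane as the grayscale image can be sketched as follows (NumPy is used for illustration; the patent's implementation runs as a DSP program):

```python
import numpy as np

WIDTH, HEIGHT = 1280, 720

def gray_from_yuv420sp(frame_bytes, width=WIDTH, height=HEIGHT):
    """Take the leading Y (luma) plane of a YUV420SP frame as the
    grayscale image; the trailing interleaved U/V bytes are ignored."""
    y_plane = np.frombuffer(frame_bytes, dtype=np.uint8, count=width * height)
    return y_plane.reshape(height, width)

# Synthetic frame: Y plane followed by interleaved U/V (half the Y size).
frame = bytes(range(256)) * (WIDTH * HEIGHT * 3 // 2 // 256)
gray = gray_from_yuv420sp(frame)   # shape (720, 1280), dtype uint8
```

Because the Y plane is contiguous at the start of the frame, no per-pixel conversion is needed, which is exactly the saving the next paragraph points out.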
The brightness value is directly extracted from the rear-view image and used as the gray value of the pixel point to generate the gray rear-view image, so that the process of carrying out gray processing on the rear-view image is avoided, the data calculation amount and the memory access times are effectively reduced, and the image processing efficiency is greatly improved.
As another possible implementation, the RGB value of each pixel of the rear-view image may be obtained and weighted to give the luminance value of the pixel as its gray value, generating the grayscale rear-view image. For example, for a given pixel, the gray value can be obtained by the formula p = 0.3R + 0.59G + 0.1B, where p represents the gray value of the pixel. The gray values of all the pixels then form the grayscale rear view image.
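A sketch of this weighted-sum graying, using the weights from the formula above:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Weighted-sum graying with the weights from the text:
    p = 0.3*R + 0.59*G + 0.1*B, rounded to the nearest integer."""
    weights = np.array([0.3, 0.59, 0.1])
    return np.rint(rgb.astype(np.float64) @ weights).astype(np.uint8)

pixels = np.array([[100, 200, 50]])   # one (R, G, B) pixel
gray = rgb_to_gray(pixels)            # [153]
```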
It should be noted that, the gray value of the pixel point may also be calculated by using a maximum method, an average method, a component method, and other methods, which is not limited in the present invention.
And 202, carrying out binarization on the grayscale rear view image according to a preset binarization threshold value to obtain a binarization rear view image.
In this embodiment, after the grayscale rear view image is obtained, binarization may be performed on the grayscale rear view image according to a preset binarization threshold value to obtain a binarization rear view image.
Since a known binarization threshold value is needed to participate in the binarization of the grayscale back view image, in this embodiment, the binarization threshold value may be determined before the binarization processing is performed on the grayscale back view image.
As a possible implementation manner, a binarization threshold of the grayscale rearview image can be obtained according to a maximum inter-class difference method.
The maximum inter-class difference method, also called the Otsu method or OTSU algorithm, was proposed by Nobuyuki Otsu in 1979 as an adaptive thresholding method. The principle of the OTSU algorithm is to divide an image into foreground and background according to its gray characteristics, traverse candidate thresholds, and compute the inter-class variance between the foreground and background for each threshold; the threshold at which the inter-class variance reaches its maximum is taken as the binarization threshold.
Therefore, in this embodiment, the OTSU algorithm may be used to determine the binarization threshold of the grayscale rearview image.
As a possible implementation manner, the average gray value of the grayscale rearview image may be calculated according to the gray value of each pixel point on the grayscale rearview image, and the average gray value is used as the binarization threshold.
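The two threshold-selection options described above — the maximum inter-class difference (OTSU) method and the average gray value — can be sketched as follows. This is an illustrative implementation, not the patent's own code.

```python
import numpy as np

def otsu_threshold(gray):
    """Maximum inter-class difference (OTSU) threshold for an 8-bit image:
    traverse all thresholds and keep the one maximizing the inter-class
    variance between foreground and background."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum()                 # background pixel count
        w1 = total - w0                     # foreground pixel count
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t] * np.arange(t)).sum() / w0
        mu1 = (hist[t:] * np.arange(t, 256)).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def mean_threshold(gray):
    """Average gray value used directly as the binarization threshold."""
    return int(gray.mean())
```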
In this embodiment, after the binarization threshold is determined, the grayscale back view image may be binarized according to the binarization threshold to obtain the binarized back view image.
Specifically, when the grayscale rear-view image is binarized, the grayscale rear-view image may be divided to obtain grayscale rear-view image segments, and then, for each grayscale rear-view image segment, the grayscale rear-view image segment is convolved with a preset operator template to obtain a convolution result of the grayscale rear-view image segment. The preset operator template is shown as formula (1).
[Formula (1): the preset operator template; it appears only as an image in the original document and is not reproduced here.]
For example, for a grayscale rear view image containing 1280 × 720 Y values, the image may be divided into 18 segments of 1280 × 40 Y values each; the data of one segment at a time is transferred by Direct Memory Access (DMA) to the level-1 or level-2 on-chip memory of the DSP processor and convolved with the operator template shown in formula (1) to obtain the convolution result of that grayscale rear view image segment.
And then, determining the value of the pixel point according to the convolution result and the binarization threshold of the pixel point in the gray-scale rear-view image segment to form a binary image of the gray-scale rear-view image segment.
Specifically, when determining the value of the pixel point, the convolution result of the pixel point may be compared with a first numerical value generated by a binarization threshold for each pixel point, and the gray value of the pixel point may be compared with the binarization threshold, and if the convolution result of the pixel point is greater than the first numerical value and the gray value is greater than the binarization threshold, the gray value of the pixel point is updated to a preset first gray value, otherwise, the gray value is updated to a preset second gray value. Wherein the first gray value is 255 and the second gray value is 0. The process of determining the value of the pixel point is expressed by formula (2).
g'(x, y) = 255, if Conv(g(x, y)) > f(threshold) and g(x, y) > threshold; otherwise g'(x, y) = 0 (2)
Wherein g(x, y) represents the pixel value, i.e. the Y value, of a pixel point in the grayscale rear view image segment; threshold represents the binarization threshold; f(threshold) represents a function taking the binarization threshold as its argument, i.e. the first numerical value, which is an empirical value; and Conv(g(x, y)) represents the convolution result obtained for the pixel value g(x, y) of that pixel point in the grayscale rear view image segment.
Aiming at each pixel point in the gray-scale rear-view image segment, when the formula (2) is met, updating the gray value of the pixel point to be 255; and when the formula (2) is not satisfied, updating the gray value of the pixel point to be 0.
For a gray-scale rear-view image segment, after the gray values of all pixel points in the segment are updated, a binary image of the gray-scale rear-view image segment can be formed by using the updated pixel points.
And then, combining the binary images of the gray-scale rearview image segments to obtain a binary rearview image.
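The segment-wise binarization described above can be sketched as follows. The patent's actual operator template of formula (1) is an image in the original and is not reproduced, so a generic 3×3 Laplacian-style kernel stands in for it here, and the empirical first value f(threshold) is passed in directly; both are assumptions for illustration.

```python
import numpy as np

# Stand-in 3x3 operator template; the patent's formula (1) kernel is an
# image in the original document and is not reproduced.
KERNEL = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=np.float64)

def convolve2d(img, kernel):
    """'Same'-size 2D convolution with zero padding (kernel is symmetric,
    so the flip is omitted); no SciPy dependency."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(np.float64), ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

def binarize_segmented(gray, threshold, first_value, n_segments=4):
    """Binarize per formula (2): a pixel becomes 255 only if its convolution
    result exceeds the empirical first value AND its gray value exceeds the
    binarization threshold. Processing runs segment by segment, mimicking
    the DMA-fed DSP pipeline described in the text."""
    segments = np.array_split(gray, n_segments, axis=0)
    out = []
    for seg in segments:
        conv = convolve2d(seg, KERNEL)
        out.append(np.where((conv > first_value) & (seg > threshold), 255, 0))
    return np.vstack(out).astype(np.uint8)
```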
And step 203, carrying out edge detection on the binarized rear view image to obtain edge feature points belonging to the lane lines.
In this embodiment, after the binarized rear view image is obtained, edge detection may be performed on it to obtain the edge feature points belonging to a lane line.
Specifically, connected regions may be extracted from the binarized image, lane line regions may be screened out from the extracted connected regions according to the features of the lane lines, and then edge detection may be performed on the screened lane line regions to obtain edge feature points.
The binarized rear view image contains much redundant information, while the lane lines are usually two narrow straight lines. To highlight the lane line information, connected regions in the binarized rear view image may be marked, and white regions with too many or too few white pixel points may be filtered out to extract the lane line regions. Connected region marking assigns the same label to all pixels of a connected region; common marking algorithms include the four-neighborhood and eight-neighborhood marking algorithms, both of which belong to the prior art and need not be described again here.
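A minimal sketch of the connected-region filtering described above, using four-neighborhood labeling and an area filter; the area bounds are illustrative assumptions, not values from the patent.

```python
import numpy as np
from collections import deque

def label_regions(binary):
    """Four-neighborhood connected-region labeling via breadth-first search."""
    labels = np.zeros(binary.shape, dtype=np.int32)
    h, w = binary.shape
    current = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

def filter_by_area(binary, min_px, max_px):
    """Keep only white regions whose pixel count lies in [min_px, max_px],
    i.e. drop regions with too few or too many white pixels."""
    labels, n = label_regions(binary)
    keep = np.zeros_like(binary)
    for k in range(1, n + 1):
        area = int((labels == k).sum())
        if min_px <= area <= max_px:
            keep[labels == k] = 1
    return keep
```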
In general, the pixel gray levels near an edge point exhibit a step-like or roof-like transition; the lane line edges in a road image are the set of pixels where the gray level between the lane line and the road surface changes in a step-like or roof-like manner, which is one of the basic features of a lane line. Therefore, in this embodiment, a suitable algorithm may be used to perform edge detection on the lane line regions to obtain the edge feature points. For example, the Roberts operator, Sobel operator, Laplace operator, Kirsch operator, Prewitt operator, or Canny operator may be used to perform edge detection on the lane line regions to obtain the edge feature points.
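As an illustration of one of the listed operators, the following sketch applies the Sobel operator to a grayscale region and keeps pixels whose gradient magnitude exceeds a threshold; the magnitude threshold is an assumption for illustration.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_edges(gray, mag_threshold):
    """Mark interior pixels whose Sobel gradient magnitude exceeds
    mag_threshold as edge feature points."""
    h, w = gray.shape
    g = gray.astype(np.float64)
    edges = np.zeros((h, w), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = g[i - 1:i + 2, j - 1:j + 2]
            gx = (win * SOBEL_X).sum()   # horizontal gradient
            gy = (win * SOBEL_Y).sum()   # vertical gradient
            if np.hypot(gx, gy) > mag_threshold:
                edges[i, j] = 1
    return edges
```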
And step 204, carrying out Hough transform on the extracted edge feature points, detecting candidate straight lines, tracking the candidate straight lines, and determining a lane line from the candidate straight lines.
In this embodiment, after the edge feature points of the lane line are obtained, hough transform may be performed on the detected edge feature points to detect candidate straight lines, and then, by tracking the candidate straight lines, the lane line may be determined from the candidate straight lines.
During specific implementation, straight lines which obviously do not accord with the attribute of the lane line can be removed from the candidate straight lines, the straight lines which are most likely to belong to the lane line in the candidate straight lines are tracked, and then the lane line and the type of the lane line are finally determined according to a tracking result, wherein the type of the lane line comprises a solid line and a dotted line.
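A minimal sketch of the Hough transform step above: each edge feature point votes in (rho, theta) space, and the highest-vote cells are returned as candidate straight lines. The discretization and the vote threshold are illustrative choices, not values from the patent.

```python
import numpy as np

def hough_lines(points, shape, n_theta=180, top_k=2, min_votes=2):
    """Standard Hough transform over edge feature points (y, x).
    Returns up to top_k candidates as (rho, theta_degrees, votes)."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    for y, x in points:
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are >= 0
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    lines = []
    flat = np.argsort(acc, axis=None)[::-1]       # cells by descending votes
    for idx in flat[:top_k * 10]:
        r_idx, t_idx = np.unravel_index(idx, acc.shape)
        if acc[r_idx, t_idx] >= min_votes:
            lines.append((int(r_idx) - diag,
                          float(np.rad2deg(thetas[t_idx])),
                          int(acc[r_idx, t_idx])))
        if len(lines) == top_k:
            break
    return lines
```

In the patent's pipeline the returned candidates would then be tracked over frames before a lane line is finally confirmed.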
And step 205, constructing a position relation graph between the vehicle and the lane line according to the lane line.
In this embodiment, after two lane lines of a lane where a vehicle is located are determined, the determined lane lines may be projected onto a pre-established XOY plane to obtain a position relationship diagram between the vehicle and the lane lines.
According to the monitoring method for the lane change blind area of this embodiment, the rear view image is grayed to obtain a grayscale rear view image; the grayscale rear view image is binarized according to a preset binarization threshold to obtain a binarized rear view image; edge detection is performed on the binarized rear view image to obtain edge feature points belonging to the lane line; Hough transform is performed on the edge feature points to detect candidate straight lines; the candidate straight lines are tracked and the lane line is determined from them; and a position relationship diagram between the vehicle and the lane line is constructed.
In order to more clearly describe a specific implementation process of constructing a position relationship diagram between a vehicle and a lane line in the foregoing embodiment, an embodiment of the present invention provides a flow diagram of another method for monitoring a lane change blind area, and fig. 5 is a flow diagram of another method for monitoring a lane change blind area provided in the embodiment of the present invention.
As shown in fig. 5, based on the embodiment shown in fig. 4, step 205 may include the following steps:
step 301, determining the types of two lane lines of a lane where the vehicle is located, and the orientation relationship between each lane line and the vehicle.
In this embodiment, when determining the lane lines, the types of the two lane lines of the lane where the vehicle is located and the orientation relationship between the lane lines and the vehicle may be determined. For example, when the orientation relationship between each lane line and the vehicle is determined, if the determined lane line is close to the left side of the binarized rear view image, it may be determined that the lane line is located on the left side of the vehicle; if the determined lane line is close to the right side of the binarized rear view image, it may be determined that the lane line is located on the right side of the vehicle.
Step 302, according to the type and the azimuth relationship of each lane line, a position relationship diagram between the vehicle and the lane line is constructed.
In this embodiment, after the type of the lane line and the azimuth relationship between the lane line and the vehicle are determined, a position relationship diagram between the vehicle and the lane line may be constructed according to the type and azimuth relationship of each lane line.
During specific implementation, lane line fitting can be performed according to the number of actually detected lane lines and the types of the left and right lane lines detected for the lane where the current vehicle is located, and the lane lines are projected onto the XOY plane to obtain a position relationship diagram between the vehicle and the lane lines.
It should be noted that at most 4 lane lines of 3 lanes are fitted and projected; that is, only the lane where the current vehicle is located and the left and right lanes adjacent to it are of interest. If it is judged from the lane line detection result that the vehicle is currently driving on a road with 2 unidirectional lanes, only the lane lines of the current lane of the vehicle and of the one adjacent lane are fitted and projected onto the XOY plane; if it is judged that the vehicle is currently driving on a road with 1 unidirectional lane, only the left and right lane lines of the current lane of the vehicle are fitted and projected onto the XOY plane.
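The at-most-four-lines rule above can be sketched as a selection over the signed lateral offsets of the detected lane lines relative to the vehicle centerline (negative = left of the vehicle); the offset representation is an assumption for illustration.

```python
def select_lane_lines(offsets):
    """Keep at most 4 lane lines covering at most 3 lanes: the ego lane's
    two lines plus at most one adjacent line on each side.
    `offsets` are signed lateral offsets in meters (negative = left)."""
    left = sorted(o for o in offsets if o < 0)    # ascending: farthest left first
    right = sorted(o for o in offsets if o >= 0)  # ascending: ego-right first
    keep = []
    keep += left[-2:]   # ego-left line and at most one line beyond it
    keep += right[:2]   # ego-right line and at most one line beyond it
    return keep
```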
Based on the above embodiment, as shown in fig. 5, step 103 may include the following steps:
step 303, obtaining a dynamic target from the obstacle information.
And step 304, marking the dynamic target in the position relation diagram according to the type of each lane line, the installation position of the blind area radar and the distance between the dynamic target and the vehicle.
Wherein the types include a dotted line and a solid line.
Specifically, fig. 6 is a schematic flow chart illustrating the process of marking dynamic objects in the position relationship diagram. As shown in fig. 6, step 304 may include the steps of:
step 401, determining whether the types of the two lane lines of the lane where the vehicle is located are both solid lines.
In this embodiment, when the types of the two lane lines of the lane where the vehicle is located are both solid lines, step 402 is executed; otherwise, step 403 is performed.
And 402, marking the dynamic target in the lane where the vehicle is located in the position relation diagram according to the installation position of the blind area radar and the distance between the dynamic target and the vehicle.
When the types of the two lane lines of the lane where the vehicle is located are both solid lines, determining the dynamic targets located in the lane where the vehicle is located from all the dynamic targets according to the transverse distance in the distance of each dynamic target; the passing direction of a lane where the vehicle is located is a horizontal direction, and the direction perpendicular to the passing direction is a transverse direction; marking the dynamic target in the lane where the vehicle is located in the position relation diagram according to the distance and the installation position of the blind area radar corresponding to the dynamic target; and the dynamic target is identified from the detection data of the corresponding blind area radar. The lane marking the dynamic target is shown in fig. 7 (a).
In step 403, it is determined whether the type of the left lane line of the lane where the vehicle is located is a dashed line and the type of the right lane line is a solid line.
In this embodiment, when the type of the lane line on the left side of the lane where the vehicle is located is a dashed line and the type of the lane line on the right side is a solid line, step 404 and step 402 are executed, that is, the dynamic targets in the lane where the vehicle is located and the left lane are marked in the position relationship diagram; otherwise, step 405 is performed.
And step 404, marking the dynamic target in the left lane at the left side of the lane where the vehicle is located in the position relation diagram according to the installation position of the blind area radar and the distance between the dynamic target and the vehicle.
When the type of the left lane line of the lane where the vehicle is located is a dotted line and the type of the right lane line is a solid line, acquiring a first boundary lane line on the other side of the left lane line, and marking the first boundary lane line in the position relation diagram; the left lane line and the first boundary lane line form a left lane located on the left side of the lane where the vehicle is located. When the other lane line of the lane on the left side of the lane where the vehicle is located is detected, the lane line is used as a first boundary lane line, and otherwise, one lane line is fitted to be used as the first boundary lane line. Further, according to the transverse distance in the distance of each dynamic target, respectively determining a dynamic target in a left lane and a dynamic target in a lane where the vehicle is located from all the dynamic targets; marking the dynamic target in the left lane in a position relation diagram according to the distance and the installation position of the blind area radar corresponding to the dynamic target; and marking the dynamic target in the lane where the vehicle is located in the position relation diagram according to the distance and the installation position of the blind area radar corresponding to the dynamic target. The lane marking the dynamic target is shown in fig. 7 (b).
In step 405, it is determined whether the type of the left lane line of the lane where the vehicle is located is a solid line and the type of the right lane line is a dashed line.
In this embodiment, when the type of the lane line on the left side of the lane where the vehicle is located is a solid line and the type of the lane line on the right side is a dashed line, step 406 and step 402 are executed, that is, the dynamic targets in the lane where the vehicle is located and the right lane are marked in the position relationship diagram; otherwise, step 407 and step 402 are executed.
And 406, marking the dynamic target in the right lane on the right side of the lane where the vehicle is located in the position relation diagram according to the installation position of the blind area radar and the distance between the dynamic target and the vehicle.
When the type of the left lane line of the lane where the vehicle is located is a solid line and the type of the right lane line is a dotted line, acquiring a second boundary lane line located on the other side of the right lane line, and marking the second boundary lane line in the position relation diagram; and the right lane line and the second boundary lane line form a right lane positioned on the right side of the lane where the vehicle is positioned. And when the other lane line of the lane on the right side of the lane where the vehicle is located is detected, taking the lane line as a second boundary lane line, otherwise, fitting one lane line as the second boundary lane line. Further, according to the transverse distance in the distance of each dynamic target, respectively determining the dynamic target in the right lane and the dynamic target in the lane where the vehicle is located from all the dynamic targets; marking the dynamic target in the right lane in a position relation graph according to the distance and the installation position of the blind area radar corresponding to the dynamic target; and marking the dynamic target in the lane where the vehicle is located in the position relation diagram according to the distance and the installation position of the blind area radar corresponding to the dynamic target. The lane marking the dynamic target is shown in fig. 7 (c).
Step 407, when the types of the two lane lines of the lane where the vehicle is located are both dashed lines, marking the dynamic targets located in the left lane and the right lane in the position relation diagram according to the installation position of the blind area radar and the distance between the dynamic target and the vehicle.
When the types of the left lane line and the right lane line of the lane where the vehicle is located are both dotted lines, acquiring a first boundary lane line located on the other side of the left lane line and a second boundary lane line located on the other side of the right lane line, and marking the first boundary lane line and the second boundary lane line in the position relation diagram; the left lane line and the first boundary lane line form a left lane positioned on the left side of the lane where the vehicle is positioned; the right lane line and the second boundary lane line form a right lane located on the right side of the lane where the vehicle is located. When the other lane line of one lane on the left side of the lane where the vehicle is located is detected, taking the lane line as a first boundary lane line, otherwise, fitting one lane line as the first boundary lane line; and when the other lane line of one lane on the right side of the lane where the vehicle is located is detected, taking the lane line as a second boundary lane line, otherwise, fitting one lane line as the second boundary lane line. 
Further, according to the transverse distance in the distance of each dynamic target, respectively determining a dynamic target in a left lane, a dynamic target in a right lane and a dynamic target in a lane where the vehicle is located from all the dynamic targets; marking the dynamic target in the left lane in a position relation diagram according to the distance and the installation position of the blind area radar corresponding to the dynamic target; marking the dynamic target in the right lane in a position relation graph according to the distance and the installation position of the blind area radar corresponding to the dynamic target; and marking the dynamic target in the lane where the vehicle is located in the position relation diagram according to the distance and the installation position of the blind area radar corresponding to the dynamic target. The lane marking the dynamic target is shown in fig. 7 (d).
Other dynamic targets that are not marked in the position relationship diagram are filtered out and do not participate in the subsequent lane change risk assessment.
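The lane-line-type cases of steps 401 to 407 can be condensed into one sketch: the ego lane is always marked, an adjacent lane is marked only when the ego-side line is dashed, and targets beyond the adjacent lanes are always filtered out. The 3.5 m lane width and the lateral-distance lane boundaries are illustrative assumptions, not values from the patent.

```python
LANE_W = 3.5  # assumed lane width in meters

def lanes_to_mark(left_type, right_type):
    """Which lanes may hold marked targets, given the ego lane's line types;
    a dashed line permits marking the adjacent lane on that side."""
    lanes = ["ego"]
    if left_type == "dashed":
        lanes.append("left")
    if right_type == "dashed":
        lanes.append("right")
    return lanes

def assign_targets(targets, left_type, right_type):
    """Assign each dynamic target (name -> signed lateral distance from the
    vehicle, negative = left) to the ego/left/right lane, filtering out
    targets in lanes irrelevant to the lane change assessment."""
    allowed = lanes_to_mark(left_type, right_type)
    marked = {}
    for name, lateral in targets.items():
        if abs(lateral) <= LANE_W / 2:
            lane = "ego"
        elif -1.5 * LANE_W <= lateral < -LANE_W / 2:
            lane = "left"
        elif LANE_W / 2 < lateral <= 1.5 * LANE_W:
            lane = "right"
        else:
            continue  # beyond the adjacent lanes: always filtered out
        if lane in allowed:
            marked[name] = lane
    return marked
```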
According to the monitoring method for the lane change blind area, the types of two lane lines of a lane where a vehicle is located and the azimuth relationship between each lane line and the vehicle are determined, a position relationship diagram between the vehicle and the lane lines is further constructed according to the types and the azimuth relationships of the lane lines, dynamic targets are marked in the position relationship diagram according to the types of the lane lines, the installation positions of blind area radars and the distances between the dynamic targets and the vehicle, the dynamic targets and the lane can be projected in one plane, and a foundation is laid for lane change risk assessment.
Fig. 8 is a schematic flow chart of another method for monitoring a lane change blind area according to an embodiment of the present invention.
As shown in fig. 8, the method for monitoring the lane-change blind area may include the following steps:
step 501, obstacle information around the vehicle and road information of the vehicle are acquired.
Step 502, according to the road information, a position relation graph is constructed.
Step 503, extracting the dynamic target from the obstacle information, and marking the dynamic target in the position relation graph.
It should be noted that steps 501 to 503 are not described in detail in this embodiment; reference may be made to the description of the relevant contents in the foregoing embodiments, which is not repeated here.
And step 504, acquiring relative position information, relative speed and relative acceleration between the dynamic target and the vehicle according to the state information of the dynamic target and the state information of the vehicle.
Where the status information includes, but is not limited to, position information, velocity information, and acceleration information.
Specifically, based on the position information of the dynamic target and the position information of the vehicle, the relative position information between the dynamic target and the current vehicle can be determined; according to the speed information of the dynamic target and the speed information of the vehicle, the relative speed of the dynamic target relative to the current vehicle can be determined; based on the acceleration information of the dynamic target and the acceleration information of the vehicle, the relative acceleration of the dynamic target with respect to the current vehicle may be determined.
And 505, calculating the current lane change risk coefficient of the vehicle according to the relative position information, the relative speed and the relative acceleration.
As an example, corresponding weights may be set according to the degree of influence of the relative position information, the relative speed and the relative acceleration on the lane change risk, and then weighted summation is performed, and the result is used as the risk coefficient of the current lane change of the vehicle.
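One possible form of the weighted-sum risk coefficient of step 505 and the threshold test of step 506 is sketched below. All weights, normalization constants, and the threshold are illustrative assumptions, not values from the patent.

```python
def risk_coefficient(rel_distance, rel_speed, rel_accel,
                     w_d=0.5, w_v=0.3, w_a=0.2):
    """Weighted sum of influence terms: a nearby target that is closing
    fast (positive rel_speed = approaching) and still accelerating scores
    high. Each term is normalized to [0, 1] with illustrative constants."""
    d_term = min(1.0, max(0.0, 1.0 - rel_distance / 30.0))  # near = risky
    v_term = min(1.0, max(0.0, rel_speed / 10.0))           # closing = risky
    a_term = min(1.0, max(0.0, rel_accel / 5.0))            # accelerating = risky
    return w_d * d_term + w_v * v_term + w_a * a_term

def lane_change_risky(coefficient, threshold=0.6):
    """Step 506: a lane change risk exists once the risk coefficient
    reaches the preset threshold."""
    return coefficient >= threshold
```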
And step 506, if the risk coefficient exceeds a preset threshold value, determining that the vehicle has lane change risk.
In this embodiment, a risk coefficient threshold may be preset and stored, and then the calculated risk coefficient is compared with the risk coefficient threshold, and when the risk coefficient reaches the risk coefficient threshold, it is determined that the vehicle has a lane change risk.
And step 507, acquiring a control strategy matched with the risk coefficient, and executing the control strategy.
In this embodiment, matching control strategies may be set for risk coefficients of different levels, and when it is determined that a lane change risk exists in the vehicle, a corresponding control strategy is determined according to the level of the risk coefficient, and the control strategy is executed to avoid the lane change risk.
For example, the control strategy may include different combinations of audible and visual warning alerts, seat belt pre-tightening, and, when the lane change risk coefficient is at its highest level and the driver performs the lane change operation, sending a command to the Electric Power Steering (EPS) system to correct the steering wheel.
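A sketch of mapping risk levels to the example control strategies listed above; the three-level scheme itself is an assumption for illustration, since the patent does not enumerate the levels.

```python
def control_strategy(risk_level, driver_changing_lane):
    """Map a risk level (1 = lowest .. 3 = highest) to a list of actions.
    EPS steering correction is issued only at the highest level and only
    while the driver is actually performing the lane change."""
    actions = ["audio_visual_warning"]
    if risk_level >= 2:
        actions.append("seatbelt_pretension")
    if risk_level >= 3 and driver_changing_lane:
        actions.append("eps_steering_correction")
    return actions
```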
During specific implementation, the determined control strategy CAN be converted into a CAN message, which is sent to the vehicle CAN network through a CAN interface of the controller, so that the relevant executing devices of the vehicle execute the control strategy.
According to the monitoring method for the lane change blind area, the relative position information, the relative speed and the relative acceleration between the dynamic target and the vehicle are determined, the risk coefficient of the current lane change of the vehicle is calculated according to the relative position information, the relative speed and the relative acceleration, when the risk coefficient exceeds a preset threshold value, the lane change risk of the vehicle is determined, and the control strategy matched with the risk coefficient is obtained and executed, so that the lane change risk can be automatically identified, the corresponding control strategy is provided, the danger generated when a user changes lanes is avoided, and the driving safety of the user is ensured.
In order to implement the above embodiments, the invention further provides a monitoring device for the lane change blind area.
Fig. 9 is a schematic structural diagram of a monitoring device for a lane change blind area according to an embodiment of the present invention.
As shown in fig. 9, the lane change blind area monitoring device 50 includes: an acquisition module 510, a construction module 520, a marking module 530, and a control module 540. Wherein,
the obtaining module 510 is configured to obtain obstacle information around the vehicle and road information of the vehicle.
The road information of the vehicle may be the road information of the lane where the vehicle is located.
In a possible implementation manner of the embodiment of the present invention, after the obtaining module 510 obtains the obstacle information around the vehicle and the road information of the lane where the vehicle is located, the obstacle information and the road information may be cached, and the cached obstacle information and the cached road information are synchronized in time.
And a building module 520, configured to build a position relationship graph according to the road information.
And a marking module 530, configured to extract a dynamic target from the obstacle information and mark the dynamic target in the position relationship diagram.
And the control module 540 is configured to determine whether there is a risk in the current lane change of the vehicle according to the state information of the dynamic target and the state information of the vehicle, and control the vehicle to execute a risk control strategy if there is a risk in the current lane change of the vehicle.
Further, in a possible implementation manner of the embodiment of the present invention, the road information is a rear view image of a rear view camera, and in this case, as shown in fig. 10, on the basis of the embodiment shown in fig. 9, the constructing module 520 includes:
a graying unit 521, configured to graye the rear view image to obtain a grayscale rear view image.
Specifically, the graying unit 521 is configured to determine an initial position of the rear view image, extract, from the initial position according to a format of the rear view image, a luminance value of each pixel in the rear view image as a pixel gray value, and generate a grayscale rear view image; or acquiring the RGB value of each pixel point of the rear-view image, weighting the RGB values, obtaining the brightness value of each pixel point as the gray value of the pixel point, and generating the gray rear-view image.
And a binarization unit 522, configured to binarize the grayscale rear view image according to a preset binarization threshold value, to obtain a binarized rear view image.
The binarization unit 522 may determine a binarization threshold value before binarizing the post-grayscale view image. Specifically, the binarization unit 522 may obtain a binarization threshold of the grayscale rearview image according to a maximum inter-class difference method; or, calculating the average gray value of the gray rear-view image according to the gray value of each pixel point on the gray rear-view image, and taking the average gray value as a binarization threshold.
The binarization unit 522 is specifically configured to divide the grayscale back view image to obtain grayscale back view image segments when obtaining the binarized back view image; performing convolution operation on the gray scale rear view image segments and a preset operator template aiming at each gray scale rear view image segment to obtain a convolution result of the gray scale rear view image segments; determining the value of a pixel point according to the convolution result and the binarization threshold of the pixel point in the gray-scale rear-view image segment to form a binary image of the gray-scale rear-view image segment; and combining the binary images of the gray-scale rearview image segments to obtain a binary rearview image.
When the binarization unit 522 determines the value of the pixel point, it may compare the convolution result of the pixel point with the first value generated by the binarization threshold and compare the gray value of the pixel point with the binarization threshold for each pixel point; and if the convolution result of the pixel point is larger than the first numerical value and the gray value is larger than the binarization threshold, updating the gray value of the pixel point to a preset first gray value, otherwise, updating to a preset second gray value.
An edge detection unit 523, configured to perform edge detection on the binarized view image to obtain edge feature points belonging to a lane line.
Specifically, the edge detection unit 523 is configured to extract a connected region from the binarized rear view image; screening out a lane line region from the extracted connected regions according to the characteristics of the lane line; and carrying out edge detection on the screened lane line area to obtain edge feature points.
A determining unit 524, configured to perform hough transform on the extracted edge feature points, detect candidate straight lines, track the candidate straight lines, and determine a lane line from the candidate straight lines.
The building unit 525 is configured to build a position relationship diagram between the vehicle and the lane line according to the lane line.
Specifically, the constructing unit 525 is configured to determine types of two lane lines of a lane where the vehicle is located, and a bearing relationship between each lane line and the vehicle; and constructing a position relation graph between the vehicle and the lane lines according to the type and the azimuth relation of each lane line.
In this way, the rear view image is grayed to obtain a grayscale rear view image; the grayscale rear view image is binarized according to a preset binarization threshold to obtain a binarized rear view image; edge detection is performed on the binarized rear view image to obtain edge feature points belonging to the lane line; Hough transform is performed on the edge feature points to detect candidate straight lines; the candidate straight lines are tracked and the lane line is determined from them; and a position relationship diagram between the vehicle and the lane line is constructed.
The marking module 530 includes:
an obtaining unit 531 is configured to obtain a dynamic target from the obstacle information.
A target marking unit 532, configured to mark a dynamic target in the position relationship diagram according to the type of each lane line, the installation position of the blind area radar, and the distance between the dynamic target and the vehicle; wherein the types include a dotted line and a solid line.
Specifically, the target marking unit 532 operates according to the types of the two lane lines of the lane in which the vehicle is located. The direction of travel of that lane is taken as the longitudinal direction, and the direction perpendicular to it as the transverse direction.

When both lane lines are solid lines, the unit determines, from all the dynamic targets, the dynamic targets located in the vehicle's lane according to the transverse distance of each dynamic target, and marks each of them in the position relation diagram according to its distance and the installation position of the blind area radar corresponding to the dynamic target; the dynamic target is identified from the detection data of the corresponding blind area radar.

When the left lane line is a dotted line and the right lane line is a solid line, the unit acquires a first boundary lane line on the far side of the left lane line and marks it in the position relation diagram; the left lane line and the first boundary lane line delimit a left lane on the left side of the vehicle's lane. According to the transverse distance of each dynamic target, the unit determines, from all the dynamic targets, the dynamic targets in the left lane and those in the vehicle's lane, and marks each of them in the position relation diagram according to its distance and the installation position of the corresponding blind area radar.

When the left lane line is a solid line and the right lane line is a dotted line, the unit acquires a second boundary lane line on the far side of the right lane line and marks it in the position relation diagram; the right lane line and the second boundary lane line delimit a right lane on the right side of the vehicle's lane. The unit then determines the dynamic targets in the right lane and those in the vehicle's lane according to the transverse distance of each dynamic target, and marks each of them in the position relation diagram according to its distance and the installation position of the corresponding blind area radar.

When both the left and right lane lines are dotted lines, the unit acquires the first boundary lane line on the far side of the left lane line and the second boundary lane line on the far side of the right lane line, and marks both in the position relation diagram; these lines delimit a left lane and a right lane on either side of the vehicle's lane. According to the transverse distance of each dynamic target, the unit determines the dynamic targets in the left lane, in the right lane, and in the vehicle's lane, and marks each of them in the position relation diagram according to its distance and the installation position of the corresponding blind area radar.
Marking the dynamic targets in the position relation diagram according to the lane line types, the installation positions of the blind area radars, and the distances between the dynamic targets and the vehicle projects the dynamic targets and the lanes onto a single plane, laying the foundation for lane-change risk assessment.
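As an illustration only, the lane-assignment step described above can be sketched as follows. The patent discloses no code, so the lane width, the `Target` fields, the `assign_lane` function, and the assumption that the vehicle sits at its lane centre are all hypothetical:

```python
# Illustrative sketch of the lane-assignment logic described above.
# LANE_WIDTH, the Target fields, and every name here are assumptions.
from dataclasses import dataclass

LANE_WIDTH = 3.5  # assumed lane width in metres

@dataclass
class Target:
    lateral: float       # transverse distance from the vehicle (m); left is negative
    longitudinal: float  # distance along the direction of travel (m)
    radar_id: str        # blind area radar that detected this target

def assign_lane(target: Target, left_type: str, right_type: str):
    """Decide which lane a dynamic target is marked in, or None if ignored.

    left_type / right_type are 'solid' or 'dotted', the types of the two
    lane lines of the lane the vehicle is in (the four cases above).
    """
    x = target.lateral
    if -LANE_WIDTH / 2 <= x <= LANE_WIDTH / 2:
        return "ego"     # targets in the vehicle's own lane are always marked
    if left_type == "dotted" and -1.5 * LANE_WIDTH <= x < -LANE_WIDTH / 2:
        return "left"    # a left lane exists only when the left line is dotted
    if right_type == "dotted" and LANE_WIDTH / 2 < x <= 1.5 * LANE_WIDTH:
        return "right"   # likewise for the right lane
    return None          # beyond a solid line: no adjacent lane is drawn
```

Under these assumptions, with both lines solid only targets within half a lane width of the vehicle are marked, while a dotted line opens up the corresponding adjacent lane out to one further lane width.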
In a possible implementation manner of the embodiment of the present invention, as shown in fig. 11, on the basis of the embodiment shown in fig. 9, the control module 540 includes:
an information obtaining unit 541, configured to obtain relative position information, relative speed, and relative acceleration between the dynamic target and the vehicle according to the state information of the dynamic target and the state information of the vehicle.
A calculating unit 542, configured to calculate a risk coefficient for the vehicle's current lane change according to the relative position information, the relative speed, and the relative acceleration.

A risk determining unit 543, configured to determine that the vehicle has a lane-change risk when the risk coefficient exceeds a preset threshold.

A control unit 544, configured to obtain a control strategy matching the risk coefficient and execute the control strategy.
In this way, lane-change risk is identified automatically and a matching control strategy is executed, helping the driver avoid danger when changing lanes and safeguarding driving safety.
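A minimal sketch of how such a risk coefficient might be computed from the relative position, speed, and acceleration. The patent does not disclose the actual formula, so the predicted-gap heuristic, the 3 s manoeuvre horizon, the 10 m decay scale, and the 0.8 threshold below are all assumptions:

```python
RISK_THRESHOLD = 0.8  # assumed preset threshold, not from the patent

def lane_change_risk(rel_pos: float, rel_speed: float, rel_accel: float,
                     horizon: float = 3.0) -> float:
    """Risk coefficient in [0, 1] for one dynamic target (illustrative only).

    rel_pos:   current gap between target and vehicle, in metres
    rel_speed: closing speed in m/s (positive means the target is approaching)
    rel_accel: closing acceleration in m/s^2
    horizon:   assumed duration of the lane-change manoeuvre, in seconds
    """
    # Gap predicted at the end of the manoeuvre under constant acceleration.
    predicted_gap = rel_pos - rel_speed * horizon - 0.5 * rel_accel * horizon ** 2
    if predicted_gap <= 0.0:
        return 1.0  # target would reach the vehicle during the manoeuvre
    # Risk decays as the predicted gap grows (10 m scale is an assumption).
    return 10.0 / (10.0 + predicted_gap)

def control_strategy(risk: float) -> str:
    """Map the risk coefficient to a hypothetical control strategy."""
    return "suppress_lane_change" if risk >= RISK_THRESHOLD else "allow"
```

For example, a target 20 m behind and closing at 10 m/s has a predicted gap of -10 m over a 3 s manoeuvre, so the coefficient saturates at 1.0 and the lane change would be suppressed.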
It should be noted that the explanation of the foregoing embodiments of the vehicle blind area monitoring method also applies to the vehicle blind area monitoring device of this embodiment; the implementation principle is similar and is not repeated here.
The vehicle blind area monitoring device of this embodiment acquires the information of surrounding obstacles and the road information of the vehicle, constructs a position relation diagram from the road information, extracts dynamic targets from the obstacle information and marks them in the diagram, judges whether the vehicle's current lane change carries a risk according to the state information of the dynamic targets and the state information of the vehicle, and controls the vehicle to execute a risk control strategy when a risk exists. Because the road information and the position information of the dynamic targets are fused for lane-change risk assessment, the probability of false detection under complex road conditions is effectively reduced, the accuracy of lane-change risk judgment is improved, and the performance of the blind area monitoring system is enhanced.
In order to implement the above embodiments, the present invention further provides a computer device.
Fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
As shown in fig. 12, the computer device 60 includes a memory 601, a processor 602, and a computer program 603 stored in the memory 601 and executable on the processor 602. When the processor 602 executes the computer program 603, the vehicle blind area monitoring method according to the foregoing embodiments is implemented.
In order to achieve the above-mentioned embodiments, the present invention also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the monitoring method of the vehicle blind area as described in the foregoing embodiments.
In order to implement the above embodiments, the present invention further provides a computer program product, wherein when the instructions in the computer program product are executed by a processor, the method for monitoring the vehicle blind area according to the foregoing embodiments is implemented.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (18)

1. A method of monitoring a vehicle blind spot, comprising:
acquiring obstacle information around a vehicle and road information of the vehicle, wherein the road information is a rearview image of a rearview camera;
graying the rearview image to obtain a grayscale rearview image; dividing the grayscale rearview image to obtain grayscale rearview image segments; for each grayscale rearview image segment, performing a convolution operation on the segment with a preset operator template to obtain a convolution result of the segment; determining the value of each pixel point according to the convolution result of the pixel point in the grayscale rearview image segment and a preset binarization threshold to form a binary image of the segment; combining the binary images of the grayscale rearview image segments to obtain a binarized rearview image; constructing a position relation graph according to the binarized rearview image;
extracting a dynamic target from the obstacle information, and marking the dynamic target in the position relation graph;
and judging whether the current lane change of the vehicle has risks or not according to the state information of the dynamic target and the state information of the vehicle, and controlling the vehicle to execute a risk control strategy if the current lane change of the vehicle has risks.
2. The method according to claim 1, wherein constructing a position relation map from the binarized rearview image comprises:
performing edge detection on the binarized rearview image to obtain edge feature points belonging to a lane line;
carrying out Hough transform on the extracted edge feature points to detect candidate straight lines, tracking the candidate straight lines, and determining the lane lines from the candidate straight lines;
and constructing a position relation graph between the vehicle and the lane line according to the lane line.
3. The method according to claim 2, wherein the determining the value of the pixel point according to the convolution result and the binarization threshold of the pixel point in the grayscale rearview image segment comprises:
for each pixel point, comparing the convolution result of the pixel point with a first numerical value generated by the binarization threshold value, and comparing the gray value of the pixel point with the binarization threshold value;
and if the convolution result of the pixel point is greater than the first numerical value and the gray value is greater than the binarization threshold, updating the gray value of the pixel point to a preset first gray value, otherwise, updating to a preset second gray value.
4. The method of claim 2, wherein graying the rearview image to obtain a grayscale rearview image comprises:
determining an initial position of the rearview image of the rearview camera, extracting, from the initial position and according to the format of the rearview image, the brightness value of each pixel point in the rearview image as the gray value of that pixel point, and generating the grayscale rearview image;
or acquiring the RGB values of each pixel point of the rearview image of the rearview camera, weighting the RGB values to obtain the brightness value of each pixel point as its gray value, and generating the grayscale rearview image.
5. The method according to claim 2, wherein before binarizing the grayscale rearview image, the method further comprises:
acquiring the binarization threshold of the grayscale rearview image according to a maximum between-class variance method; or,
calculating the average gray value of the grayscale rearview image according to the gray value of each pixel point in the grayscale rearview image, and taking the average gray value as the binarization threshold.
6. The method according to claim 2, wherein the performing edge detection on the binarized rearview image to obtain edge feature points belonging to a lane line comprises:
extracting connected regions from the binarized rearview image;
screening out a lane line region from the extracted connected regions according to the features of the lane line;
and carrying out edge detection on the screened lane line area to obtain the edge feature points.
7. The method of claim 2, wherein the constructing the map of the positional relationship between the vehicle and the lane lines comprises:
determining the types of two lane lines of a lane where the vehicle is located and the azimuth relationship between each lane line and the vehicle;
and constructing a position relation graph between the vehicle and each lane line according to the type and the azimuth relation of each lane line.
8. The method of claim 7, wherein said extracting dynamic objects from the obstacle information, and marking the dynamic objects in the positional relationship graph, comprises:
acquiring the dynamic target from the obstacle information;
marking the dynamic target in the position relation diagram according to the type of each lane line, the installation position of the blind area radar and the distance between the dynamic target and the vehicle; wherein the types include a dotted line and a solid line.
9. The method of claim 8, wherein the marking the dynamic target in the position relationship map according to a type of each lane line, an installation position of a blind spot radar, and a distance between the dynamic target and the vehicle includes:
when the types of the two lane lines of the lane where the vehicle is located are both solid lines, determining the dynamic targets located in the lane where the vehicle is located from all the dynamic targets according to the transverse distance in the distance of each dynamic target; the direction of travel of the lane where the vehicle is located is the longitudinal direction, and the direction perpendicular to the direction of travel is the transverse direction;
marking a dynamic target in a lane where the vehicle is located in the position relation diagram according to the distance between the dynamic target and the vehicle and the installation position of a blind area radar corresponding to the dynamic target; and the dynamic target is identified from the detection data of the corresponding blind area radar.
10. The method of claim 8, wherein the marking the dynamic target in the position relationship map according to a type of each lane line, an installation position of a blind spot radar, and a distance between the dynamic target and the vehicle includes:
when the type of the left lane line of the lane where the vehicle is located is a dotted line and the type of the right lane line is a solid line, acquiring a first boundary lane line located on the other side of the left lane line, and marking the first boundary lane line in the position relation diagram; the left lane line and the first boundary lane line form a left lane positioned on the left side of the lane where the vehicle is positioned;
according to the transverse distance in the distance of each dynamic target, respectively determining a dynamic target in the left lane and a dynamic target in a lane where the vehicle is located from all the dynamic targets;
marking the dynamic target in the left lane in the position relation diagram according to the distance between the dynamic target and the vehicle and the installation position of the blind area radar corresponding to the dynamic target;
and marking the dynamic target in the lane where the vehicle is located in the position relation diagram according to the distance between the dynamic target and the vehicle and the installation position of the blind area radar corresponding to the dynamic target.
11. The method of claim 8, wherein the marking the dynamic target in the position relationship map according to a type of each lane line, an installation position of a blind spot radar, and a distance between the dynamic target and the vehicle includes:
when the type of the left lane line of the lane where the vehicle is located is a solid line and the type of the right lane line is a dotted line, acquiring a second boundary lane line located on the other side of the right lane line, and marking the second boundary lane line in the position relation diagram; the right lane line and the second boundary lane line form a right lane positioned on the right side of the lane where the vehicle is positioned;
according to the transverse distance in the distance of each dynamic target, respectively determining the dynamic target in the right lane and the dynamic target in the lane where the vehicle is located from all the dynamic targets;
marking the dynamic target in the right lane in the position relation diagram according to the distance between the dynamic target and the vehicle and the installation position of the blind area radar corresponding to the dynamic target;
and marking the dynamic target in the lane where the vehicle is located in the position relation diagram according to the distance between the dynamic target and the vehicle and the installation position of the blind area radar corresponding to the dynamic target.
12. The method of claim 8, wherein the marking the dynamic target in the position relationship map according to a type of each lane line, an installation position of a blind spot radar, and a distance between the dynamic target and the vehicle includes:
when the types of the left lane line and the right lane line of the lane where the vehicle is located are both dotted lines, acquiring a first boundary lane line located on the other side of the left lane line and a second boundary lane line located on the other side of the right lane line, and marking the first boundary lane line and the second boundary lane line in the position relation diagram; the left lane line and the first boundary lane line form a left lane positioned on the left side of the lane where the vehicle is positioned; the right lane line and the second boundary lane line form a right lane positioned on the right side of the lane where the vehicle is positioned;
according to the transverse distance in the distance of each dynamic target, respectively determining a dynamic target in the left lane, a dynamic target in the right lane and a dynamic target in the lane where the vehicle is located from all the dynamic targets;
marking the dynamic target in the left lane in the position relation diagram according to the distance between the dynamic target and the vehicle and the installation position of the blind area radar corresponding to the dynamic target;
marking the dynamic target in the right lane in the position relation diagram according to the distance between the dynamic target and the vehicle and the installation position of the blind area radar corresponding to the dynamic target;
and marking the dynamic target in the lane where the vehicle is located in the position relation diagram according to the distance between the dynamic target and the vehicle and the installation position of the blind area radar corresponding to the dynamic target.
13. The method of claim 1, wherein the determining whether the vehicle is at risk for changing lanes currently according to the status information of the dynamic target and the status information of the vehicle comprises:
acquiring relative position information, relative speed and relative acceleration between the dynamic target and the vehicle according to the state information of the dynamic target and the state information of the vehicle;
calculating a current lane change risk coefficient of the vehicle according to the relative position information, the relative speed and the relative acceleration;
if the risk coefficient exceeds a preset threshold value, determining that the vehicle has lane change risk;
the controlling the vehicle to execute a risk control strategy comprising:
and acquiring a control strategy matched with the risk coefficient, and executing the control strategy.
14. The method according to claim 1, wherein after the obtaining of the obstacle information around the vehicle and the road information of the lane in which the vehicle is located, the method further comprises:
and caching the obstacle information and the road information, and synchronizing the cached obstacle information and the cached road information in time.
15. The method according to any one of claims 1 to 14, wherein the road information of the vehicle is road information of a lane in which the vehicle is located.
16. A monitoring device for a vehicle blind area, comprising:
an acquisition module, configured to acquire obstacle information around a vehicle and road information of the vehicle, wherein the road information is a rearview image of a rearview camera;
a construction module, configured to gray the rearview image to obtain a grayscale rearview image; divide the grayscale rearview image to obtain grayscale rearview image segments; for each grayscale rearview image segment, perform a convolution operation on the segment with a preset operator template to obtain a convolution result of the segment; determine the value of each pixel point according to the convolution result of the pixel point in the grayscale rearview image segment and a preset binarization threshold to form a binary image of the segment; combine the binary images of the grayscale rearview image segments to obtain a binarized rearview image; and construct a position relation graph according to the binarized rearview image;
the marking module is used for extracting a dynamic target from the obstacle information and marking the dynamic target in the position relation graph;
and the control module is used for judging whether the current lane change of the vehicle has risks or not according to the state information of the dynamic target and the state information of the vehicle, and controlling the vehicle to execute a risk control strategy if the current lane change of the vehicle has risks.
17. A computer device, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the method for monitoring vehicle blind spots according to any of claims 1-15.
18. A non-transitory computer-readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements the method of monitoring a vehicle blind area according to any one of claims 1 to 15.
CN201810362009.5A 2018-04-20 2018-04-20 Vehicle blind area monitoring method and device, computer equipment and storage medium Active CN110386065B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810362009.5A CN110386065B (en) 2018-04-20 2018-04-20 Vehicle blind area monitoring method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810362009.5A CN110386065B (en) 2018-04-20 2018-04-20 Vehicle blind area monitoring method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110386065A CN110386065A (en) 2019-10-29
CN110386065B true CN110386065B (en) 2021-09-21

Family

ID=68283254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810362009.5A Active CN110386065B (en) 2018-04-20 2018-04-20 Vehicle blind area monitoring method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110386065B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4170604A1 (en) * 2021-10-19 2023-04-26 Stoneridge, Inc. Camera mirror system display for commercial vehicles including system for identifying road markings

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113022573B (en) * 2019-12-06 2022-11-04 华为技术有限公司 Road structure detection method and device
CN111090087B (en) * 2020-01-21 2021-10-26 广州赛特智能科技有限公司 Intelligent navigation machine, laser radar blind area compensation method and storage medium
CN111746543B (en) * 2020-06-30 2021-09-10 三一专用汽车有限责任公司 Control method and control device for vehicle lane change, vehicle and readable storage medium
CN111959511B (en) * 2020-08-26 2022-06-03 腾讯科技(深圳)有限公司 Vehicle control method and device
CN115116267B (en) * 2021-03-18 2024-06-25 上海汽车集团股份有限公司 Vehicle lane change processing system and vehicle
CN113103957B (en) * 2021-04-28 2023-07-28 上海商汤临港智能科技有限公司 Blind area monitoring method and device, electronic equipment and storage medium
CN113204026B (en) * 2021-05-07 2022-05-24 英博超算(南京)科技有限公司 Method for improving detection performance of rear millimeter wave radar blind area
CN114332818B (en) * 2021-12-28 2024-04-09 阿波罗智联(北京)科技有限公司 Obstacle detection method and device and electronic equipment
CN114162143A (en) * 2021-12-31 2022-03-11 阿维塔科技(重庆)有限公司 Method and device for requesting driver to take over by intelligent driving system, automobile and computer-readable storage medium
CN114582165A (en) * 2022-03-02 2022-06-03 浙江海康智联科技有限公司 Collaborative lane change safety auxiliary early warning method and system based on V2X

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102208019B (en) * 2011-06-03 2013-01-09 东南大学 Method for detecting lane change of vehicle based on vehicle-mounted camera
CN104657735B (en) * 2013-11-21 2018-01-23 比亚迪股份有限公司 Method for detecting lane lines, system, lane departure warning method and system
CN104952254B (en) * 2014-03-31 2018-01-23 比亚迪股份有限公司 Vehicle identification method, device and vehicle
KR102528002B1 (en) * 2016-08-17 2023-05-02 현대모비스 주식회사 Apparatus for generating top-view image and method thereof
CN106652559A (en) * 2016-11-21 2017-05-10 深圳市元征软件开发有限公司 Driving control method and apparatus


Also Published As

Publication number Publication date
CN110386065A (en) 2019-10-29

Similar Documents

Publication Publication Date Title
CN110386065B (en) Vehicle blind area monitoring method and device, computer equipment and storage medium
JP3822515B2 (en) Obstacle detection device and method
CN106647776B (en) Method and device for judging lane changing trend of vehicle and computer storage medium
US9330320B2 (en) Object detection apparatus, object detection method, object detection program and device control system for moveable apparatus
US9818301B2 (en) Lane correction system, lane correction apparatus and method of correcting lane
JP6126094B2 (en) In-vehicle image recognition device
EP2463843B1 (en) Method and system for forward collision warning
EP2879111A1 (en) Image processing device
JP2021510227A (en) Multispectral system for providing pre-collision alerts
CN104321665A (en) Multi-surface model-based tracking
CN107886729B (en) Vehicle identification method and device and vehicle
JP3301995B2 (en) Stereo exterior monitoring device
CN109558765B (en) Automobile and lane line detection method and device
Cualain et al. Multiple-camera lane departure warning system for the automotive environment
JP2005157670A (en) White line detecting device
JP2002160598A (en) Outside car control device
CN114084129A (en) Fusion-based vehicle automatic driving control method and system
JP5434277B2 (en) Driving support device and driving support method
CN111104824A (en) Method for detecting lane departure, electronic device, and computer-readable storage medium
CN107886036B (en) Vehicle control method and device and vehicle
JP2014026519A (en) On-vehicle lane marker recognition device
JP2015231179A (en) Outside of vehicle environment recognition device
JP7460282B2 (en) Obstacle detection device, obstacle detection method, and obstacle detection program
JP3532896B2 (en) Smear detection method and image processing apparatus using the smear detection method
WO2024042607A1 (en) External world recognition device and external world recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant