CN110738081A - Abnormal road condition detection method and device - Google Patents

Abnormal road condition detection method and device

Info

Publication number
CN110738081A
CN110738081A
Authority
CN
China
Prior art keywords
area
determining
road
network
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810799435.5A
Other languages
Chinese (zh)
Other versions
CN110738081B (en)
Inventor
朱江
邝宏武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810799435.5A priority Critical patent/CN110738081B/en
Publication of CN110738081A publication Critical patent/CN110738081A/en
Application granted granted Critical
Publication of CN110738081B publication Critical patent/CN110738081B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an abnormal road condition detection method and device. The method comprises: determining, from a road image acquired by a vehicle-mounted camera, a non-road-surface area and a safe driving area for safe driving of the vehicle on which the camera is mounted, wherein a first intersection area exists between the safe driving area and the non-road-surface area; determining an obstacle area from the road image; determining a second intersection area between the first intersection area and the obstacle area; determining the area of the remaining region of the first intersection area excluding the second intersection area; and determining whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area. By judging abnormal road conditions through the determined non-road-surface area and the known obstacle area, effectively combined with the safe driving area required for the vehicle to drive safely, potential dangers can be accurately warned of in advance and the safety of automatic driving improved.

Description

Abnormal road condition detection method and device
Technical Field
The application relates to the technical field of image processing, in particular to an abnormal road condition detection method and device.
Background
At present, vehicles are ubiquitous in people's lives, and automatic driving systems that make automatic driving decisions are widely applied to them. These systems detect various types of road targets (such as vehicles, pedestrians, lane lines, road signs, and traffic signals) through image or video processing technologies in order to directly control the vehicle or remind the driver, thereby ensuring driving safety.
However, under complex road conditions, due to environmental interference (such as a front obstacle being obscured by rain and fog, or the front obstacle having complex texture), the automatic driving system cannot accurately detect the front obstacle and often misses detections or raises false alarms, so potential dangers cannot be accurately warned of and the safety of automatic driving is reduced.
Disclosure of Invention
In view of this, the present application provides an abnormal road condition detection method and device, so as to solve the problem that the related art cannot accurately warn of potential hazards.
According to a first aspect of the embodiments of the present application, an abnormal road condition detection method is provided. The method comprises:
determining, from a road image acquired by a vehicle-mounted camera, a non-road-surface area and a safe driving area for safe driving of the vehicle on which the camera is mounted, wherein a first intersection area exists between the safe driving area and the non-road-surface area;
determining an obstacle area from the non-road-surface area;
determining a second intersection area between the first intersection area and the obstacle area, and determining the area of the remaining region of the first intersection area excluding the second intersection area;
and determining whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area.
According to a second aspect of the embodiments of the present application, an abnormal road condition detection device is provided. The device includes:
a region determination module, configured to determine, from a road image acquired by a vehicle-mounted camera, a non-road-surface area and a safe driving area for safe driving of the vehicle on which the camera is mounted, wherein a first intersection area exists between the safe driving area and the non-road-surface area;
an obstacle determination module, configured to determine an obstacle area from the non-road-surface area;
an area determination module, configured to determine a second intersection area between the first intersection area and the obstacle area, and determine the area of the remaining region of the first intersection area excluding the second intersection area;
an anomaly determination module, configured to determine whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area.
According to a third aspect of the embodiments of the present application, an electronic device is provided. The device includes a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor is configured to read the machine executable instruction on the readable storage medium, and execute the instruction to implement the steps of the abnormal road condition detection method.
According to a fourth aspect of the embodiments of the present application, a chip is provided, including a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor is configured to read the machine executable instruction on the readable storage medium, and execute the instruction to implement the steps of the abnormal road condition detection method.
By applying the embodiments of the present application, a non-road-surface area and a safe driving area for safe driving of the vehicle carrying the vehicle-mounted camera are determined from the road image acquired by the camera, with a first intersection area existing between the safe driving area and the non-road-surface area; an obstacle area is then determined from the non-road-surface area; a second intersection area between the first intersection area and the obstacle area is determined; the area of the remaining region of the first intersection area excluding the second intersection area is determined; and whether the road surface in the road image is abnormal is determined according to the area of the remaining region and the area of the safe driving area.
Drawings
Fig. 1A is a flowchart of an embodiment of an abnormal road condition detection method according to an exemplary embodiment of the present application;
Fig. 1B is a diagram of the network model architecture of the first neural network according to the embodiment shown in Fig. 1A;
Fig. 1C is a schematic diagram illustrating the marking of the non-road-surface area and safe driving area in a road image according to the embodiment shown in Fig. 1A;
Fig. 1D is a diagram of the network model architecture of the second neural network according to the embodiment shown in Fig. 1A;
Fig. 1E is a schematic diagram of the obstacle area in a road image according to the embodiment shown in Fig. 1A;
Fig. 2 is a flowchart of another abnormal road condition detection method according to an exemplary embodiment of the present application;
Fig. 3 is a hardware block diagram of an electronic device according to an exemplary embodiment of the present application;
Fig. 4 is a structural diagram of an embodiment of an abnormal road condition detection device according to an exemplary embodiment of the present application.
Detailed Description
The embodiments described below do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with aspects of the present application, as detailed in the appended claims.
As used in this application and the appended claims, the singular forms "a," "an," "said," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms.
However, image perception in an automatic driving system is greatly influenced by environmental interference, so a certain missed detection rate and false alarm rate exist in the perception results. In particular, in severe weather environments such as rainy or foggy days, the automatic driving system cannot directly perceive a front obstacle obscured by rain and fog, so potential dangers cannot be accurately warned of and the safety of automatic driving is reduced.
Based on this, the method of the present application determines the non-road-surface area and the safe driving area required by the vehicle from the image, determines the known obstacle area (the detectable obstacle area) from the non-road-surface area, then determines the first intersection area between the safe driving area and the non-road-surface area and the remaining region of that intersection outside the known obstacle area (the region where obstacles cannot be detected), and calculates the ratio of the area of the remaining region to the area of the whole safe driving area. If the ratio exceeds a set proportion threshold, the area occupied by unknown obstacles within the non-road-surface portion of the safe driving area is relatively large, and the road surface in the image is determined to be abnormal.
Based on the above description, after the first intersection area (the region where the vehicle cannot travel) between the non-road-surface area and the safe driving area is determined, the abnormal road condition is judged from the area of the first intersection area and the area of the remaining region outside the known obstacle area. The remaining region is an unknown obstacle region, that is, a region in which obstacles cannot be directly detected; obstacles that are not easily detected may exist there, for example an obstacle obscured by rain and fog or an obstacle with complex texture in a complex scene. If the area of this unknown obstacle region occupies a relatively large proportion of the safe driving area, the road surface in the current road image is abnormal, so potential dangers can be accurately warned of and the safety of automatic driving improved.
The technical solution of the present application is explained in detail by the following specific examples.
Fig. 1A is a flowchart of an embodiment of an abnormal road condition detection method according to an exemplary embodiment of the present application. The method may be applied to an electronic device of a vehicle, and the electronic device may run an automatic driving system that determines abnormal road conditions by analyzing and processing road images collected by a vehicle-mounted camera mounted on the vehicle. As shown in Fig. 1A, the method includes the following steps:
Step 101: determine, from a road image acquired by the vehicle-mounted camera, a non-road-surface area and a safe driving area for safe driving of the vehicle on which the camera is mounted, wherein a first intersection area exists between the safe driving area and the non-road-surface area.
In one embodiment, the road image may be input into a first neural network trained in advance, and the non-road-surface area may be determined from the road image by the convolution network, deconvolution network, and target detection network included in the first neural network.
Because the first neural network takes an image as input and outputs a semantically segmented image dividing it into a road-surface area and a non-road-surface area, it may adopt a Fully Convolutional Network (FCN), a deep-learning neural network. Compared with traditional detection methods, its output is stable and accurate, especially under the complex road conditions of severe weather environments.
It can be understood by those skilled in the art that the present application does not limit the architecture of the fully convolutional network; for example, it may be based on an encoder-decoder architecture.
As shown in Fig. 1B, the first neural network includes a convolution network, a deconvolution network, and a target detection network. The convolution network performs convolution and pooling operations on the input road image to gradually reduce the spatial dimension of the input data; the deconvolution network performs deconvolution operations on the data output by the convolution network to gradually recover object details and the corresponding spatial dimension; and the target detection network performs obstacle-box regression on the data output by the convolution network to help the deconvolution network recover object details better, achieving a good semantic segmentation effect.
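As an illustrative sketch only (the patent discloses no layer sizes, so every channel count, kernel size, and head shape below is an assumption), the two-branch structure of Fig. 1B might be expressed in PyTorch as:
```python
import torch
import torch.nn as nn

class FirstNetwork(nn.Module):
    """Sketch of Fig. 1B: a shared convolution (encoder) network feeding a
    deconvolution (segmentation) branch and a target detection branch."""

    def __init__(self, num_seg_classes: int = 2, num_boxes: int = 16):
        super().__init__()
        # Convolution network: conv + pooling gradually shrink the spatial size
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Deconvolution network: recovers detail and spatial size, yielding a
        # per-pixel road / non-road segmentation map
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_seg_classes, 2, stride=2),
        )
        # Target detection head: predicts obstacle boxes (x, y, w, h) from the
        # shared feature map to assist the decoder during training
        self.det_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_boxes * 4),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats), self.det_head(feats)

seg_logits, boxes = FirstNetwork()(torch.randn(1, 3, 256, 256))
```
The detection branch reads the same encoder feature map as the segmentation branch, reflecting the description that the two tasks share the convolution network's output.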
In one embodiment, the braking distance may be determined using the current driving speed of the vehicle, the turning radius may be determined using the current front-wheel steering angle of the vehicle and the wheelbase of the front and rear wheels, and the safe driving area of the vehicle may then be determined based on the braking distance and the turning radius.
The safe driving area is the driving region required for the vehicle to drive safely. Its determination is derived in detail below through the conversion relationship between the image coordinate system and the road-surface coordinate system.
(1) Assuming that the vehicle moves along a circular arc, the turning radius of the vehicle is obtained from its current front-wheel steering angle θ and the wheelbase b of its front and rear wheels:
R = b / sin θ
(2) From the current driving speed v of the vehicle, the braking distance of the vehicle is obtained as:
S = v² / (2 × g × μ)
where g is 9.8 m/s² and μ is the friction coefficient (0.8 in fine weather, 0.2 in rainy weather). It will be appreciated by those skilled in the art that the current weather conditions can be determined by related techniques, which are not described in detail here.
The braking distance and turning radius obtained above define the driving trajectory of the vehicle in the road-surface coordinate system. Two points may be taken every L meters within the braking distance S; for example, if S is 100 meters and L is 10 meters, 20 points are taken. These points are then mapped into the image coordinate system through the conversion relationship from the road-surface coordinate system to the image coordinate system.
(3) Assume a point M(x_g, y_g) in the road-surface coordinate system. Its conversion to the point P(u, v) in the image coordinate system is given by a projective formula published as an image in the original document and not reproduced here.
The conversion relationship requires prior manual calibration of the installation height and angles of the vehicle-mounted camera. (u_0, v_0) and (f_x, f_y) are the calibrated camera intrinsic parameters; the calibrated extrinsic parameters comprise the camera height h, the pitch angle α (with s1 = sin α, c1 = cos α), the yaw angle β (with s2 = sin β, c2 = cos β), and the roll angle γ (with s3 = sin γ, c3 = cos γ).
(4) In the road image, the safe driving area is obtained from the points mapped into the image coordinate system.
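A minimal sketch of steps (1) and (2) and the point sampling of step (3) follows, assuming SI units and collapsing the two corridor points per distance into one arc-length sample; road_to_image is a hypothetical stub, since the patent publishes the calibrated projection formula only as an image:
```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def turning_radius(wheelbase_b: float, steer_angle_theta: float) -> float:
    """R = b / sin(theta), assuming the vehicle moves along a circular arc."""
    return wheelbase_b / math.sin(steer_angle_theta)

def braking_distance(speed_v: float, mu: float) -> float:
    """S = v^2 / (2 * g * mu); mu is 0.8 in fine weather, 0.2 in rain."""
    return speed_v ** 2 / (2 * G * mu)

def sample_distances(S: float, L: float = 10.0) -> list[float]:
    """Arc-length samples every L meters within the braking distance S."""
    return [k * L for k in range(1, int(S // L) + 1)]

def road_to_image(x_g: float, y_g: float) -> tuple[float, float]:
    """Hypothetical stub: the calibrated projection depends on (u0, v0),
    (fx, fy), h, alpha, beta, gamma, and is published only as an image."""
    raise NotImplementedError

S = braking_distance(20.0, mu=0.8)           # about 25.5 m at 20 m/s, dry road
R = turning_radius(2.7, math.radians(5.0))   # assumed wheelbase and steer angle
print(round(S, 1), round(R, 1), sample_distances(S))
```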
In one embodiment, since the non-road-surface area is a region where the vehicle cannot travel, the first intersection area between the safe driving area and the non-road-surface area is the non-drivable region within the spatial extent of the safe driving area in the image. The safe driving area is the region required for the vehicle to drive safely and contains no sky region, whereas the non-road-surface area contains obstacles, sky, and the like; under normal conditions, the first intersection area therefore contains regions of detectable obstacles (movable obstacles that can be detected) and may also contain unknown obstacles that are not easily detected.
It should be noted that if there is no intersection between the safe driving area and the non-road-surface area, that is, no first intersection area exists, abnormal road condition detection does not need to be performed on the road image.
In an example scenario, as shown in Fig. 1C, the area framed by the white line is the safe driving area required by the vehicle. The area above the white line is the non-road-surface area, a region where the vehicle cannot drive, such as obstacles and sky; the area below the white line (marked by the dotted line) is the road-surface area, a region where the vehicle can drive, such as lane lines, road surface, and road edges. The intersection of the non-road-surface area and the safe driving area is the first intersection area.
Step 102: an obstacle area is determined from the road image.
In one embodiment, the road image may be input into a second neural network, which determines the obstacle area from the road image.
In another embodiment, a sub-image corresponding to the non-road-surface area may be cut from the road image and input into a third neural network, which determines the obstacle area from the sub-image.
The obstacles determined from the road image are moving obstacles, such as motor vehicles, non-motor vehicles, and pedestrians. The second and third neural networks may employ convolutional neural networks, such as the region-proposal-based Faster R-CNN.
The second neural network shown in Fig. 1D includes a convolutional network, an RPN (Region Proposal Network), and a location-and-category regression network. The convolutional network performs convolution and pooling operations on the input road image to obtain a feature map; the RPN operates on the feature map to generate candidate boxes; and the location-and-category regression network determines the position and category of each obstacle using the feature map output by the convolutional network and the candidate boxes output by the RPN.
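Purely as an illustration (the patent does not name a concrete implementation), a stock Faster R-CNN with this convolutional-network / RPN / regression-head structure can be run through torchvision; the 0.5 score threshold below is an assumption:
```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_obstacles(image_chw: torch.Tensor, score_thresh: float = 0.5):
    """Return (boxes, labels) for detections scoring above score_thresh.
    image_chw: float tensor of shape (3, H, W), values scaled to [0, 1]."""
    with torch.no_grad():
        out = model([image_chw])[0]  # backbone -> RPN -> box/category heads
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["labels"][keep]

boxes, labels = detect_obstacles(torch.rand(3, 480, 640))
```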
It should be noted that the obstacle detection of step 102 may also be implemented by the first neural network of step 101; that is, the output of the target detection network in the first neural network described in step 101 is used as the detection result for the obstacle area.
With respect to the processes described in steps 101 and 102 above, in an example scenario such as the road image shown in Fig. 1E, a detected obstacle (a front obstacle whose category is motor vehicle) is present in the first intersection area between the non-road-surface area and the safe driving area.
Step 103: determine a second intersection area between the first intersection area and the obstacle area, and determine the area of the remaining region of the first intersection area excluding the second intersection area.
In one embodiment, since the non-road-surface area is a region where the vehicle cannot travel (obstacles, sky, and the like) and the road-surface area is a region where the vehicle can travel (lane lines, road surface, road edges, and the like), the area of the remaining region, obtained by excluding the second intersection area from the first intersection area between the safe driving area and the non-road-surface area, must be determined in order to judge whether the road condition is abnormal.
Following the scenario of step 102, as shown in Fig. 1E, an obstacle region, that is, a second intersection area, exists within the first intersection area (the non-drivable portion of the safe driving area); the remaining region of the first intersection area outside the obstacle region is the region in which obstacles cannot be detected, that is, it belongs to the unknown obstacle region.
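Treating each region as a boolean pixel mask, steps 101 through 103 reduce to elementwise set operations. A minimal NumPy sketch under that assumption (the mask names are illustrative, not from the patent):
```python
import numpy as np

def remaining_and_safe_area(non_road: np.ndarray,
                            safe_zone: np.ndarray,
                            obstacle: np.ndarray) -> tuple[int, int]:
    """All inputs are boolean masks of shape (H, W).
    Returns (area of the remaining region, area of the safe driving area)."""
    first_inter = safe_zone & non_road       # step 101: first intersection area
    second_inter = first_inter & obstacle    # step 103: known obstacle portion
    remaining = first_inter & ~second_inter  # unknown obstacle region
    return int(remaining.sum()), int(safe_zone.sum())
```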
Step 104: determine whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area.
In one embodiment, the ratio of the area of the remaining region to the area of the safe driving area may be determined. If the ratio exceeds a preset ratio threshold, the road surface in the road image is determined to be abnormal; if it does not, the road surface is determined to be normal.
The preset ratio threshold may be set according to test results over a large number of samples. A ratio above the threshold means the area occupied by unknown obstacles within the non-road-surface portion of the safe driving area is too large, so the road surface in the road image is determined to be abnormal; a ratio at or below the threshold means that area is small, so the road condition is determined to be normal.
In one embodiment, the number of road image frames found to be abnormal may be counted, and when this count reaches a preset number, an abnormality alarm prompt is output, which stabilizes the abnormality determination result.
The preset number can likewise be set according to test results over a large number of samples. The abnormality alarm prompt can be output through the automatic driving system to inform the driver that the road ahead is potentially dangerous and must be driven carefully.
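Step 104 and the frame-counting stabilization can be combined as in the following sketch; both thresholds are placeholders (the patent leaves them to sample testing), and resetting the counter on a normal frame is likewise an assumption:
```python
class AnomalyJudge:
    """Ratio test (step 104) plus multi-frame confirmation.
    ratio_thresh and frame_thresh are illustrative placeholders."""

    def __init__(self, ratio_thresh: float = 0.3, frame_thresh: int = 5):
        self.ratio_thresh = ratio_thresh
        self.frame_thresh = frame_thresh
        self.abnormal_frames = 0

    def update(self, remaining_area: int, safe_area: int) -> bool:
        """Feed one frame's areas; return True when the alarm should fire."""
        if safe_area == 0:            # no safe driving area: nothing to judge
            return False
        if remaining_area / safe_area > self.ratio_thresh:
            self.abnormal_frames += 1
        else:
            self.abnormal_frames = 0  # assumption: reset on a normal frame
        return self.abnormal_frames >= self.frame_thresh
```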
It should be noted that after determining that the road condition is normal, the automatic driving system may further make driving decisions by calculating the relative distance and relative speed between the vehicle and the obstacle ahead.
According to the embodiments of the present application, a non-road-surface area and a safe driving area for safe driving of the vehicle carrying the vehicle-mounted camera are determined from the road image collected by the camera, with a first intersection area existing between the safe driving area and the non-road-surface area; an obstacle area is then determined from the non-road-surface area; a second intersection area between the first intersection area and the obstacle area is determined; the area of the remaining region of the first intersection area excluding the second intersection area is determined; and whether the road surface in the road image is abnormal is determined according to the area of the remaining region and the area of the safe driving area. By judging abnormal road conditions through the determined non-road-surface area and the known obstacle area, effectively combined with the safe driving area required for safe driving, potential dangers can be accurately warned of and the safety of automatic driving improved.
Fig. 2 is a flowchart of another abnormal road condition detection method according to an exemplary embodiment of the present application. Building on the embodiment shown in Fig. 1A, and with reference to the network model structure of the first neural network shown in Fig. 1B, the training of the first neural network is described by way of example. As shown in Fig. 2, the process of training the first neural network may include the following steps:
Step 201: acquire first-class images, second-class images, and third-class images respectively; mark target boxes in the first-class images; label each pixel of the second-class images as road surface or non-road surface; and label each pixel of the third-class images as road surface or non-road surface while also marking target boxes in the third-class images.
In one embodiment, the first neural network is a multi-task loss-supervised model: one task is semantic segmentation and the other is target detection, so the parameters shared by the two tasks are the feature maps output by the convolution network. The first-class images are used to train the target detection task of the convolution network and the target detection network, so target boxes must be marked in them; the second-class images are used to train the semantic segmentation task of the convolution network and the deconvolution network, so each of their pixels must be labeled road surface or non-road surface; and the third-class images are used to train the segmentation effect of the first neural network's output, so each of their pixels must be labeled and target boxes must also be marked.
The second-class images used for the semantic segmentation task require per-pixel category labels, a large labeling workload, whereas the first-class images used for the target detection task require only target boxes, a small workload. The number of second-class images can therefore be reduced appropriately and the number of first-class images increased, reducing the overall manual labeling workload.
Step 202: train the convolution network and the target detection network with the labeled first-class images, then continue training the convolution network and the deconvolution network with the labeled second-class images.
In one embodiment, since the convolution network is shared by the semantic segmentation task and the target detection task, its matrix coefficients need to be adjusted in both training processes; the numbers of training iterations for the two tasks can be set according to practical experience.
and step 203, determining the weight according to the number of th class images and the number of second class images.
In the embodiment, a weight representing a proportion of a loss value of the subsequent target detection network in a loss value of the neural network may be determined according to a ratio of the number of type images to the number of second type images, and the weight is 0-1.
Wherein, the smaller the number of the second type images relative to the th type images for training the semantic segmentation task, the greater the weight should be, as shown in table 1, which is exemplary ratio-to-weight relationship table.
Ratio    Weight
0.5      0.3
1        0.5
2        0.7
3        0.9
Table 1
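A sketch of this lookup follows; the linear interpolation between table rows is an assumption, as the patent lists only the four sample points:
```python
RATIO_TO_WEIGHT = [(0.5, 0.3), (1.0, 0.5), (2.0, 0.7), (3.0, 0.9)]  # Table 1

def detection_loss_weight(n_first: int, n_second: int) -> float:
    """Weight from the ratio of first-class to second-class image counts,
    clamped to the table's range; interpolation between rows is assumed."""
    r = n_first / n_second
    pts = RATIO_TO_WEIGHT
    if r <= pts[0][0]:
        return pts[0][1]
    for (r0, w0), (r1, w1) in zip(pts, pts[1:]):
        if r <= r1:
            return w0 + (w1 - w0) * (r - r0) / (r1 - r0)
    return pts[-1][1]

print(detection_loss_weight(300, 200))  # ratio 1.5 -> 0.6 by interpolation
```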
Step 204: continue training the convolution network, the deconvolution network, and the target detection network with the labeled third-class images, and stop training when the loss value of the first neural network falls below a preset threshold or the number of training iterations reaches a preset count. The loss value of the first neural network is determined by the loss value of the deconvolution network, the loss value of the target detection network, and the weight.
In one embodiment, since the labeled third-class images are used to train the segmentation effect of the first neural network's output, the matrix coefficients of the convolution network, the deconvolution network, and the target detection network all need to be adjusted.
The output segmentation quality can be judged from the loss value of the first neural network: when the loss falls below the preset threshold, the segmentation has reached the desired quality and training stops; alternatively, training stops when the total number of training iterations reaches the preset count. The derivation of the first neural network's loss value is described in detail below.
(1) The deconvolution network typically uses the Softmax loss as its cost function, with loss value:
L_S = -(1/N) · Σ_{n=1}^{N} log p_n(l_n)
where N is the total number of pixels per image in the second-class images, l_n is the ground-truth class of the nth pixel's label (e.g., road surface labeled 0, non-road surface labeled 1), and p_n(l_n) is the predicted probability corresponding to the correct class of the nth pixel.
(2) The target detection network is an auxiliary network, and the MSE (Mean Squared Error) loss may be used as its cost function, with loss value:
L_D = (1/M) · Σ_{m=1}^{M} (G(x)_m − Y_m)²
where x is the input image, G(x) is the predicted box result, Y is the marked target-box result, and M is the number of target boxes marked in the input image.
(3) The total loss value of the first neural network is L = L_S + α · L_D, where α is the weight determined in step 203.
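A sketch of this combined loss under the definitions above (matching predicted boxes to labeled boxes by a fixed ordering is a simplifying assumption):
```python
import torch
import torch.nn.functional as F

def total_loss(seg_logits: torch.Tensor, seg_labels: torch.Tensor,
               pred_boxes: torch.Tensor, gt_boxes: torch.Tensor,
               alpha: float) -> torch.Tensor:
    """L = L_S + alpha * L_D.
    seg_logits: (N, 2, H, W); seg_labels: (N, H, W), 0 = road, 1 = non-road.
    pred_boxes, gt_boxes: (M, 4), assumed to be in matching order."""
    loss_s = F.cross_entropy(seg_logits, seg_labels)  # per-pixel Softmax loss
    loss_d = F.mse_loss(pred_boxes, gt_boxes)         # box regression MSE loss
    return loss_s + alpha * loss_d
```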
This completes the flow shown in Fig. 2, through which training of the first neural network is realized.
Fig. 3 is a hardware architecture diagram of an electronic device according to an exemplary embodiment of the present application. The electronic device includes a communication interface 301, a processor 302, a machine-readable storage medium 303, and a bus 304; the communication interface 301, the processor 302, and the machine-readable storage medium 303 communicate with one another through the bus 304. The processor 302 can execute the abnormal road condition detection method described above by reading and executing the machine-executable instructions, corresponding to the control logic of the method, stored in the machine-readable storage medium 303; the specifics of the method are described in the above embodiments and are not repeated here.
The machine-readable storage medium 303 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
Fig. 4 is a structural diagram of an embodiment of an abnormal road condition detection device according to an exemplary embodiment of the present application. The device may be applied to an electronic device of a vehicle. As shown in Fig. 4, the abnormal road condition detection device includes:
a region determination module 410, configured to determine, from a road image acquired by a vehicle-mounted camera, a non-road-surface area and a safe driving area for safe driving of the vehicle on which the camera is mounted, wherein a first intersection area exists between the safe driving area and the non-road-surface area;
an obstacle determination module 420, configured to determine an obstacle area from the non-road-surface area;
an area determination module 430, configured to determine a second intersection area between the first intersection area and the obstacle area, and determine the area of the remaining region of the first intersection area excluding the second intersection area;
an anomaly determination module 440, configured to determine whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area.
In an optional implementation, the anomaly determination module 440 is specifically configured to determine the ratio of the area of the remaining region to the area of the safe driving area; if the ratio exceeds a preset ratio threshold, determine that the road surface in the road image is abnormal; and if the ratio does not exceed the preset ratio threshold, determine that the road surface in the road image is normal.
In an optional implementation, the region determination module 410 is specifically configured to, in determining the non-road-surface area from the road image acquired by the vehicle-mounted camera, input the road image into a first neural network trained in advance and determine the non-road-surface area from the road image through the convolution network, the deconvolution network, and the target detection network included in the first neural network.
In an alternative implementation, the apparatus further includes (not shown in fig. 4):
the training module is used for respectively acquiring th class images, second class images and third class images, marking a target frame in the th class images, marking a road surface and a non-road surface for each pixel of the second class images, marking a road surface and a non-road surface for each pixel of the third class images, marking a target frame in the third class images, training the convolutional network and the target detection network by using the marked th class images, continuously training the convolutional network and the deconvolution network by using the marked second class images, determining weights according to the number of the class images and the number of the second class images, continuously training the convolutional network, the deconvolution network and the target detection network by using the marked third class images until the loss value of the th neural network is lower than a preset threshold value or the training times reaches a preset number, and stopping the training, wherein the loss value of the th neural network is determined by the loss value of the convolutional network, the loss value of the deconvolution network and the loss value of the target detection network.
In an optional implementation, the obstacle determination module 420 is specifically configured to input the road image into a second neural network, which determines the obstacle area from the road image; or to cut a sub-image corresponding to the non-road-surface area from the road image and input the sub-image into a third neural network, which determines the obstacle area from the sub-image.
In an optional implementation, the region determination module 410 is specifically configured to, in determining the safe driving area for safe driving of the vehicle from the road image, determine the braking distance using the current driving speed of the vehicle, determine the turning radius using the current front-wheel steering angle of the vehicle and the wheelbase of the front and rear wheels, and determine the safe driving area according to the braking distance and the turning radius.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
The above-described apparatus embodiments are merely illustrative: the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements.
The present application further provides a chip, which includes a readable storage medium and a processor; the readable storage medium is used to store machine-executable instructions, and the processor is used to read and execute these instructions to implement the steps of the abnormal road condition detection method of the above embodiments.
This application is intended to cover any variations, uses, or adaptations that follow the general principles of the application, including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. The description and examples are to be regarded as illustrative only, with the true scope and spirit of the application indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a series of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (14)

1. An abnormal road condition detection method, characterized in that the method comprises:
determining, from a road image acquired by a vehicle-mounted camera, a non-road-surface area and a safe driving area for safe driving of the vehicle on which the camera is mounted, wherein a first intersection area exists between the safe driving area and the non-road-surface area;
determining an obstacle area from the road image;
determining a second intersection area between the first intersection area and the obstacle area, and determining the area of the remaining region of the first intersection area excluding the second intersection area;
and determining whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area.
2. The method according to claim 1, wherein determining whether the road surface in the road image is abnormal based on the area of the remaining region and the area of the safe driving area comprises:
determining the ratio of the area of the remaining region to the area of the safe driving area;
if the ratio exceeds a preset ratio threshold, determining that the road surface in the road image is abnormal;
and if the ratio does not exceed the preset ratio threshold, determining that the road surface in the road image is normal.
3. The method of claim 1, wherein determining the non-road-surface area from the road image acquired by the vehicle-mounted camera comprises:
inputting the road image into a first neural network trained in advance, and determining the non-road-surface area from the road image through the convolution network, the deconvolution network, and the target detection network included in the first neural network.
4. The method of claim 3, wherein the first neural network is pre-trained by:
acquiring first-class images, second-class images, and third-class images respectively, marking target boxes in the first-class images, labeling each pixel of the second-class images as road surface or non-road surface, and labeling each pixel of the third-class images as road surface or non-road surface while also marking target boxes in the third-class images;
training the convolution network and the target detection network with the labeled first-class images, and continuing to train the convolution network and the deconvolution network with the labeled second-class images;
determining a weight according to the number of the first-class images and the number of the second-class images;
continuing to train the convolution network, the deconvolution network, and the target detection network with the labeled third-class images, and stopping training when the loss value of the first neural network is lower than a preset threshold or the number of training iterations reaches a preset count;
wherein the loss value of the first neural network is determined by the loss value of the deconvolution network, the loss value of the target detection network, and the weight.
5. The method of claim 1, wherein determining the obstacle area from the road image comprises:
inputting the road image into a second neural network, and determining the obstacle area from the road image by the second neural network; or,
cutting a sub-image corresponding to the non-road-surface area from the road image, inputting the sub-image into a third neural network, and determining the obstacle area from the sub-image by the third neural network.
6. The method of claim 1, wherein determining a safe driving area for safe driving of a vehicle in which the vehicle-mounted camera is located from the road image comprises:
determining a braking distance by using the current running speed of the vehicle;
determining a turning radius by using the current front-wheel steering angle of the vehicle and the wheelbase of the front and rear wheels;
and determining a safe driving area for the vehicle to safely drive according to the braking distance and the turning radius.
7. An abnormal road condition detection device, characterized in that the device comprises:
a region determination module, configured to determine, from a road image acquired by a vehicle-mounted camera, a non-road-surface area and a safe driving area for safe driving of the vehicle on which the camera is mounted, wherein a first intersection area exists between the safe driving area and the non-road-surface area;
an obstacle determination module, configured to determine an obstacle area from the road image;
an area determination module, configured to determine a second intersection area between the first intersection area and the obstacle area, and determine the area of the remaining region of the first intersection area excluding the second intersection area;
an anomaly determination module, configured to determine whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area.
8. The device according to claim 7, characterized in that the anomaly determination module is specifically configured to determine the ratio of the area of the remaining region to the area of the safe driving area; if the ratio exceeds a preset ratio threshold, determine that the road surface in the road image is abnormal; and if the ratio does not exceed the preset ratio threshold, determine that the road surface in the road image is normal.
9. The device according to claim 7, wherein the region determination module is specifically configured to, in determining the non-road-surface area from the road image captured by the vehicle-mounted camera, input the road image into a first neural network trained in advance, and determine the non-road-surface area from the road image through the convolution network, the deconvolution network, and the target detection network included in the first neural network.
10. The apparatus of claim 9, further comprising:
a training module, configured to: acquire first-class images, second-class images, and third-class images respectively; mark target boxes in the first-class images, label each pixel of the second-class images as road surface or non-road surface, and label each pixel of the third-class images as road surface or non-road surface while also marking target boxes in them; train the convolution network and the target detection network with the labeled first-class images, then continue training the convolution network and the deconvolution network with the labeled second-class images; determine the weight according to the number of first-class images and the number of second-class images; and continue training the convolution network, the deconvolution network, and the target detection network with the labeled third-class images until the loss value of the first neural network falls below a preset threshold or the number of training iterations reaches a preset count, then stop training, wherein the loss value of the first neural network is determined by the loss value of the deconvolution network, the loss value of the target detection network, and the weight.
11. The device according to claim 7, wherein the obstacle determination module is configured to input the road image into a second neural network, which determines the obstacle area from the road image; or to cut a sub-image corresponding to the non-road-surface area from the road image and input the sub-image into a third neural network, which determines the obstacle area from the sub-image.
12. The device according to claim 7, wherein the region determination module is specifically configured to, in determining the safe driving area for safe driving of the vehicle carrying the vehicle-mounted camera from the road image, determine a braking distance using the current driving speed of the vehicle; determine a turning radius using the current front-wheel steering angle of the vehicle and the wheelbase of the front and rear wheels; and determine the safe driving area according to the braking distance and the turning radius.
13. An electronic device, characterized in that the device comprises a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor configured to read the machine executable instructions on the readable storage medium and execute the instructions to implement the steps of the method of any of claims 1-6.
14. A chip, characterized by comprising a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor configured to read the machine executable instructions on the readable storage medium and execute the instructions to implement the steps of the method of any of claims 1-6.
CN201810799435.5A 2018-07-19 2018-07-19 Abnormal road condition detection method and device Active CN110738081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810799435.5A CN110738081B (en) 2018-07-19 2018-07-19 Abnormal road condition detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810799435.5A CN110738081B (en) 2018-07-19 2018-07-19 Abnormal road condition detection method and device

Publications (2)

Publication Number Publication Date
CN110738081A true CN110738081A (en) 2020-01-31
CN110738081B CN110738081B (en) 2022-07-29

Family

ID=69235582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810799435.5A Active CN110738081B (en) 2018-07-19 2018-07-19 Abnormal road condition detection method and device

Country Status (1)

Country Link
CN (1) CN110738081B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030114751A1 (en) * 2001-11-30 2003-06-19 Christoph Pedain Device and method for administering a substance
US20150039156A1 (en) * 2012-03-07 2015-02-05 Hitachi Automotive Systems, Ltd. Vehicle Travel Control Apparatus
CN106428000A (en) * 2016-09-07 2017-02-22 清华大学 Vehicle speed control device and method
KR20180058624A (en) 2016-11-24 2018-06-01 고려대학교 산학협력단 Method and apparatus for detecting sudden moving object appearance at vehicle
CN106558058A (en) * 2016-11-29 2017-04-05 北京图森未来科技有限公司 Parted pattern training method, lane segmentation method, control method for vehicle and device
CN107454969A (en) * 2016-12-19 2017-12-08 深圳前海达闼云端智能科技有限公司 Obstacle detection method and device
CN108227712A (en) * 2017-12-29 2018-06-29 北京臻迪科技股份有限公司 The avoidance running method and device of a kind of unmanned boat

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113283273A (en) * 2020-04-17 2021-08-20 上海锐明轨交设备有限公司 Front obstacle real-time detection method and system based on vision technology
CN113283273B (en) * 2020-04-17 2024-05-24 上海锐明轨交设备有限公司 Method and system for detecting front obstacle in real time based on vision technology
CN112639821A (en) * 2020-05-11 2021-04-09 华为技术有限公司 Method and system for detecting vehicle travelable area and automatic driving vehicle adopting system
CN112639821B (en) * 2020-05-11 2021-12-28 华为技术有限公司 Method and system for detecting vehicle travelable area and automatic driving vehicle adopting system
CN111767831A (en) * 2020-06-28 2020-10-13 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
CN111767831B (en) * 2020-06-28 2024-01-12 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
WO2023155903A1 (en) * 2022-02-19 2023-08-24 Huawei Technologies Co., Ltd. Systems and methods for generating road surface semantic segmentation map from sequence of point clouds
CN114721404A (en) * 2022-06-08 2022-07-08 超节点创新科技(深圳)有限公司 Obstacle avoidance method, robot and storage medium
CN114721404B (en) * 2022-06-08 2022-09-13 超节点创新科技(深圳)有限公司 Obstacle avoidance method, robot and storage medium
CN116343176A (en) * 2023-05-30 2023-06-27 济南城市建设集团有限公司 Pavement abnormality monitoring system and monitoring method thereof
CN116343176B (en) * 2023-05-30 2023-08-11 济南城市建设集团有限公司 Pavement abnormality monitoring system and monitoring method thereof

Also Published As

Publication number Publication date
CN110738081B (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN110738081B (en) Abnormal road condition detection method and device
CN110517521B (en) Lane departure early warning method based on road-vehicle fusion perception
US10853673B2 (en) Brake light detection
CN110178167B (en) Intersection violation video identification method based on cooperative relay of cameras
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
CN106647776B (en) Method and device for judging lane changing trend of vehicle and computer storage medium
US10339812B2 (en) Surrounding view camera blockage detection
US11373532B2 (en) Pothole detection system
CN109919074B (en) Vehicle sensing method and device based on visual sensing technology
CN110298307B (en) Abnormal parking real-time detection method based on deep learning
CN110929655B (en) Lane line identification method in driving process, terminal device and storage medium
CN109753841B (en) Lane line identification method and device
CN109635737A (en) Automobile navigation localization method is assisted based on pavement marker line visual identity
CN111091037A (en) Method and device for determining driving information
US20190213427A1 (en) Detection and Validation of Objects from Sequential Images of a Camera
JP7226368B2 (en) Object state identification device
CN108154119B (en) Automatic driving processing method and device based on self-adaptive tracking frame segmentation
CN115240471B (en) Intelligent factory collision avoidance early warning method and system based on image acquisition
CN116524454A (en) Object tracking device, object tracking method, and storage medium
Irshad et al. Real-time lane departure warning system on a lower resource platform
CN112070839A (en) Method and equipment for positioning and ranging rear vehicle transversely and longitudinally
JP6718025B2 (en) Device and method for identifying a small object area around a vehicle
CN113569663B (en) Method for measuring lane deviation of vehicle
CN111611942B (en) Method for extracting and building database by perspective self-adaptive lane skeleton
CN114312838B (en) Control method and device for vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant