CN112633055A - General automatic driving navigation system based on pavement distress detection - Google Patents


Info

Publication number
CN112633055A
CN112633055A (application CN202011138745.6A)
Authority
CN
China
Prior art keywords
module
vehicle
information
road surface
network
Prior art date
Legal status
Pending
Application number
CN202011138745.6A
Other languages
Chinese (zh)
Inventor
陈艳艳
刘卓
严海
陈宁
侯越
王俊涛
杨湛宁
潘硕
曹丹丹
史宏宇
李春杰
吕璇
Current Assignee
HEBEI PROVINCIAL COMMUNICATIONS PLANNING AND DESIGN INSTITUTE
Beijing University of Technology
Original Assignee
HEBEI PROVINCIAL COMMUNICATIONS PLANNING AND DESIGN INSTITUTE
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by HEBEI PROVINCIAL COMMUNICATIONS PLANNING AND DESIGN INSTITUTE and Beijing University of Technology
Priority: CN202011138745.6A
Publication: CN112633055A

Classifications

    • G06V20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06V20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V2201/07 — Target detection
    • B60W40/06 — Road conditions
    • B60W60/001 — Planning or execution of driving tasks
    • G06N3/045 — Combinations of networks
    • G06N3/08 — Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a general automatic driving navigation system based on pavement distress detection. The system hardware is an improvement on the open-source Neo suite, which consists of a vehicle module, a Giraffe module, a Panda module and an Eon module; the improvements combine the functions of the Giraffe and Panda modules so that information from the vehicle's CAN port is read directly over USB, and add heat dissipation to the Eon module to ensure long-term high-speed operation. The software builds on the open-source Openpilot framework and consists of eight parts: the Carinterface, Grain, Boardd, Car, Loggerd, Controlsd, Vision and Radar modules. The invention combines deep-learning image processing with automatic driving strategy guidance to identify pavement distresses and other road-surface objects and select the corresponding strategy, laying an important foundation for the automated, unmanned detection of pavement distresses in the future.

Description

General automatic driving navigation system based on pavement distress detection
Technical Field
The invention belongs to the field of deep-learning image processing and relates to a technology for recognizing road surface conditions and guiding the driving strategy. It is applicable to intelligent pavement distress detection.
Background
As the economy keeps growing, total investment in highway construction in China keeps rising as well. Although road construction in China has achieved good results, road maintenance has not kept pace. Pavement distresses directly affect the service life and serviceability of the pavement and seriously threaten public traffic safety. At present, pavement distresses are mostly detected manually, which wastes time and labor. Timely pavement distress identification has therefore become particularly important.
Owing to a lack of technical support, current distress identification relies mainly on manual on-site inspection or on road-surface detection vehicles driven by humans. Both methods suffer from slow distress detection and result reporting and from heavy human-resource demands. Meanwhile, most current research on pavement distress image recognition uses image sets collected by detection vehicles with cameras pointing vertically at the ground; for automatic driving, such image sets lack realism and generality and thus lack practical application value. Furthermore, because of differences between cameras and environmental conditions, images vary widely, which hampers batch processing of pavement crack images and keeps computation speed and efficiency relatively low.
Disclosure of Invention
The invention aims to combine deep-learning image processing with automatic driving strategy guidance to identify pavement distresses and other road-surface objects and select the corresponding strategy, laying an important foundation for the automated, unmanned detection of pavement distresses in the future. The adopted technical scheme is a general automatic driving navigation system based on pavement distress detection.
Compared with automatic driving systems popular on the market, the system has the following advantages and characteristics: besides traditional road-surface information detection (vehicles, pedestrians), it detects pavement distresses in real time, including cracks and potholes, as well as targets produced by special weather, including standing water and snow, and adjusts the driving strategy accordingly. By combining automatic driving with pavement distress recognition, distresses can be identified and reported quickly, greatly increasing detection and recognition speed while reducing labor.
The system mainly comprises hardware and software.
The hardware is an improvement on the open-source Neo suite, which mainly consists of a vehicle module, a Giraffe module, a Panda module and an Eon module. The improvements combine the functions of the Giraffe and Panda modules so that information from the vehicle's CAN port is read directly over USB, and add heat dissipation to the Eon module to ensure long-term high-speed operation.
The software builds on the open-source Openpilot framework, which mainly comprises eight parts: the Carinterface, Grain, Boardd, Car, Loggerd, Controlsd, Vision and Radar modules. A PID control method is used, and a "PaveInterface" is added to receive control commands. The software also includes a deep-learning network, Faster R-CNN, which is trained to recognize road information and guide adjustment of the vehicle's driving strategy.
The specific system implementation method comprises the following steps:
Hardware part: the Giraffe module converts information read from the vehicle's CAN port to a standard OBD port; the Panda module converts information acquired from the standard OBD port to a USB port readable by a microcomputer or a mobile terminal; the Eon module receives the information sent from the Panda's USB port through a USB Type-C port. The vehicle's control flow runs opposite to this information flow: control decisions are returned to the vehicle through the Eon, Panda and Giraffe modules, realizing automatic control of the vehicle.
Software part: the Carinterface module is mainly responsible for interfacing with the vehicle. The Boardd module converts directly between the vehicle's CAN (Controller Area Network) information and a data protocol defined by the researchers, with the conversion standard specified by Cereal. The Controlsd module is the core control component, receiving data from the vision and radar modules. Lateral and longitudinal vehicle control is implemented with a PID (Proportional-Integral-Derivative) control method. A "PaveInterface" connected to the Controlsd module is added to receive control parameters (control commands) according to actual road-surface or traffic conditions. The Loggerd module is responsible for receiving and sending data. The software realizes three core functions: vehicle information reception, vehicle control, and vehicle travel control.
The system comprises the following concrete implementation steps:
Step one: universal automatic navigation system
The general automatic navigation system is implemented in two parts: hardware and software.
Hardware part:
the present invention employs an open source Neo suite framework, as shown in fig. 1. And the equipment is improved to meet the long-term working requirement.
The Neo suite framework mainly consists of four parts: a vehicle module, a Giraffe module, a Panda module and an Eon module. Specifically, the Giraffe module converts information read from the vehicle's CAN port to a standard OBD port; the Panda module converts information acquired from the standard OBD port to a USB port readable by a microcomputer or a mobile terminal; the Eon module receives the information sent from the Panda module's USB port through a USB Type-C port. The vehicle's control flow runs opposite to this information flow: control decisions are returned to the vehicle through the Eon, Panda and Giraffe modules, realizing automatic control of the vehicle.
First, the functions of the Giraffe and Panda modules are combined so that information from the vehicle's CAN port is exported directly over USB, as shown in Fig. 2; second, the Eon module is improved with an added heat-dissipation system to guarantee long-term, efficient autonomous operation, as shown in Fig. 3.
A software part:
the invention is based on an open source Openpilot software framework. The core framework of the software system is shown in fig. 4.
The software system mainly comprises the Carinterface, Grain, Boardd, Car, Loggerd, Controlsd, Vision and Radar modules. The Carinterface module is mainly responsible for interfacing with the vehicle. The Boardd module converts directly between the vehicle's CAN (Controller Area Network) information and a data protocol defined by the researchers, with the conversion standard specified by Cereal. The Controlsd module is the core control component, receiving data from the vision and radar modules.
Lateral and longitudinal vehicle control is implemented with a PID (Proportional-Integral-Derivative) control method. To further control the driving strategy of the driverless vehicle, the invention adds a "PaveInterface" connected to the Controlsd module to receive control parameters (control commands) according to actual road-surface or traffic conditions; this is presented below in the section on autonomous driving strategy adjustment based on road-surface conditions.
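The PID control loop mentioned above can be sketched minimally as follows; the `PID` class, gains, and the speed-tracking usage are illustrative assumptions, not openpilot's actual controller implementation.

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch, not openpilot's code)."""

    def __init__(self, kp, ki, kd, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        # No derivative term on the very first sample.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example of longitudinal control: track a 20 m/s target speed from 18 m/s.
speed_pid = PID(kp=0.5, ki=0.1, kd=0.05)
throttle = speed_pid.update(setpoint=20.0, measurement=18.0)
```

The same loop, with a heading or lateral-offset error as input, would serve as the lateral controller.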
The Loggerd module is responsible for data transmission and reception throughout the vehicle network. The core functions of the software system comprise three parts: vehicle information reception, vehicle control, and vehicle travel control. Specifically:
1. Vehicle information reception, i.e., acquisition of comprehensive vehicle information.
To realize automatic driving, the vehicle's static parameters and dynamic operating information must first be acquired. Compared with the traditional approach of collecting vehicle information through the vehicle's OBD port, the software framework of the invention reads vehicle data directly from the CAN port using a DBC file, increasing the number of available signals from the commonly used 27 to 84, as shown in Fig. 5, so that more information is obtained.
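DBC-based CAN decoding amounts to extracting scaled bit fields from 8-byte CAN frames. The sketch below shows the idea with a hypothetical signal layout; the byte offset and the 0.01 km/h scale are invented for illustration and come from no real vehicle's DBC file.

```python
import struct

def decode_wheel_speed(can_data: bytes) -> float:
    """Decode a hypothetical wheel-speed signal: 16-bit little-endian
    unsigned value in bytes 0-1, scale factor 0.01 km/h (invented layout)."""
    raw, = struct.unpack_from("<H", can_data, 0)
    return raw * 0.01

# Raw value 0x2710 = 10000 -> 100.00 km/h under the assumed scale.
frame = bytes([0x10, 0x27, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
speed = decode_wheel_speed(frame)
```

A real DBC file would supply the message ID, bit position, scale, and offset for each of the 84 signals mentioned above.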
Meanwhile, the invention uses the open-source Openpilot framework, which supports automatic driving on some 60 vehicle models, and further develops an information acquisition module on top of it, so that vehicle information can be acquired both through the in-vehicle local area network and remotely in real time over a 4G wireless network. This improves the timeliness and comprehensiveness of vehicle information and running-state data.
2. Vehicle remote control
The invention designs a vehicle remote control module, as shown in Fig. 6, which currently transmits control signals over a 4G network. The roughly 100 ms transmit-receive delay of this approach rules out direct real-time vehicle control, but the build-out of China's 5G network since 2019 resolves this problem.
3. Automatic cruise task of road surface detection vehicle
The Vision module trained by Openpilot serves as the auto-cruise module; its main function is to imitate the driver's behavior so that the vehicle stays in its lane and drives at the desired speed. In Openpilot, a generative adversarial network (GAN) simulates driving behavior, learning human driving from inputs such as video, throttle, brake and steering signals. As shown in Fig. 7, detection and automatic driving are realized by combining the PID control method with the generated driving behavior according to road conditions. When the system detects none of the road or traffic conditions mentioned in step two, the vehicle cruises automatically.
Step two: road surface condition recognition
The invention adopts the candidate-region-based Faster R-CNN algorithm to detect road objects automatically. Faster R-CNN improves on Fast R-CNN by using a convolutional neural network to generate candidate regions directly. The algorithm maintains detection accuracy while saving a large amount of region-selection time.
The basic structure of Faster R-CNN is shown in Fig. 8. By function it divides into four modules: the convolutional neural network (Conv layers), the region proposal network (RPN), RoI pooling, and the classifier.
Convolutional neural network. A convolutional neural network (Conv layers) extracts image features, taking the entire image as input and outputting the extracted features. Images processed by an autonomous vehicle are highly complex, with varied, distinctive, highly recognizable targets, so the ImageNet-pretrained VGG-16 model serves as the base network for the classification task. The input size of VGG is 1280 × 720 × 3. The convolutional layer's output serves as the Conv-layer feature map.
Region proposal network. The extracted convolutional feature map is processed by the region proposal network, which finds regions likely to contain objects. It is a convolutional network that takes the Conv-layer feature map as input and outputs proposed regions. In the RPN, anchors are fixed-size boxes placed on the image at a variety of sizes and scales, serving as reference boxes for the first prediction of object locations. The classification layer outputs the probability that each anchor is background or foreground; the regression layer outputs location information for the anchor box matching the predicted object.
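Anchor generation as used in an RPN can be illustrated as follows; the base size, scales, and aspect ratios mirror common Faster R-CNN defaults, not values stated in this patent.

```python
def generate_anchors(base_size=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Generate RPN-style anchor boxes for one feature-map cell, centred at
    the origin. Returns (x1, y1, x2, y2) tuples; ratio is height/width."""
    anchors = []
    for scale in scales:
        for ratio in ratios:
            area = (base_size * scale) ** 2      # keep area fixed per scale
            w = (area / ratio) ** 0.5            # width so that h/w == ratio
            h = w * ratio
            anchors.append((-w / 2, -h / 2, w / 2, h / 2))
    return anchors
```

At inference, these 9 boxes are replicated at every feature-map location (shifted by the network stride), which is how the RPN covers the whole image with reference boxes.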
RoI pooling. RoI pooling converts inputs of different sizes into fixed-length outputs. Its inputs are the candidate regions; the cropped feature map is resized to 14 × 14 × 512 by interpolation, and a fixed-size 7 × 7 × 512 feature map is output after max pooling.
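The fixed-size pooling idea can be shown with a toy single-channel RoI max-pooling routine; this is a simplification of the operation above, and the 2 × 2 output size is chosen only to keep the example small.

```python
def roi_max_pool(feature, roi, out_h=2, out_w=2):
    """Max-pool region roi = (x1, y1, x2, y2) of a 2-D feature map into a
    fixed out_h x out_w grid (toy, single-channel version of RoI pooling)."""
    x1, y1, x2, y2 = roi
    h, w = y2 - y1, x2 - x1
    out = []
    for i in range(out_h):
        row = []
        y_lo = y1 + (i * h) // out_h
        y_hi = max(y1 + ((i + 1) * h) // out_h, y_lo + 1)   # at least one cell
        for j in range(out_w):
            x_lo = x1 + (j * w) // out_w
            x_hi = max(x1 + ((j + 1) * w) // out_w, x_lo + 1)
            row.append(max(feature[y][x]
                           for y in range(y_lo, y_hi)
                           for x in range(x_lo, x_hi)))
        out.append(row)
    return out


# A 4x4 map pooled over its full extent into 2x2: each output cell is the
# maximum of one quadrant.
pooled = roi_max_pool([[1, 2, 3, 4], [5, 6, 7, 8],
                       [9, 10, 11, 12], [13, 14, 15, 16]], (0, 0, 4, 4))
```

Whatever the size of the candidate region, the output grid is fixed, which is what lets the fully connected head accept proposals of arbitrary shape.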
Classification and regression. This layer outputs the category of each candidate region and its exact location in the image; its role in Faster R-CNN is similar to the fully connected layers of a conventional convolutional neural network. The two output layers contain n + 1 and 4n neurons respectively: one scores each candidate over the n object classes plus background, and the other outputs the positions of the n class-specific prediction boxes. Because this head is consistent with the RPN, weight sharing between the two networks greatly improves computation speed.
Training uses stochastic gradient descent (SGD) with the Momentum optimization algorithm, with the relevant parameters set accordingly. The invention recognizes 13 classes of road characteristic information, as shown in Fig. 9. The detection targets fall into three categories: common (typical) unmanned-driving targets, pavement distresses, and targets reflecting climate influence. Common unmanned-driving targets include pedestrians, bicycles, road markings, large vehicles and small vehicles; pavement distress targets include potholes, cracks, large-area repairs, crack repairs, damaged markings and manhole covers; climate-related targets include standing water and snow.
For typical unmanned-driving detection, pedestrians and bicycles are the key targets: they are highly vulnerable road users compared with ordinary vehicles, so timely detection and response are the key to safety. Small and large vehicles are also important participants in traffic and guide the unmanned vehicle toward better route planning.
Among pavement distresses, potholes and cracks requiring timely asphalt repair are the identification priorities, as they affect the safety and comfort of the autonomous vehicle. Statistics on road markings and damaged markings can be used to evaluate marking condition under normal operation, while crack repairs and large-area repairs are recorded to analyze the condition of the asphalt pavement. Meanwhile, on an actual road surface, because of the stiffness difference between a manhole cover and the asphalt mixture and the stress conduction at their joint, the manhole cover is an unstable part of the pavement structure and deserves attention.
For climate-related detection, standing water and snow are the main objects of concern. Rain and snow on an asphalt pavement not only reduce its structural stability, leading to distresses such as potholes, but also reduce its skid resistance and increase the probability of traffic accidents.
Step three: automatic driving strategy adjustment based on road surface condition
The method is specifically divided into the following 3 parts:
1. Determining the actual effective sight distance
The automatic driving system for pavement distress detection proposed by the invention has two design goals: fully automatic detection of pavement distresses, and the comfort and safety of the autonomous vehicle. The proposed driving strategy therefore differs from other widely used strategies that consider only traffic flow and ignore actual road conditions. In this work, strategy adjustment depends largely on the video detection sensor's detection and judgment of various road conditions; detection precision depends on the sensor's effective sight range, which in turn is strongly affected by where the sensor is mounted. Fig. 13 shows the layout of the video detection sensors on the detection vehicle and the corresponding sight distances.
As shown in Fig. 12(a), the video detection sensor's maximum detection sight distance is L_v, the angle between the sight line and the horizontal is θ_v, and the sensor mounting height is h_v. Since the sensor's sight line intersects the ground, the effective sight distance L_d is not the sensor's maximum sight distance but:

L_d = h_v / tan θ_v    (1)
As shown in Fig. 12(b), when the sensor's sight line does not intersect the ground (although its extension does), the effective sight distance L_d is obtained from the sensor's maximum sight distance:

L_d = L_v · cos θ_v    (2)
As shown in Fig. 12, the effective sight distance L_d thus depends on the sensor mounting height h_v, the maximum sight distance L_v, and the angle θ_v between the sight line and the horizontal, combining the two cases:

L_d = h_v / tan θ_v, if h_v / sin θ_v ≤ L_v;  L_d = L_v · cos θ_v, otherwise    (3)
According to layout experience with expressway video monitoring equipment, the actual effective sight range is 80% of the video sensor's maximum. Let L′_d be the actual maximum sight distance:

L′_d = 0.8 L_d    (4)

Considering occlusion of the sight line by vehicles and environmental factors, the actual effective sight distance is set to 0.8 L′_d.
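Equations (1), (2) and (4) can be combined into one helper. Reading equation (2) as the horizontal projection L_v·cos θ_v is an assumption where the source text is ambiguous; the function name and degree-valued angle are likewise illustrative.

```python
import math

def effective_sight_distance(h_v, L_v, theta_v_deg):
    """Actual maximum sight distance L'_d per equations (1), (2) and (4).
    h_v: sensor mounting height; L_v: sensor's maximum sight distance;
    theta_v_deg: angle of the sight line below the horizontal, in degrees."""
    theta = math.radians(theta_v_deg)
    reach = h_v / math.sin(theta)          # slant length needed to hit the ground
    if reach <= L_v:                       # Fig. 12(a): sight line reaches the ground
        L_d = h_v / math.tan(theta)        # equation (1)
    else:                                  # Fig. 12(b): assumed horizontal projection
        L_d = L_v * math.cos(theta)        # equation (2), as read here
    # Equation (4): derate to 80% per expressway layout experience.
    # (The text applies a further 0.8 occlusion factor on top of this.)
    return 0.8 * L_d
```

For example, a sensor 2 m high looking down at 30° with a 10 m range hits the ground within range, so L′_d = 0.8 · 2/tan 30° ≈ 2.77 m.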
2. Altering automatic driving behavior
The road-surface condition recognition part detects 13 road-surface and traffic objects: road markings, damaged markings, potholes, cracks, crack repairs, large-area repairs, pedestrians, large vehicles, small vehicles, bicycles, manhole covers, snow, and standing water. According to the required driving behavior, they fall into five types:
1) Normal driving
When road markings, manhole covers, or normal road-surface conditions are detected, the vehicle performs normal automatic driving. Such conditions do not affect driving behavior: the vehicle travels at the set speed and path, and the video detection result leaves driving behavior unchanged. In this case, no inspection alarm needs to be reported to the road authority.
2) Small range deceleration
When the detection vehicle detects damaged road markings, large-area repairs, crack repairs, or small vehicles, it decelerates slightly and performs more detailed detection. Under normal driving, with the vehicle keeping to its lane, damaged markings have no obvious influence on driving behavior; large-area repairs and crack repairs have largely relieved the underlying distresses and barely affect normal driving; for small vehicles, a small deceleration is safe. None of these road-surface or traffic conditions normally affects the comfort or safety of the detection vehicle significantly, so no major change of driving behavior is required. However, damaged markings, large-area repairs, and crack closures do need careful inspection: provided local traffic laws and regulations are met, slightly reducing speed improves the frequency and quality of the road-surface images captured by an ordinary camera. In this case, a record of the detailed information on damaged markings, large-area repairs, and crack repairs is kept and then reported to the road authority.
When the video detection sensor detects a damaged marking, a large-area repair, or a crack repair, the vehicle speed is v_0. Set a deceleration target V_L1; the minimum speed limit of the road section is V_L, and the length of the deceleration section is L′_d. For the process of decelerating from speed v_0 at point A to speed v_t at point B, as shown in Fig. 13, assuming uniform deceleration:

v_t² = v_0² + 2 a L′_d    (5)

Thus the average acceleration a of the deceleration section is obtained:

a = (v_t² − v_0²) / (2 L′_d)    (6)

where v_t satisfies v_t = max{V_L1, V_L}.
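The deceleration rule above amounts to uniform-deceleration kinematics; a minimal sketch follows, with the function and parameter names being illustrative.

```python
def deceleration_strategy(v0, v_target, v_limit, L_d):
    """Target speed and average acceleration for a deceleration section,
    assuming uniform deceleration: v_t^2 = v_0^2 + 2*a*L_d' (cf. (5)-(6)).
    v0: current speed; v_target: deceleration target V_L1;
    v_limit: minimum speed limit V_L; L_d: section length L_d'."""
    v_t = max(v_target, v_limit)                 # v_t = max{V_L1, V_L}
    a = (v_t ** 2 - v0 ** 2) / (2 * L_d)         # equation (6); negative when slowing
    return v_t, a


# e.g. slow from 20 m/s to max(10, 8) = 10 m/s over a 100 m section.
v_t, a = deceleration_strategy(20.0, 10.0, 8.0, 100.0)
```

The same helper serves large-range deceleration by passing V_L2 in place of V_L1.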
3) Large range of deceleration
The third type is a relatively large deceleration, the strategy for detected road-surface cracks, snow, and standing water. A road-surface crack may cause slight vibration of the detection vehicle, affecting driving behavior but not seriously; snow and standing water, however, pose a serious threat to normal driving. The large-range deceleration process resembles the small-range one. Decelerating before passing over a road-surface crack improves driving safety on the one hand, and on the other hand lets the ordinary camera raise its shooting frequency and quality at a modest cost, so that more crack information can be collected. In this process the average acceleration of the deceleration section is again calculated from equation (6), here with v_t = max{V_L2, V_L}, where V_L2 is the preset speed value for large-range deceleration.
4) Synchronous speed reduction and lane change
Fourth, synchronized deceleration and lane change. A road-surface pothole causes obvious vibration of the detection vehicle and thus seriously affects its driving behavior; especially at high speed, unpredictable lateral movement and vertical jolts can endanger the detection vehicle. Therefore, when a pothole is detected, the detection vehicle performs synchronized deceleration and lane-change maneuvers to avoid it and improve driving safety, as shown in Fig. 14.
When the detection vehicle at position A detects a pothole at position C, it moves to the outer lane through a synchronized steering and deceleration operation, avoiding the risk posed by the pothole. The acceleration is calculated as in equation (6). The vehicle's average steering angular velocity determines the steering amplitude and speed and belongs to the vehicle's lateral driving behavior; it is calculated as follows:
ω = θ / t    (7)

where ω is the average steering angular velocity, θ is the total steering angle of the vehicle, and t is the duration of the maneuver. Since t = (v_t − v_0) / a over the deceleration section, we therefore have:

ω = θ a / (v_t − v_0)    (8)
in this case, detailed information of the road potholes needs to be quickly alarmed/immediately reported to the road authorities to ensure public safety.
5) Manual driving
When pedestrians, bicycles, or oversize vehicles are detected, the driving strategy switches to manual driving to protect pedestrians and ensure the safety of the detection vehicle on the road.
3. Algorithm
The main flow of the autonomous driving behavior process based on the intelligent detection of the road surface condition is as follows, as shown in fig. 15:
Step 1: calculate the maximum effective detection sight distance of the video detector according to the layout position and angle of the sensors on the detection vehicle, and determine the sight-distance basis for driving-behavior classification;
Step 2: divide the driving behaviors into 5 types according to the 13 road surface and traffic conditions, namely normal driving, small-range deceleration, large-range deceleration, synchronous deceleration with lane change, and manual driving. The corresponding control parameters are sent to the paveInterface to control the automatic driving system. Under normal driving, the detection vehicle runs at the set speed along the set path. For small-range and larger-range deceleration, the vehicle stays in the same lane, and its average acceleration is calculated from the difference against the preset deceleration threshold, so that fine-grained information on the road surface diseases can be collected. Synchronous deceleration with lane change is adopted to avoid serious diseases such as road surface potholes while still collecting their detailed information; the deceleration process is consistent with that of the fourth strategy, and the steering behavior depends mainly on the initial speed, the speed change, and the sight distance. For complex traffic conditions such as pedestrians, bicycles, and oversize vehicles, the automatic driving strategy is switched to manual driving.
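The five-way classification of step 2 can be sketched as a dispatch table. The class names, the mapping, and the conservativeness ordering below are illustrative assumptions, not taken verbatim from the patent:

```python
# Strategies ordered from least to most conservative.
ORDER = ["normal", "small_deceleration", "large_deceleration",
         "decelerate_and_change_lane", "manual"]

# Hypothetical mapping from detected road/traffic classes to strategies.
STRATEGY = {
    "damaged_marking": "small_deceleration",
    "manhole_cover": "small_deceleration",
    "crack": "large_deceleration",
    "snow": "large_deceleration",
    "water": "large_deceleration",
    "pothole": "decelerate_and_change_lane",
    "pedestrian": "manual",
    "bicycle": "manual",
    "oversize_vehicle": "manual",
}

def choose_strategy(detected_classes):
    """Pick the most conservative strategy among all detected conditions."""
    strategies = [STRATEGY.get(c, "normal") for c in detected_classes]
    strategies.append("normal")  # default when nothing is detected
    return max(strategies, key=ORDER.index)
```

A detection frame containing both a crack and a pedestrian, for example, resolves to manual driving, the most conservative of the two.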
The present invention studies changes in automatic driving behavior, complementing the current widely used research that is based mainly on traffic information. Unlike existing automatic driving technology, which identifies only traffic signals and traffic participants, the invention also considers the road surface condition, so that the automatic driving strategy can be studied comprehensively by combining traffic information with road surface conditions, making the research broader and more accurate. Meanwhile, the invention realizes data transmission with 5G technology, which is fast and real-time.
In addition, the suggested deceleration/lane change describes changes in driving strategy only; actual operation must strictly comply with local traffic regulations.
Drawings
FIG. 1 is a hardware block diagram of a Neo autopilot system.
FIG. 2 is a functional integration diagram of Giraffe and Panda.
FIG. 3 is a diagram of the Eon with the added heat-dissipation system.
FIG. 4 is a core framework diagram of the Openpilot software.
FIG. 5 shows vehicle information collection: (a) through the OBD port; (b) through the CAN port.
FIG. 6 is a schematic diagram of a vehicle information remote collection module.
FIG. 7 is a diagram of dynamic control for automatic road surface detection.
FIG. 8 is a schematic diagram of the basic structure of Faster R-CNN.
FIG. 9 shows sample images of the labeled objects: (a) road marking; (b) damaged marking; (c) repair; (d) pothole; (e) manhole cover; (f) crack; (g) repaired crack; (h) pedestrian; (i) rider; (j) large vehicle; (k) small vehicle; (l) snow; (m) rain.
FIG. 10 shows the vehicle operating-parameter information collection tests: (a) door opening/closing test; (b) seat-belt buckling/unbuckling test; (c) left-turn signal test; (d) accelerator signal test; (e) brake signal test; (f) steering signal test; (g) gear-shift signal test.
FIG. 11 is a schematic diagram of a vehicle remote control test.
FIG. 12 shows the layout of the video detection sensors on the detection vehicle and the corresponding sight-distance map.
FIG. 13 is a diagram of the small-range deceleration scenario.
FIG. 14 is a schematic view of a synchronous deceleration and lane change scenario.
FIG. 15 is a flow chart of automatic driving behavior based on intelligent detection of road surface conditions.
Detailed Description
We performed relevant tests on the software part of the general automatic driving system.
For the vehicle information receiving part, the invention uses the open-source automatic driving framework Openpilot and further develops an information acquisition module based on it. Information acquisition tests were carried out on vehicle operating parameters such as door opening/closing, seat-belt buckling/unbuckling, turn signals, accelerator signals, brake signals, steering signals, and gear-shift signals. The results, shown in FIG. 10, indicate that this part allows the user to acquire the vehicle information and running state quickly and comprehensively.
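The CAN-side decoding such an acquisition module performs can be illustrated with a hand-written decoder. The frame layouts below (speed in bytes 0-1, status bits in byte 0) are hypothetical stand-ins for what the vehicle's actual DBC file would specify:

```python
import struct

def decode_speed_frame(data: bytes) -> float:
    """Hypothetical wheel-speed frame: bytes 0-1 = speed * 100 (km/h), big-endian."""
    raw, = struct.unpack_from(">H", data, 0)
    return raw / 100.0

def decode_status_frame(data: bytes) -> dict:
    """Hypothetical status frame: bit 0 = door open, bit 1 = seat belt unbuckled."""
    flags = data[0]
    return {"door_open": bool(flags & 0x01),
            "belt_unbuckled": bool(flags & 0x02)}

# Example 8-byte CAN payloads.
speed = decode_speed_frame(bytes([0x12, 0x34, 0, 0, 0, 0, 0, 0]))  # 4660/100 = 46.6 km/h
status = decode_status_frame(bytes([0x01, 0, 0, 0, 0, 0, 0, 0]))   # door open, belt buckled
```

In practice the byte offsets, scaling factors, and bit positions for each arbitration ID come from the DBC file mentioned in the claims, rather than being hard-coded.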
For the vehicle remote control part, the remote control function was tested by sending left/right-turn information from a notebook computer. FIG. 11 shows the test procedure for remote control of the vehicle.
For the road surface identification part of the invention, the test case uses the Berkeley DeepDrive video data set (BDDV), an open-source automatic driving data set shared by the Berkeley DeepDrive research group. The videos are collected from the front camera of a vehicle, and each clip is about 40-50 seconds long. Frames were captured from the videos, and images with a dense distribution of detection targets were selected as the original data set.
A total of 10,020 pictures were labeled, containing all 13 object classes shown in Table 1. The original data set was divided into a training set and a test set at a ratio of 9:1; the training set has 9,019 pictures and the test set 901, as shown in Table 2.
TABLE 1 Labeling summary
(Table 1 appears only as images in the original publication; it summarizes the label counts for the 13 classes: road marking, damaged marking, repair, pothole, manhole cover, crack, repaired crack, pedestrian, rider, large vehicle, small vehicle, snow, and rain.)
TABLE 2 data partitioning
Training set: 9,019 pictures; test set: 901 pictures.
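The 9:1 division described above can be sketched as follows; the seed is an arbitrary choice for reproducibility, not a value from the patent:

```python
import random

def split_dataset(items, train_ratio=0.9, seed=0):
    """Shuffle a labeled image list and split it into training and test sets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = round(len(items) * train_ratio)
    return items[:cut], items[cut:]

train, test = split_dataset(range(10020))
# An exact 9:1 split of 10,020 images gives 9,018 / 1,002.
```

This shows the mechanism only; the exact per-set counts reported in Table 2 differ slightly from an exact 9:1 split.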
In the training of the invention, the momentum of the Momentum optimization algorithm is set to 0.9, the mini-batch size to 256, and the learning rate to 0.001; after 30,000 training iterations, the learning rate is reduced to 0.0001. All models of the invention are implemented with TensorFlow.
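The step learning-rate schedule described above can be written out directly (in plain Python for illustration; the patent states the models themselves are implemented in TensorFlow):

```python
def learning_rate(step: int,
                  base_lr: float = 1e-3,
                  decay_at: int = 30000,
                  decayed_lr: float = 1e-4) -> float:
    """Step schedule from the text: 0.001 until 30,000 steps, then 0.0001."""
    return base_lr if step < decay_at else decayed_lr

MOMENTUM = 0.9     # Momentum optimizer coefficient
BATCH_SIZE = 256   # mini-batch size
```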
After 100,000 training iterations, taking 18 hours in total, the Faster R-CNN performs well on the test set and meets the basic requirements.

Claims (10)

1. A general automatic driving navigation system based on road surface disease detection, characterized in that it comprises hardware and software;
the hardware is an improvement based on the open-source Neo suite, which consists of a vehicle module, a Giraffe module, a Panda module and an Eon module; the functions of the Giraffe module and the Panda module are combined so that the information of the vehicle CAN port is led out directly from a USB port, and a heat-dissipation function is added to the Eon module;
the software comprises the open-source Openpilot software framework, consisting of eight parts, namely a Car Interface module, a Cereal module, a Boardd module, a Card module, a Loggerd module, a Controlsd module, a Vision module and a Radar module; a PID control method is used, and a 'paveInterface' is added to receive control commands; the software further comprises a deep learning network, Faster R-CNN, which through training realizes identification of road information and guides adjustment of the vehicle's driving strategy.
2. The system of claim 1, wherein: the Giraffe module converts information read from the vehicle CAN port to a standard OBD port; the Panda module converts information acquired from the standard OBD port into USB-port information readable by a microcomputer or a mobile terminal; the Eon module receives the information sent by the Panda USB port through a Type-C port; the control flow of the vehicle is opposite to the information flow, and control decision information is returned to the vehicle through the Eon, Panda and Giraffe modules, realizing automatic control of the vehicle.
3. The system of claim 1, wherein: the Car Interface module is responsible for docking with the vehicle; the Boardd module realizes direct conversion between the vehicle CAN information and a defined data protocol, the conversion standard being specified by Cereal; the Controlsd module is the core component of the controller, receiving data from the video and the radar and using a PID control method to realize lateral and longitudinal control of the vehicle; a 'paveInterface' connected to the Controlsd module is added to receive control parameters according to the actual road surface or traffic conditions; the Loggerd module is responsible for receiving and transmitting data; the software system thus realizes the functions of vehicle information reception, vehicle control and vehicle running control.
4. The system of claim 1, wherein: to realize automatic driving of the vehicle, static parameters and dynamic operation information of the vehicle need to be acquired; the vehicle data information is read directly through the CAN port using a DBC file.
5. The system of claim 1, wherein: the information acquisition module based on the Openpilot framework can acquire vehicle information through the on-board local area network system, and can also remotely acquire real-time information through a 4G wireless network.
6. The system of claim 1, wherein: a candidate-region-based Faster R-CNN algorithm is adopted to realize automatic detection of road objects; the basic structure of Faster R-CNN can be divided by function into four modules, namely the convolutional neural network (CONV layers), the region proposal network (RPN), RoI pooling and the classifier.
7. The system of claim 6, wherein: the CONV layers of the convolutional neural network extract the features of the image, taking the whole image as input and the extracted features as output; a VGG-16 model pre-trained on ImageNet is adopted as the base network for the classification task; the output of the convolutional layers serves as the output feature map of the Conv layer.
8. The system of claim 7, wherein: the RPN processes the extracted convolutional feature map to find regions that may contain a predefined number of objects; it is a convolutional network that takes the feature map from the Conv layer as input and outputs proposed regions; in the region proposal network, anchors are fixed-size boxes placed on the picture at several different sizes and scales, serving as reference boxes for the first prediction of object positions; the output of the classification layer is the probability that each anchor is foreground or background; the regression layer of the region proposal network outputs the location information of the anchor boxes matching the predicted objects.
9. The system of claim 8, wherein: RoI pooling is performed; RoI pooling converts inputs of different sizes into outputs of fixed length; the inputs are the different candidate regions, the cropped feature map is fixed to 14 × 14 × 512 using interpolation, and a fixed-size 7 × 7 × 512 feature map is output after max pooling.
10. The system of claim 9, wherein: the outputs of the classification and regression layers of the classifier are the category to which a candidate region belongs and its exact position in the image; its role in Faster R-CNN is similar to the fully connected layers of a conventional convolutional neural network; for n object classes, the two output layers contain n + 1 and 4n neurons respectively.
CN202011138745.6A 2020-10-22 2020-10-22 General automatic driving navigation system based on pavement disease detection Pending CN112633055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011138745.6A CN112633055A (en) 2020-10-22 2020-10-22 General automatic driving navigation system based on pavement disease detection


Publications (1)

Publication Number Publication Date
CN112633055A true CN112633055A (en) 2021-04-09

Family

ID=75302891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011138745.6A Pending CN112633055A (en) 2020-10-22 2020-10-22 General automatic driving navigation system based on pavement disease detection

Country Status (1)

Country Link
CN (1) CN112633055A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096517A (en) * 2021-04-13 2021-07-09 北京工业大学 Pavement damage intelligent detection trolley and sand table display system based on 5G and automatic driving



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination