CN111814667B - Intelligent road condition identification method - Google Patents

Intelligent road condition identification method

Info

Publication number
CN111814667B
CN111814667B (application CN202010649661.2A)
Authority
CN
China
Prior art keywords
neural network
road condition
vehicle
training
acceleration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010649661.2A
Other languages
Chinese (zh)
Other versions
CN111814667A (en)
Inventor
方亚东
杨勤
王洪添
徐宏伟
宋设
姚民伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Cloud Service Information Technology Co Ltd
Original Assignee
Shandong Inspur Cloud Service Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Cloud Service Information Technology Co Ltd
Priority to CN202010649661.2A
Publication of CN111814667A
Application granted
Publication of CN111814667B
Current legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent road condition identification method in the technical field of intelligent driving. The method collects the instantaneous speed of a vehicle together with image information and depth distance information of the road conditions, constructs training and verification sets for a convolutional neural network from the collected information, trains the convolutional neural network model on those sets while continuously optimizing its parameters, and applies the continuously optimized neural network parameter model to the acceleration and deceleration of automatic driving, so that the acceleration and deceleration of the vehicle are adaptively controlled during real-time depth image recognition. This depth-image-based convolutional neural network technology provides an optimized solution for automatic driving in terms of both road condition recognition and acceleration and deceleration, and effectively improves the speed control precision and safety of automatic driving.

Description

Intelligent road condition identification method
Technical Field
The invention relates to the technical field of intelligent driving, in particular to an intelligent road condition identification method.
Background
Against the background of the explosive development of artificial intelligence and machine learning, automatic driving technology has emerged. SAE (the Society of Automotive Engineers) classifies automatic driving into levels L0 to L5. The mainstream automatic driving currently on the market is at levels L1 and L2; limited by the pace of development of intelligent recognition technology and the complexity of road scenes, L3 automatic driving is not yet common in mass-produced vehicles. L3 automatic driving requires that the vehicle can perform most on-board machine operations, handle most situations, and actively complete a series of actions such as acceleration and deceleration, lane changing, overtaking, left and right turns, and traffic light recognition, so the requirements on road condition recognition and speed control are high.
Disclosure of Invention
Aiming at the above shortcomings, the technical task of the invention is to provide an intelligent road condition identification method that can effectively improve the speed control precision and safety of L3-level automatic driving.
The technical scheme adopted by the invention for solving the technical problems is as follows:
An intelligent road condition identification method comprises: collecting the instantaneous speed of a vehicle; collecting image information and depth distance information of the road conditions; constructing training and verification sets for a convolutional neural network from the collected information; training the convolutional neural network model on the training and verification sets while continuously optimizing its parameters; and applying the continuously optimized neural network parameter model to the acceleration and deceleration of automatic driving, so that the acceleration and deceleration of the vehicle are adaptively controlled during real-time depth image recognition.
The method takes into account the large data volume, the complexity of road conditions, and the differing driving habits of drivers. It acquires depth images for data from various scenes and, by combining the driver's judgment of the road conditions with the increases and decreases of the vehicle speed, trains an object recognition model that better matches human thinking, providing an auxiliary basis for the road condition recognition and the acceleration and deceleration of automatic driving.
Preferably, a Kinect depth camera is used to capture the image information and depth distance information of the road conditions.
Preferably, the instantaneous speed of the vehicle is collected by the vehicle's OBD module. The OBD interface of the vehicle is connected to a computer through a gateway device, so that vehicle data can be read in real time, vehicle faults can be detected, and so on.
Specifically, the method labels an image by comparing the change in the depth distance of the same image point over a certain time period with the vehicle speed at the start of that period, and then trains the convolutional neural network with the labeled image set to obtain a training model that can recognize road condition information in different scenes, which improves the speed control precision and safety of L3-level automatic driving to a certain extent.
Further, the specific implementation manner of the method is as follows:
1) Acquiring the instantaneous speed v1 of the vehicle, and acquiring a depth image a of the road condition in front of the vehicle and a depth distance da corresponding to a certain point x;
2) After a short time period t, acquiring a depth image b of the road condition in front of the vehicle and a depth distance db corresponding to a certain point x;
3) Using the formula vt = (da - db) / t, obtain the average speed vt of the vehicle over the time period t;
4) If vt is greater than or equal to v1, label the depth image a as obstacle-free and accelerate; if vt is less than v1, label the depth image a as containing an obstacle and decelerate (a minimal sketch of this labeling rule is given after this list);
5) Form a training data set from the information obtained in steps 1) and 4), perform convolutional neural network training, save the training result, and continuously optimize the neural network model;
6) Apply the continuously optimized neural network parameter model to the acceleration and deceleration program of automatic driving, so that the acceleration and deceleration of the vehicle are adaptively controlled within a certain range during real-time depth image recognition, achieving better safety.
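By way of illustration only, steps 3) and 4) could be sketched as the following labeling function; the variable names, the use of metres and seconds, and the km/h conversion are assumptions made here for readability and are not prescribed by the method.

```python
def label_depth_image(v1_kmh: float, da_m: float, db_m: float, t_s: float) -> str:
    """Label depth image a according to steps 3) and 4).

    v1_kmh : instantaneous vehicle speed v1 at the start of the interval (km/h)
    da_m   : depth distance to point x in image a (metres)
    db_m   : depth distance to point x in image b, taken t_s seconds later (metres)
    t_s    : length of the time period t (seconds), e.g. 0.5
    """
    vt_kmh = (da_m - db_m) / t_s * 3.6   # step 3): average speed vt over t, converted m/s -> km/h
    if vt_kmh >= v1_kmh:
        return "accelerate"              # step 4): depth image a labeled obstacle-free
    return "decelerate"                  # step 4): depth image a labeled as containing an obstacle

# Example: v1 = 36 km/h and point x came 4 m closer within 0.5 s, so vt = 28.8 km/h < v1 -> decelerate
print(label_depth_image(36.0, 20.0, 16.0, 0.5))
```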
Preferably, the time period t is 500 ms.
Preferably, the convolutional neural network training uses the convolutional neural network with convolution kernels sized to 1/3 and 1/4 of the total pixels of the acquired picture, respectively.
Preferably, the neural network parameter model is applied to an acceleration and deceleration program of automatic driving of the vehicle through a driving computer interface.
The invention also claims an intelligent road condition recognition device, comprising: at least one memory and at least one processor;
the at least one memory to store a machine readable program;
the at least one processor is used for calling the machine readable program and executing the method.
The invention also claims a computer readable medium having stored thereon computer instructions which, when executed by a processor, cause the processor to perform the above-described method.
Compared with the prior art, the intelligent road condition identification method has the following beneficial effects:
the method combines the characteristics of large data volume, complex road conditions and different driving habits of different drivers, acquires depth images aiming at data of various scenes, integrates an object recognition model which is more in line with the thinking of a human brain at a training position by combining the judgment of the driver on the road conditions and the increase and decrease conditions of the vehicle speed, and provides an optimized solution for L3 automatic driving from the aspects of road condition recognition and acceleration and deceleration based on the convolutional neural network technology of the depth images.
Drawings
Fig. 1 is a flowchart of an intelligent road condition identification method according to an embodiment of the present invention;
fig. 2 is a diagram of an application example provided by an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
In practical applications, some neural network architectures adopt improved data augmentation algorithms to address the insufficient scale of training data, thereby effectively enlarging the training set. However, as driving scenes multiply and road conditions become ever more complex, different training sets must still be continuously collected and integrated, and increasing the amount of training data purely from the algorithmic side often yields larger errors in practice. At the same time, different drivers react differently to different road conditions, so road condition recognition should be optimized by taking the drivers' driving habits and other factors into account.
To solve these problems, the invention applies depth-image-based convolutional neural network technology to intelligent road condition identification and provides an optimized solution for L3 automatic driving in terms of both road condition recognition and acceleration and deceleration.
An intelligent road condition identification method takes into account the large data volume, the complexity of road conditions, and the differing driving habits of drivers; it acquires depth images for data from various scenes and, by combining the driver's judgment of the road conditions with the increases and decreases of the vehicle speed, trains an object recognition model that better matches human thinking:
acquiring the instantaneous speed of the vehicle by using an OBD (on-board diagnostics) module of the vehicle, and shooting and acquiring image information and depth distance information of the front road condition by using a Kinect depth camera;
constructing a convolutional neural network training and verifying set by using the acquired information;
training the convolutional neural network model by using the training and verification set, continuously optimizing parameters, applying the continuously optimized neural network parameter model to acceleration and deceleration of automatic driving, and adaptively controlling the acceleration and deceleration of a vehicle in real-time depth image recognition.
Kinect for Xbox 360, abbreviated as Kinect, is a peripheral device developed by Microsoft for the Xbox 360 console. The Kinect has three lenses: the middle lens is an RGB color camera used to collect color images, while the left and right lenses form a 3D structured-light depth sensor, consisting of an infrared emitter and an infrared CMOS camera, used to collect depth data (in this embodiment, the distance from an object in the scene to the camera). The color camera supports imaging at up to 1280 x 960 resolution, and the infrared camera at up to 640 x 480. The Kinect also supports focus tracking: a motor in the base rotates the sensor to follow the object being tracked.
The OBD interface belongs to the on-board diagnostics system, which can monitor the working condition of the engine electronic control system and other functional modules while the vehicle is running. By connecting the vehicle's OBD interface to a computer, vehicle data can be read in real time and vehicle faults can be detected, which is convenient for vehicle repair, development of vehicle-related functions, and modification of vehicle equipment. In this embodiment, the OBD interface is connected to a computer through a gateway device, so that vehicle data are read in real time, vehicle faults are detected, and so on.
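As a purely illustrative acquisition sketch, the two data sources could be read as follows, assuming the open-source python-OBD package for the OBD gateway connection and the libfreenect Python bindings (freenect) for the Xbox 360 Kinect; neither library is mandated by this embodiment.

```python
import obd          # python-OBD: queries the vehicle's OBD-II interface (assumed library)
import freenect     # libfreenect bindings: read the Xbox 360 Kinect depth stream (assumed library)
import numpy as np

def read_instant_speed_kmh(connection: obd.OBD) -> float:
    """Query the instantaneous vehicle speed v1 (km/h) from the OBD module."""
    response = connection.query(obd.commands.SPEED)
    return float(response.value.magnitude)            # python-OBD returns a Pint quantity

def grab_depth_frame(x_row: int = 240, x_col: int = 320):
    """Grab one 480 x 640 depth frame and the depth value at a chosen point x."""
    depth, _timestamp = freenect.sync_get_depth()      # raw Kinect depth map
    depth = np.asarray(depth)
    return depth, float(depth[x_row, x_col])

if __name__ == "__main__":
    conn = obd.OBD()                    # auto-connects through the configured gateway/serial port
    v1 = read_instant_speed_kmh(conn)
    image_a, da = grab_depth_frame()
    print(f"v1 = {v1:.1f} km/h, depth at point x = {da:.0f} (raw Kinect units)")
```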
Specifically, the method labels an image by comparing the change in the depth distance of the same image point over a certain time period with the vehicle speed at the start of that period, and then trains the convolutional neural network with the labeled image set to obtain a training model that can recognize road condition information in different scenes, which improves the speed control precision and safety of L3-level automatic driving to a certain extent.
Convolutional Neural Networks (CNNs) are an efficient recognition method developed in recent years that has attracted considerable attention. CNNs have become a research hotspot in many scientific fields, especially pattern classification; because the network avoids complex image preprocessing and can take the original image directly as input, it has found increasingly wide application.
In general, the basic structure of a CNN includes two kinds of layers. One is the feature extraction layer: the input of each neuron is connected to a local receptive field of the previous layer, and the feature of that local receptive field is extracted; once a local feature has been extracted, its positional relation to the other features is also determined. The other is the feature mapping layer: each computation layer of the network consists of multiple feature maps, each feature map is a plane, and all neurons on the plane share equal weights. The feature mapping structure uses a sigmoid function with a small influence-function kernel as the activation function of the convolutional network, so that the feature maps are shift-invariant. In addition, because the neurons on one mapping plane share weights, the number of free parameters of the network is reduced. Each convolutional layer in a convolutional neural network is followed by a computation layer for local averaging and secondary extraction, which reduces the feature resolution.
CNNs are mainly used to recognize two-dimensional patterns that are invariant to displacement, scaling, and other forms of distortion; this part of the functionality is mainly realized by the pooling layers. Since the feature detection layers of a CNN learn from the training data, explicit feature extraction is avoided when using a CNN, and learning from the training data is implicit. Moreover, because the neurons on the same feature mapping plane share the same weights, the network can learn in parallel, which is a further advantage of convolutional networks over networks in which the neurons are fully connected to one another. With its special structure of locally shared weights, the convolutional neural network has unique advantages in speech recognition and image processing; its layout is closer to that of an actual biological neural network, weight sharing reduces the complexity of the network, and in particular the ability to feed images of multi-dimensional input vectors directly into the network avoids the complexity of data reconstruction during feature extraction and classification.
If a classical neural network model were used, the whole image would have to be read in as the input of the model (i.e., in a fully connected manner); as the image grows larger, the number of connection parameters increases, making the amount of computation very large.
Human cognition of the outside world generally proceeds from the local to the global: the local part is perceived first, and the whole is then perceived gradually. The spatial relationships in an image are similar: pixels within a local range are closely related, while the correlation between distant pixels is relatively weak. Therefore, each neuron does not need to perceive the global image; it only needs to perceive a local region, and the local information is then combined at a higher layer to obtain the global information. This is an important mechanism by which convolutional neural networks reduce the number of parameters: the local receptive field.
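A rough back-of-the-envelope comparison, using assumed layer sizes for a single-channel 480 x 640 image, shows why local receptive fields with shared weights matter:

```python
# Illustrative parameter counts only; the sizes are assumptions, not values from the patent.
pixels = 480 * 640                       # 307,200 input values per depth image
fully_connected = pixels * 1_000         # one dense layer of 1,000 neurons: ~307 million weights
convolutional = 16 * (5 * 5 * 1)         # sixteen 5 x 5 kernels shared across the image: 400 weights
print(fully_connected, convolutional)    # 307200000 400
```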
Then, after the images undergo feature extraction, convolution, pooling, and full connection, a convolutional neural network model with accurate parameters can be obtained through training.
In summary, the convolutional neural network is mainly composed of two parts, one part is feature extraction (convolution, activation function, pooling), and the other part is classification identification (full-link layer). In essence, it is an input-to-output mapping that is able to learn a large number of input-to-output mappings without any precise mathematical expression between the inputs and outputs, and the network has the ability to map between inputs and outputs as long as the convolutional network is trained with known patterns.
One feature of a CNN is its light weighting (the weights are smaller near the input and larger near the output), giving the network the form of a sideways inverted triangle, which helps avoid the overly fast gradient decay that occurs during back-propagation in BP neural networks.
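To make the structure described above concrete, the following is a minimal PyTorch sketch of such a network for the two-class decision used in this embodiment (accelerate versus decelerate). The 480 x 640 single-channel input, the layer widths, and the ReLU activations are illustrative assumptions rather than the patent's prescribed configuration.

```python
import torch
import torch.nn as nn

class RoadConditionCNN(nn.Module):
    """Feature extraction (convolution + pooling) followed by classification (fully connected)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),   # depth images have a single channel
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling reduces the feature resolution
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 120 * 160, 64),                # assumes a 480 x 640 input pooled twice
            nn.ReLU(),
            nn.Linear(64, num_classes),                   # accelerate vs. decelerate scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One 480 x 640 single-channel depth frame in, two class scores out
scores = RoadConditionCNN()(torch.randn(1, 1, 480, 640))
print(scores.shape)  # torch.Size([1, 2])
```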
The embodiment of the invention also provides an intelligent road condition identification method, which has the following specific implementation mode:
1) Acquiring the instantaneous speed v1 of the vehicle by using an OBD module of the vehicle; acquiring a depth image a of a road condition in front of a vehicle and a depth distance da corresponding to a certain point x by using a Kinect camera;
2) Acquiring a depth image b of the road condition in front of the vehicle and a depth distance db corresponding to a certain point x after a short time period t (t is 500 ms);
3) Using the formula vt = (da - db) / t, obtain the average speed vt of the vehicle over the time period t;
4) If vt is greater than or equal to v1, label the depth image a as free and accelerate (marked "U" for UP); if vt is less than v1, label the depth image a as obstructed and decelerate (marked "D" for DOWN);
5) Form a training data set from the information obtained in steps 1) and 4), use the convolutional neural network with convolution kernels sized to 1/3 and 1/4 of the total pixels of the collected pictures respectively, perform convolutional neural network training, save the training result, and continuously optimize the neural network model (a rough training sketch follows this list);
6) Apply the continuously optimized neural network parameter model to the automatic driving acceleration and deceleration program through the driving computer interface, so that the acceleration and deceleration of the vehicle are adaptively controlled within a certain range during real-time depth image recognition, achieving better safety.
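Step 5) could then be sketched roughly as the loop below, feeding the labeled depth images into a model such as the RoadConditionCNN from the earlier sketch; the optimizer, learning rate, batch size, and file name are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_road_condition_model(model: nn.Module,
                               depth_images: torch.Tensor,   # shape (N, 1, 480, 640), float
                               labels: torch.Tensor,         # shape (N,), long: 0 = "U", 1 = "D"
                               epochs: int = 10) -> nn.Module:
    """Train the convolutional neural network on the labeled depth images (step 5)."""
    loader = DataLoader(TensorDataset(depth_images, labels), batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for batch_images, batch_labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch_images), batch_labels)  # predicted U/D scores vs. labels
            loss.backward()                                    # back-propagate the error
            optimizer.step()                                   # keep optimizing the parameters

    torch.save(model.state_dict(), "road_condition_cnn.pt")    # save the training result
    return model
```

For step 6), the saved parameter model would be loaded on the driving computer and queried on each live depth frame, with the resulting accelerate/decelerate decision passed to the acceleration and deceleration program.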
The method realizes intelligent road condition identification based on a convolutional neural network of a depth image, and the architecture diagram of the method is shown in figure 2.
The embodiment of the present invention further provides an intelligent road condition identification device, including: at least one memory and at least one processor;
the at least one memory to store a machine readable program;
the at least one processor is configured to invoke the machine-readable program to execute the intelligent road condition identification method according to any of the embodiments of the present invention.
An embodiment of the present invention further provides a computer readable medium, where a computer instruction is stored, and when the computer instruction is executed by a processor, the processor is enabled to execute the method for identifying an intelligent road condition in any of the embodiments of the present invention. Specifically, a system or an apparatus equipped with a storage medium on which software program codes that realize the functions of any of the above-described embodiments are stored may be provided, and a computer (or a CPU or MPU) of the system or the apparatus is caused to read out and execute the program codes stored in the storage medium.
In this case, the program code itself read from the storage medium can realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code constitute a part of the present invention.
Examples of the storage medium for supplying the program code include a flexible disk, hard disk, magneto-optical disk, optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD + RW), magnetic tape, nonvolatile memory card, and ROM. Alternatively, the program code may be downloaded from a server computer via a communications network.
Further, it should be clear that the functions of any one of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform a part or all of the actual operations based on instructions of the program code.
Further, it is to be understood that the program code read out from the storage medium is written to a memory provided in an expansion board inserted into the computer or to a memory provided in an expansion unit connected to the computer, and then a CPU or the like mounted on the expansion board or the expansion unit is caused to perform part or all of the actual operations based on instructions of the program code, thereby realizing the functions of any of the embodiments described above.
While the invention has been shown and described in detail in the drawings and in the preferred embodiments, the invention is not limited to the embodiments disclosed, and it will be apparent to those skilled in the art that further embodiments of the invention, which also fall within the scope of the invention, can be obtained by combining the technical features of the various embodiments described above.

Claims (9)

1. An intelligent road condition identification method is characterized in that the instantaneous speed of a vehicle is collected, image information and depth distance information of the road condition are collected, a convolutional neural network training and verification set is constructed by utilizing the collected information, a convolutional neural network model is trained by using the training and verification set, parameters are continuously optimized, the continuously optimized neural network parameter model is applied to acceleration and deceleration of automatic driving, and the acceleration and the deceleration of the vehicle are adaptively controlled in real-time depth image identification;
the method is realized in the following specific way:
1) Acquiring the instantaneous speed v1 of the vehicle, and acquiring a depth image a of the road condition in front of the vehicle and a depth distance da corresponding to a certain point x;
2) Acquiring a depth image b of the road condition in front of the vehicle and a depth distance db corresponding to a certain point x after a time period t;
3) Using the formula vt = (da - db) / t, obtaining the average speed vt of the vehicle in the time period t;
4) If vt is larger than or equal to v1, marking the depth image a as barrier-free and accelerating; if vt is less than v1, marking the depth image a as an obstacle, and decelerating;
5) Forming a training data set by using the information obtained in the step 1) and the step 4), performing convolutional neural network training, storing a training result and continuously optimizing a neural network model;
6) And applying the continuously optimized neural network parameter model to an acceleration and deceleration program of automatic driving, and adaptively controlling the acceleration and the deceleration of the vehicle in real-time depth image recognition.
2. The intelligent road condition identification method as claimed in claim 1, wherein a Kinect depth camera is used to capture the image information and depth distance information of the road conditions.
3. An intelligent road condition identification method as claimed in claim 1, wherein the vehicle instantaneous speed is collected by an OBD module of the vehicle.
4. The intelligent road condition identification method according to claim 1, 2 or 3, characterized in that the images are identified by comparing the change of the same image depth distance in a certain time period with the starting point vehicle speed, and then the convolutional neural network is trained by the identification image set to obtain a training model capable of identifying road condition information of different scenes.
5. The method as claimed in claim 1, wherein the time period t is 500ms.
6. The intelligent road condition identification method according to claim 1, wherein the convolutional neural network training is performed by taking 1/3 and 1/4 of the total pixels of the acquired images as convolutional kernels respectively by using the convolutional neural network.
7. The intelligent road condition identification method as claimed in claim 1, wherein the neural network parameter model is applied to an acceleration and deceleration program of vehicle automatic driving through a driving computer interface.
8. An intelligent road condition recognition device, characterized by comprising: at least one memory and at least one processor;
the at least one memory to store a machine readable program;
the at least one processor configured to invoke the machine readable program to perform the method of any of claims 1 to 7.
9. A computer readable medium having stored thereon computer instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1 to 7.
CN202010649661.2A 2020-07-08 2020-07-08 Intelligent road condition identification method Active CN111814667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010649661.2A CN111814667B (en) 2020-07-08 2020-07-08 Intelligent road condition identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010649661.2A CN111814667B (en) 2020-07-08 2020-07-08 Intelligent road condition identification method

Publications (2)

Publication Number Publication Date
CN111814667A CN111814667A (en) 2020-10-23
CN111814667B true CN111814667B (en) 2022-10-14

Family

ID=72842316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010649661.2A Active CN111814667B (en) 2020-07-08 2020-07-08 Intelligent road condition identification method

Country Status (1)

Country Link
CN (1) CN111814667B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926274A (en) * 2021-04-15 2021-06-08 成都四方伟业软件股份有限公司 Method and device for simulating urban traffic system by using convolutional neural network
CN113276863B (en) * 2021-07-01 2022-09-13 浙江吉利控股集团有限公司 Vehicle control method, apparatus, device, medium, and program product
CN113610970A (en) * 2021-08-30 2021-11-05 上海智能网联汽车技术中心有限公司 Automatic driving system, device and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108944944A (en) * 2018-07-09 2018-12-07 深圳市易成自动驾驶技术有限公司 Automatic Pilot model training method, terminal and readable storage medium storing program for executing
CN110458214A (en) * 2019-07-31 2019-11-15 上海远眸软件有限公司 Driver replaces recognition methods and device
CN110610153A (en) * 2019-09-10 2019-12-24 重庆工程职业技术学院 Lane recognition method and system for automatic driving

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108198425A (en) * 2018-02-10 2018-06-22 长安大学 A kind of construction method of Electric Vehicles Driving Cycle
CN108803604A (en) * 2018-06-06 2018-11-13 深圳市易成自动驾驶技术有限公司 Vehicular automatic driving method, apparatus and computer readable storage medium
CN109747659B (en) * 2018-11-26 2021-07-02 北京汽车集团有限公司 Vehicle driving control method and device
CN110745136B (en) * 2019-09-20 2021-05-07 中国科学技术大学 Driving self-adaptive control method
CN111016901A (en) * 2019-12-30 2020-04-17 苏州安智汽车零部件有限公司 Intelligent driving decision method and system based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108944944A (en) * 2018-07-09 2018-12-07 深圳市易成自动驾驶技术有限公司 Automatic Pilot model training method, terminal and readable storage medium storing program for executing
CN110458214A (en) * 2019-07-31 2019-11-15 上海远眸软件有限公司 Driver replaces recognition methods and device
CN110610153A (en) * 2019-09-10 2019-12-24 重庆工程职业技术学院 Lane recognition method and system for automatic driving

Also Published As

Publication number Publication date
CN111814667A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
CN111814667B (en) Intelligent road condition identification method
CN112912920B (en) Point cloud data conversion method and system for 2D convolutional neural network
CN110163187B (en) F-RCNN-based remote traffic sign detection and identification method
US10964033B2 (en) Decoupled motion models for object tracking
CN112889071B (en) System and method for determining depth information in a two-dimensional image
KR20170140214A (en) Filter specificity as training criterion for neural networks
CN112654998B (en) Lane line detection method and device
CN112883991A (en) Object classification method, object classification circuit and motor vehicle
CN115280373A (en) Managing occlusions in twin network tracking using structured dropping
CN111553188B (en) End-to-end automatic driving vehicle steering control system based on deep learning
US11308324B2 (en) Object detecting system for detecting object by using hierarchical pyramid and object detecting method thereof
Zhang et al. Road marking segmentation based on siamese attention module and maximum stable external region
Ostankovich et al. Application of cyclegan-based augmentation for autonomous driving at night
CN113793371B (en) Target segmentation tracking method, device, electronic equipment and storage medium
Shen et al. Lane line detection and recognition based on dynamic ROI and modified firefly algorithm
WO2022125236A1 (en) Systems and methods for object detection using stereovision information
CN111210411B (en) Method for detecting vanishing points in image, method for training detection model and electronic equipment
CN117011819A (en) Lane line detection method, device and equipment based on feature guidance attention
CN117113647A (en) Simulation method, device and equipment for verifying lane line keeping auxiliary system
CN110991337B (en) Vehicle detection method based on self-adaptive two-way detection network
JP7309817B2 (en) Method, system and computer program for detecting motion of vehicle body
CN117250947A (en) Automatic driving method based on condition imitation learning
CN118674939A (en) Element identification system, element identification method, electronic device, and storage medium
CN117636122A (en) Training method and detection method for vehicle type axle recognition
CN116872974A (en) BEV visual angle probability motion prediction system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant