CN109858372A - Lane-level precision automatic driving structured data analysis method - Google Patents

Lane-level precision automatic driving structured data analysis method

Info

Publication number
CN109858372A
CN109858372A
Authority
CN
China
Prior art keywords
lane
road
model
branch
road surface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811641455.6A
Other languages
Chinese (zh)
Other versions
CN109858372B (en)
Inventor
缪其恒
吴长伟
苏志杰
孙焱标
王江明
许炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Zero Run Technology Co Ltd
Original Assignee
Zhejiang Zero Run Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Zero Run Technology Co Ltd
Priority to CN201811641455.6A
Publication of CN109858372A
Application granted
Publication of CN109858372B
Active
Anticipated expiration


Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention is a lane-level precision automatic driving structured data analysis method, comprising establishing a multi-task deep convolutional neural network model, offline training, front-end platform porting, and driving path scene analysis with output post-processing. The invention has the advantages that: the feature extraction capability for road markings is enhanced by using a deep convolutional neural network, giving wider application scenarios and stronger robustness; a drivable region branch is added, extending the related applications to unstructured roads; a lane boundary branch is added, improving accuracy in congested scenes or when lane markings are occluded; lane structured data, including attributes such as lane boundary type and lane direction, are added, enabling high-level automatic driving perception applications; road surface marking information, including road surface marking type and position, is added, enabling high-level automatic driving localization applications; by using shared convolutional features, computing resources can be shared with other visual perception modules, giving high integration efficiency.

Description

Lane-level precision automatic driving structured data analysis method
Technical field
The present invention relates to the field of automatic driving, and in particular to a lane-level precision automatic driving structured data analysis method.
Background art
Intelligence is one of the important trends in today's Chinese automotive industry, and vision systems are applied ever more widely in the field of vehicle active safety. Monocular and binocular forward-view, rear-view and 360-degree surround-view systems have become the mainstream perception devices of existing advanced driver assistance systems. Existing commercial lane assistance systems are mostly based on visual lane marking perception. The application scenarios of such systems are limited: they are only applicable to structured roads with clear lane markings. Their method can be summarized as lane marking feature extraction, lane marking feature clustering and lane marking fitting. In application, such systems have the following drawbacks: (i) lane marking extraction accuracy is not high and is easily affected by factors such as illumination and road wear; (ii) for congested road conditions (occluded lane markings), the lane recall rate is low; (iii) lane fitting precision is poor, usable only for short-range departure assistance and not suitable for automatic driving path planning; (iv) the output data structure is too simple, and lane and boundary attributes are not explicitly defined; (v) correlation with other vision algorithms (such as object detection modules) is low, and the overall algorithm after fusion is inefficient.
Summary of the invention
The present invention mainly solves the above problems by providing a lane-level precision automatic driving structured data analysis method with high accuracy, a wide range of applications, good robustness, and lane attribute output.
The technical solution adopted by the present invention to solve the technical problems is a lane-level precision automatic driving structured data analysis method, comprising the following steps:
S1: establish a multi-task deep convolutional neural network model based on shared shallow convolutional features;
S2: perform offline training on the multi-task deep convolutional neural network model;
S3: port the trained multi-task deep convolutional neural network model to a front-end platform;
S4: perform driving path scene analysis using the ported model and post-process the output.
By using shared shallow convolutional features, a module using the method of the present invention can share computing resources with other visual perception modules, so that the vehicle vision perception system has high integration efficiency. Through extensive offline training of the established multi-task deep convolutional neural network model, the shared convolutional layer feature weight coefficients and the weight coefficients of each branch network are determined, improving the accuracy of the model computation. The model output is post-processed and converted into distinctive lane structured data, including road information, lane information and road surface marking information, which is convenient for subsequent automatic driving programs.
In a preferred scheme of the above solution, the multi-task deep convolutional neural network model includes an input layer, a shared feature encoding layer and a lane structured data output decoding layer. The shared feature encoding layer is composed of cascaded conv+relu+BN blocks, and the lane structured data output decoding layer includes a drivable region branch, a lane boundary branch and a road surface marking branch. The input of the shared feature encoding layer is an image of arbitrary resolution, which is downsampled by the convolution strides; it outputs feature layers at three different scales, corresponding respectively to large-, medium- and small-scale scene feature parsing. The network structure of the lane structured data output decoding layer is composed of deconvolution and softmax layers; the input of the decoding layer is the concatenated shared convolutional layer features, and the output is structured data binary masks of the same size as the network input.
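For illustration only, the following PyTorch sketch shows one possible layout of such a multi-task network. It assumes (hypothetically) three stride-2 conv+ReLU+BN stages as the shared encoder and one deconvolution+softmax decoder per branch; layer counts, channel widths and the exact fusion of the three feature scales are not specified by the patent, and the decoder here simply uses the final encoder stage.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, stride=2):
    # One cascaded conv + relu + BN stage of the shared feature encoding layer.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1),
        nn.ReLU(inplace=True),
        nn.BatchNorm2d(c_out),
    )

class LaneStructureNet(nn.Module):
    """Hypothetical multi-task model: shared encoder + three mask branches."""
    def __init__(self, num_classes=2):
        super().__init__()
        # Shared shallow convolutional features, downsampled by conv strides;
        # the three stages stand in for the large / medium / small scale feature layers.
        self.enc1 = conv_block(3, 32)    # 1/2 resolution
        self.enc2 = conv_block(32, 64)   # 1/4 resolution
        self.enc3 = conv_block(64, 128)  # 1/8 resolution

        def decoder():
            # Deconvolution back to input size, then per-pixel softmax mask.
            return nn.Sequential(
                nn.ConvTranspose2d(128, 32, kernel_size=8, stride=8),
                nn.Conv2d(32, num_classes, kernel_size=1),
                nn.Softmax(dim=1),
            )

        self.drivable_branch = decoder()   # drivable region mask
        self.lane_branch = decoder()       # lane boundary mask
        self.marking_branch = decoder()    # road surface marking mask

    def forward(self, x):
        f = self.enc3(self.enc2(self.enc1(x)))
        return self.drivable_branch(f), self.lane_branch(f), self.marking_branch(f)

# Example: a 640x320 input yields three mask probability maps of the same size.
model = LaneStructureNet()
masks = model(torch.randn(1, 3, 320, 640))
```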
In a preferred scheme of the above solution, the offline training in step S2 comprises the following steps:
S11: collect all kinds of driving path scene data offline and extract multiple discrete time-series training samples;
S12: manually label the samples to generate sample labels;
S13: train the multi-task deep convolutional neural network model using the labeled samples and establish the loss function
L = α*L_bond + β*L_lane + γ*L_mark
according to the training results; train the shared convolutional feature layer weight coefficients using stochastic gradient descent and freeze them; α, β, γ are configurable parameters with a default value of 0.33, and L_bond, L_lane and L_mark are respectively the drivable region, lane boundary and road surface marking segmentation loss functions, each being a softmax loss;
S14: train the branch network weight coefficients by stochastic gradient descent using the drivable region loss function L1 = L_bond, the lane boundary loss function L2 = L_lane and the road surface marking loss function L3 = L_mark, and freeze them. The sample labels include three parts: drivable region mask, lane boundary mask and road surface marking mask. When sample data are scarce, geometric spatial transformations of the image can be used for sample augmentation.
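A minimal sketch of the combined loss L = α*L_bond + β*L_lane + γ*L_mark described above, assuming each term is a per-pixel softmax loss (negative log-likelihood on the softmax output); tensor shapes and the dummy data are illustrative only.

```python
import torch
import torch.nn as nn

alpha, beta, gamma = 0.33, 0.33, 0.33        # configurable branch weights, default 0.33
nll = nn.NLLLoss()                           # softmax loss == NLL on log-probabilities

def seg_loss(prob, target):
    # prob: (N, C, H, W) softmax output of one branch; target: (N, H, W) label mask.
    return nll(torch.log(prob.clamp_min(1e-8)), target)

def combined_loss(drivable, lane, marking, labels):
    # L = alpha*L_bond + beta*L_lane + gamma*L_mark (step S13)
    return (alpha * seg_loss(drivable, labels["drivable"])
            + beta * seg_loss(lane, labels["lane"])
            + gamma * seg_loss(marking, labels["marking"]))

# Dummy check with 2-class probability maps on a 320x640 image.
p = torch.softmax(torch.randn(1, 2, 320, 640), dim=1)
labels = {k: torch.randint(0, 2, (1, 320, 640)) for k in ("drivable", "lane", "marking")}
loss = combined_loss(p, p, p, labels)

# Step S14 then freezes the shared encoder (requires_grad=False on its parameters)
# and trains each branch separately on L1=L_bond, L2=L_lane, L3=L_mark with SGD.
```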
In a preferred scheme of the above solution, the front-end platform porting of the multi-task deep convolutional neural network model in step S3 comprises the following steps:
S21: judge whether the data type of the multi-task deep convolutional neural network model after offline training is the optimal data type supported by the front-end platform; if so, proceed to the next step; if not, quantize the model, evaluate the relative accuracy loss of the converted model on a preset test set, and proceed to the next step if the relative accuracy loss is less than 1%; otherwise, retrain the model before proceeding;
S22: judge whether the front-end platform hardware supports sparse operations; if not, proceed to the next step; if so, sparsify the model, evaluate the accuracy loss of the converted model on the preset test set, and proceed to the next step if the relative accuracy loss is less than 1%; otherwise, retrain the model before proceeding;
S23: deploy the model on the front-end platform.
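The quantize / sparsify / verify loop in steps S21-S23 can be written roughly as the following control flow. All of `quantize`, `sparsify`, `evaluate_accuracy`, `retrain` and `deploy` are placeholder callables standing in for whatever toolchain the front-end platform provides; only the 1% relative-accuracy threshold comes from the text above.

```python
MAX_RELATIVE_ACC_LOSS = 0.01   # proceed only if relative accuracy loss < 1%

def port_model(model, platform, test_set, baseline_acc,
               quantize, sparsify, evaluate_accuracy, retrain, deploy):
    """Hypothetical S21-S23 porting flow; all callables are platform-specific stubs."""
    # S21: convert the data type if it is not the platform's optimal type.
    if model.dtype != platform.optimal_dtype:
        model = quantize(model, platform.optimal_dtype)
        loss = 1.0 - evaluate_accuracy(model, test_set) / baseline_acc
        if loss >= MAX_RELATIVE_ACC_LOSS:
            model = retrain(model)             # retrain, then continue

    # S22: sparsify only if the front-end hardware supports sparse operations.
    if platform.supports_sparse:
        model = sparsify(model)
        loss = 1.0 - evaluate_accuracy(model, test_set) / baseline_acc
        if loss >= MAX_RELATIVE_ACC_LOSS:
            model = retrain(model)

    # S23: deploy on the front-end platform.
    return deploy(model, platform)
```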
In a preferred scheme of the above solution, the driving path scene analysis in step S4 comprises the following steps:
S31: image acquisition and preprocessing;
S32: input the preprocessed image into the ported model;
S33: post-process the output of the ported model and convert it into lane structured data. Image preprocessing includes automatic exposure, automatic white balance, distortion correction, de-jittering, smoothing filtering, and ROI scaling and cropping.
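A rough OpenCV sketch of the image-side part of the preprocessing chain named above (distortion correction, smoothing, ROI scaling and cropping). The camera intrinsics, distortion coefficients and ROI below are hypothetical values, and the ISP-side steps (automatic exposure, automatic white balance, de-jittering) are assumed to happen before this point.

```python
import cv2
import numpy as np

def preprocess(frame, camera_matrix, dist_coeffs, roi=(0, 160, 1280, 480),
               net_size=(640, 320)):
    """Distortion correction, smoothing, ROI crop and scaling (S31); sketch only."""
    undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)   # distortion correction
    smoothed = cv2.GaussianBlur(undistorted, (3, 3), 0)              # smoothing filter
    x, y, w, h = roi
    cropped = smoothed[y:y + h, x:x + w]                             # ROI cropping
    return cv2.resize(cropped, net_size)                             # scale to network input

# Hypothetical intrinsics and a dummy 720p frame, for illustration only.
K = np.array([[1000.0, 0, 640.0], [0, 1000.0, 360.0], [0, 0, 1]])
dist = np.zeros(5)
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
net_input = preprocess(frame, K, dist)
```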
In a preferred scheme of the above solution, the lane structured data include top-level road information, second-level lane information and lowest-level road surface marking information. The road information includes road id, road boundary, road heading, road width, road-related attributes and the number of attached lanes; the lane information includes lane id, lane boundary and lane-related attributes; the road surface marking information includes road surface marking id, road surface marking class and road surface marking position.
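The three-level structured output described above can be illustrated with the following hypothetical dataclasses; the field names are inferred from the listed attributes and are not taken from the patent itself.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RoadMarking:                 # lowest level: road surface marking information
    marking_id: int
    marking_class: str             # e.g. arrow, crosswalk, stop line
    position: Tuple[float, float]  # center position in the image coordinate system

@dataclass
class Lane:                        # second level: lane information
    lane_id: int
    boundary: List[Tuple[float, float, float]]   # quadratic coefficients per boundary
    attributes: dict                              # lane-related attributes (direction, type, ...)

@dataclass
class Road:                        # top level: road information
    road_id: int
    boundary: List[Tuple[float, float, float]]
    heading: float
    width: float
    attributes: dict
    lanes: List[Lane] = field(default_factory=list)      # attached lane quantity = len(lanes)
    markings: List[RoadMarking] = field(default_factory=list)
```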
In a preferred scheme of the above solution, the output post-processing includes drivable region branch output post-processing, lane branch output post-processing and road surface marking branch output post-processing. The drivable region branch output post-processing comprises the following steps:
S41: apply image processing to the drivable region branch output;
S42: fit curves to the left and right boundaries of the processed drivable region;
S43: output the road boundary information. The image processing includes dilation and erosion and mask filling based on road obstacle detection results; the curve fitting defaults to a quadratic curve.
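A sketch of the drivable-region post-processing (S41-S43), assuming the branch output is a binary mask: dilation and erosion clean the mask, obstacle boxes are filled in, and the leftmost/rightmost drivable column per row is fitted with the default quadratic curve via `np.polyfit`. Kernel size and the obstacle-box format are assumptions.

```python
import cv2
import numpy as np

def drivable_region_postprocess(mask, obstacle_boxes=()):
    """S41-S43 sketch: morphology + obstacle mask filling + quadratic boundary fit."""
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(mask, kernel)                 # dilation
    mask = cv2.erode(mask, kernel)                  # erosion
    for x, y, w, h in obstacle_boxes:               # mask filling from obstacle detection
        mask[y:y + h, x:x + w] = 0

    left, right = [], []
    for row in range(mask.shape[0]):
        cols = np.flatnonzero(mask[row])
        if cols.size:                               # leftmost / rightmost drivable pixel per row
            left.append((row, cols[0]))
            right.append((row, cols[-1]))

    def fit(points):                                # default: quadratic curve x = f(y)
        ys, xs = zip(*points)
        return np.polyfit(ys, xs, 2)

    return fit(left), fit(right)                    # road boundary information (S43)

# Example on a dummy 320x640 mask.
m = np.zeros((320, 640), np.uint8)
m[100:300, 200:440] = 1
left_coeffs, right_coeffs = drivable_region_postprocess(m)
```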
In a preferred scheme of the above solution, the lane branch output post-processing comprises the following steps:
S51: cluster the boundary mask output by the lane boundary branch to form several lane line candidate regions;
S52: classify and fit each lane line candidate region using a convolutional neural network;
S53: output the lane boundary class and a second-order analytic expression of the lane in the image coordinate system.
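A minimal sketch of steps S51-S53: candidate regions are formed by connected-component clustering of the lane-boundary mask, each candidate is then classified (here left as a stub standing in for the small CNN described later), and a second-order (quadratic) expression is fitted in image coordinates. The minimum-size filter is an assumed heuristic.

```python
import cv2
import numpy as np

def lane_branch_postprocess(lane_mask, classify_candidate):
    """S51-S53 sketch: cluster -> classify (stub) -> quadratic fit per lane line."""
    # S51: connected-component clustering into lane line candidate regions.
    num_labels, labels = cv2.connectedComponents(lane_mask.astype(np.uint8))
    results = []
    for lbl in range(1, num_labels):                # label 0 is background
        ys, xs = np.nonzero(labels == lbl)
        if ys.size < 30:                            # drop tiny clusters (heuristic)
            continue
        # S52: lane boundary class from a CNN classifier (hypothetical callback).
        boundary_class = classify_candidate(lane_mask, ys, xs)
        # S53: second-order analytic expression x = a*y^2 + b*y + c in image coordinates.
        coeffs = np.polyfit(ys, xs, 2)
        results.append((boundary_class, coeffs))
    return results

# Example with a dummy mask and a stub classifier.
mask = np.zeros((320, 640), np.uint8)
mask[50:300, 320:324] = 1
lanes = lane_branch_postprocess(mask, lambda m, ys, xs: "solid_white")
```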
In a preferred scheme of the above solution, the road surface marking branch output post-processing comprises the following steps:
S61: cluster the boundary mask output by the road surface marking branch to form several road surface marking candidate regions;
S62: classify each road surface marking candidate region using a convolutional neural network;
S63: output the road surface marking class and the center position in the image coordinate system.
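The road-surface-marking post-processing (S61-S63) mirrors the lane branch sketch above: cluster the marking mask, classify each candidate with the CNN (stubbed here), and report the centroid as the center position in image coordinates.

```python
import cv2
import numpy as np

def road_marking_postprocess(marking_mask, classify_candidate):
    """S61-S63 sketch: cluster -> classify (stub) -> centroid in image coordinates."""
    num_labels, labels = cv2.connectedComponents(marking_mask.astype(np.uint8))
    detections = []
    for lbl in range(1, num_labels):                # label 0 is background
        ys, xs = np.nonzero(labels == lbl)
        if ys.size < 30:                            # drop tiny clusters (heuristic)
            continue
        marking_class = classify_candidate(marking_mask, ys, xs)   # S62 (hypothetical CNN)
        center = (float(xs.mean()), float(ys.mean()))              # S63: center position
        detections.append((marking_class, center))
    return detections
```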
In a preferred scheme of the above solution, the convolutional neural network includes a neural network input layer, a neural network shared feature encoding layer and a neural network output decoding layer. The neural network shared feature encoding layer is composed of cascaded conv+relu+BN blocks, and the neural network output decoding layer outputs the lane boundary class and the road surface marking class.
The invention has the advantages that: the feature extraction capability for road markings is enhanced by using a deep convolutional neural network, giving wider application scenarios and stronger robustness; a drivable region branch is added, extending the related applications to unstructured roads; a lane boundary branch is added, improving accuracy in congested scenes or when lane markings are occluded; lane structured data, including attributes such as lane boundary type and lane direction, are added, enabling high-level automatic driving perception applications; road surface marking information, including road surface marking type and position, is added, enabling high-level automatic driving localization applications; by using shared convolutional features, computing resources can be shared with other visual perception modules, giving high integration efficiency.
Description of the drawings
Fig. 1 is a flow diagram of the present invention.
Fig. 2 is a structural schematic diagram of the multi-task deep convolutional neural network model in the present invention.
Fig. 3 is a flow diagram of the offline training in the present invention.
Fig. 4 is a flow diagram of the front-end platform porting of the multi-task deep convolutional neural network model in the present invention.
Fig. 5 is a flow diagram of the driving path scene analysis in the present invention.
Fig. 6 is a flow diagram of the drivable region branch output post-processing in the present invention.
Fig. 7 is a flow diagram of the lane branch output post-processing in the present invention.
Fig. 8 is a flow diagram of the road surface marking branch output post-processing in the present invention.
Fig. 9 is a structural schematic diagram of the convolutional neural network in the present invention.
1 - input layer; 2 - shared feature encoding layer; 3 - drivable region branch; 4 - lane boundary branch; 5 - road surface marking branch; 6 - neural network input layer; 7 - neural network shared feature encoding layer; 8 - neural network output decoding layer.
Specific embodiment
The technical solution of the present invention is further described below with reference to the embodiments and the accompanying drawings.
Embodiment:
The lane-level precision automatic driving structured data analysis method of this embodiment, as shown in Fig. 1, comprises the following steps:
S1: establish a multi-task deep convolutional neural network model based on shared shallow convolutional features;
S2: perform offline training on the multi-task deep convolutional neural network model;
S3: port the trained multi-task deep convolutional neural network model to a front-end platform;
S4: perform driving path scene analysis using the ported model and post-process the output.
As shown in Fig. 2, the multi-task deep convolutional neural network model includes an input layer 1, a shared feature encoding layer 2 and a lane structured data output decoding layer. The shared feature encoding layer is composed of cascaded conv+relu+BN blocks, and the lane structured data output decoding layer includes a drivable region branch 3, a lane boundary branch 4 and a road surface marking branch 5. The input of the shared feature encoding layer is an image of arbitrary resolution, which is downsampled by the convolution strides; it outputs feature layers at three different scales, corresponding respectively to large-, medium- and small-scale scene feature parsing. The network structure of the lane structured data output decoding layer is composed of deconvolution and softmax layers; the input of the decoding layer is the concatenated shared convolutional layer features, and the output is structured data binary masks of the same size as the network input. In this embodiment, the input image of the shared feature encoding layer defaults to 640×320.
As shown in Fig. 3, the offline training in step S2 comprises the following steps:
S11: collect all kinds of driving path scene data offline and extract multiple discrete time-series training samples;
S12: manually label the samples to generate sample labels;
S13: train the multi-task deep convolutional neural network model using the labeled samples and establish the loss function
L = α*L_bond + β*L_lane + γ*L_mark
according to the training results; train the shared convolutional feature layer weight coefficients using stochastic gradient descent and freeze them; α, β, γ are configurable parameters with a default value of 0.33, and L_bond, L_lane and L_mark are respectively the drivable region, lane boundary and road surface marking segmentation loss functions, each being a softmax loss;
S14: train the branch network weight coefficients by stochastic gradient descent using the drivable region loss function L1 = L_bond, the lane boundary loss function L2 = L_lane and the road surface marking loss function L3 = L_mark, and freeze them. In this implementation, the number of extracted samples is 100,000, and sample augmentation is performed using geometric spatial transformations of the image. All samples are manually labeled; the label content includes the drivable region mask, lane boundary mask and road surface marking mask. The offline training parameters include the learning rate, mini-batch sample number, weight decay coefficient and momentum coefficient.
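For reference, the listed training hyperparameters (learning rate, mini-batch sample number, weight decay, momentum) map onto a standard PyTorch SGD setup; the concrete values and the stand-in model below are placeholders, not values given in the patent.

```python
import torch
import torch.nn as nn

# Hypothetical hyperparameter values; the patent only names the parameter types.
learning_rate = 1e-2
mini_batch_size = 16
weight_decay = 5e-4
momentum = 0.9

model = nn.Conv2d(3, 2, kernel_size=3, padding=1)   # stand-in for the multi-task network
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate,
                            momentum=momentum, weight_decay=weight_decay)
loader_kwargs = {"batch_size": mini_batch_size, "shuffle": True}  # mini-batch sampling
```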
As shown in Fig. 4, the front-end platform porting of the multi-task deep convolutional neural network model in step S3 comprises the following steps:
S21: judge whether the data type of the multi-task deep convolutional neural network model after offline training is the optimal data type supported by the front-end platform; if so, proceed to the next step; if not, quantize the model, evaluate the relative accuracy loss of the converted model on a preset test set, and proceed to the next step if the relative accuracy loss is less than 1%; otherwise, retrain the model before proceeding;
S22: judge whether the front-end platform hardware supports sparse operations; if not, proceed to the next step; if so, sparsify the model, evaluate the accuracy loss of the converted model on the preset test set, and proceed to the next step if the relative accuracy loss is less than 1%; otherwise, retrain the model before proceeding;
S23: deploy the model on the front-end platform. In this implementation, the data type of the multi-task deep convolutional neural network model is int8 and its hardware supports sparse operations, while the optimal data type supported by the front-end platform is fp32. The multi-task deep convolutional neural network model is therefore quantized to convert its data type to fp32, after which model accuracy verification is performed; after verification passes, model sparsification is carried out, followed by another round of model accuracy verification; once that verification passes, deployment on the front-end platform begins.
As shown in Fig. 5, the driving path scene analysis in step S4 comprises the following steps:
S31: image acquisition and preprocessing;
S32: input the preprocessed image into the ported model;
S33: post-process the output of the ported model and convert it into lane structured data. Image acquisition is performed by the vehicle-mounted vision system, and image preprocessing includes automatic exposure, automatic white balance, distortion correction, de-jittering, smoothing filtering, and ROI scaling and cropping.
The lane structured data include top-level road information, second-level lane information and lowest-level road surface marking information. The road information includes road id, road boundary, road heading, road width, road-related attributes and the number of attached lanes; the lane information includes lane id, lane boundary and lane-related attributes; the road surface marking information includes road surface marking id, road surface marking class and road surface marking position. A dynamic three-lane model is used in this embodiment, centered on the current lane and extending to one adjacent lane on each side.
The output post-processing includes drivable region branch output post-processing, lane branch output post-processing and road surface marking branch output post-processing. As shown in Fig. 6, the drivable region branch output post-processing comprises the following steps:
S41: apply image processing to the drivable region branch output;
S42: fit curves to the left and right boundaries of the processed drivable region;
S43: output the road boundary information. In this implementation, the image processing applied to the drivable region branch output includes dilation and erosion and mask filling based on road obstacle detection results; the curve fitting defaults to a quadratic curve.
As shown in Fig. 7, the lane branch output post-processing comprises the following steps:
S51: cluster the boundary mask output by the lane boundary branch to form several lane line candidate regions;
S52: classify and fit each lane line candidate region using a convolutional neural network;
S53: output the lane boundary class and a second-order analytic expression of the lane in the image coordinate system.
As shown in Fig. 8, the road surface marking branch output post-processing comprises the following steps:
S61: cluster the boundary mask output by the road surface marking branch to form several road surface marking candidate regions;
S62: classify each road surface marking candidate region using a convolutional neural network;
S63: output the road surface marking class and the center position in the image coordinate system.
As shown in Fig. 9, the convolutional neural network includes a neural network input layer (6), a neural network shared feature encoding layer (7) and a neural network output decoding layer (8). The neural network shared feature encoding layer is composed of cascaded conv+relu+BN blocks, and the neural network output decoding layer outputs the lane boundary class and the road surface marking class. After the drivable region branch output post-processing, the lane branch output post-processing and the road surface marking branch output post-processing are completed, the results are exchanged with other algorithm modules in the form of CAN messages.
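One hedged illustration of handing a post-processed result to other modules as a CAN message, using the `python-can` package. The arbitration ID, signal layout and fixed-point scaling below are all hypothetical, since the patent does not define the message format.

```python
import struct
import can  # python-can

def send_lane_boundary(bus, lane_id, boundary_class, a, b, c):
    """Pack one lane boundary (class + quadratic coefficients) into an 8-byte CAN frame.
    Layout and scaling are hypothetical: id, class, then a/b/c as scaled 16-bit integers."""
    payload = struct.pack(">BBhhh", lane_id & 0xFF, boundary_class & 0xFF,
                          int(a * 10000), int(b * 1000), int(c))
    msg = can.Message(arbitration_id=0x3A0, data=payload, is_extended_id=False)
    bus.send(msg)

# bus = can.interface.Bus(channel="can0", bustype="socketcan")   # platform-specific setup
# send_lane_boundary(bus, 1, 2, a=0.0003, b=-0.12, c=320.0)
```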
The specific embodiments described herein are merely illustrative of the spirit of the present invention. Those skilled in the art to which the present invention belongs can make various modifications or additions to the described embodiments, or substitute them in similar ways, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (10)

1. A lane-level precision automatic driving structured data analysis method, characterized by comprising the following steps:
S1: establishing a multi-task deep convolutional neural network model based on shared shallow convolutional features;
S2: performing offline training on the multi-task deep convolutional neural network model;
S3: porting the trained multi-task deep convolutional neural network model to a front-end platform;
S4: performing driving path scene analysis using the ported model and post-processing the output.
2. The lane-level precision automatic driving structured data analysis method according to claim 1, characterized in that the multi-task deep convolutional neural network model includes an input layer (1), a shared feature encoding layer (2) and a lane structured data output decoding layer; the shared feature encoding layer is composed of cascaded conv+relu+BN blocks, and the lane structured data output decoding layer includes a drivable region branch (3), a lane boundary branch (4) and a road surface marking branch (5).
3. The lane-level precision automatic driving structured data analysis method according to claim 2, characterized in that the offline training in step S2 comprises the following steps:
S11: collecting all kinds of driving path scene data offline and extracting multiple discrete time-series training samples;
S12: manually labeling the samples to generate sample labels;
S13: training the multi-task deep convolutional neural network model using the labeled samples and establishing the loss function
L = α*L_bond + β*L_lane + γ*L_mark
according to the training results; training the shared convolutional feature layer weight coefficients using stochastic gradient descent and freezing them, where α, β, γ are configurable parameters with a default value of 0.33, and L_bond, L_lane and L_mark are respectively the drivable region, lane boundary and road surface marking segmentation loss functions, each being a softmax loss;
S14: training the branch network weight coefficients by stochastic gradient descent using the drivable region loss function L1 = L_bond, the lane boundary loss function L2 = L_lane and the road surface marking loss function L3 = L_mark, and freezing them.
4. The lane-level precision automatic driving structured data analysis method according to claim 1, characterized in that the front-end platform porting of the multi-task deep convolutional neural network model in step S3 comprises the following steps:
S21: judging whether the data type of the multi-task deep convolutional neural network model after offline training is the optimal data type supported by the front-end platform; if so, proceeding to the next step; if not, quantizing the model, evaluating the relative accuracy loss of the converted model on a preset test set, proceeding to the next step if the relative accuracy loss is less than 1%, and otherwise retraining the model before proceeding to the next step;
S22: judging whether the front-end platform hardware supports sparse operations; if not, proceeding to the next step; if so, sparsifying the model, evaluating the accuracy loss of the converted model on the preset test set, proceeding to the next step if the relative accuracy loss is less than 1%, and otherwise retraining the model before proceeding to the next step;
S23: deploying the model on the front-end platform.
5. The lane-level precision automatic driving structured data analysis method according to claim 2, characterized in that the driving path scene analysis in step S4 comprises the following steps:
S31: image acquisition and preprocessing;
S32: inputting the preprocessed image into the ported model;
S33: post-processing the output of the ported model and converting it into lane structured data.
6. The lane-level precision automatic driving structured data analysis method according to claim 5, characterized in that the lane structured data include top-level road information, second-level lane information and lowest-level road surface marking information; the road information includes road id, road boundary, road heading, road width, road-related attributes and the number of attached lanes; the lane information includes lane id, lane boundary and lane-related attributes; and the road surface marking information includes road surface marking id, road surface marking class and road surface marking position.
7. The lane-level precision automatic driving structured data analysis method according to claim 5, characterized in that the output post-processing includes drivable region branch output post-processing, lane branch output post-processing and road surface marking branch output post-processing, and the drivable region branch output post-processing comprises the following steps:
S41: applying image processing to the drivable region branch output;
S42: fitting curves to the left and right boundaries of the processed drivable region;
S43: outputting the road boundary information.
8. The lane-level precision automatic driving structured data analysis method according to claim 7, characterized in that the lane branch output post-processing comprises the following steps:
S51: clustering the boundary mask output by the lane boundary branch to form several lane line candidate regions;
S52: classifying and fitting each lane line candidate region using a convolutional neural network;
S53: outputting the lane boundary class and a second-order analytic expression of the lane in the image coordinate system.
9. The lane-level precision automatic driving structured data analysis method according to claim 7, characterized in that the road surface marking branch output post-processing comprises the following steps:
S61: clustering the boundary mask output by the road surface marking branch to form several road surface marking candidate regions;
S62: classifying each road surface marking candidate region using a convolutional neural network;
S63: outputting the road surface marking class and the center position in the image coordinate system.
10. The lane-level precision automatic driving structured data analysis method according to claim 8 or 9, characterized in that the convolutional neural network includes a neural network input layer (6), a neural network shared feature encoding layer (7) and a neural network output decoding layer (8); the neural network shared feature encoding layer is composed of cascaded conv+relu+BN blocks, and the neural network output decoding layer outputs the lane boundary class and the road surface marking class.
CN201811641455.6A 2018-12-29 2018-12-29 Lane-level precision automatic driving structured data analysis method Active CN109858372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811641455.6A CN109858372B (en) 2018-12-29 2018-12-29 Lane-level precision automatic driving structured data analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811641455.6A CN109858372B (en) 2018-12-29 2018-12-29 Lane-level precision automatic driving structured data analysis method

Publications (2)

Publication Number Publication Date
CN109858372A true CN109858372A (en) 2019-06-07
CN109858372B CN109858372B (en) 2021-04-27

Family

ID=66893417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811641455.6A Active CN109858372B (en) 2018-12-29 2018-12-29 Lane-level precision automatic driving structured data analysis method

Country Status (1)

Country Link
CN (1) CN109858372B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110244734A (en) * 2019-06-20 2019-09-17 中山大学 A kind of automatic driving vehicle paths planning method based on depth convolutional neural networks
CN110415266A (en) * 2019-07-19 2019-11-05 东南大学 A method of it is driven safely based on this vehicle surrounding vehicles trajectory predictions
CN110568454A (en) * 2019-09-27 2019-12-13 驭势科技(北京)有限公司 Method and system for sensing weather conditions
CN110781717A (en) * 2019-08-09 2020-02-11 浙江零跑科技有限公司 Cab scene semantic and visual depth combined analysis method
CN111178253A (en) * 2019-12-27 2020-05-19 深圳佑驾创新科技有限公司 Visual perception method and device for automatic driving, computer equipment and storage medium
CN111368978A (en) * 2020-03-02 2020-07-03 开放智能机器(上海)有限公司 Precision improving method for offline quantization tool
CN111401251A (en) * 2020-03-17 2020-07-10 北京百度网讯科技有限公司 Lane line extraction method and device, electronic equipment and computer-readable storage medium
CN112116095A (en) * 2019-06-19 2020-12-22 北京搜狗科技发展有限公司 Method and related device for training multi-task learning model
CN112712893A (en) * 2021-01-04 2021-04-27 山东众阳健康科技集团有限公司 Method for improving clinical auxiliary diagnosis effect of computer
CN112926370A (en) * 2019-12-06 2021-06-08 纳恩博(北京)科技有限公司 Method and device for determining perception parameters, storage medium and electronic device
CN113705436A (en) * 2021-08-27 2021-11-26 一汽解放青岛汽车有限公司 Lane information determination method and device, electronic equipment and medium
WO2021237727A1 (en) * 2020-05-29 2021-12-02 Siemens Aktiengesellschaft Method and apparatus of image processing
CN114648745A (en) * 2022-02-14 2022-06-21 成都臻识科技发展有限公司 Road detection method, device and equipment based on deep learning and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679127A (en) * 2012-09-24 2014-03-26 株式会社理光 Method and device for detecting drivable area of road pavement
CN106778918A (en) * 2017-01-22 2017-05-31 北京飞搜科技有限公司 A kind of deep learning image identification system and implementation method for being applied to mobile phone terminal
WO2017103917A1 (en) * 2015-12-15 2017-06-22 Deep Instinct Ltd. Methods and systems for data traffic analysis
US20170206434A1 (en) * 2016-01-14 2017-07-20 Ford Global Technologies, Llc Low- and high-fidelity classifiers applied to road-scene images
CN106971544A (en) * 2017-05-15 2017-07-21 安徽大学 A kind of direct method that vehicle congestion is detected using still image
CN107301383A (en) * 2017-06-07 2017-10-27 华南理工大学 A kind of pavement marking recognition methods based on Fast R CNN
CN107368890A (en) * 2016-05-11 2017-11-21 Tcl集团股份有限公司 A kind of road condition analyzing method and system based on deep learning centered on vision
CN107395211A (en) * 2017-09-12 2017-11-24 郑州云海信息技术有限公司 A kind of data processing method and device based on convolutional neural networks model
CN107704866A (en) * 2017-06-15 2018-02-16 清华大学 Multitask Scene Semantics based on new neural network understand model and its application
CN107730904A (en) * 2017-06-13 2018-02-23 银江股份有限公司 Multitask vehicle driving in reverse vision detection system based on depth convolutional neural networks
CN108429753A (en) * 2018-03-16 2018-08-21 重庆邮电大学 A kind of matched industrial network DDoS intrusion detection methods of swift nature
CN108734275A (en) * 2017-04-24 2018-11-02 英特尔公司 Hardware I P optimizes convolutional neural networks
CN108921013A (en) * 2018-05-16 2018-11-30 浙江零跑科技有限公司 A kind of visual scene identifying system and method based on deep neural network

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679127A (en) * 2012-09-24 2014-03-26 株式会社理光 Method and device for detecting drivable area of road pavement
WO2017103917A1 (en) * 2015-12-15 2017-06-22 Deep Instinct Ltd. Methods and systems for data traffic analysis
US20170206434A1 (en) * 2016-01-14 2017-07-20 Ford Global Technologies, Llc Low- and high-fidelity classifiers applied to road-scene images
CN107368890A (en) * 2016-05-11 2017-11-21 Tcl集团股份有限公司 A kind of road condition analyzing method and system based on deep learning centered on vision
CN106778918A (en) * 2017-01-22 2017-05-31 北京飞搜科技有限公司 A kind of deep learning image identification system and implementation method for being applied to mobile phone terminal
CN108734275A (en) * 2017-04-24 2018-11-02 英特尔公司 Hardware I P optimizes convolutional neural networks
CN106971544A (en) * 2017-05-15 2017-07-21 安徽大学 A kind of direct method that vehicle congestion is detected using still image
CN107301383A (en) * 2017-06-07 2017-10-27 华南理工大学 A kind of pavement marking recognition methods based on Fast R CNN
CN107730904A (en) * 2017-06-13 2018-02-23 银江股份有限公司 Multitask vehicle driving in reverse vision detection system based on depth convolutional neural networks
CN107704866A (en) * 2017-06-15 2018-02-16 清华大学 Multitask Scene Semantics based on new neural network understand model and its application
CN107395211A (en) * 2017-09-12 2017-11-24 郑州云海信息技术有限公司 A kind of data processing method and device based on convolutional neural networks model
CN108429753A (en) * 2018-03-16 2018-08-21 重庆邮电大学 A kind of matched industrial network DDoS intrusion detection methods of swift nature
CN108921013A (en) * 2018-05-16 2018-11-30 浙江零跑科技有限公司 A kind of visual scene identifying system and method based on deep neural network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LIANGFU CHEN 等: "Driving Scene Perception Network:Real-time Joint Detection, Depth Estimation and Semantic Segmentation", 《ARXIV》 *
MALTE OELJEKLAUS 等: "A Fast Multi-Task CNN for Spatial Understanding of Traffic Scenes", 《2018 21ST INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC)》 *
PEISEN LIU 等: "Multi-lane Detection via Multi-task Network in Various Road Scenes", 《2018 CHINESE AUTOMATION CONGRESS(CAC)》 *
SEOKJU LEE 等: "VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition", 《ARXIV》 *
姜灏: "一种自动驾驶车的环境感知系统", 《自动化技术》 *
温泉 等: "基于深度学习的驾驶场景数据应用", 《电子技术与软件工程》 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116095B (en) * 2019-06-19 2024-05-24 北京搜狗科技发展有限公司 Method and related device for training multi-task learning model
CN112116095A (en) * 2019-06-19 2020-12-22 北京搜狗科技发展有限公司 Method and related device for training multi-task learning model
CN110244734B (en) * 2019-06-20 2021-02-05 中山大学 Automatic driving vehicle path planning method based on deep convolutional neural network
CN110244734A (en) * 2019-06-20 2019-09-17 中山大学 A kind of automatic driving vehicle paths planning method based on depth convolutional neural networks
CN110415266A (en) * 2019-07-19 2019-11-05 东南大学 A method of it is driven safely based on this vehicle surrounding vehicles trajectory predictions
CN110781717A (en) * 2019-08-09 2020-02-11 浙江零跑科技有限公司 Cab scene semantic and visual depth combined analysis method
CN110568454A (en) * 2019-09-27 2019-12-13 驭势科技(北京)有限公司 Method and system for sensing weather conditions
CN112926370A (en) * 2019-12-06 2021-06-08 纳恩博(北京)科技有限公司 Method and device for determining perception parameters, storage medium and electronic device
CN111178253A (en) * 2019-12-27 2020-05-19 深圳佑驾创新科技有限公司 Visual perception method and device for automatic driving, computer equipment and storage medium
CN111178253B (en) * 2019-12-27 2024-02-27 佑驾创新(北京)技术有限公司 Visual perception method and device for automatic driving, computer equipment and storage medium
CN111368978B (en) * 2020-03-02 2023-03-24 开放智能机器(上海)有限公司 Precision improving method for offline quantization tool
CN111368978A (en) * 2020-03-02 2020-07-03 开放智能机器(上海)有限公司 Precision improving method for offline quantization tool
CN111401251A (en) * 2020-03-17 2020-07-10 北京百度网讯科技有限公司 Lane line extraction method and device, electronic equipment and computer-readable storage medium
CN111401251B (en) * 2020-03-17 2023-12-26 北京百度网讯科技有限公司 Lane line extraction method, lane line extraction device, electronic equipment and computer readable storage medium
WO2021237727A1 (en) * 2020-05-29 2021-12-02 Siemens Aktiengesellschaft Method and apparatus of image processing
CN112712893A (en) * 2021-01-04 2021-04-27 山东众阳健康科技集团有限公司 Method for improving clinical auxiliary diagnosis effect of computer
CN113705436A (en) * 2021-08-27 2021-11-26 一汽解放青岛汽车有限公司 Lane information determination method and device, electronic equipment and medium
CN114648745A (en) * 2022-02-14 2022-06-21 成都臻识科技发展有限公司 Road detection method, device and equipment based on deep learning and storage medium

Also Published As

Publication number Publication date
CN109858372B (en) 2021-04-27

Similar Documents

Publication Publication Date Title
CN109858372A (en) A kind of lane class precision automatic Pilot structured data analysis method
CN108647585B (en) Traffic identifier detection method based on multi-scale circulation attention network
CN106980858B (en) Language text detection and positioning system and language text detection and positioning method using same
CN110766098A (en) Traffic scene small target detection method based on improved YOLOv3
CN110472467A (en) The detection method for transport hub critical object based on YOLO v3
CN108765404A (en) A kind of road damage testing method and device based on deep learning image classification
KR20200091319A (en) Method and device for lane detection without post-processing by using lane mask, and testing method, and testing device using the same
CN109766887A (en) A kind of multi-target detection method based on cascade hourglass neural network
CN112183203A (en) Real-time traffic sign detection method based on multi-scale pixel feature fusion
CN106228134A (en) Drivable region detection method based on pavement image, Apparatus and system
CN110599497A (en) Drivable region segmentation method based on deep neural network
CN109543681A (en) Character recognition method under a kind of natural scene based on attention mechanism
CN114120289B (en) Method and system for identifying driving area and lane line
CN111079675A (en) Driving behavior analysis method based on target detection and target tracking
CN114519819A (en) Remote sensing image target detection method based on global context awareness
CN110263739A (en) Photo table recognition methods based on OCR technique
CN109947948A (en) A kind of knowledge mapping expression learning method and system based on tensor
CN109241893B (en) Road selection method and device based on artificial intelligence technology and readable storage medium
CN108573238A (en) A kind of vehicle checking method based on dual network structure
CN116681657B (en) Asphalt pavement disease detection method based on improved YOLOv7 model
CN115205694A (en) Image segmentation method, device and computer readable storage medium
CN116912628A (en) Method and device for training defect detection model and detecting defects
CN112085101A (en) High-performance and high-reliability environment fusion sensing method and system
CN114724113B (en) Road sign recognition method, automatic driving method, device and equipment
CN110705441A (en) Pedestrian crossing line image post-processing method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 310051 1st and 6th floors, no.451 Internet of things street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Zhejiang Zero run Technology Co.,Ltd.

Address before: 310051 1st and 6th floors, no.451 Internet of things street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: ZHEJIANG LEAPMOTOR TECHNOLOGY Co.,Ltd.

CP01 Change in the name or title of a patent holder