CN109858372B - Lane-level precision automatic driving structured data analysis method - Google Patents


Info

Publication number
CN109858372B
CN109858372B (application CN201811641455.6A)
Authority
CN
China
Prior art keywords
lane
road
neural network
structured data
model
Prior art date
Legal status
Active
Application number
CN201811641455.6A
Other languages
Chinese (zh)
Other versions
CN109858372A (en)
Inventor
缪其恒
吴长伟
苏志杰
孙焱标
王江明
许炜
Current Assignee
Zhejiang Zero Run Technology Co Ltd
Original Assignee
Zhejiang Leapmotor Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Leapmotor Technology Co Ltd filed Critical Zhejiang Leapmotor Technology Co Ltd
Priority to CN201811641455.6A
Publication of CN109858372A
Application granted
Publication of CN109858372B

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a lane-level precision automatic driving structured data analysis method comprising the steps of establishing a multitask deep convolutional neural network model, training it offline, transplanting it to a front-end platform, analyzing the road driving scene and post-processing the output. The advantages of the invention are: the deep convolutional neural network strengthens feature extraction for road markings, widening the range of applicable scenes and improving robustness; the added travelable region branch extends related applications to unstructured roads; the added lane boundary branch improves accuracy in congested scenes or when lane markings are occluded; the added lane structured data, including attributes such as lane boundary type and lane orientation, enables high-level automatic driving perception applications; the added road surface marking information, including marking type and position, enables high-level automatic driving localization applications; and by sharing convolution features, the method can share computing resources with other visual perception modules, giving high integration efficiency.

Description

Lane-level precision automatic driving structured data analysis method
Technical Field
The invention relates to the field of automatic driving, in particular to a lane-level precision automatic driving structured data analysis method.
Background
Intelligence is one of the major trends in today's automotive industry, and vision systems are increasingly widely applied in vehicle active safety. Monocular and binocular forward-view, rear-view and 360-degree surround-view systems have become the mainstream sensing devices of current advanced driver assistance systems. Most existing commercial lane assistance systems are based on visual lane marking perception; they mostly serve a single application scenario and are only suitable for structured roads with clear markings. Such methods can be summarized as lane marking feature extraction, lane marking feature clustering and lane marking fitting. In application, these systems have the following shortcomings: (i) lane marking extraction accuracy is low and easily affected by illumination, road wear and similar factors; (ii) under road congestion (where vehicles occlude the markings), the lane detection rate is low; (iii) lane fitting precision is poor, sufficient for short-range yaw assistance but unsuitable for automatic driving path planning; (iv) the output data structure is too simple, with lane and boundary attributes not clearly defined; (v) correlation with other visual algorithms (such as target detection modules) is low, so integrated efficiency after fusion is poor.
Disclosure of Invention
The invention addresses the above problems and provides a lane-level precision automatic driving structured data analysis method with high accuracy, a wide application range, good robustness and lane attribute output.
The technical scheme adopted by the invention to solve this problem is a lane-level precision automatic driving structured data analysis method comprising the following steps:
s1: establishing a multitask deep convolutional neural network model based on shared shallow convolution features;
s2: training the multitask deep convolutional neural network model offline;
s3: transplanting the trained multitask deep convolutional neural network model to a front-end platform;
s4: analyzing the road driving scene with the transplanted model and post-processing the output.
By sharing shallow convolution features, a module using this method can share computing resources with other visual perception modules, giving the vehicle visual perception system higher integration efficiency. The established multitask deep convolutional neural network model is trained extensively offline to determine the weight coefficients of the shared convolution layers and of each branch network, improving the accuracy of the model. The model output is post-processed and converted into concrete lane structured data, including road information, lane information and road surface marking information, which downstream automatic driving programs can conveniently use.
As a preferable scheme of the above, the multitask deep convolutional neural network model comprises an input layer, a shared feature coding layer and a lane structured data output decoding layer. The shared feature coding layer consists of cascaded conv + relu + BN combinations, and the lane structured data output decoding layer comprises a travelable region branch, a lane boundary branch and a road surface marking branch. The input of the shared feature coding layer is an image of arbitrary resolution; downsampling is performed via convolution strides, and three feature layers of different scales can be output, corresponding respectively to large-, medium- and small-scale feature scene separation. The network structure of the lane structured data output decoding layer consists of deconvolution and softmax layers; the input of the decoding layer is the concatenated shared convolution layer features, and the output is a structured-data binary mask matching the network input size.
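By way of illustration, a minimal PyTorch sketch of such a shared-encoder, three-branch network is given below. The layer widths, block depths and class names are assumptions made for the sketch rather than the disclosed architecture, and for brevity only the smallest-scale feature map is decoded instead of the full cascade of shared features.

```python
import torch
import torch.nn as nn

def conv_bn_relu(c_in, c_out, stride):
    # one "conv + relu + BN" unit of the shared feature coding layer
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1),
        nn.ReLU(inplace=True),
        nn.BatchNorm2d(c_out),
    )

class SharedEncoder(nn.Module):
    """Cascaded conv+relu+BN blocks; strided convolutions downsample,
    yielding three feature maps at different scales."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(conv_bn_relu(3, 32, 2), conv_bn_relu(32, 32, 1))
        self.stage2 = nn.Sequential(conv_bn_relu(32, 64, 2), conv_bn_relu(64, 64, 1))
        self.stage3 = nn.Sequential(conv_bn_relu(64, 128, 2), conv_bn_relu(128, 128, 1))

    def forward(self, x):
        f_large = self.stage1(x)      # large-scale features (1/2 resolution)
        f_mid = self.stage2(f_large)  # medium-scale features (1/4 resolution)
        f_small = self.stage3(f_mid)  # small-scale features (1/8 resolution)
        return f_large, f_mid, f_small

class MaskBranch(nn.Module):
    """Decoding branch: deconvolution back to the input size plus a
    per-pixel softmax, producing a two-class (binary) mask."""
    def __init__(self, c_in):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(c_in, 32, 4, stride=4),  # 1/8 -> 1/2 resolution
            nn.ConvTranspose2d(32, 2, 2, stride=2),     # 1/2 -> full resolution
        )

    def forward(self, f):
        return torch.softmax(self.up(f), dim=1)

class LaneStructureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()
        self.drivable = MaskBranch(128)  # travelable region branch
        self.boundary = MaskBranch(128)  # lane boundary branch
        self.marking = MaskBranch(128)   # road surface marking branch

    def forward(self, x):
        _, _, f = self.encoder(x)        # simplification: decode only the smallest scale
        return self.drivable(f), self.boundary(f), self.marking(f)
```

Each branch ends in a per-pixel softmax, so its output can be thresholded into the binary structured-data mask described above.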
As a preferable scheme of the above, the offline training in step S2 includes the following steps:
s11: collecting various road driving scene data offline and extracting a number of discrete time-sequence training samples;
s12: manually labeling the samples to generate sample labels;
s13: training the multitask deep convolutional neural network model with the labeled samples, and establishing the loss function
L = α·L_bond + β·L_lane + γ·L_mark
training the weight coefficients of the shared convolution feature layer by stochastic gradient descent and freezing them, wherein α, β and γ are configurable parameters whose default values are all 0.33, and L_bond, L_lane and L_mark are respectively the travelable area, lane boundary and road surface marking segmentation loss functions, all of which are softmax losses;
s14: using the travelable area loss function L_bond, the lane boundary loss function L_lane and the road surface marking loss function L_mark, training the weight coefficients of the branch networks by stochastic gradient descent and freezing them. The sample labels comprise a travelable area mask, a lane boundary mask and a road surface marking mask; when sample data are scarce, the samples can be augmented by geometric image transformations. A sketch of the combined loss follows.
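By way of illustration, a minimal sketch of the combined loss is shown below, assuming that softmaxLoss corresponds to per-pixel cross-entropy over two classes and that each branch emits (N, 2, H, W) logits; the function and argument names are illustrative only.

```python
import torch.nn.functional as F

def multitask_loss(pred_bond, pred_lane, pred_mark,
                   gt_bond, gt_lane, gt_mark,
                   alpha=0.33, beta=0.33, gamma=0.33):
    # preds are (N, 2, H, W) logits; ground truths are (N, H, W) long {0, 1} masks
    l_bond = F.cross_entropy(pred_bond, gt_bond)  # travelable area loss
    l_lane = F.cross_entropy(pred_lane, gt_lane)  # lane boundary loss
    l_mark = F.cross_entropy(pred_mark, gt_mark)  # road surface marking loss
    return alpha * l_bond + beta * l_lane + gamma * l_mark
```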
As a preferable scheme of the above, the front-end platform transplantation of the multitask deep convolutional neural network model in step S3 includes the following steps:
s21: judging whether the data type of the offline-trained multitask deep convolutional neural network model is the optimal data type supported by the front-end platform; if so, proceeding to the next step; otherwise quantizing the model and estimating the relative precision loss of the converted model on a preset test set: if the relative precision loss is less than 1%, proceeding to the next step, otherwise retraining the model and then proceeding;
s22: judging whether the front-end platform hardware supports sparse operation; if not, proceeding to the next step; if so, sparsifying the model and estimating the precision loss of the converted model on a preset test set: if the relative precision loss is less than 1%, proceeding to the next step, otherwise retraining the model and then proceeding;
s23: deploying the model on the front-end platform. A sketch of this migration flow follows.
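By way of illustration, the decision flow of steps S21 to S23 can be sketched as follows; every helper (quantize, sparsify, retrain, relative_precision_loss) and the platform attributes are hypothetical placeholders standing in for platform-specific tooling, not a real API.

```python
def migrate_to_front_end(model, platform, test_set,
                         quantize, sparsify, retrain,
                         relative_precision_loss, max_rel_loss=0.01):
    # S21: convert the model to the platform's optimal data type if needed
    if model.dtype != platform.optimal_dtype:
        converted = quantize(model, platform.optimal_dtype)
        if relative_precision_loss(converted, model, test_set) >= max_rel_loss:
            converted = retrain(converted)   # retrain, then proceed
        model = converted
    # S22: sparsify only if the front-end hardware supports sparse operation
    if platform.supports_sparse:
        sparse = sparsify(model)
        if relative_precision_loss(sparse, model, test_set) >= max_rel_loss:
            sparse = retrain(sparse)
        model = sparse
    # S23: deploy the final model on the front-end platform
    platform.deploy(model)
    return model
```

A possible realization of the relative precision loss check is sketched in the embodiment section below.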
As a preferable scheme of the above, the road driving scene analysis in step S4 includes the following steps:
s31: collecting and preprocessing an image;
s32: inputting the preprocessed image into the transplanted model;
s33: post-processing the output of the transplanted model and converting it into lane structured data. Image preprocessing includes automatic exposure, automatic white balance, distortion correction, de-jittering, smoothing filtering, and ROI scaling and cropping; a sketch of this chain follows.
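By way of illustration, a sketch of such a preprocessing chain using OpenCV is given below; the camera matrix, distortion coefficients and ROI are placeholders, and automatic exposure and white balance are assumed to be handled upstream by the camera ISP.

```python
import cv2
import numpy as np

def preprocess(frame, camera_matrix, dist_coeffs,
               roi=(0, 80, 1280, 640), net_size=(640, 320)):
    img = cv2.undistort(frame, camera_matrix, dist_coeffs)  # distortion correction
    img = cv2.GaussianBlur(img, (3, 3), 0)                  # smoothing filter
    x, y, w, h = roi
    img = img[y:y + h, x:x + w]                             # ROI cropping
    img = cv2.resize(img, net_size)                         # scale to network input size
    return img.astype(np.float32) / 255.0                   # normalize for the network
```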
As a preferable scheme of the above, the lane structured data comprises road information at the highest level, lane information at the secondary level and road surface marking information at the lowest level. The road information includes road id, road boundary, road orientation, road width, road association attributes and the number of subordinate lanes; the lane information includes lane id, lane boundary and lane association attributes; and the road surface marking information includes marking id, marking category and marking position.
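By way of illustration, the three-level structure might be represented as follows; the field names are inferred from the description and are not a normative schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RoadMarking:                     # lowest level: road surface marking
    marking_id: int
    category: str                      # e.g. arrow, crosswalk, speed text
    position: Tuple[float, float]      # center in image coordinates

@dataclass
class Lane:                            # secondary level
    lane_id: int
    boundary: List[float]              # fitted boundary coefficients
    attributes: dict                   # lane association attributes

@dataclass
class Road:                            # highest level
    road_id: int
    boundary: List[float]
    orientation: float
    width: float
    attributes: dict                   # road association attributes
    lanes: List[Lane] = field(default_factory=list)
    markings: List[RoadMarking] = field(default_factory=list)
```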
As a preferable scheme of the above, the output post-processing includes travelable region branch output post-processing, lane branch output post-processing and road surface marking branch output post-processing; the travelable region branch output post-processing includes the following steps:
s41: performing image processing on the travelable region branch output;
s42: performing high-order curve fitting on the left and right boundaries of the processed travelable region;
s43: outputting the road boundary information. The image processing comprises dilation and erosion and mask filling based on road obstacle detection results; the high-order curve fit defaults to a quadratic curve. A sketch of this post-processing follows.
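By way of illustration, the travelable region post-processing might be sketched as below: a morphological close (dilation followed by erosion) cleans the mask, after which the leftmost and rightmost mask pixels of each row are fitted with the default quadratic; the kernel size and minimum point count are assumptions.

```python
import cv2
import numpy as np

def drivable_boundaries(mask, order=2):
    # mask: HxW uint8 {0, 1} travelable-region mask from the branch output
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # dilation then erosion
    ys, left, right = [], [], []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size:
            ys.append(y); left.append(xs[0]); right.append(xs[-1])
    if len(ys) <= order:
        return None, None
    # fit x = f(y) quadratics for the left and right road boundaries
    return np.polyfit(ys, left, order), np.polyfit(ys, right, order)
```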
As a preferable mode of the above, the lane branch output post-processing includes the steps of:
s51: clustering boundary masks output by lane boundary branches to form a plurality of lane line candidate areas;
s52: classifying and fitting the candidate regions of each lane line by using a convolutional neural network;
s53: outputting the lane boundary categories and lane quadratic analytic expressions in the image coordinate system. A sketch of this post-processing follows.
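By way of illustration, the lane branch post-processing might be sketched as follows; classify_cnn stands in for the convolutional neural network of step S52, and the pixel threshold is an assumption.

```python
import cv2
import numpy as np

def lane_lines(boundary_mask, classify_cnn, min_pixels=200):
    # connected-component clustering of the lane boundary mask
    n, labels = cv2.connectedComponents(boundary_mask.astype(np.uint8))
    results = []
    for k in range(1, n):                     # label 0 is background
        ys, xs = np.nonzero(labels == k)
        if xs.size < min_pixels:
            continue
        category = classify_cnn(labels == k)  # lane boundary category (assumed CNN)
        coeffs = np.polyfit(ys, xs, 2)        # quadratic analytic expression x(y)
        results.append((category, coeffs))
    return results
```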
As a preferable scheme of the above, the road surface marking branch output post-processing includes the following steps:
s61: clustering the masks output by the road surface marking branch to form a number of marking candidate areas;
s62: classifying the marking candidate areas with a convolutional neural network;
s63: outputting the marking category and center position in the image coordinate system. A sketch of this post-processing follows.
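By way of illustration, a corresponding sketch for the road surface marking branch, where classify_cnn again stands in for the classifying convolutional neural network and the area threshold is an assumption:

```python
import cv2
import numpy as np

def road_markings(marking_mask, classify_cnn, min_pixels=100):
    # cluster the marking mask into candidate areas with their statistics
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        marking_mask.astype(np.uint8))
    results = []
    for k in range(1, n):                     # label 0 is background
        if stats[k, cv2.CC_STAT_AREA] < min_pixels:
            continue
        category = classify_cnn(labels == k)  # marking category (assumed CNN)
        cx, cy = centroids[k]                 # center in image coordinates
        results.append((category, (cx, cy)))
    return results
```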
As a preferable scheme of the above, the convolutional neural network comprises a neural network input layer, a neural network shared feature coding layer and a neural network output decoding layer; the neural network shared feature coding layer consists of cascaded conv + relu + BN combinations, and the neural network output decoding layer outputs the lane boundary categories and road surface marking categories.
The invention has the following advantages: the deep convolutional neural network strengthens feature extraction for road markings, widening the range of applicable scenes and improving robustness; the added travelable region branch extends related applications to unstructured roads; the added lane boundary branch improves accuracy in congested scenes or when lane markings are occluded; the added lane structured data, including attributes such as lane boundary type and lane orientation, enables high-level automatic driving perception applications; the added road surface marking information, including marking type and position, enables high-level automatic driving localization applications; and by sharing convolution features, the method can share computing resources with other visual perception modules, giving high integration efficiency.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a schematic structural diagram of a multitask deep convolutional neural network model in the present invention.
FIG. 3 is a schematic flow chart of the offline training of the present invention.
FIG. 4 is a schematic flow chart of the front-end platform transplantation of the multitask deep convolutional neural network model in the present invention.
Fig. 5 is a schematic flow chart of road driving scene analysis according to the present invention.
Fig. 6 is a schematic flow chart of the after-treatment of the branch output of the travelable region in the present invention.
FIG. 7 is a flow chart illustrating lane branch output post-processing according to the present invention.
Fig. 8 is a schematic flow chart of the road surface marking branch output post-processing in the invention.
FIG. 9 is a schematic diagram of a convolutional neural network according to the present invention.
1: input layer; 2: shared feature coding layer; 3: travelable region branch; 4: lane boundary branch; 5: road surface marking branch; 6: neural network input layer; 7: neural network shared feature coding layer; 8: neural network output decoding layer.
Detailed Description
The technical solution of the present invention is further described below by way of examples with reference to the accompanying drawings.
Embodiment:
the lane-level precision automatic driving structured data analysis method of the embodiment, as shown in fig. 1, includes the following steps:
s1: establishing a multitask deep convolutional neural network model based on shared shallow convolution features;
s2: training the multitask deep convolutional neural network model offline;
s3: transplanting the trained multitask deep convolutional neural network model to a front-end platform;
s4: analyzing the road driving scene with the transplanted model and post-processing the output.
The multitask deep convolutional neural network model, as shown in fig. 2, comprises an input layer 1, a shared feature coding layer 2 and a lane structured data output decoding layer. The shared feature coding layer consists of cascaded conv + relu + BN combinations, and the lane structured data output decoding layer comprises a travelable region branch 3, a lane boundary branch 4 and a road surface marking branch 5. The input of the shared feature coding layer is an image of arbitrary resolution; downsampling is performed via convolution strides, and three feature layers of different scales can be output, corresponding respectively to large-, medium- and small-scale feature scene separation. The network structure of the lane structured data output decoding layer consists of deconvolution and softmax layers; the input of the decoding layer is the concatenated shared convolution layer features, and the output is a structured-data binary mask matching the network input size. In this embodiment, the image input to the shared feature coding layer defaults to 640 × 320.
As shown in fig. 3, the offline training in step S2 includes the following steps:
s11: collecting various road driving scene data offline and extracting a number of discrete time-sequence training samples;
s12: manually labeling the samples to generate sample labels;
s13: training the multitask deep convolutional neural network model with the labeled samples, and establishing the loss function
L = α·L_bond + β·L_lane + γ·L_mark
training the weight coefficients of the shared convolution feature layer by stochastic gradient descent and freezing them, wherein α, β and γ are configurable parameters whose default values are all 0.33, and L_bond, L_lane and L_mark are respectively the travelable area, lane boundary and road surface marking segmentation loss functions, all of which are softmax losses;
s14: using the travelable area loss function L_bond, the lane boundary loss function L_lane and the road surface marking loss function L_mark, training the weight coefficients of the branch networks by stochastic gradient descent and freezing them. In this implementation 100000 samples are extracted and augmented by geometric image transformations; all samples are labeled manually, the label content comprising a travelable area mask, a lane boundary mask and a road surface marking mask. The training parameters for offline training include the learning rate, mini-batch size, weight decay coefficient and momentum coefficient.
As shown in fig. 4, the front-end platform transplantation of the multitask deep convolutional neural network model in step S3 includes the following steps:
s21: judging whether the data type of the offline-trained multitask deep convolutional neural network model is the optimal data type supported by the front-end platform; if so, proceeding to the next step; otherwise quantizing the model and estimating the relative precision loss of the converted model on a preset test set: if the relative precision loss is less than 1%, proceeding to the next step, otherwise retraining the model and then proceeding;
s22: judging whether the front-end platform hardware supports sparse operation; if not, proceeding to the next step; if so, sparsifying the model and estimating the precision loss of the converted model on a preset test set: if the relative precision loss is less than 1%, proceeding to the next step, otherwise retraining the model and then proceeding;
s23: deploying the model on the front-end platform. In this implementation the data type of the multitask deep convolutional neural network model is int8, the hardware supports sparse operation, and the optimal data type supported by the front-end platform is fp32. The model is converted to fp32 by quantization and its accuracy is then verified; after verification passes, the model is sparsified, accuracy is verified again, and after this verification passes the model is deployed on the front-end platform. The verification criterion is sketched below.
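By way of illustration, the "relative precision loss below 1%" verification used above might be realized as below; mean_iou is an assumed evaluation helper and mean intersection-over-union is an assumed metric, as the description does not prescribe one.

```python
def relative_precision_loss(converted_model, reference_model, test_set, mean_iou):
    # mean_iou(model, test_set) is assumed to return the mean IoU of the
    # three segmentation branches over the preset test set
    ref = mean_iou(reference_model, test_set)
    conv = mean_iou(converted_model, test_set)
    return (ref - conv) / ref  # verification passes when this is < 0.01
```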
As shown in fig. 5, the road driving scene analysis in step S4 includes the following steps:
s31: collecting and preprocessing an image;
s32: inputting the preprocessed image into the transplanted model;
s33: post-processing the output of the transplanted model and converting it into lane structured data. Image acquisition is performed by the vehicle-mounted vision system, and image preprocessing comprises automatic exposure, automatic white balance, distortion correction, de-jittering, smoothing filtering, and ROI scaling and cropping.
The lane structured data comprises road information at the highest level, lane information at the secondary level and road surface marking information at the lowest level. The road information includes road id, road boundary, road orientation, road width, road association attributes and the number of subordinate lanes; the lane information includes lane id, lane boundary and lane association attributes; and the road surface marking information includes marking id, marking category and marking position. In this embodiment, a dynamic three-lane model is adopted, centered on the current lane and extending left and right to the adjacent lanes.
The output post-processing includes travelable region branch output post-processing, lane branch output post-processing and road surface marking branch output post-processing; the travelable region branch output post-processing, as shown in fig. 6, includes the following steps:
s41: performing image processing on the travelable region branch output;
s42: performing high-order curve fitting on the left and right boundaries of the processed travelable region;
s43: outputting the road boundary information. In this embodiment, the image processing performed on the travelable region branch output comprises dilation and erosion and mask filling based on road obstacle detection results, and the high-order curve fit defaults to a quadratic curve.
As shown in fig. 7, the lane branch output post-processing includes the steps of:
s51: clustering boundary masks output by lane boundary branches to form a plurality of lane line candidate areas;
s52: classifying and fitting the candidate regions of each lane line by using a convolutional neural network;
s53: outputting the lane boundary categories and lane quadratic analytic expressions in the image coordinate system.
As shown in fig. 8, the road surface marking branch output post-processing includes the following steps:
s61: clustering the masks output by the road surface marking branch to form a number of marking candidate areas;
s62: classifying the marking candidate areas with a convolutional neural network;
s63: outputting the marking category and center position in the image coordinate system.
As shown in fig. 9, the convolutional neural network comprises a neural network input layer (6), a neural network shared feature coding layer (7) and a neural network output decoding layer (8). The neural network shared feature coding layer consists of cascaded conv + relu + BN combinations, and the neural network output decoding layer outputs the lane boundary categories and road surface marking categories. After the travelable region branch output post-processing, lane branch output post-processing and road surface marking branch output post-processing are complete, the results are exchanged with other algorithm modules via CAN messages.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute them in similar ways, without departing from the spirit of the invention or the scope defined by the appended claims.

Claims (9)

1. A lane-level precision automatic driving structured data analysis method, characterized by comprising the following steps:
s1: establishing a multitask deep convolutional neural network model based on shared shallow convolution features;
s2: training the multitask deep convolutional neural network model offline;
s3: transplanting the trained multitask deep convolutional neural network model to a front-end platform;
s4: analyzing the road driving scene with the transplanted model and post-processing the output;
the front-end platform transplantation of the multitask deep convolutional neural network model in step S3 comprising the following steps:
s21: judging whether the data type of the offline-trained multitask deep convolutional neural network model is the optimal data type supported by the front-end platform; if so, proceeding to the next step; otherwise quantizing the model and estimating the relative precision loss of the converted model on a preset test set: if the relative precision loss is less than 1%, proceeding to the next step, otherwise retraining the model and then proceeding;
s22: judging whether the front-end platform hardware supports sparse operation; if not, proceeding to the next step; if so, sparsifying the model and estimating the precision loss of the converted model on a preset test set: if the relative precision loss is less than 1%, proceeding to the next step, otherwise retraining the model and then proceeding;
s23: deploying the model on the front-end platform.
2. The lane-level precision autopilot structured data analysis method of claim 1, wherein: the multitask deep convolutional neural network model comprises an input layer (1), a shared feature coding layer (2) and a lane structured data output decoding layer, the shared feature coding layer consisting of cascaded conv + relu + BN combinations, and the lane structured data output decoding layer comprising a travelable region branch (3), a lane boundary branch (4) and a road surface marking branch (5).
3. The lane-level precision autopilot structured data analysis method of claim 2, wherein: the offline training in step S2 includes the following steps:
s11: collecting various road driving scene data offline and extracting a number of discrete time-sequence training samples;
s12: manually labeling the samples to generate sample labels;
s13: training the multitask deep convolutional neural network model with the labeled samples, and establishing the loss function
L = α·L_bond + β·L_lane + γ·L_mark
training the weight coefficients of the shared convolution feature layer by stochastic gradient descent and freezing them, wherein α, β and γ are configurable parameters whose default values are all 0.33, and L_bond, L_lane and L_mark are respectively the travelable area, lane boundary and road surface marking segmentation loss functions, all of which are softmax losses;
s14: using the travelable area loss function L_bond, the lane boundary loss function L_lane and the road surface marking loss function L_mark, training the weight coefficients of the branch networks by stochastic gradient descent and freezing them.
4. The lane-level precision autopilot structured data analysis method of claim 2, wherein: the road driving scene analysis in step S4 comprises the following steps:
s31: collecting and preprocessing an image;
s32: inputting the preprocessed image into the transplanted model;
s33: and outputting the output of the transplanted model, and converting the output into lane structured data through post-processing.
5. The lane-level precision autopilot structured data analysis method of claim 4, wherein: the lane structured data comprises road information at the highest level, lane information at the secondary level and road surface marking information at the lowest level; the road information includes road id, road boundary, road orientation, road width, road association attributes and the number of subordinate lanes; the lane information includes lane id, lane boundary and lane association attributes; and the road surface marking information includes marking id, marking category and marking position.
6. The lane-level precision autopilot structured data analysis method of claim 4, wherein: the output post-processing comprises travelable region branch output post-processing, lane branch output post-processing and road surface marking branch output post-processing, and the travelable region branch output post-processing comprises the following steps:
s41: performing image processing on the travelable region branch output;
s42: performing high-order curve fitting on the left and right boundaries of the processed travelable region;
s43: outputting the road boundary information.
7. The lane-level precision autopilot structured data analysis method of claim 6, wherein: the lane branch output post-processing comprises the following steps:
s51: clustering boundary masks output by lane boundary branches to form a plurality of lane line candidate areas;
s52: classifying and fitting the candidate regions of each lane line by using a convolutional neural network;
s53: outputting the lane boundary categories and lane quadratic analytic expressions in the image coordinate system.
8. The lane-level precision autopilot structured data analysis method of claim 6, wherein: the road surface marking branch output post-processing comprises the following steps:
s61: clustering the masks output by the road surface marking branch to form a number of marking candidate areas;
s62: classifying the marking candidate areas with a convolutional neural network;
s63: outputting the marking category and center position in the image coordinate system.
9. The lane-level precision autopilot structured data analysis method of claim 7 or 8, wherein: the convolutional neural network comprises a neural network input layer (6), a neural network shared feature coding layer (7) and a neural network output decoding layer (8), the neural network shared feature coding layer consisting of cascaded conv + relu + BN combinations, and the neural network output decoding layer outputting the lane boundary categories and road surface marking categories.
CN201811641455.6A 2018-12-29 2018-12-29 Lane-level precision automatic driving structured data analysis method Active CN109858372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811641455.6A CN109858372B (en) 2018-12-29 2018-12-29 Lane-level precision automatic driving structured data analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811641455.6A CN109858372B (en) 2018-12-29 2018-12-29 Lane-level precision automatic driving structured data analysis method

Publications (2)

Publication Number Publication Date
CN109858372A CN109858372A (en) 2019-06-07
CN109858372B 2021-04-27

Family

ID=66893417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811641455.6A Active CN109858372B (en) 2018-12-29 2018-12-29 Lane-level precision automatic driving structured data analysis method

Country Status (1)

Country Link
CN (1) CN109858372B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116095B (en) * 2019-06-19 2024-05-24 北京搜狗科技发展有限公司 Method and related device for training multi-task learning model
CN110244734B (en) * 2019-06-20 2021-02-05 中山大学 Automatic driving vehicle path planning method based on deep convolutional neural network
CN110415266A (en) * 2019-07-19 2019-11-05 东南大学 A method of it is driven safely based on this vehicle surrounding vehicles trajectory predictions
CN110781717A (en) * 2019-08-09 2020-02-11 浙江零跑科技有限公司 Cab scene semantic and visual depth combined analysis method
CN110568454B (en) * 2019-09-27 2022-06-28 驭势科技(北京)有限公司 Method and system for sensing weather conditions
CN112926370A (en) * 2019-12-06 2021-06-08 纳恩博(北京)科技有限公司 Method and device for determining perception parameters, storage medium and electronic device
CN111178253B (en) * 2019-12-27 2024-02-27 佑驾创新(北京)技术有限公司 Visual perception method and device for automatic driving, computer equipment and storage medium
CN111368978B (en) * 2020-03-02 2023-03-24 开放智能机器(上海)有限公司 Precision improving method for offline quantization tool
CN111401251B (en) * 2020-03-17 2023-12-26 北京百度网讯科技有限公司 Lane line extraction method, lane line extraction device, electronic equipment and computer readable storage medium
WO2021237727A1 (en) * 2020-05-29 2021-12-02 Siemens Aktiengesellschaft Method and apparatus of image processing
CN112712893B (en) * 2021-01-04 2023-01-20 众阳健康科技集团有限公司 Method for improving clinical auxiliary diagnosis effect of computer
CN113705436A (en) * 2021-08-27 2021-11-26 一汽解放青岛汽车有限公司 Lane information determination method and device, electronic equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107395211A (en) * 2017-09-12 2017-11-24 郑州云海信息技术有限公司 A kind of data processing method and device based on convolutional neural networks model
CN108429753A (en) * 2018-03-16 2018-08-21 重庆邮电大学 A kind of matched industrial network DDoS intrusion detection methods of swift nature
CN108734275A (en) * 2017-04-24 2018-11-02 英特尔公司 Hardware I P optimizes convolutional neural networks

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679127B (en) * 2012-09-24 2017-08-04 株式会社理光 The method and apparatus for detecting the wheeled region of pavement of road
US10552727B2 (en) * 2015-12-15 2020-02-04 Deep Instinct Ltd. Methods and systems for data traffic analysis
US20170206434A1 (en) * 2016-01-14 2017-07-20 Ford Global Technologies, Llc Low- and high-fidelity classifiers applied to road-scene images
US9760806B1 (en) * 2016-05-11 2017-09-12 TCL Research America Inc. Method and system for vision-centric deep-learning-based road situation analysis
CN106778918B (en) * 2017-01-22 2020-10-30 苏州飞搜科技有限公司 Deep learning image recognition system applied to mobile phone terminal and implementation method
CN106971544B (en) * 2017-05-15 2019-07-16 安徽大学 A kind of direct method that vehicle congestion is detected using still image
CN107301383B (en) * 2017-06-07 2020-11-24 华南理工大学 Road traffic sign identification method based on Fast R-CNN
CN107730904A (en) * 2017-06-13 2018-02-23 银江股份有限公司 Multitask vehicle driving in reverse vision detection system based on depth convolutional neural networks
CN107704866B (en) * 2017-06-15 2021-03-23 清华大学 Multitask scene semantic understanding model based on novel neural network and application thereof
CN108921013B (en) * 2018-05-16 2020-08-18 浙江零跑科技有限公司 Visual scene recognition system and method based on deep neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734275A (en) * 2017-04-24 2018-11-02 英特尔公司 Hardware I P optimizes convolutional neural networks
CN107395211A (en) * 2017-09-12 2017-11-24 郑州云海信息技术有限公司 A kind of data processing method and device based on convolutional neural networks model
CN108429753A (en) * 2018-03-16 2018-08-21 重庆邮电大学 A kind of matched industrial network DDoS intrusion detection methods of swift nature

Also Published As

Publication number Publication date
CN109858372A (en) 2019-06-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 310051 1st and 6th floors, no.451 Internet of things street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Zhejiang Zero run Technology Co.,Ltd.

Address before: 310051 1st and 6th floors, no.451 Internet of things street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: ZHEJIANG LEAPMOTOR TECHNOLOGY Co.,Ltd.