CN117593717B - Lane tracking method and system based on deep learning

Info

Publication number
CN117593717B
CN117593717B
Authority
CN
China
Prior art keywords
lane
information
identification
result
incremental
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410070092.4A
Other languages
Chinese (zh)
Other versions
CN117593717A (en)
Inventor
韩畑州
何发智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University (WHU)
Priority to CN202410070092.4A
Publication of CN117593717A
Application granted
Publication of CN117593717B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Abstract

The invention provides a lane tracking method and system based on deep learning, relating to the technical field of intelligent information processing. In the method, scene classification is performed on a road video set to obtain a classified video set; the vehicle coordinate position is determined and converted to obtain coordinate conversion information; feature analysis is performed on the various videos in the classified video set to determine various scene image features; the various scene image features are preprocessed and feature-identified to obtain a lane feature identification information set; a lane detection model is obtained through network model deep learning; model analysis is performed on video stream information captured at a preset vehicle angle; and lane tracking is performed on the basis of the lane detection results. This solves the technical problems in the prior art that detection and tracking accuracy is low and adaptability is poor because existing methods generalize poorly across different road conditions, and achieves high-speed, high-precision lane tracking by enriching the data analysis dimensions and optimizing the model operation mechanism.

Description

Lane tracking method and system based on deep learning
Technical Field
The invention relates to the technical field of intelligent information processing, in particular to a lane tracking method and system based on deep learning.
Background
Lane line detection is a key entry point in automatic driving technology and a hot research topic in the unmanned driving field. To guarantee the safety and stability of the vehicle driving process, the detection and identification accuracy of lane lines must be strictly controlled so that the lane can be tracked accurately. Conventional lane line detection methods mainly extract candidate lane line points through morphological calculation, contour searching and similar operations on a lane detection image, and then fit these points to determine the lane line, so as to perform lane tracking with high precision, high efficiency and convenience.
In the prior art, current lane tracking methods lack intelligence: they generalize poorly across different road conditions and are easily disturbed by the natural environment, which results in low detection and tracking accuracy and poor adaptability.
Disclosure of Invention
To address these technical problems, the present application provides a lane tracking method and a lane tracking system based on deep learning, which solve the problems of low detection and tracking accuracy and poor adaptability caused by the poor generalization of current lane tracking methods across different road conditions and their susceptibility to natural-environment interference.
In view of the above, the present application provides a lane tracking method and system based on deep learning.
In a first aspect, the present application provides a lane tracking method based on deep learning, the method comprising:
acquiring road video according to preset condition requirements to obtain a road video set;
performing scene classification based on the road video set to obtain a classified video set;
performing vehicle coordinate position conversion according to the collected position information of the classified video set, and determining coordinate conversion information;
performing feature analysis on the various videos in the classified video set based on the coordinate conversion information, and determining various scene image features;
preprocessing the various scene image features and performing feature identification to obtain a lane feature identification information set;
performing network model deep learning using the lane feature identification information set to obtain a lane detection model;
collecting lane recognition constraint information, establishing a lane recognition record database, and performing incremental learning on the lane detection model using the lane recognition record database to obtain an incremental lane detection model;
acquiring a road type result and lane identification constraint information, and correcting the incremental lane detection model;
and collecting video stream information at a preset vehicle angle, wherein the preset vehicle angle is matched with the vehicle coordinate position in the coordinate conversion information; identifying and detecting the video stream information through the incremental lane detection model to obtain a lane detection result; and performing lane tracking based on the lane detection result.
In a second aspect, the present application provides a deep learning-based lane tracking system, the system comprising:
the video acquisition module, which is used for acquiring road video according to preset condition requirements to obtain a road video set;
the video set classification module, which is used for performing scene classification based on the road video set to obtain a classified video set;
the coordinate conversion module, which is used for performing vehicle coordinate position conversion according to the collected position information of the classified video set and determining coordinate conversion information;
the feature analysis module, which is used for performing feature analysis on the various videos in the classified video set based on the coordinate conversion information and determining various scene image features;
the information set acquisition module, which is used for preprocessing the various scene image features and performing feature identification to obtain a lane feature identification information set;
the model acquisition module, which is used for performing network model deep learning using the lane feature identification information set to obtain a lane detection model;
the incremental learning module, which is used for collecting lane recognition constraint information, establishing a lane recognition record database, and performing incremental learning on the lane detection model using the lane recognition record database to obtain an incremental lane detection model;
the model correction module, which is used for acquiring a road type result and lane identification constraint information and correcting the incremental lane detection model;
the information detection and tracking module, which is used for collecting video stream information at a preset vehicle angle, wherein the preset vehicle angle is matched with the vehicle coordinate position in the coordinate conversion information, identifying and detecting the video stream information through the incremental lane detection model to obtain a lane detection result, and performing lane tracking based on the lane detection result.
One or more technical solutions provided in this application have the following beneficial effects:
The lane tracking method based on deep learning solves the technical problems that detection and tracking accuracy is low and adaptability is poor because existing methods generalize poorly across different road conditions and are easily disturbed by the natural environment; by enriching the data analysis dimensions and optimizing the model operation mechanism, it achieves high-speed, high-precision lane tracking.
In the lane tracking method based on deep learning, real-time road video undergoes diversified analysis and processing, which keeps the information accurate and orderly; a lane detection model is then trained, and the collected real-time video streams are recognized and analyzed to output lane detection results, guaranteeing the accuracy and objectivity of the detection results. Road type information and lane state information are acquired and associated with each other to determine the preset condition requirements, that is, the video acquisition requirements for different roads. Video is then acquired on the roads according to these preset acquisition requirements, the acquisition results are divided and integrated by road type, and a road video set is generated. This road video set serves as the source data and provides the basis for subsequent lane feature recognition and analysis.
Drawings
Fig. 1 is a schematic flow chart of a lane tracking method based on deep learning;
fig. 2 is a schematic diagram of a preset condition requirement acquisition flow in a lane tracking method based on deep learning;
fig. 3 is a schematic diagram of a process for obtaining a lane feature identification information set in a lane tracking method based on deep learning;
Fig. 4 is a schematic structural diagram of a lane tracking system based on deep learning.
Reference numerals illustrate: the system comprises a video acquisition module 11, a video set classification module 12, a coordinate conversion module 13, a feature analysis module 14, an information set acquisition module 15, a model acquisition module 16, an increment learning module 17, a model correction module 18 and an information detection tracking module 19.
Detailed Description
The invention is described further below in connection with specific implementations, to which the invention is in no way limited.
In the method and system, scene classification is performed on a road video set to obtain a classified video set; the vehicle coordinate position is determined and converted to obtain coordinate conversion information; feature analysis is performed on the various videos in the classified video set to determine various scene image features; the various scene image features are preprocessed and feature-identified to obtain a lane feature identification information set; a lane detection model is obtained through network model deep learning; video stream information at a preset vehicle angle is collected and analyzed by the model; and lane tracking is performed based on the lane detection results. This solves the technical problems in the prior art that detection and tracking accuracy is low and adaptability is poor because current lane tracking methods lack intelligence, generalize poorly across different road conditions and are easily disturbed by the natural environment.
Example 1
As shown in fig. 1, the present application provides a lane tracking method based on deep learning, the method comprising:
step S100: acquiring a road video according to the preset condition requirement to obtain a road video set;
as shown in fig. 2, before the road video capturing according to the preset condition requirement, step S100 of the present application further includes:
step S110: obtaining road type information and lane state information;
The road type information covers a plurality of road types, at least comprising expressways, trunk roads, secondary trunk roads and branch roads; the lane state, such as road size, number of lanes and driving restrictions, is determined based on the road type information.
Step S120: based on the road type information and the lane state information, carrying out combination arrangement to obtain a type-lane parameter combination;
The lane states corresponding to each piece of road type information are matched and combined through mapping association; the combination is denoted a type-lane parameter combination.
For example, an expressway is mainly arranged in a very large city or metropolitan area; a separation belt in the center of the road separates ascending and descending traffic, multiple road sizes are involved, and the lane line settings of expressways differ. The relevant lane states are specified as the lane state information, the road type information and the lane state information are then associated and mapped to each other, multiple lane sequences are determined based on the correspondence results, and the type-lane parameter combination is obtained.
Step S130: summarizing and classifying the type-lane parameter combinations to determine the preset condition requirements, wherein the preset condition requirements are data acquisition requirements of various road types.
The type-lane parameter combinations are summarized and grouped by lane type, and the road video acquisition requirements differ between classification groups. For example, on trunk roads and branch roads the influence of strong light, shadow, pedestrian flow and the like is considered, so that image definition under external environmental influence is guaranteed. The acquisition requirements of each classification group, such as the acquisition angle, are determined separately and identified accordingly; these serve as the preset condition requirements, guaranteeing the pertinence and accuracy of the video acquisition results.
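By way of illustration only, the combination and summarization of steps S110 to S130 could be organized as below; this is a minimal sketch in which the concrete road types, lane-state fields and acquisition parameters are assumptions, not values prescribed by the method.

```python
# Illustrative sketch of building type-lane parameter combinations (steps S110-S130).
# All concrete road types, lane parameters and acquisition fields are assumptions.

ROAD_TYPES = ["expressway", "trunk_road", "secondary_trunk_road", "branch_road"]

# Hypothetical lane states keyed by road type (road size, number of lanes, restrictions).
LANE_STATES = {
    "expressway":           {"lanes": 4, "median": True,  "restriction": "motor_only"},
    "trunk_road":           {"lanes": 3, "median": True,  "restriction": "mixed"},
    "secondary_trunk_road": {"lanes": 2, "median": False, "restriction": "mixed"},
    "branch_road":          {"lanes": 1, "median": False, "restriction": "mixed"},
}

def build_type_lane_combinations():
    """Map every road type to its lane state: the 'type-lane parameter combination'."""
    return {road: LANE_STATES[road] for road in ROAD_TYPES}

def derive_acquisition_requirements(combos):
    """Summarize the combinations into per-type acquisition requirements
    (the 'preset condition requirements'); thresholds here are made up."""
    requirements = {}
    for road, state in combos.items():
        requirements[road] = {
            "camera_angle_deg": 30 if state["median"] else 45,   # assumed angles
            "min_resolution": (1280, 720),
            # busier mixed-traffic roads need glare/shadow robustness checks
            "check_glare_and_shadow": state["restriction"] == "mixed",
        }
    return requirements

if __name__ == "__main__":
    combos = build_type_lane_combinations()
    print(derive_acquisition_requirements(combos)["branch_road"])
```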
Step S200: scene classification is carried out based on the road video set, and a classified video set is obtained;
Specifically, the road video set includes collected videos of different road types under multiple scenes, where straight roads, curves, environmental darkness, lane line integrity and the like serve as scene influence indexes, and the road video set is classified based on these scene influence indexes.
Preferably, the road types are used as the primary classification standard, dividing the set into primary classification results with each road type as a single category. Secondary classification is then performed on each primary classification result based on the scene influence indexes; the primary and secondary classification results are mapped and associated to generate a video set classification tree, and the video set is classified accordingly as the classified video set. This improves the orderliness of the information and allows the required information to be extracted rapidly.
Step S300: according to the collected position information of the classified video set, carrying out vehicle coordinate position conversion and determining coordinate conversion information;
Specifically, the classified video sets are position-located and the collected position information is acquired. To ensure that the position parameters have a clear definition and physical meaning, the collected position information corresponding to the video sets must be converted into position coordinates in the vehicle body coordinate system. Frame-by-frame position location and coordinate conversion are performed on the classified video sets, the converted coordinates are identified, and the coordinate conversion information is generated, which allows the vehicle to perform positioning analysis directly on the basis of the position coordinates.
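A minimal sketch of such a conversion into the vehicle body coordinate system is given below, assuming a planar rigid transform (rotation plus translation); the method itself does not fix the exact transform, so the form shown is an assumption.

```python
import numpy as np

def to_vehicle_frame(points_world, vehicle_xy, vehicle_yaw):
    """Convert world-frame points (N, 2) into the vehicle body frame.

    Sketch only: assumes a planar rigid transform; vehicle_yaw is the
    vehicle heading in radians, vehicle_xy its world position.
    """
    c, s = np.cos(vehicle_yaw), np.sin(vehicle_yaw)
    # Rotation that maps world axes onto the vehicle's axes (inverse pose).
    rot = np.array([[c, s], [-s, c]])
    return (np.asarray(points_world) - np.asarray(vehicle_xy)) @ rot.T

# Example: a lane point 10 m ahead of a vehicle at (100, 50) heading 90 degrees.
print(to_vehicle_frame([[100.0, 60.0]], [100.0, 50.0], np.pi / 2))  # ~[[10, 0]]
```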
Step S400: based on the coordinate conversion information, respectively carrying out feature analysis on various videos in the classified video set to determine various scene image features;
Specifically, a classified video is randomly extracted from the classified video set and segmented to determine a plurality of video shot sequences. The starting frame of each sequence is recognized and used as a key frame. Key frame extraction is performed in this way on the various videos in the classified video set, determining a plurality of groups of key frames; image feature recognition is performed on each group of key frames, and the image feature recognition results are classified and integrated to generate the various scene image features. The image feature recognition of the key frames is then performed based on a convolution learning model.
Further, the step S400 of the present application further includes:
step S410: determining key frames for the various videos by utilizing a shot boundary algorithm;
step S420: analyzing category constraint conditions of various video sets, and determining category constraint information;
step S430: and performing convolutional model deep learning based on the category constraint information, and performing feature analysis on the key frames by using a convolutional learning model to determine the image features of various scenes.
Specifically, the classified video set having been obtained by classifying the video set, a video is randomly extracted from each class of videos in it. The displacement value between two adjacent frame images is determined along the video time sequence, and a displacement deviation threshold is set; when the displacement value between images is greater than or equal to this threshold, a video segmentation point is determined. A plurality of shot sequences is thereby determined, and the initial frame of each is identified and extracted, the video analysis being performed with a shot boundary algorithm and the initial frame of each shot being used as a key frame.
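The displacement-threshold segmentation can be sketched as follows; the mean absolute intensity difference used as the "displacement value" and the threshold value are illustrative assumptions, since the method leaves the concrete metric open.

```python
import numpy as np

def segment_shots(frames, displacement_threshold=25.0):
    """Split a frame sequence into shots and return each shot's start frame index.

    'Displacement' is approximated here by the mean absolute intensity
    difference of adjacent frames; the real metric and threshold are
    implementation choices left open by the method.
    """
    key_frame_indices = [0]                       # first frame starts the first shot
    for i in range(1, len(frames)):
        displacement = np.mean(np.abs(frames[i].astype(np.float32)
                                      - frames[i - 1].astype(np.float32)))
        if displacement >= displacement_threshold:  # shot boundary detected
            key_frame_indices.append(i)
        # the start frame of each shot serves as the key frame
    return key_frame_indices

# Tiny synthetic example: a cut between two constant-intensity "scenes".
frames = [np.full((4, 4), 10, np.uint8)] * 3 + [np.full((4, 4), 200, np.uint8)] * 3
print(segment_shots(frames))  # [0, 3]
```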
Further, category constraint condition analysis is performed on the various video sets, covering, for example, lane type, lane line type and external environment parameters; the image base tone and texture are identified for judgment, the corresponding parameter information is determined as the category constraint information, and each video set is identified accordingly, which facilitates recognition and distinction. Furthermore, convolutional neural network training is performed based on the category constraint information, and the multi-level hidden layers are refined through deep learning of the convolution model. Adaptive key frame extraction can be performed in combination with a key frame screening formula of the form

$$S = \Delta f \cdot \sum_{i=1}^{n} w_i c_i$$

where $\Delta f$ is the frame frequency interval since the last extracted key frame image, $c_i$ is the i-th constraint condition, and $w_i$ is the configured weight of the i-th constraint condition. The key frame image is input into the convolution learning model, whose main framework is a convolutional neural network structure; image feature recognition of the key frame is performed based on this model, determining the primary color tone features, texture features and the like that the key frame contains. The information these features characterize differs between scenes: urban streets may contain more straight lines or rectangular structures, rural areas more curves and irregular textures, and in the early morning or evening the intensity and angle of the light lead to differences in image brightness, contrast and so on. Feature matching and normalization are performed on this basis to generate the various scene image features, including the image features and their corresponding feature values, thereby effectively guaranteeing the recognition accuracy of the various scene image features.
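A small sketch of the screening score in the form given above follows; the constraint values, weights and extraction threshold are illustrative assumptions.

```python
def keyframe_score(frames_since_last_key, constraints, weights):
    """S = Δf * Σ w_i * c_i, in the form of the screening formula above.

    frames_since_last_key: Δf, frame interval since the last key frame;
    constraints: c_i values in [0, 1] (e.g. tone/texture constraint scores);
    weights: configured weight w_i for each constraint. Values are assumed.
    """
    return frames_since_last_key * sum(w * c for w, c in zip(weights, constraints))

def should_extract(frames_since_last_key, constraints, weights, threshold=12.0):
    # The threshold is an illustrative choice, not prescribed by the method.
    return keyframe_score(frames_since_last_key, constraints, weights) >= threshold

# A frame 30 frames after the last key frame, with two constraint scores.
print(should_extract(30, [0.6, 0.2], [0.7, 0.3]))  # True: 30*(0.42+0.06)=14.4
```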
Step S500: preprocessing the various scene image features and carrying out feature identification to obtain a lane feature identification information set;
Further, feature correlation analysis is performed on the various scene image features. The features are ordered by calculating a target maximum information coefficient for each, features that do not meet a preset ranking threshold are removed, and feature dimension reduction is thereby achieved while the necessity of the retained features is guaranteed. The features that meet the preset ranking threshold are then identified and integrated to generate the lane feature identification information set, whose acquisition lays a solid foundation for the subsequent model construction.
Further, as shown in fig. 3, the preprocessing and feature identification are performed on the various scene image features to obtain a lane feature identification information set, and step S500 of the present application further includes:
step S510: setting labeling rule information;
step S520: arbitrarily extracting one feature at a time from the various scene image features, and traversing the various scene image features to obtain a plurality of feature sets;
step S530: constructing a set of target scatter plots based on the plurality of feature sets;
step S540: constructing a gridding scheme set according to the target scatter diagram set;
step S550: partitioning the target scatter diagram set based on the gridding scheme set in sequence, and calculating mutual information values of a plurality of partition results in sequence to obtain a plurality of maximum mutual information values;
step S560: determining a plurality of target maximum information coefficients based on the plurality of maximum mutual information values;
step S570: the plurality of target maximum information coefficients are arranged in a descending order and are reversely matched to obtain a factor characteristic sequence;
step S580: extracting factor characteristics of a preset ranking threshold value in the factor characteristic sequence to form the target factor characteristic set;
step S590: and marking the target factor feature set by using marking conditions in marking rule information to obtain the lane feature marking information set.
Specifically, the labeling rule information is set, that is, a preset requirement for identifying lane features: for example, a plurality of feature identification indexes is determined and a group of labeling sequences is generated for feature identification. The various scene image features, which comprise the image features and their corresponding feature values, are obtained through video analysis; one image feature is randomly extracted from them, the various scene image features are traversed, and a plurality of correlation feature values is determined.
Correlation feature value identification and matching are performed on each of the image features to obtain a plurality of feature sets. With the correlation feature values as independent variables and the matching scene image feature values as dependent variables, a scatter diagram is drawn from the mapping relation between them, yielding the target scatter diagram set, which contains the scatter diagram of each image feature. A plurality of division scales for grid division is determined, and the grid division scheme set is constructed.
A scatter diagram is randomly extracted from the target scatter diagram set and partitioned according to each scheme in the grid division scheme set in turn, yielding a plurality of partition results; the number of scatter points per grid cell differs between partition results. The approximate probability density distribution of the two variables of the scatter diagram over the grid is determined, and the mutual information value is calculated as:

$$I(x;y) = \iint p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}\,dx\,dy$$

where $I(x;y)$ represents the mutual information value, x is the correlation feature value serving as the independent variable, y is the correlation feature value of the dependent variable associated with x, $p(x,y)$ is the joint density function, and $p(x)$ and $p(y)$ are the marginal density functions of the respective correlation feature values. The number of correlation features is not particularly limited, and the parameters in the mutual information formula can be adjusted in a user-defined way according to how many correlation features are involved.
A plurality of mutual information values is determined over the grid division scheme set, and the maximum mutual information value is determined through mutual information value correction. Maximum mutual information values are calculated for the whole target scatter diagram set following the steps above; the plurality of maximum mutual information values is then normalized in turn to eliminate dimensional influence, and the plurality of target maximum information coefficients is obtained.
Further, the target maximum information coefficients are arranged in descending order, and the factor feature sequence is determined by matching the coefficients back to their image features. A preset ranking threshold is set according to the gradient of image feature correlation along the sequence: factor features meeting the preset ranking threshold correspond to strongly correlated features with high influence on the images, while weakly correlated factor features, i.e. those not meeting the preset ranking threshold, are eliminated, forming the target factor feature set. Marking conditions are determined based on the labeling rule information and applied to the target factor feature set, generating the lane feature identification information set. Through this factor feature analysis and screening, the feature dimension is reduced and the efficiency of subsequent image processing is improved.
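A schematic sketch of the grid-partition computation of steps S530 to S580 follows. It estimates the mutual information from two-dimensional histogram counts, maximizes over a small set of grid resolutions, normalizes by log min(nx, ny) to obtain a maximum information coefficient, and ranks features in descending order; the grid resolutions tried and the ranking cut-off are assumptions, and a full maximal-information-coefficient search over unequal grid shapes is omitted for brevity.

```python
import numpy as np

def mutual_information(x, y, nx, ny):
    """MI of a scatter plot partitioned into an nx-by-ny grid (discrete estimate)."""
    joint, _, _ = np.histogram2d(x, y, bins=(nx, ny))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def max_information_coefficient(x, y, grids=((2, 2), (3, 3), (4, 4), (5, 5))):
    """Maximize MI over grid partitions and normalize to [0, 1]."""
    scores = [mutual_information(x, y, nx, ny) / np.log(min(nx, ny))
              for nx, ny in grids]
    return max(scores)

def rank_features(feature_matrix, target, keep_top=5):
    """Descending ranking by coefficient; features past the threshold are dropped."""
    mics = [max_information_coefficient(col, target) for col in feature_matrix.T]
    order = np.argsort(mics)[::-1]
    return order[:keep_top], [mics[i] for i in order[:keep_top]]

rng = np.random.default_rng(0)
x = rng.normal(size=500)
features = np.column_stack([x + 0.1 * rng.normal(size=500),   # strongly related
                            rng.normal(size=500)])             # unrelated noise
print(rank_features(features, x, keep_top=1)[0])               # -> [0]
```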
Step S600: deep learning of the network model is carried out by utilizing the lane characteristic identification information set, and a lane detection model is obtained;
The lane detection model is a video feature recognition and analysis tool generated through convolutional neural network training. One way of constructing it is as follows: the lane feature identification information set is used as training data, and the lane detection model is trained as a convolutional neural network, i.e. a neural network formed by interconnecting a plurality of neurons, comprising an input layer, an output layer and a plurality of hidden layers, where the input and output layers form the basic structure of the model and the hidden layers are its functional layers. Preferably, when new data become available, the model can be adjusted and optimized on the basis of its original operation mechanism to improve its operating performance.
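A minimal PyTorch sketch of such a convolutional network is given below, assuming the lane feature identification information set has been rendered as image tensors with per-pixel lane labels; the layer sizes and the per-pixel (segmentation-style) output head are assumptions, since the method only specifies an input layer, hidden layers and an output layer.

```python
import torch
from torch import nn

class LaneDetectionNet(nn.Module):
    """Toy convolutional lane detector: input layer, hidden conv layers, output layer.

    The architecture is an illustrative assumption; the patent only states that
    the model is a convolutional neural network with multi-level hidden layers.
    """
    def __init__(self):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.output = nn.Conv2d(32, 1, 1)          # per-pixel lane-line logit

    def forward(self, x):
        return self.output(self.hidden(x))

model = LaneDetectionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One step on a dummy batch standing in for the lane feature identification set.
images = torch.rand(2, 3, 64, 64)
masks = (torch.rand(2, 1, 64, 64) > 0.9).float()   # sparse fake lane labels
loss = loss_fn(model(images), masks)
loss.backward()
optimizer.step()
print(float(loss))
```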
Step S700: collecting lane recognition constraint information, establishing a lane recognition record database, and performing incremental learning on the lane detection model by using the lane recognition record database to obtain an incremental lane detection model;
further, step S700 of the present application further includes:
Step S710: collecting lane identification constraint information, wherein the lane identification constraint information comprises weather constraint information, light constraint information and road surface constraint information;
step S720: establishing a lane recognition record database based on the lane recognition constraint information, wherein the lane recognition record database comprises the lane recognition constraint information and a lane recognition result;
step 730: and performing incremental learning on the lane detection model by using the lane recognition record database to obtain an incremental lane detection model.
Specifically, the weather constraint information, the light constraint information and the road surface constraint information are obtained; the weather constraint information, for example, comprises multiple weather types and the degree to which each limits lane recognition. Together these serve as the lane recognition constraint information. Different combination states of the lane recognition constraint information lead to dispersion in the lane recognition results, so for each of the multiple combination states the corresponding lane recognition result, i.e. the lane information recognized under that differentiated constraint state, is extracted. The lane recognition constraint information and the lane recognition results are matched and mapped to each other, and the lane recognition record database is constructed. Network model deep learning is performed based on the lane feature identification information set to obtain the lane detection model, which serves as the primary model; with the lane recognition record database as new data, incremental learning is performed on the lane detection model to obtain the incremental lane detection model, optimizing and perfecting the model operation mechanism and improving the accuracy of model analysis.
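The pairing of constraint combination states with lane recognition results might be held in a structure like the following; the constraint fields and the stored result format are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LaneRecognitionConstraints:
    """One combination state of the lane recognition constraint information."""
    weather: str       # e.g. "rain", "fog"        (assumed vocabulary)
    light: str         # e.g. "dusk", "noon"
    road_surface: str  # e.g. "wet_asphalt"

@dataclass
class LaneRecognitionRecordDB:
    """Matches each constraint combination with its lane recognition results."""
    records: dict = field(default_factory=dict)

    def add(self, constraints, recognition_result):
        self.records.setdefault(constraints, []).append(recognition_result)

    def incremental_batch(self):
        """Flatten (constraints, result) pairs as new data for incremental learning."""
        return [(c, r) for c, results in self.records.items() for r in results]

db = LaneRecognitionRecordDB()
db.add(LaneRecognitionConstraints("rain", "dusk", "wet_asphalt"),
       {"lane_lines": [[(0, 0), (1, 5)]]})
print(len(db.incremental_batch()))  # 1
```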
Further, step S730 of the present application further includes:
step S731: loading video data corresponding to the lane identification constraint information in the lane identification record database to the lane detection model for lane identification detection to obtain a lane detection and identification result;
step S732: carrying out data loss analysis on the lane detection and identification result to obtain loss data;
step S733: inputting the loss data into the lane detection model for training to obtain a newly added lane detection recognition result;
step S734: calculating a loss value of the newly added lane detection recognition result to obtain newly added loss data, calculating a loss trend using the loss data and the newly added loss data, and, when the loss trend meets a preset trend requirement, predicting training parameters based on the newly added lane detection recognition result;
step S735: when the loss trend does not meet the preset trend requirement, constructing a neighborhood lane detection recognition result by utilizing a preset step length based on the lane detection recognition result and the loss data;
step S736: performing training parameter optimization using the neighborhood lane detection recognition results to obtain neighborhood training parameters, and, in the same way, determining the incremental model parameters based on all the training parameters and the neighborhood training parameters to obtain the incremental lane detection model.
Specifically, the lane recognition constraint information is extracted from the lane recognition record database, and the corresponding video data is determined through video matching. This video data, as the data to be detected, is input into the lane detection model, and the lane detection recognition result is output through model analysis. Data loss analysis is then performed on the lane detection result to obtain the loss data, which characterizes the lane detection model's loss of relevant knowledge about the video data. Incremental learning of the lane detection model is completed on the basis of this loss data. Incremental learning means that a learning system can continuously learn new knowledge from new samples while preserving most of the previously learned knowledge, much as a human learns. Through data loss analysis based on the introduced loss function, the incremental lane detection model is obtained; its operation mechanism is superior to that of the lane detection model, since through training on the loss data the incremental model retains the basic functions of the lane detection model while its performance is continuously updated, improving the accuracy of information analysis.
As to the specific training process: after the loss data is input into the lane detection model for training and verification, an output result, the newly added lane detection recognition result, is obtained. This result is mapped against and checked with the lane detection result in the training data, and the difference of the mapped data between the detection results is calculated as the newly added loss value. The loss trend is then determined based on the loss data and the newly added loss data: the newly added loss data is analyzed against the loss data, and the trend state of the corresponding loss data, comprising the loss change direction and the loss change scale, is taken as the loss trend, which is updated synchronously with each training iteration.
Further, the preset trend requirement, i.e. the defined standard for the loss trend set by those skilled in the art, comprises a preset loss change direction and a preset loss change scale. The loss trend is checked against the preset trend requirement; when the loss trend meets the preset trend requirement, training parameters are obtained by prediction based on the newly added lane detection recognition result, that is, the parameter adjustment step length, parameter adjustment direction and the like are set as the parameter adjustment rule for the training parameters, and the original set of recognition results is adjusted for iterative training.
If the loss trend does not meet the preset trend requirement, the current degree of loss exceeds the limit, and calibration is performed by applying a parameter adjustment rule based on the lane detection recognition result and the loss data. For example, the preset step length is determined by adjusting the set parameter adjustment step length, and the lane identification information adjusted with this preset step serves as the corresponding neighborhood lane detection recognition result, i.e. a newly constructed recognition result set; the construction of the neighborhood lane detection recognition results is completed in the same way. The training parameters are optimized based on the neighborhood lane detection recognition results to obtain the neighborhood training parameters. On this basis, all the training parameters and the neighborhood training parameters are combined to determine the incremental model parameters, with which iterative training and loss analysis are performed until the acquired loss data meets the convergence condition, for example approaching the initially input training data, and the constructed incremental lane detection model is obtained. A targeted adjustment rule is thus configured based on the loss trend, and the parameters are adapted to the actual training situation, reducing the number of training iterations and accelerating model training.
In summary, if the loss trend meets the preset trend requirement, the current analysis loss is small; an adjustment rule for the training parameters is determined based on the loss trend, and the incremental lane detection model is generated with the adjusted training parameters as the incremental model parameters.
If the loss trend does not meet the preset trend requirement, the current analysis loss is large, and incremental training based on an adjustment rule determined from the loss trend alone would make the incremental training result diverge too far from the initial model to remain adapted to the analysis of previous information. In that case the neighborhood lane detection recognition results are constructed, i.e. a newly added detection branch is built in the neighborhood, improving the pertinence of the model.
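Schematically, the decision logic of steps S734 to S736 amounts to a trend check on the loss history: keep the predicted parameter adjustment while the trend is acceptable, otherwise construct neighborhood candidates with a preset step. The sketch below assumes scalar parameters and a simple trend test; none of the numeric choices are prescribed by the method.

```python
def loss_trend(loss_history):
    """Direction and scale of the most recent loss change (updated each iteration)."""
    if len(loss_history) < 2:
        return 0.0, 0.0
    delta = loss_history[-1] - loss_history[-2]
    return (-1.0 if delta < 0 else 1.0), abs(delta)

def next_parameters(params, loss_history, lr=0.1, neighborhood_step=0.05,
                    max_scale=1.0):
    """Adjust training parameters according to the loss trend (illustrative rule).

    A decreasing loss within scale keeps the predicted parameter adjustment;
    otherwise neighborhood candidates are constructed around the current
    parameters with a preset step, standing in for the neighborhood
    lane detection recognition results.
    """
    direction, scale = loss_trend(loss_history)
    if direction <= 0 and scale <= max_scale:       # trend meets the preset requirement
        return [p - lr * scale for p in params], "predicted"
    neighborhood = [[p + s for p in params]         # preset-step neighborhood
                    for s in (-neighborhood_step, neighborhood_step)]
    return neighborhood, "neighborhood"

print(next_parameters([0.5, 1.0], [0.9, 0.7]))      # predicted update
print(next_parameters([0.5, 1.0], [0.7, 2.3])[1])   # 'neighborhood'
```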
Step S800: acquiring a road type result and lane identification constraint information, and correcting an incremental lane detection model;
further, step S800 of the present application further includes:
step S810: carrying out characteristic analysis on the lane acquisition information according to the road video set, and determining a road type information result and a lane identification constraint information result of the acquired road;
step S820: obtaining weather information through a big data platform;
step S830: obtaining weather restriction information according to the weather information;
Step S840: carrying out matching degree analysis on the lane identification constraint information result by utilizing the weather constraint information, and correcting the lane identification constraint information result by utilizing the weather constraint information;
step S850: and adding the road type information result and the lane recognition constraint information result which is determined through correction to the incremental lane detection model, and correcting the incremental lane detection model.
Specifically, video acquisition is performed at the preset vehicle angle, and the video stream information is acquired. Feature recognition is performed on the video stream information: the road type information is obtained through road type matching, and at the same time the weather constraint information, light constraint information and road surface constraint information are recognized and extracted to obtain the lane identification constraint information result. Since this result may carry a certain deviation due to environmental factors, real-time weather information is retrieved from a big data platform and recognition constraint analysis is performed on it; for example, gusty weather, dim light or low environmental visibility obstructs the field of view and hinders road information detection. This analysis yields the weather constraint information, which is matched against the lane identification constraint information result. Based on the matching result, the information deviation of the lane identification constraint information relative to the weather constraint information is determined, and the lane identification constraint information is corrected with this deviation value. The road type information and the corrected lane identification constraint information are input as real-time monitoring data into the incremental lane detection model for lane detection and positioning, correcting the incremental lane detection model; the output lane detection result can thereby effectively exclude environmental influence factors, improving the accuracy of the lane detection result.
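The deviation-based correction can be sketched as below, assuming constraint results are encoded as scores in [0, 1] and the weather-derived deviation is partially applied; the constraint keys and the blend factor are illustrative assumptions.

```python
def correct_constraints(recognized, weather_derived, trust_weather=0.6):
    """Correct the lane identification constraint result using weather constraints.

    Both inputs map constraint names (e.g. 'visibility', 'glare') to scores in
    [0, 1]. The deviation between them is partially applied according to
    trust_weather; the keys and the blend factor are illustrative assumptions.
    """
    corrected = dict(recognized)
    for key, weather_value in weather_derived.items():
        if key in corrected:
            deviation = weather_value - corrected[key]
            corrected[key] += trust_weather * deviation
    return corrected

recognized = {"visibility": 0.8, "glare": 0.2}      # from video feature analysis
weather = {"visibility": 0.4}                       # big-data platform reports fog
print(correct_constraints(recognized, weather))     # visibility pulled toward 0.4
```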
Step S900: collecting video stream information at a preset vehicle angle, wherein the preset vehicle angle is matched with the vehicle coordinate position in the coordinate conversion information; identifying and detecting the video stream information through the incremental lane detection model to obtain a lane detection result; and performing lane tracking based on the lane detection result.
Further, the preset vehicle angle, i.e. the required angle for video acquisition, is determined and matched with the vehicle coordinate position in the coordinate conversion information, which makes the coordinate conversion direct and improves the analysis efficiency for the acquired images. The video stream at the preset vehicle angle is acquired, the road type information and lane identification constraint information are determined through video stream analysis, and the video stream is input into the incremental lane detection model for lane line detection and positioning to obtain the lane detection result, ensuring that the detection result matches the actual lane lines. Lane tracking is performed based on the lane detection result, realizing intelligent, accurate lane positioning and tracking.
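Lane tracking on top of per-frame detection results can be as simple as fitting each detected lane line and smoothing the fit over time; the sketch below assumes a quadratic fit in the vehicle frame and exponential smoothing, neither of which is mandated by the method.

```python
import numpy as np

class LaneTracker:
    """Track one lane line across frames from per-frame detection points.

    Each detection is an (N, 2) array of lane points in the vehicle frame;
    the quadratic fit and exponential smoothing are illustrative choices.
    """
    def __init__(self, smoothing=0.8):
        self.smoothing = smoothing
        self.coeffs = None           # smoothed polynomial coefficients

    def update(self, lane_points):
        pts = np.asarray(lane_points, dtype=float)
        new = np.polyfit(pts[:, 0], pts[:, 1], deg=2)   # y = a x^2 + b x + c
        if self.coeffs is None:
            self.coeffs = new
        else:
            self.coeffs = self.smoothing * self.coeffs + (1 - self.smoothing) * new
        return self.coeffs

    def lateral_offset(self):
        """Lane position at the vehicle origin (x = 0): the constant term."""
        return float(self.coeffs[-1])

tracker = LaneTracker()
frame1 = [(x, 1.5 + 0.01 * x) for x in range(0, 30, 5)]   # synthetic detections
frame2 = [(x, 1.6 + 0.01 * x) for x in range(0, 30, 5)]
tracker.update(frame1)
tracker.update(frame2)
print(round(tracker.lateral_offset(), 3))   # between 1.5 and 1.6
```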
Example two
Based on the same inventive concept as one of the lane tracking methods based on deep learning in the foregoing embodiments, as shown in fig. 4, the present application provides a lane tracking system based on deep learning, the system including:
the video acquisition module 11, which is used for acquiring road video according to preset condition requirements to obtain a road video set;
the video set classification module 12, which is used for performing scene classification based on the road video set to obtain a classified video set;
the coordinate conversion module 13, which is used for performing vehicle coordinate position conversion according to the collected position information of the classified video set and determining coordinate conversion information;
the feature analysis module 14, which is used for performing feature analysis on the various videos in the classified video set based on the coordinate conversion information and determining various scene image features;
the information set acquisition module 15, which is used for preprocessing the various scene image features and performing feature identification to obtain a lane feature identification information set;
the model acquisition module 16, which is used for performing network model deep learning using the lane feature identification information set to obtain a lane detection model;
the incremental learning module 17, which is used for collecting lane recognition constraint information, establishing a lane recognition record database, and performing incremental learning on the lane detection model using the lane recognition record database to obtain an incremental lane detection model;
the model correction module 18, which is used for acquiring a road type result and lane identification constraint information and correcting the incremental lane detection model;
the information detection and tracking module 19, which is used for collecting video stream information at a preset vehicle angle, wherein the preset vehicle angle is matched with the vehicle coordinate position in the coordinate conversion information, identifying and detecting the video stream information through the incremental lane detection model to obtain a lane detection result, and performing lane tracking based on the lane detection result.
Further, the system further comprises:
the information acquisition module is used for acquiring road type information and lane state information;
the parameter combination acquisition module is used for carrying out combination arrangement based on road type information and lane state information to obtain type-lane parameter combination;
the requirement determining module is used for summarizing and classifying the type-lane parameter combinations to determine the preset condition requirements, wherein the preset condition requirements are data acquisition requirements of various road types.
Further, the system further comprises:
the constraint information acquisition module is used for collecting lane identification constraint information, wherein the lane identification constraint information comprises weather constraint information, light constraint information and road surface constraint information;
the database acquisition module is used for establishing a lane identification record database based on the lane identification constraint information, wherein the lane identification record database comprises the lane identification constraint information and a lane identification result.
Further, the system further comprises:
the characteristic information determining module is used for carrying out characteristic analysis on the lane acquisition information according to the road video set and determining a road type information result and a lane identification constraint information result of the acquired road;
the weather information acquisition module is used for acquiring weather information through the big data platform;
the weather constraint information acquisition module is used for acquiring weather constraint information according to the weather information;
the information matching correction module is used for analyzing the matching degree of the lane identification constraint information result by utilizing the weather constraint information and correcting the lane identification constraint information result by utilizing the weather constraint information;
The detection result acquisition module is used for adding the road type information result and the lane identification constraint information result which is determined through correction to the incremental lane detection model to obtain a lane detection result.
Further, the system further comprises:
the result acquisition module is used for loading video data corresponding to the lane identification constraint information in the lane identification record database to the lane detection model to carry out lane identification detection, so as to obtain a lane detection and identification result;
the loss data acquisition module is used for carrying out data loss analysis on the lane detection and identification result to obtain loss data;
the model training module is used for inputting the loss data into the lane detection model for training to obtain a newly added lane detection recognition result;
the loss analysis module is used for calculating a loss value of the newly added lane detection recognition result to obtain newly added loss data, calculating a loss trend using the loss data and the newly added loss data, and, when the loss trend meets the preset trend requirement, predicting training parameters based on the newly added lane detection recognition result;
The neighborhood recognition result acquisition module is used for constructing a neighborhood lane detection recognition result by using a preset step length based on the lane detection recognition result and the loss data when the loss trend does not meet the preset trend requirement;
and the optimization training module is used for optimizing the training parameters using the neighborhood lane detection recognition results to obtain neighborhood training parameters, and, in the same way, determining the incremental model parameters based on all the training parameters and the neighborhood training parameters to obtain the incremental lane detection model.
Wherein the system is further configured to perform the following steps:
in the incremental learning process of the lane detection model, after the loss data is input into the lane detection model for training and verification, an output result, the newly added lane detection recognition result, is obtained; this result is mapped against and checked with the lane detection result in the training data, and the difference of the mapped data between the detection results is calculated as the newly added loss value; further, the loss trend is determined based on the loss data and the newly added loss data, the loss trend being updated synchronously with each training iteration;
if the loss trend does not meet the preset trend requirement, the current degree of loss exceeds the limit; calibration is performed by applying a parameter adjustment rule based on the lane detection recognition result and the loss data, the training parameters are optimized based on the neighborhood lane detection recognition results, and the neighborhood training parameters are acquired; on this basis, all the training parameters and the neighborhood training parameters are combined to determine the incremental model parameters, iterative training and loss analysis are performed with the incremental model parameters until the acquired loss data meets the convergence condition, and the constructed incremental lane detection model is obtained.
Further, the system further comprises:
the key frame determining module is used for determining key frames of the various videos by utilizing a shot boundary algorithm;
the category constraint information acquisition module is used for analyzing category constraint conditions of various video sets and determining category constraint information;
and the image feature determining module is used for performing convolutional model deep learning based on the category constraint information, performing feature analysis on the key frames by using a convolutional learning model, and determining the image features of various scenes.
Wherein the system is further configured to perform the following steps:
in the method for analyzing the category constraint conditions of the various video sets and determining the category constraint information, category constraint condition analysis, covering lane type, lane line type and external environment parameters, is performed on the various video sets; the image base tone and texture are identified, the corresponding parameter information is determined as the category constraint information, and each video set is identified accordingly;
convolutional neural network training is performed based on the category constraint information, and the multi-level hidden layers are perfected through deep learning of the convolution model; in the training process, adaptive key frame extraction is performed in combination with the key frame screening formula

$$S = \Delta f \cdot \sum_{i=1}^{n} w_i c_i$$

where $\Delta f$ is the frame frequency interval since the last extracted key frame image, $c_i$ is the i-th constraint condition, and $w_i$ is the weight set for the i-th constraint condition; the key frame image is input into the convolution learning model, the image features of the key frame are identified, and the primary color tone features and texture features contained in the key frame, whose characterizing information differs between scenes, are determined; feature matching and normalization are then performed to generate the various scene image features, comprising the image features and their corresponding feature values.
Further, the system further comprises:
the rule setting module is used for setting labeling rule information;
the feature set acquisition module is used for extracting one feature from various scene image features at will in sequence, traversing the various scene image features and obtaining a plurality of feature sets;
a scatter plot construction module for constructing a set of target scatter plots based on the plurality of feature sets;
the scheme set building module is used for building a meshed scheme set according to the target scatter diagram set;
the mutual information value calculation module is used for dividing the target scatter diagram set based on the gridding scheme set in sequence and calculating the mutual information values of a plurality of division results in sequence to obtain a plurality of maximum mutual information values;
the maximum information coefficient determining module is used for determining a plurality of target maximum information coefficients based on the plurality of maximum mutual information values;
the characteristic sequence matching module is used for arranging the plurality of target maximum information coefficients in a descending order and reversely matching to obtain a factor characteristic sequence;
The feature extraction module is used for extracting factor features of a preset ranking threshold value in the factor feature sequence to form the target factor feature set;
the feature set identification module is used for identifying the target factor feature set by using the marking conditions in the marking rule information to obtain the lane feature identification information set.
Through the foregoing detailed description of the lane tracking method based on deep learning, those skilled in the art can clearly understand the lane tracking method and system based on deep learning of this embodiment. As for the system disclosed in the embodiments, the description is relatively brief because it corresponds to the method; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The present invention is not limited to the above-mentioned embodiments; any modifications, equivalents and improvements made within the spirit and principle of the invention fall within its scope.

Claims (8)

1. A lane tracking method based on deep learning, characterized by comprising the following steps:
acquiring a road video according to the preset condition requirement to obtain a road video set;
scene classification is carried out based on the road video set, and a classified video set is obtained;
according to the collected position information of the classified video set, carrying out vehicle coordinate position conversion and determining coordinate conversion information;
based on the coordinate conversion information, respectively carrying out feature analysis on various videos in the classified video set to determine various scene image features;
preprocessing and feature identification are carried out on the various scene image features to obtain a lane feature identification information set, wherein the method comprises the following steps:
setting labeling rule information;
sequentially extracting one feature at random from the various scene image features, and traversing the various scene image features to obtain a plurality of feature sets;
constructing a set of target scatter plots based on the plurality of feature sets;
Constructing a gridding scheme set according to the target scatter diagram set;
partitioning the target scatter diagram set based on the gridding scheme set in sequence, and calculating mutual information values of a plurality of partition results in sequence to obtain a plurality of maximum mutual information values;
determining a plurality of target maximum information coefficients based on the plurality of maximum mutual information values;
arranging the plurality of target maximum information coefficients in descending order and performing reverse matching to obtain a factor feature sequence;
extracting the factor features within a preset ranking threshold in the factor feature sequence to form a target factor feature set;
identifying the target factor feature set by using the identification conditions in the labeling rule information to obtain the lane feature identification information set;
deep learning of the network model is carried out by utilizing the lane feature identification information set, and a lane detection model is obtained;
collecting lane identification constraint information, wherein the lane identification constraint information comprises weather constraint information, light constraint information and road surface constraint information; establishing a lane identification record database based on the lane identification constraint information, wherein the lane identification record database comprises the lane identification constraint information and a lane identification result;
Performing incremental learning on the lane detection model by using the lane identification record database to obtain an incremental lane detection model, wherein the incremental learning comprises: loading video data corresponding to the lane identification constraint information in the lane identification record database to the lane detection model for lane identification detection to obtain a lane detection and identification result;
carrying out data loss analysis on the lane detection and identification result to obtain loss data;
inputting the loss data into the lane detection model for training to obtain a new lane detection and identification result;
calculating a loss value of the newly added lane detection and identification result to obtain newly added loss data; calculating a loss trend by using the loss data and the newly added loss data; and, when the loss trend meets the preset trend requirement, predicting training parameters based on the newly added lane detection and identification result;
when the loss trend does not meet the preset trend requirement, constructing a neighborhood lane detection and identification result by utilizing a preset step length based on the lane detection and identification result and the loss data;
carrying out training parameter optimization by using the neighborhood lane detection and identification result to obtain neighborhood training parameters; and, by analogy, determining incremental model parameters based on all the training parameters and the neighborhood training parameters to obtain the incremental lane detection model;
Obtaining a road type result and lane identification constraint information, and correcting the incremental lane detection model, wherein the method comprises: carrying out feature analysis on the lane acquisition information according to the road video set, and determining a road type information result and a lane identification constraint information result of the acquired road; obtaining weather information through a big data platform; obtaining weather constraint information according to the weather information; carrying out matching degree analysis on the lane identification constraint information result by utilizing the weather constraint information, and correcting the lane identification constraint information result by utilizing the weather constraint information; and adding the road type information result and the corrected lane identification constraint information result to the incremental lane detection model to correct the incremental lane detection model;
and acquiring video stream information of a preset angle of the vehicle, wherein the preset angle of the vehicle is matched with the coordinate position of the vehicle in the coordinate conversion information; identifying and detecting the video stream information through the corrected incremental lane detection model to obtain a lane detection result; and tracking the lane based on the lane detection result.
2. The deep learning-based lane tracking method of claim 1, further comprising, prior to the road video acquisition, the steps of:
Based on the road type information and the lane state information, carrying out combination arrangement to obtain a type-lane parameter combination;
summarizing and classifying the type-lane parameter combinations to determine the preset condition requirements; the preset condition requirement is a data acquisition requirement of each road type.
3. The deep learning based lane tracking method of claim 1, wherein the method of obtaining a lane detection model comprises:
the lane feature identification information set is used as training data, and the lane detection model is formed by training a convolutional neural network, wherein the neural network is formed by a plurality of interconnected neurons and comprises an input layer, an output layer and a plurality of hidden layers; the input layer and the output layer are the basic structure of the model, and the hidden layers are the functional layers of the model.
4. The deep learning-based lane tracking method according to claim 1, characterized in that the method comprises:
in the incremental learning process of the lane detection model, acquiring an incremental lane detection model after the loss data is input into the lane detection model for training and verification;
wherein the training verification of the lane detection model further includes:
carrying out mapping correction between the newly added lane detection result and the lane detection result in the training data, and calculating the difference value of the mapped data in the detection results as the newly added loss value;
determining a loss trend based on the loss data and the newly added loss data, wherein the loss trend is synchronously updated along with each iteration training;
if the loss trend does not meet the preset trend requirement, calibrating a parameter adjustment rule based on the lane detection and identification result and loss data, and performing training parameter optimization based on the lane detection and identification result to acquire the neighborhood training parameters;
and combining all the training parameters and the neighborhood training parameters to determine incremental model parameters, performing iterative training and loss analysis with the incremental model parameters until the acquired loss data meets the convergence condition, and acquiring the constructed incremental lane detection model.
5. The deep learning-based lane tracking method of claim 1, wherein the method of obtaining a road type result and lane identification constraint information and correcting the incremental lane detection model is as follows:
carrying out feature analysis on the lane acquisition information according to the road video set, and determining a road type information result and a lane identification constraint information result of the acquired road;
Obtaining weather information through a big data platform;
obtaining weather constraint information according to the weather information;
carrying out matching degree analysis on the lane identification constraint information result by utilizing the weather constraint information, and correcting the lane identification constraint information result by utilizing the weather constraint information;
and adding the road type information result and the corrected lane identification constraint information result to the incremental lane detection model to correct the incremental lane detection model.
6. The deep learning-based lane tracking method according to claim 1, wherein the method for respectively performing feature analysis on each type of video in the classified video set to determine the various scene image features comprises:
determining key frames for the various videos by utilizing a shot boundary algorithm;
analyzing category constraint conditions of various video sets, and determining category constraint information;
and performing convolutional model deep learning based on the category constraint information, and performing feature analysis on the key frames by using a convolutional learning model to determine the image features of various scenes.
7. The deep learning-based lane tracking method according to claim 6, wherein the analyzing of the category constraint conditions of the various video sets to determine the category constraint information further comprises:
analyzing the category constraint conditions of the various video sets, determining category constraint information by identifying image primary color tone and texture information, and identifying the category constraint information corresponding to the various video sets, wherein the category constraint conditions comprise lane types, lane line types and external environment parameters;
performing convolutional neural network training based on the category constraint information to perfect a multi-level hidden layer;
the perfecting of the multi-level hidden layer is carried out through deep learning of the convolution model, and the method further comprises the following steps:
for the training process, self-adaptive extraction is performed in combination with a key frame screening formula, wherein the key frame screening formula is as follows:

$$S = T \cdot \sum_{i=1}^{n} \omega_i C_i$$

wherein $T$ is the frame frequency interval based on the last extracted key frame image, $C_i$ is the i-th constraint condition, and $\omega_i$ is the weight configured for the i-th constraint condition;
inputting a key frame image into a convolution learning model, identifying image features of the key frame, determining image primary color tone features and texture features contained in the key frame, performing feature matching and normalization, and generating various scene image features, wherein the feature characterizing information corresponding to different scenes is different, and the various scene image features comprise image features and corresponding feature values.
8. A deep learning-based lane tracking system, the system comprising:
the video acquisition module is used for acquiring road video according to the preset condition requirement to obtain a road video set;
the video set classification module is used for classifying scenes based on the road video set to obtain a classified video set;
the coordinate conversion module is used for converting the coordinate position of the vehicle according to the acquired position information of the classified video set and determining coordinate conversion information;
the feature analysis module is used for respectively carrying out feature analysis on various videos in the classified video set based on the coordinate conversion information to determine various scene image features;
the information set acquisition module is used for preprocessing the image features of the various scenes and identifying the features to obtain a lane feature identification information set; the method for preprocessing the various scene image features and obtaining the lane feature identification information set comprises the following steps:
setting labeling rule information;
sequentially extracting one feature at random from the various scene image features, and traversing the various scene image features to obtain a plurality of feature sets;
Constructing a set of target scatter plots based on the plurality of feature sets;
constructing a gridding scheme set according to the target scatter diagram set;
partitioning the target scatter diagram set based on the gridding scheme set in sequence, and calculating mutual information values of a plurality of partition results in sequence to obtain a plurality of maximum mutual information values;
determining a plurality of target maximum information coefficients based on the plurality of maximum mutual information values;
arranging the plurality of target maximum information coefficients in descending order and performing reverse matching to obtain a factor feature sequence;
extracting the factor features within a preset ranking threshold in the factor feature sequence to form a target factor feature set;
identifying the target factor feature set by using the identification conditions in the labeling rule information to obtain the lane feature identification information set;
the model acquisition module is used for carrying out network model deep learning by utilizing the lane feature identification information set to obtain a lane detection model;
the incremental learning module is used for collecting lane identification constraint information, establishing a lane identification record database, and performing incremental learning on the lane detection model by using the lane identification record database to obtain an incremental lane detection model; the method for collecting the lane identification constraint information and establishing the lane identification record database comprises the following steps: collecting lane identification constraint information, wherein the lane identification constraint information comprises weather constraint information, light constraint information and road surface constraint information; establishing a lane identification record database based on the lane identification constraint information, wherein the lane identification record database comprises the lane identification constraint information and a lane identification result;
The method for obtaining the incremental lane detection model by utilizing the lane identification record database to perform incremental learning on the lane detection model comprises the following steps: loading video data corresponding to the lane identification constraint information in the lane identification record database to the lane detection model for lane identification detection to obtain a lane detection and identification result;
carrying out data loss analysis on the lane detection and identification result to obtain loss data;
inputting the loss data into the lane detection model for training to obtain a new lane detection and identification result;
calculating a loss value of the newly added lane detection and identification result to obtain newly added loss data; calculating a loss trend by using the loss data and the newly added loss data; and, when the loss trend meets the preset trend requirement, predicting training parameters based on the newly added lane detection and identification result;
when the loss trend does not meet the preset trend requirement, constructing a neighborhood lane detection and identification result by utilizing a preset step length based on the lane detection and identification result and the loss data;
carrying out training parameter optimization by using the neighborhood lane detection and identification result to obtain neighborhood training parameters; and, by analogy, determining incremental model parameters based on all the training parameters and the neighborhood training parameters to obtain the incremental lane detection model;
The model correction module is used for obtaining a road type result and lane identification constraint information and correcting the incremental lane detection model; the method for correcting the incremental lane detection model comprises the following steps: carrying out feature analysis on the lane acquisition information according to the road video set, and determining a road type information result and a lane identification constraint information result of the acquired road; obtaining weather information through a big data platform; obtaining weather constraint information according to the weather information; carrying out matching degree analysis on the lane identification constraint information result by utilizing the weather constraint information, and correcting the lane identification constraint information result by utilizing the weather constraint information; and adding the road type information result and the corrected lane identification constraint information result to the incremental lane detection model to correct the incremental lane detection model;
the information detection tracking module is used for collecting video stream information of a vehicle preset angle, wherein the vehicle preset angle is matched with the vehicle coordinate position in the coordinate conversion information, the video stream information is identified and detected through the corrected incremental lane detection model, a lane detection result is obtained, and lane tracking is carried out based on the lane detection result.
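As an illustrative note on the incremental learning recited in claims 1, 4, and 8 above: the loss-trend check and the preset-step construction of neighborhood training parameters can be sketched as follows. The monotone-decrease trend criterion, the coordinate-wise probing, and the toy loss function are assumptions made for this sketch, not the claimed procedure's exact definitions.

# Hedged sketch of the loss-trend check and neighborhood parameter search.
def loss_trend_ok(loss_history, min_drop=0.0):
    # Assume the trend requirement is met when the newest loss value decreased.
    return len(loss_history) >= 2 and loss_history[-2] - loss_history[-1] > min_drop

def neighborhood_search(params, loss_fn, step=0.05):
    # Probe each training parameter one preset step in both directions and
    # keep the neighbor with the lowest loss (coordinate-wise refinement).
    best_params, best_loss = dict(params), loss_fn(params)
    for name, value in params.items():
        for candidate in (value - step, value + step):
            trial = dict(params, **{name: candidate})
            trial_loss = loss_fn(trial)
            if trial_loss < best_loss:
                best_params, best_loss = trial, trial_loss
    return best_params

# Toy quadratic loss standing in for the lane detection loss analysis.
loss_fn = lambda p: (p["lr"] - 0.1) ** 2 + (p["momentum"] - 0.9) ** 2
history = [0.31, 0.40]                 # newly added loss rose: trend requirement fails
params = {"lr": 0.2, "momentum": 0.8}
if not loss_trend_ok(history):
    params = neighborhood_search(params, loss_fn)
print(params)                          # refined neighborhood training parameters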
CN202410070092.4A 2024-01-18 2024-01-18 Lane tracking method and system based on deep learning Active CN117593717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410070092.4A CN117593717B (en) 2024-01-18 2024-01-18 Lane tracking method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410070092.4A CN117593717B (en) 2024-01-18 2024-01-18 Lane tracking method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN117593717A CN117593717A (en) 2024-02-23
CN117593717B true CN117593717B (en) 2024-04-05

Family

ID=89911848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410070092.4A Active CN117593717B (en) 2024-01-18 2024-01-18 Lane tracking method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN117593717B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09223218A (en) * 1996-02-15 1997-08-26 Toyota Motor Corp Method and device for detecting traveling route
JP2015069289A (en) * 2013-09-27 2015-04-13 日産自動車株式会社 Lane recognition device
CN109389102A (en) * 2018-11-23 2019-02-26 合肥工业大学 The system of method for detecting lane lines and its application based on deep learning
CN111145545A (en) * 2019-12-25 2020-05-12 西安交通大学 Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning
CN112487907A (en) * 2020-11-23 2021-03-12 北京理工大学 Dangerous scene identification method and system based on graph classification
CN113191256A (en) * 2021-04-28 2021-07-30 北京百度网讯科技有限公司 Method and device for training lane line detection model, electronic device and storage medium
WO2022237272A1 (en) * 2021-05-11 2022-11-17 北京车和家信息技术有限公司 Road image marking method and device for lane line recognition
CN113936266A (en) * 2021-10-19 2022-01-14 西安电子科技大学 Deep learning-based lane line detection method
CN114332822A (en) * 2021-12-31 2022-04-12 北京百度网讯科技有限公司 Method and device for determining lane group type and electronic equipment
CN114252082A (en) * 2022-03-01 2022-03-29 苏州挚途科技有限公司 Vehicle positioning method and device and electronic equipment
CN114913498A (en) * 2022-05-27 2022-08-16 南京信息工程大学 Parallel multi-scale feature aggregation lane line detection method based on key point estimation
CN115276006A (en) * 2022-09-26 2022-11-01 江苏永鼎股份有限公司 Load prediction method and system for power integration system
CN116110230A (en) * 2022-11-02 2023-05-12 东北林业大学 Vehicle lane crossing line identification method and system based on vehicle-mounted camera
CN116310754A (en) * 2023-02-16 2023-06-23 中国科学院计算技术研究所 Incremental learning method and system for detecting fake face image video
CN116189130A (en) * 2023-02-24 2023-05-30 智道网联科技(北京)有限公司 Lane line segmentation method and device based on image annotation model
CN116486352A (en) * 2023-03-30 2023-07-25 长沙理工大学 Lane line robust detection and extraction method based on road constraint
CN116524382A (en) * 2023-05-22 2023-08-01 西南交通大学 Bridge swivel closure accuracy inspection method system and equipment
CN116665176A (en) * 2023-07-21 2023-08-29 石家庄铁道大学 Multi-task network road target detection method for vehicle automatic driving
CN117333846A (en) * 2023-11-03 2024-01-02 中国科学技术大学 Detection method and system based on sensor fusion and incremental learning in severe weather

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Incremental learning-based lane detection for automated rubber-tired gantries in container terminal; Yunjian Fang et al.; IEEE Xplore; 2023-09-11; full text *
3D point cloud feature learning network based on feature channel and spatial position attention; He Fazhi et al.; Computer Engineering & Science; 2022-12-31; full text *
Lane line recognition using region division on structured roads; Wang Yue; Fan Xianxing; Liu Jincheng; Pang Zhenying; Journal of Computer Applications; 2015-09-10 (09); full text *

Also Published As

Publication number Publication date
CN117593717A (en) 2024-02-23

Similar Documents

Publication Publication Date Title
CN108830188B (en) Vehicle detection method based on deep learning
CN107609525B (en) Remote sensing image target detection method for constructing convolutional neural network based on pruning strategy
CN116758059B (en) Visual nondestructive testing method for roadbed and pavement
CN105718866A (en) Visual target detection and identification method
CN110866430A (en) License plate recognition method and device
CN108154158B (en) Building image segmentation method for augmented reality application
CN112818775B (en) Forest road rapid identification method and system based on regional boundary pixel exchange
CN117173913B (en) Traffic control method and system based on traffic flow analysis at different time periods
CN106228136A (en) Panorama streetscape method for secret protection based on converging channels feature
CN110084198A (en) The airport CNN indoor scene recognition methods based on Fisher signature analysis
CN116901089B (en) Multi-angle vision distance robot control method and system
CN111882573B (en) Cultivated land block extraction method and system based on high-resolution image data
CN117593717B (en) Lane tracking method and system based on deep learning
CN111191510B (en) Relation network-based remote sensing image small sample target identification method in complex scene
CN116206208A (en) Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence
CN116524344A (en) Tomato string picking point detection method based on RGB-D information fusion
CN114627493A (en) Gait feature-based identity recognition method and system
CN117116065B (en) Intelligent road traffic flow control method and system
CN112364844A (en) Data acquisition method and system based on computer vision technology
CN115858846B (en) Skier image retrieval method and system based on deep learning
CN110909670A (en) Unstructured road identification method
CN117218613B (en) Vehicle snapshot recognition system and method
CN111241944B (en) Scene recognition and loop detection method based on background target and background feature matching
CN116862952B (en) Video tracking method for substation operators under similar background conditions
CN117541799B (en) Large-scale point cloud semantic segmentation method based on online random forest model multiplexing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant