CN114092914A - Lane line detection method and device - Google Patents
Lane line detection method and device
- Publication number
- CN114092914A (application number CN202111420948.9A)
- Authority
- CN
- China
- Prior art keywords
- lane line
- event
- line detection
- point cloud
- event point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a lane line detection method and device, wherein the method comprises the following steps: acquiring an event point cloud consisting of event data of a target scene acquired within a set time; calculating a surface normal vector of the event point cloud; obtaining the contour edge of the trajectory based on the trajectory variation of the event point cloud; constructing an unstructured graph based on the contour edge, the event points of the event point cloud and their corresponding surface normal vectors; and inputting the unstructured graph into a trained lane line detection model to obtain a lane line detection result. The event camera can thus capture subtle changes of the lane line more accurately, the image quality is not affected by illumination changes and motion blur, the accuracy of the final lane line detection result is improved, and the autonomous vehicle can accurately detect the position of the lane line. The data collected by the event camera is treated as point cloud data, and diversified input data are obtained by constructing an unstructured graph, further improving the accuracy of the lane line detection result.
Description
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a lane line detection method and device.
Background
Environment perception in autonomous driving involves a number of challenging tasks, such as lane line extraction, object detection, and traffic sign recognition. The rapid development of sensors has had a great influence on perception tasks in automatic driving. At the core of these perception tasks is the extraction of lane information, since detecting lanes helps determine the precise location of the autonomous vehicle between lanes, and accurate lane line detection is critical to lane departure warning and lane planning.
At present, traditional lane line detection relies on data collected by an RGB camera. However, RGB cameras suffer from drastic illumination changes, unclear scene capture and similar problems during data collection, which seriously affects the accuracy of lane line detection.
Disclosure of Invention
In view of this, embodiments of the present invention provide a lane line detection method and apparatus to solve the prior-art problem that the accuracy of the detection result is difficult to guarantee when lane line detection relies on data collected by an RGB camera.
According to a first aspect, an embodiment of the present invention provides a lane line detection method, including:
acquiring an event point cloud consisting of event data of a target scene acquired by an event camera within a set time;
calculating a surface normal vector of the event point cloud;
obtaining the contour edge of the trajectory based on the trajectory variation of the event point cloud;
constructing an unstructured graph based on the contour edge, the event points of the event point cloud and the corresponding surface normal vectors thereof;
and inputting the unstructured graph into a trained lane line detection model to obtain a lane line detection result.
Optionally, the obtaining a contour edge of the trajectory based on the trajectory variation of the event point cloud includes:
sequentially calculating the local distribution of the optical flow vector at the current event point on the trajectory based on the formation time of the trajectory of the event point cloud;
determining whether the current event point belongs to the trajectory based on the local distribution result of the optical flow vector at the current event point;
and determining the set formed by all event points belonging to the trajectory as the contour edge of the trajectory.
Optionally, constructing an unstructured graph based on the contour edge, the event points of the event point cloud, and their corresponding surface normal vectors includes:
configuring each event point as a node;
configuring the surface normal vector corresponding to the node and the contour edge as connecting edges of the node;
and constructing the unstructured graph based on the nodes and the connecting edges.
Optionally, the calculating a surface normal vector of the event point cloud includes:
sequentially selecting current event points in the event point cloud;
searching for neighboring points of the current event point, and fitting the neighboring event points including the current event point into a curved surface;
performing Principal Component Analysis (PCA) on the event points in the curved surface to obtain the eigenvector corresponding to the minimum eigenvalue;
and determining the eigenvector as the surface normal vector of the curved surface corresponding to the current event point.
Optionally, the lane line detection model is composed of a plurality of graph convolutional layers and a multilayer perceptron, wherein,
the multilayer perceptron is composed of an input layer, 3 fully connected hidden layers and an output layer, with full connection between adjacent layers;
each graph convolutional layer of the plurality of graph convolutional layers comprises 64 input channels and 64 output channels, and each output channel outputs features of a different scale;
the features of different scales output by the plurality of graph convolutional layers are concatenated and input to the input layer of the multilayer perceptron.
Optionally, the lane line detection model is trained by:
acquiring lane line image data sets acquired by an event camera under different scenes, wherein each image data in the lane line image data sets is composed of a plurality of events;
and labeling each image data with lane line information, wherein the lane line information label includes: whether the region to which the image data belongs contains a lane line, and the three-dimensional coordinate information of the lane line;
inputting the unstructured graph corresponding to each image data in the lane line image data set into the lane line detection model to obtain a prediction result of a lane line;
calculating the loss function value of the model based on the prediction result and the corresponding lane line information label;
and adjusting model parameters of the lane line detection model based on the loss function values, and returning to the step of acquiring lane line image data sets acquired by the event camera in different scenes until the loss function values meet preset requirements to obtain the trained lane line detection model.
Optionally, the lane line detection result includes: whether the target scene contains the lane line and the three-dimensional coordinate information of the lane line.
According to a second aspect, an embodiment of the present invention provides a lane line detection apparatus, including:
the acquisition module is used for acquiring an event point cloud consisting of event data of a target scene acquired within a set time;
the first processing module is used for calculating a surface normal vector of the event point cloud;
the second processing module is used for obtaining the contour edge of the trajectory based on the trajectory variation of the event point cloud;
the third processing module is used for constructing an unstructured graph based on the contour edge, the event points of the event point cloud and the corresponding surface normal vectors thereof;
and the fourth processing module is used for inputting the unstructured graph into a trained lane line detection model to obtain a lane line detection result.
According to a third aspect, embodiments of the present invention provide a non-transitory computer readable storage medium storing computer instructions which, when executed by a processor, implement the method of the first aspect of the present invention and any one of its alternatives.
According to a fourth aspect, an embodiment of the present invention provides an electronic device, including: a memory and a processor, the memory and the processor being communicatively coupled to each other, the memory having stored therein computer instructions, the processor being configured to execute the computer instructions to perform the method of the first aspect of the present invention and any one of the alternatives thereof.
The technical scheme of the invention has the following advantages:
the embodiment of the invention provides a lane line detection method and a lane line detection device, wherein an event point cloud formed by event data of a target scene collected by an event camera within a set time is obtained; calculating a surface normal vector of the event point cloud; obtaining the outline edge of the track based on the track change condition of the event point cloud; constructing an unstructured graph based on the contour edge, the event points of the event point cloud and the corresponding surface normal vectors thereof; and inputting the unstructured map into a trained lane line detection model to obtain a lane line detection result. Therefore, by utilizing the characteristics of low time delay and high dynamic range of the event camera, the subtle changes of the lane line can be captured more accurately, the image quality can be ensured not to be influenced by illumination changes and motion blur, the accuracy of the final lane line detection result is improved, and the position of the lane line can be accurately detected by the automatic driving vehicle. Data collected by the event camera is treated as point cloud data, diversified input data are obtained by constructing an unstructured graph, and accuracy of lane line detection results is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a lane line detection method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a lane line detection apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Environment perception in autonomous driving involves a number of challenging tasks, such as lane line extraction, object detection, and traffic sign recognition. The rapid development of sensors has had a great influence on perception tasks in automatic driving. At the core of these perception tasks is the extraction of lane information, since detecting lanes helps determine the precise location of the autonomous vehicle between lanes, and accurate lane line detection is critical to lane departure warning and lane planning.
At present, traditional lane line detection relies on data collected by an RGB camera. However, RGB cameras suffer from drastic illumination changes, unclear scene capture and similar problems during data collection, which seriously affects the accuracy of lane line detection.
Based on the above problem, an embodiment of the present invention provides a lane line detection method, as shown in fig. 1, the lane line detection method specifically includes the following steps:
step S101: and acquiring an event point cloud consisting of event data of a target scene acquired within set time.
Specifically, the data collected by the event camera are events, and an event has the format of a vector, which can be expressed as e = (x, y, t, p): when the brightness value at the position of a certain pixel changes, the event camera sends back an event in the above format, where x and y represent the pixel coordinates of the event, t is the timestamp, and p is the polarity. Across the whole camera field of view, whenever a pixel value changes, an event is transmitted back, representing pixel-level motion between two time instants; all events are asynchronous and together contain all information about structure and motion.
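As an illustration, the accumulation of events into an event point cloud (step S101) can be sketched in Python as follows; the tuple layout follows the (x, y, t, p) format above, while the function name and windowing rule are illustrative assumptions:

```python
import numpy as np

def accumulate_event_cloud(events, t_start, duration):
    """Collect all events inside a set time window into an (N, 3) point
    cloud of (x, y, t) coordinates; polarity p is dropped here, since the
    later steps operate on (x, y, t) space."""
    cloud = [(x, y, t) for (x, y, t, p) in events
             if t_start <= t < t_start + duration]
    return np.asarray(cloud, dtype=np.float64)
```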
Step S102: the surface normal vector of the event point cloud is calculated.
Specifically, in the embodiment of the present invention, a principal component analysis algorithm is used to obtain the surface normal vector of the event point cloud; in practical applications, other algorithms may also be used, and the method is not limited to this.
Step S103: and obtaining the contour edge of the trajectory based on the trajectory variation of the event point cloud.
Specifically, the event cloud trajectory is described as a function of time t and the spatial coordinates x and y; the local distribution of optical flow vectors is calculated to determine whether an event point corresponds to the trajectory of a lane line or is a point caused by occlusion, and the points not caused by occlusion are combined into a new array to obtain the approximate contour of the trajectory of the lane line.
Step S104: and constructing an unstructured graph based on the contour edge, the event points of the event point cloud and the corresponding surface normal vectors thereof.
The unstructured graph is composed of event points in (x, y, t) space as nodes, surface normals (nx, ny, nt) of each event point and contour edges.
Step S105: and inputting the unstructured graph into the trained lane line detection model to obtain a lane line detection result.
The lane line detection result includes: whether the target scene contains a lane line, and the three-dimensional coordinate information of the lane line.
By executing the steps, the lane line detection method provided by the embodiment of the invention can more accurately capture the slight change of the lane line by utilizing the characteristics of low time delay and high dynamic range of the event camera, ensure that the image quality is not influenced by illumination change and motion blur, improve the accuracy of the final lane line detection result and ensure that the automatic driving vehicle can accurately detect the position of the lane line. Data collected by the event camera is treated as point cloud data, diversified input data are obtained by constructing an unstructured graph, and accuracy of lane line detection results is further improved.
Specifically, in an embodiment, the step S102 includes the following steps, in which current event points are selected from the event point cloud in sequence and each is processed as follows:
Step S201: searching for neighboring points of the current event point, and fitting the neighboring event points including the current event point into a curved surface.
Step S202: and carrying out Principal Component Analysis (PCA) on the event points in the curved surface to obtain the eigenvector corresponding to the minimum eigenvalue.
Step S203: and determining the eigenvector as the surface normal vector of the curved surface corresponding to the current event point.
Specifically, the surface normal vector of a point cloud is an important geometric surface feature: under spatial transformation, the included angle of the normal vector and the curvature value of each point in the point cloud do not change with the motion of the object, i.e., the point cloud has rigid motion invariance. Therefore, when processing event camera data, the data are treated as a point cloud, and the surface normal vector of the point cloud is added as a feature. The event cloud trajectory ε occurring within a period of time t is regarded as a curved surface, and the normal vector n of the curved surface is solved by using the PCA algorithm. The specific implementation process is as follows:
The center point m of all event points Xᵢ in a certain neighborhood is taken as:

m = (1/k) Σᵢ Xᵢ

and at the same time each point is centered:

yᵢ = Xᵢ − m

Then, the optimization objective function is:

min f(n), s.t. nᵀn = 1

The objective function f can be further derived and rewritten as:

f(n) = nᵀSn, where S = YYᵀ

Here YYᵀ is the 3 × 3 covariance matrix of the (x, y, t) coordinates, Y being the matrix whose columns are the centered points yᵢ. f(n) is solved by using the Lagrangian method:

L(n, λ) = f(n) − λ(nᵀn − 1)

Solving for the normal vector n of the curved surface thus amounts to an eigendecomposition of S, taking the eigenvector with the smallest eigenvalue as the normal vector. The standard PCA solution process is then adopted: the center point is subtracted from each neighborhood point to obtain the centered matrix Y, to which SVD singular value decomposition is applied:

Y = UΣVᵀ

where the columns of U are the eigenvectors of S = YYᵀ and Σ carries the singular values on its main diagonal. The column of U corresponding to the smallest singular value, i.e. the eigenvector with the smallest eigenvalue, is the normal vector n to be solved. The above is the standard process of calculating the surface normal vector of the curved surface corresponding to the current event point by principal component analysis; for content not described in detail here, refer to existing accounts of solving point cloud surface normal vectors with the PCA algorithm.
Since the curved surface contains the three components x, y and t, the normal vector n of the curved surface is decomposed into (nx, ny, nt), where nx represents the component in the x direction, ny the component in the y direction, and nt the component in the time t direction.
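By way of illustration, the PCA normal estimation derived above can be sketched as follows; the brute-force neighbor search and the neighborhood size k are assumptions made for the sketch:

```python
import numpy as np

def surface_normal(cloud, i, k=16):
    """Estimate the surface normal at event point i (step S102): gather
    the k nearest neighbors in (x, y, t) space, subtract the centroid m,
    and take the eigenvector of S = Y Y^T with the smallest eigenvalue."""
    d = np.linalg.norm(cloud - cloud[i], axis=1)
    nbrs = cloud[np.argsort(d)[:k]]        # neighborhood, incl. the point itself
    m = nbrs.mean(axis=0)                  # centroid of the neighborhood
    Y = (nbrs - m).T                       # 3 x k matrix of centered points y_i
    S = Y @ Y.T                            # 3 x 3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
    return eigvecs[:, 0]                   # normal vector n = (nx, ny, nt)
```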
Specifically, in an embodiment, the step S103 specifically includes the following steps:
step S301: and sequentially calculating the local distribution of the current point light flow vector on the track based on the forming time of the track of the event point cloud.
Step S302: and determining whether the current event point belongs to the track or not based on the local distribution result of the current point light flow vector.
Specifically, by time-differentiating the trajectory, if the result is close to ∞, it indicates that this event point corresponds to the trajectory, whereas this event point may be an event point due to occlusion. Specifically, the calculation can be performed by the following formula:
step S303: and determining a set formed by all event points belonging to the track as the contour edge of the track.
Specifically, event points that are not occlusion-induced are combined into a new array, resulting in an approximate contour of the trajectory. Therefore, the accuracy of the lane line detection result is further improved by taking the contour edge of the track as an important characteristic of the lane line detection.
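A minimal sketch of this occlusion filtering, assuming a finite-difference estimate of the local time derivative over spatial neighbors and an illustrative threshold (the patent fixes neither):

```python
import numpy as np

def trajectory_contour(cloud, k=8, grad_thresh=10.0):
    """Step S303 sketch: keep event points whose timestamp changes steeply
    over their spatial neighborhood (a large local time derivative),
    following the rule above that such points belong to the trajectory
    while the rest may be occlusion-induced."""
    keep = []
    for i, p in enumerate(cloud):
        d = np.linalg.norm(cloud[:, :2] - p[:2], axis=1)
        d[i] = np.inf                            # exclude the point itself
        j = np.argsort(d)[:k]                    # k nearest spatial neighbors
        dt = np.abs(cloud[j, 2] - p[2])          # time differences
        dx = np.maximum(d[j], 1e-9)              # spatial distances (guarded)
        if np.median(dt / dx) > grad_thresh:     # steep local time derivative
            keep.append(i)
    return np.asarray(keep, dtype=int)           # indices of contour points
```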
Specifically, in an embodiment, the step S104 specifically includes the following steps:
step S401: and configuring the event points as nodes.
Step S402: and configuring the surface normal vector corresponding to the node and the contour edge as connecting edges of the node.
Step S403: and constructing the unstructured graph based on the nodes and the connecting edges.
Specifically, the final unstructured graph consists of the events in (x, y, t) space as nodes, the surface normal vector (nx, ny, nt) of each point, and the contour edges. Using an unstructured graph formed from multiple features as the model input takes full account of the particularity of event camera data; to increase the accuracy of the lane line detection result, a group consisting of the event camera data within a period of time t, the preliminary contour and the normal vectors is used as the input of the network, and a graph neural network is used as the training model, thereby improving the detection accuracy of the lane line.
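For illustration, the assembly of such an unstructured graph might look as follows; the radius rule used to materialize connecting edges is an assumption, since the text only specifies that the surface normal vectors and contour edges serve as connecting-edge features:

```python
import numpy as np

def build_unstructured_graph(cloud, normals, contour_idx, radius=3.0):
    """Step S104 sketch: nodes are event points in (x, y, t) space with the
    surface normal (nx, ny, nt) appended as node features; edges connect
    points that are close in space-time, and contour membership is kept
    as an extra per-node attribute."""
    nodes = np.hstack([cloud, normals])          # (N, 6) node feature matrix
    on_contour = np.zeros(len(cloud), dtype=bool)
    on_contour[contour_idx] = True
    edges = []
    for i in range(len(cloud)):
        d = np.linalg.norm(cloud - cloud[i], axis=1)
        for j in np.nonzero((d > 0) & (d < radius))[0]:
            edges.append((i, int(j)))
    return nodes, np.asarray(edges), on_contour
```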
Specifically, in an embodiment, the lane line detection model is composed of a plurality of graph convolutional layers and a multilayer perceptron, wherein the multilayer perceptron is composed of an input layer, 3 fully connected hidden layers and an output layer, with full connection between adjacent layers; each graph convolutional layer of the plurality of graph convolutional layers comprises 64 input channels and 64 output channels, and each output channel outputs features of a different scale; the features of different scales output by the plurality of graph convolutional layers are concatenated and input to the input layer of the multilayer perceptron.
The model thus contains 5 consecutive graph convolutional layers and 3 fully connected hidden layers, in which all point weights are shared and global feature aggregation is performed. Each node in each graph convolutional layer continuously updates its state towards a final equilibrium state under the influence of neighboring and more distant points, with more closely related points exerting greater influence on each other. The feature outputs of different scales from the 5 graph convolutional layers are concatenated and input into the multilayer perceptron for classification, yielding a per-point score, and whether the input data belongs to the lane line category is determined according to this score. The difference between the output result and the ground truth is judged through a loss function.
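An illustrative PyTorch sketch of this architecture follows; a plain "aggregate neighbors, then transform" graph convolution is assumed, and the hidden widths of the perceptron are chosen freely since the text does not specify them:

```python
import torch
import torch.nn as nn

class LaneLineGNN(nn.Module):
    """5 graph convolutional layers with 64 channels each; their multi-scale
    outputs are concatenated and fed to a multilayer perceptron with 3
    fully connected hidden layers that scores each point."""
    def __init__(self, in_dim=6, hidden=64, n_convs=5):
        super().__init__()
        self.lift = nn.Linear(in_dim, hidden)
        self.convs = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(n_convs)])
        self.mlp = nn.Sequential(
            nn.Linear(hidden * n_convs, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1))                        # per-point lane score

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) normalized adjacency
        h = torch.relu(self.lift(x))
        scales = []
        for conv in self.convs:
            h = torch.relu(conv(adj @ h))            # aggregate, then transform
            scales.append(h)
        return self.mlp(torch.cat(scales, dim=1))    # (N, 1) logits
```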
Specifically, in an embodiment, the lane line detection model is trained as follows:
step S501: sets of lane line image data acquired by an event camera in different scenes are acquired.
Wherein each section of image data in the lane line image data set consists of a plurality of events;
step S502: and marking the lane line information for each image data.
Wherein, lane line information mark includes: the image data includes a lane line and three-dimensional coordinate information of the lane line.
Step S503: and inputting the unstructured graph corresponding to each image data in the lane line image data set into a lane line detection model to obtain a prediction result of the lane line.
Step S504: and calculating the loss function value of the model based on the prediction result and the corresponding lane line information label.
In the embodiment of the present invention, the loss function is a binary cross entropy function.
Step S505: and adjusting model parameters of the lane line detection model based on the loss function value, and returning to the step S501 until the loss function value meets the preset requirement to obtain the trained lane line detection model.
Specifically, if the difference between the lane line prediction result, obtained by inputting the training data of the current training sample set into the lane line detection model, and the actual lane line information label is large, the model parameters need to be adjusted, and the adjusted model is trained again until the obtained lane line prediction result is close to the actual lane line information label, thereby ensuring the accuracy of the model prediction result.
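One training iteration matching steps S503 to S505 could be sketched as follows; the binary cross entropy loss is the one named above, while the optimizer and tensor shapes are illustrative:

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, x, adj, labels):
    """Predict per-point lane scores, measure them against the labels with
    binary cross entropy, and adjust the model parameters; the returned
    loss value is compared against the preset requirement."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    logits = model(x, adj).squeeze(1)   # (N,) raw scores
    loss = criterion(logits, labels)    # labels: (N,) floats in {0.0, 1.0}
    loss.backward()
    optimizer.step()
    return loss.item()
```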
The following describes a specific implementation process of the lane line detection method provided by the embodiment of the present invention in detail with reference to a specific application example.
1. Data set preparation:
An event camera (for example, model DAVIS 346C) is used to acquire a lane line data set in different scenes, the acquired data containing n events per second, and the acquired images are labeled (the label information includes whether the data image contains a lane line, the three-dimensional coordinate information of the lane line, and the like). The data set is randomly divided into a training set, a validation set and a test set in the ratio 6:2:2, as sketched below. The training set is used to train the designed network model, the validation set is used to select the optimal trained model, and the test set is used to evaluate the performance of the designed model at a later stage.
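The random 6:2:2 split can be sketched as follows; the seed and the list-based bookkeeping are illustrative:

```python
import numpy as np

def split_dataset(samples, seed=0):
    """Randomly divide the labeled recordings into training, validation
    and test sets in the ratio 6:2:2, as described above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(0.6 * len(samples))
    n_val = int(0.2 * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```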
2. Processing input data:
The events occurring within a period of time t are taken as the input to the network:

e = (xᵢ, yᵢ, tᵢ)

where xᵢ, yᵢ represent the spatial coordinates of the point and tᵢ represents the time. The period t is not fixed and can be adjusted according to the output result of the model.
The surface normal vectors of the event point cloud are solved, and the trajectory edge of the event point cloud is detected to obtain the edge contour of the trajectory. An unstructured graph is then constructed, composed of the event points in (x, y, t) space as nodes, the surface normal vector (nx, ny, nt) of each event point, and the contour edges.
3. The unstructured graph is input as training samples into the pre-built network model, whose structure consists of 5 consecutive graph convolutional layers and 3 fully connected hidden layers; whether the loss between the output result and the ground truth is greater than a threshold is judged through the loss function, and the neural network model is trained repeatedly until the loss function value meets the requirement, completing the training of the neural network model. Lane line detection is then carried out with the trained network model.
Compared with traditional cameras, the event camera has low latency and high dynamic range, is more suitable for use in automatic driving, and ensures that the image quality is not affected by illumination changes and motion blur. The event camera can capture subtle changes of the lane line more accurately, so that the autonomous vehicle can accurately detect the position of the lane line even under strong illumination changes. The event camera data are treated as point cloud data, and using the event camera for lane line detection makes it possible to capture fine pixel-level changes of the lane line. In addition, the event camera data are processed with a graph convolutional neural network, and the diversified network input makes the final lane line detection result more accurate.
By executing the steps, the lane line detection method provided by the embodiment of the invention can more accurately capture the slight change of the lane line by utilizing the characteristics of low time delay and high dynamic range of the event camera, ensure that the image quality is not influenced by illumination change and motion blur, improve the accuracy of the final lane line detection result and ensure that the automatic driving vehicle can accurately detect the position of the lane line. Data collected by the event camera is treated as point cloud data, diversified input data are obtained by constructing an unstructured graph, and accuracy of lane line detection results is further improved.
An embodiment of the present invention further provides a lane line detection apparatus, as shown in fig. 2, the lane line detection apparatus specifically includes:
the acquiring module 101 is configured to acquire an event point cloud formed by event data of a target scene acquired within a set time. For details, refer to the related description of step S101 in the above method embodiment, and no further description is provided here.
The first processing module 102 is configured to calculate a surface normal vector of the event point cloud. For details, refer to the related description of step S102 in the above method embodiment, and no further description is provided here.
And the second processing module 103 is configured to obtain the contour edge of the trajectory based on the trajectory variation of the event point cloud. For details, refer to the related description of step S103 in the above method embodiment, and no further description is provided here.
And the third processing module 104 is configured to construct an unstructured graph based on the contour edge, the event points of the event point cloud, and the corresponding surface normal vectors thereof. For details, refer to the related description of step S104 in the above method embodiment, and no further description is provided here.
The fourth processing module 105 is configured to input the unstructured graph into the trained lane line detection model to obtain a lane line detection result. For details, refer to the related description of step S105 in the above method embodiment, and no further description is provided here.
Further functional descriptions of the modules are the same as those of the corresponding method embodiments, and are not repeated herein.
Through the cooperative cooperation of the above components, the lane line detection device provided by the embodiment of the invention can more accurately capture the fine change of the lane line by utilizing the characteristics of low time delay and high dynamic range of the event camera, ensure that the image quality is not influenced by illumination change and motion blur, improve the accuracy of the final lane line detection result, and ensure that the automatic driving vehicle can accurately detect the position of the lane line. Data collected by the event camera is treated as point cloud data, diversified input data are obtained by constructing an unstructured graph, and accuracy of lane line detection results is further improved.
An embodiment of the present invention further provides an electronic device, as shown in fig. 3, the electronic device may include a processor 901 and a memory 902, where the processor 901 and the memory 902 may be connected by a bus or in another manner, and fig. 3 takes the connection by the bus as an example.
The memory 902, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the methods in the embodiments of the present invention. The processor 901 executes various functional applications and data processing of the processor, i.e., implements the above-described method, by executing non-transitory software programs, instructions, and modules stored in the memory 902.
The memory 902 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required by at least one function, and the storage data area may store data created by the processor 901, and the like. Further, the memory 902 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 902 may optionally include memory located remotely from the processor 901, which may be connected to the processor 901 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 902, which when executed by the processor 901 performs the methods described above.
The specific details of the electronic device may be understood by referring to the corresponding related descriptions and effects in the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, and the implemented program can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD) or a Solid State Drive (SSD), etc.; the storage medium may also comprise a combination of memories of the kind described above.
The above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.
Claims (10)
1. A lane line detection method is characterized by comprising the following steps:
acquiring an event point cloud consisting of event data of a target scene acquired within a set time;
calculating a surface normal vector of the event point cloud;
obtaining the contour edge of the trajectory based on the trajectory variation of the event point cloud;
constructing an unstructured graph based on the contour edge, the event points of the event point cloud and the corresponding surface normal vectors thereof;
and inputting the unstructured graph into a trained lane line detection model to obtain a lane line detection result.
2. The method of claim 1, wherein obtaining the contour edge of the trajectory based on the trajectory variation of the event point cloud comprises:
sequentially calculating the local distribution of the optical flow vector at the current event point on the trajectory based on the formation time of the trajectory of the event point cloud;
determining whether the current event point belongs to the trajectory based on the local distribution result of the optical flow vector at the current event point;
and determining the set formed by all event points belonging to the trajectory as the contour edge of the trajectory.
3. The method of claim 1, wherein constructing an unstructured graph based on the contour edges, the event points of the event point cloud, and their corresponding surface normal vectors comprises:
configuring each event point as a node;
configuring the surface normal vector corresponding to the node and the contour edge as connecting edges of the node;
and constructing the unstructured graph based on the nodes and the connecting edges.
4. The lane line detection method of claim 1, wherein said calculating a surface normal vector of the event point cloud comprises:
sequentially selecting current event points in the event point cloud;
searching for neighboring points of the current event point, and fitting the neighboring event points including the current event point into a curved surface;
performing Principal Component Analysis (PCA) on the event points in the curved surface to obtain the eigenvector corresponding to the minimum eigenvalue;
and determining the eigenvector as the surface normal vector of the curved surface corresponding to the current event point.
5. The lane line detection method according to claim 1, wherein the lane line detection model is composed of a plurality of graph convolutional layers and a multilayer perceptron, wherein
the multilayer perceptron is composed of an input layer, 3 fully connected hidden layers and an output layer, with full connection between adjacent layers;
each graph convolutional layer of the plurality of graph convolutional layers comprises 64 input channels and 64 output channels, and each output channel outputs features of a different scale;
the features of different scales output by the plurality of graph convolutional layers are concatenated and input to the input layer of the multilayer perceptron.
6. The lane line detection method according to claim 5, wherein the lane line detection model is trained by:
acquiring lane line image data sets acquired by an event camera under different scenes, wherein each image data in the lane line image data sets is composed of a plurality of events;
and labeling each image data with lane line information, wherein the lane line information label includes: whether the region to which the image data belongs contains a lane line, and the three-dimensional coordinate information of the lane line;
inputting the unstructured graph corresponding to each image data in the lane line image data set into the lane line detection model to obtain a prediction result of a lane line;
calculating the loss function value of the model based on the prediction result and the corresponding lane line information label;
and adjusting model parameters of the lane line detection model based on the loss function values, and returning to the step of acquiring lane line image data sets acquired by the event camera in different scenes until the loss function values meet preset requirements to obtain the trained lane line detection model.
7. The lane line detection method according to claim 1, wherein the lane line detection result includes: whether the target scene contains the lane line and the three-dimensional coordinate information of the lane line.
8. A lane line detection apparatus, comprising:
the acquisition module is used for acquiring an event point cloud consisting of event data of a target scene acquired within a set time;
the first processing module is used for calculating a surface normal vector of the event point cloud;
the second processing module is used for obtaining the contour edge of the trajectory based on the trajectory variation of the event point cloud;
the third processing module is used for constructing an unstructured graph based on the contour edge, the event points of the event point cloud and the corresponding surface normal vectors thereof;
and the fourth processing module is used for inputting the unstructured graph into a trained lane line detection model to obtain a lane line detection result.
9. A computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions that, when executed by a processor, implement the method of any one of claims 1-7.
10. An electronic device, comprising:
a memory and a processor, the memory and the processor being communicatively coupled to each other, the memory having stored therein computer instructions, the processor being configured to execute the computer instructions to perform the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111420948.9A CN114092914A (en) | 2021-11-26 | 2021-11-26 | Lane line detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111420948.9A CN114092914A (en) | 2021-11-26 | 2021-11-26 | Lane line detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114092914A true CN114092914A (en) | 2022-02-25 |
Family
ID=80304974
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111420948.9A Pending CN114092914A (en) | 2021-11-26 | 2021-11-26 | Lane line detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114092914A (en) |
- 2021-11-26: CN application CN202111420948.9A filed; patent CN114092914A active, status pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||