CN112084988B - Lane line instance clustering method and device, electronic equipment and storage medium


Publication number
CN112084988B
CN112084988B
Authority
CN
China
Legal status
Active
Application number
CN202010972488.XA
Other languages
Chinese (zh)
Other versions
CN112084988A
Inventor
李宇明
刘国清
郑伟
杨广
Current Assignee
Wuhan Youjia Innovation Technology Co ltd
Original Assignee
Wuhan Youjia Innovation Technology Co ltd
Application filed by Wuhan Youjia Innovation Technology Co ltd
Publication of CN112084988A
Application granted
Publication of CN112084988B

Classifications

    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23: Pattern recognition; clustering techniques
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a lane line instance clustering method and device, an electronic device and a storage medium. The method comprises the following steps: obtaining a lane line binary segmentation result and a lane line feature vector; obtaining a lane line feature vector histogram from the binary segmentation result and the feature vector; inputting the histogram into a trained clustering network to obtain cluster centers and cluster radii; performing distance judgment on the lane line feature vectors based on the cluster centers and cluster radii to obtain the cluster identifier corresponding to each feature vector; and mapping the feature vectors and their cluster identifiers onto the binary segmentation result to obtain the lane line instance segmentation result. Throughout the process, a clustering network replaces the traditional clustering algorithm for obtaining cluster centers and radii, so the clustering step can be moved onto the GPU, which saves CPU computation and improves the efficiency of lane line instance segmentation.

Description

Lane line instance clustering method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of automatic driving, in particular to a lane line instance clustering method, a lane line instance clustering device, electronic equipment and a storage medium.
Background
With the development of artificial intelligence technology and the improvement of sensor precision, automatic driving has become a popular research field and has attracted wide attention. Among its tasks, lane line detection is one of the most fundamental and important, and plays a critical role in both driver-assistance systems and automatic driving systems.
To address the poor stability and long running time of traditional lane line detection methods, researchers have replaced them with deep neural networks, which markedly improve the accuracy and robustness of lane line detection. The most representative deep-learning approach applies an instance segmentation algorithm to lane line detection: alongside the lane line binary segmentation result, the network outputs a lane line feature vector branch, and the result of this branch is then used to convert the lane line semantic segmentation into an instance segmentation.
However, this approach still faces problems in practical engineering applications: the lane line feature vector branch requires a complex post-processing clustering algorithm to obtain the final clustering result and convert the lane line semantic segmentation into instance segmentation. This process is complex and time-consuming, and the CPU (Central Processing Unit) of a vehicle-mounted embedded device has limited computing power and generally cannot support such frequent, complex clustering operations. Existing deep-learning-based lane line instance segmentation methods therefore suffer from low segmentation efficiency.
Disclosure of Invention
Based on the foregoing, it is necessary to provide a lane line instance clustering method, apparatus, electronic device and storage medium capable of improving the lane line instance segmentation efficiency.
A lane line instance clustering method, the method comprising:
obtaining a lane line binary segmentation result and a lane line characteristic vector;
obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
inputting the characteristic vector histogram of the lane line into a trained clustering network to obtain a clustering center and a clustering radius;
based on the clustering center and the clustering radius, carrying out distance judgment on the lane line feature vector to obtain a clustering identifier corresponding to the lane line feature vector;
mapping the lane line feature vector and the cluster identifier corresponding to the lane line feature vector onto the lane line binary segmentation result to obtain a lane line instance segmentation result;
the distance judgment is used for distinguishing the cluster center to which the lane line feature vector belongs, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram and the cluster center and cluster radius obtained by a preset clustering algorithm.
In one embodiment, inputting the lane line feature vector histogram into a trained clustering network, obtaining a cluster center and a cluster radius includes:
inputting the characteristic vector histogram of the lane line into a trained clustering network to obtain a clustering center classification result;
based on the classification result of the clustering center, extracting the coordinates of the clustering center by adopting a connected domain calibration algorithm;
and indexing the cluster radius corresponding to each cluster center according to the coordinates of the cluster centers.
In one embodiment, the trained clustering network includes a cluster center classification branch and a cluster radius regression branch;
inputting the lane line feature vector histogram to a trained clustering network, the obtaining a clustering center and a clustering radius comprising:
inputting the characteristic vector histogram of the lane line into a trained clustering network, and extracting a clustering center classification result from a clustering center classification branch;
based on the classification result of the clustering center, extracting the clustering center by adopting a connected domain calibration algorithm;
and indexing the cluster radius corresponding to each cluster center through the cluster radius regression branch according to the extracted cluster center.
In one embodiment, based on the cluster center and the cluster radius, performing distance judgment on the lane line feature vector, and obtaining the cluster identifier corresponding to the lane line feature vector includes:
obtaining a cluster identifier corresponding to a cluster center;
when the pixel point corresponding to the current lane line feature vector is in the target range, adding a cluster identifier corresponding to the current cluster center for the current lane line feature vector, wherein the target range is an area range formed by the current cluster center and the cluster radius corresponding to the current cluster center.
In one embodiment, before inputting the lane line feature vector histogram to the trained clustering network to obtain the clustering center and the clustering radius, the method further comprises:
acquiring a binary segmentation result of a historical lane line and a characteristic vector of the historical lane line;
clustering the binary segmentation result of the historical lane lines and the feature vector of the historical lane lines by adopting a preset clustering algorithm to obtain a clustering center and a clustering radius; carrying out histogram statistics on the binary segmentation result of the historical lane line and the characteristic vector of the historical lane line to obtain a characteristic vector histogram of the historical lane line;
according to the characteristic vector histogram of the historical lane lines, a clustering center and a clustering radius which are obtained by a preset clustering algorithm, a training data set is constructed;
and training the initial clustering network based on the training data set and combining the preset loss function to obtain a trained clustering network.
In one embodiment, the lane line binary segmentation result and the lane line feature vector are obtained by performing semantic segmentation on the lane driving scene image through a trained lane line detection network.
In one embodiment, the trained clustering network includes a backbone network and a multitasking output network.
A lane line instance clustering apparatus, the apparatus comprising:
the data acquisition module is used for acquiring a lane line binary segmentation result and a lane line characteristic vector;
the histogram data acquisition module is used for acquiring a lane line characteristic vector histogram according to the lane line binary segmentation result and the lane line characteristic vector;
the cluster feature data extraction module is used for inputting the characteristic vector histogram of the lane line into a trained cluster network to obtain a cluster center and a cluster radius;
the clustering processing module is used for judging the distance of the lane line feature vector based on the clustering center and the clustering radius to obtain a clustering identifier corresponding to the lane line feature vector;
the instance segmentation module is used for mapping the lane line feature vector and the cluster identifier corresponding to the lane line feature vector with the lane line binary segmentation result to obtain a lane line instance segmentation result;
The distance judgment is used for distinguishing the cluster center to which the lane line feature vector belongs, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram and the cluster center and cluster radius obtained by a preset clustering algorithm.
In one embodiment, the apparatus further comprises:
the network training module is used for acquiring historical lane driving scene image data, inputting the historical lane driving scene image data into a trained lane line detection network for semantic segmentation to obtain a historical lane line binary segmentation result and a historical lane line feature vector, and carrying out clustering processing on the historical lane line binary segmentation result and the historical lane line feature vector by adopting a preset clustering algorithm to obtain a clustering center and a clustering radius; and carrying out histogram statistics on the historical lane line binary segmentation result and the historical lane line feature vector to obtain a historical lane line feature vector histogram, constructing a training data set according to the historical lane line feature vector histogram, a clustering center and a clustering radius which are obtained by a preset clustering algorithm, and training an initial clustering network based on the training data set and a preset loss function to obtain a trained clustering network.
An electronic device comprising a memory storing a computer program and a processor that when executing the computer program performs the steps of:
obtaining a lane line binary segmentation result and a lane line characteristic vector;
obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
inputting the characteristic vector histogram of the lane line into a trained clustering network to obtain a clustering center and a clustering radius;
based on the clustering center and the clustering radius, carrying out distance judgment on the lane line feature vector to obtain a clustering identifier corresponding to the lane line feature vector;
mapping the lane line feature vector and the cluster identifier corresponding to the lane line feature vector onto the lane line binary segmentation result to obtain a lane line instance segmentation result;
the distance judgment is used for distinguishing the cluster center to which the lane line feature vector belongs, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram and the cluster center and cluster radius obtained by a preset clustering algorithm.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
obtaining a lane line binary segmentation result and a lane line feature vector;
obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
inputting the characteristic vector histogram of the lane line into a trained clustering network to obtain a clustering center and a clustering radius;
based on the clustering center and the clustering radius, carrying out distance judgment on the lane line feature vector to obtain a clustering identifier corresponding to the lane line feature vector;
mapping the lane line feature vector and the cluster identifier corresponding to the lane line feature vector onto the lane line binary segmentation result to obtain a lane line instance segmentation result;
the distance judgment is used for distinguishing the cluster center to which the lane line feature vector belongs, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram and the cluster center and cluster radius obtained by a preset clustering algorithm.
According to the above lane line instance clustering method, device, electronic equipment and storage medium, a lane line binary segmentation result and lane line feature vectors are obtained; a lane line feature vector histogram is obtained from them; the histogram is input into a trained clustering network to obtain cluster centers and cluster radii; distance judgment is performed on the feature vectors based on the cluster centers and radii to obtain the cluster identifier of each feature vector; and the feature vectors and their cluster identifiers are mapped onto the binary segmentation result to obtain the lane line instance segmentation result. Throughout this process, a clustering network replaces the traditional, complex clustering algorithm for obtaining cluster centers and radii, so the clustering can be moved onto the GPU (Graphics Processing Unit), saving a large amount of CPU computation. In addition, the output of the preset clustering algorithm is used as the training data of the clustering network, so no manual data labelling is needed, and the efficiency of lane line instance segmentation is improved.
Drawings
FIG. 1 is an application environment diagram of a lane-line example clustering method in one embodiment;
FIG. 2 is a flow diagram of a lane-line example clustering method in one embodiment;
FIG. 3 (a) is a schematic diagram of a two-dimensional histogram of lane line feature vectors in one embodiment;
FIG. 3 (b) is a schematic diagram of output results of a clustering network in one embodiment;
FIG. 3 (c) is a diagram illustrating a lane line binary segmentation result in one embodiment;
FIG. 3 (d) is a diagram illustrating the segmentation result of an example lane line in one embodiment;
FIG. 4 is a flow chart of a method for clustering lane-line examples in another embodiment;
FIG. 5 is a flow diagram of the steps for training a clustered network in one embodiment;
FIG. 6 is a block diagram of an example lane-line clustering apparatus in one embodiment;
FIG. 7 is a block diagram of a lane-line example clustering apparatus in another embodiment;
fig. 8 is an internal structural diagram of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The lane line instance clustering method provided by the application can be applied to the application environment shown in FIG. 1, in which the vehicle 102 is provided with a vehicle-mounted embedded device comprising a camera and a processor. Specifically, the camera of the vehicle 102 collects lane driving scene images in real time and uploads them to the processor. The processor performs semantic segmentation on each image through a trained lane line detection network to obtain a lane line binary segmentation result and lane line feature vectors; obtains a lane line feature vector histogram from the binary segmentation result and the feature vectors; inputs the histogram into a trained clustering network to obtain cluster centers and cluster radii; performs distance judgment on the feature vectors based on the cluster centers and radii to obtain the cluster identifier of each feature vector; and maps the feature vectors and their cluster identifiers onto the binary segmentation result to obtain the lane line instance segmentation result. The processor may comprise a GPU and/or a CPU.
In one embodiment, as shown in fig. 2, a lane line example clustering method is provided, and the method is applied to the vehicle 102 in fig. 1 for illustration, and includes the following steps:
Step 202: obtain a lane line binary segmentation result and a lane line feature vector.
The lane line binary segmentation result is a lane line binary segmentation image; fig. 3 (c) shows the lane line binary segmentation result output by the lane line detection network. In practical application, the camera of the vehicle collects lane driving scene images in real time and uploads them to the processor; the processor inputs each image into the trained lane line detection network and performs semantic segmentation to obtain the lane line binary segmentation result and the lane line feature vectors. In this embodiment, the lane line binary segmentation result is a two-dimensional image of size H×W, where H and W equal the height and width of the input image; points on lane lines have the value 1 and background points have the value 0. The lane line feature vector output is a two-dimensional feature vector per pixel, i.e. a 2×H×W matrix. The feature values are trained so that the L2 distance between the feature vectors of points on the same lane line is less than δ_v, while the L2 distance between the feature vectors of points on different lane lines is greater than 2δ_d, where δ_v and δ_d denote the intra-class and inter-class distance margins, respectively, and are specified at training time.
Step 204: obtain a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector.
After the lane line binary segmentation result and the lane line feature vectors are obtained, histogram statistics can be performed on them to obtain the lane line feature vector histogram. Histogram statistics is one of the basic and common operations in image processing; its main principle is to count the number of pixels falling into each value bin. In this embodiment, the lane line feature vector histogram is a two-dimensional histogram of the lane line feature vectors, as shown in fig. 3 (a), and serves as the input data of the clustering network. The two-dimensional histogram is confined to a small H×W two-dimensional space, which reduces the amount of computation.
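A minimal sketch of this histogram statistics step is given below, assuming the 2×H×W embedding from step 202. The value range (v_min, v_max), the 56-bin size and the normalization are illustrative assumptions, not values fixed by the patent.

```python
# Sketch of building the 2-D feature-vector histogram used as the clustering network's input.
import numpy as np

def lane_feature_histogram(binary_mask, embedding, bins=56, v_min=-4.0, v_max=4.0):
    """binary_mask: (H, W) with 1 on lane pixels; embedding: (2, H, W) feature vectors."""
    ys, xs = np.nonzero(binary_mask)                 # only lane-line pixels are counted
    feats = embedding[:, ys, xs]                     # (2, N) feature vectors of lane pixels
    hist, _, _ = np.histogram2d(
        feats[0], feats[1],
        bins=bins, range=[[v_min, v_max], [v_min, v_max]]
    )
    # Normalizing keeps the input scale stable regardless of how many lane pixels exist.
    return hist / max(hist.max(), 1.0)
```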
Step 206: input the lane line feature vector histogram into the trained clustering network to obtain cluster centers and cluster radii.
Specifically, the trained clustering network may adopt a network structure consisting of a shared-weight CNN (Convolutional Neural Network) backbone plus a multi-task output. The multi-task output comprises two branches: a cluster center classification branch and a cluster radius regression branch. The cluster center classification branch outputs a two-dimensional image of size H×W, matching the height and width of the input histogram, which is small (e.g. 56×56); in this binary output, points at cluster centers have the value 1 and background points have the value 0. The cluster radius regression branch also outputs an H×W two-dimensional image, in which the point corresponding to each cluster center outputs that cluster's radius and background points output arbitrary values. The clustering network can be trained from historical lane driving scene images together with a preset lane line detection network. After training, the lane line feature vector histogram is fed into the clustering network to obtain the cluster radii and cluster centers; fig. 3 (b) shows an output of the clustering network, including cluster centers and cluster radii.
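An illustrative PyTorch sketch of such a network is shown below: a shared backbone with a center-classification head and a radius-regression head. Channel counts, depths and the two-class center head are assumptions for illustration, not taken from the patent.

```python
# Sketch of the clustering network: shared CNN backbone + two task heads.
import torch
import torch.nn as nn

class ClusterNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(            # shared-weight feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.center_head = nn.Conv2d(32, 2, 1)    # two-class (background / cluster-center) logits, HxW
        self.radius_head = nn.Conv2d(32, 1, 1)    # per-bin cluster-radius regression, HxW

    def forward(self, hist):                      # hist: (B, 1, H, W) feature-vector histogram
        f = self.backbone(hist)
        return self.center_head(f), self.radius_head(f)

# Example: a 56x56 histogram yields a 56x56 center-logit map and a 56x56 radius map.
net = ClusterNet()
center_logits, radius_map = net(torch.randn(1, 1, 56, 56))
```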
Step 208: perform distance judgment on the lane line feature vectors based on the cluster centers and cluster radii to obtain the cluster identifier corresponding to each feature vector.
In this embodiment, after the cluster centers and cluster radii are obtained, distance judgment can be performed on the lane line feature vectors: the L2 distance is computed between the two-dimensional feature vector of every point on a lane line and each cluster center, and if a feature point falls within the circle defined by a cluster center and its cluster radius, the feature point is considered to belong to that cluster center. Through this distance judgment, the feature vector of each point on a lane line is uniquely attributed to one cluster center, yielding a clustering result in the two-dimensional feature vector space; that is, each lane line two-dimensional feature vector is assigned a unique lane line ID (Identity).
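A minimal sketch of this distance judgment follows: each lane-pixel feature vector receives the ID of a cluster center whose circle contains it, with 0 meaning unassigned. The variable names and the "first matching circle wins" tie-break are illustrative choices, not specified by the patent.

```python
# Sketch of the distance judgment in the 2-D feature space.
import numpy as np

def cluster_ids_for_vectors(feats, centers, radii):
    """feats: (N, 2) lane-pixel feature vectors; centers: (K, 2); radii: (K,)."""
    dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)  # (N, K) L2 distances
    inside = dists <= radii[None, :]                                         # circle-membership test
    ids = np.where(inside.any(axis=1), inside.argmax(axis=1) + 1, 0)         # lane IDs start at 1
    return ids.astype(np.int32)
```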
Step 210: map the lane line feature vectors and their corresponding cluster identifiers onto the lane line binary segmentation result to obtain the lane line instance segmentation result.
After the clustering result of the two-dimensional lane line feature vectors is obtained, the binary segmentation result alone still cannot distinguish the points belonging to different lane lines. The lane line feature vectors and their cluster identifiers are therefore projected back into the image space of the lane line binary segmentation result, with a one-to-one mapping, to obtain the lane line instance segmentation result (as shown in fig. 3 (d)). In the lane line instance segmentation result, points on the same lane line share the same ID, while points on different lane lines have different IDs.
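The projection back to image space can be sketched as below; it builds on the cluster_ids_for_vectors helper from the previous sketch, and the function name and return convention (0 for background) are illustrative.

```python
# Sketch of projecting the clustered feature vectors back onto the binary segmentation result.
import numpy as np

def instance_segmentation(binary_mask, embedding, centers, radii):
    """Returns an (H, W) map: 0 for background, k >= 1 for the k-th lane-line instance."""
    ys, xs = np.nonzero(binary_mask)              # lane-line pixels from the binary result
    feats = embedding[:, ys, xs].T                # (N, 2) feature vectors of those pixels
    ids = cluster_ids_for_vectors(feats, centers, radii)
    instance_map = np.zeros(binary_mask.shape, dtype=np.int32)
    instance_map[ys, xs] = ids                    # one-to-one mapping back to image space
    return instance_map
```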
In the above lane line instance clustering method, a lane line binary segmentation result and lane line feature vectors are obtained; a lane line feature vector histogram is obtained from them; the histogram is input into a trained clustering network to obtain cluster centers and cluster radii; distance judgment is performed on the feature vectors based on the cluster centers and radii to obtain the cluster identifier of each feature vector; and the feature vectors and their cluster identifiers are mapped onto the binary segmentation result to obtain the lane line instance segmentation result. Throughout this process, a clustering network clusters the lane line feature vectors instead of a traditional clustering algorithm, so the post-processing clustering can be moved onto the GPU, saving a large amount of CPU computation. In addition, the output of the preset clustering algorithm is used directly as the training data of the clustering network, so no manual data labelling is required, and the efficiency of lane line instance segmentation is improved.
In one embodiment, step 206 includes: inputting the lane line feature vector histogram into the trained clustering network to obtain a cluster center classification result, extracting the coordinates of the cluster centers from the classification result with a connected domain calibration algorithm, and indexing the cluster radius corresponding to each cluster center according to its coordinates.
In one embodiment, as shown in FIG. 4, the trained clustering network includes a cluster center classification branch and a cluster radius regression branch;
step 206 comprises: step 226, inputting the lane line feature vector histogram into the trained clustering network, extracting a cluster center classification result from the cluster center classification branch, extracting the cluster centers by a connected domain calibration algorithm based on the classification result, and indexing the cluster radius corresponding to each cluster center through the cluster radius regression branch according to the extracted cluster centers.
Specifically, the clustering network includes a cluster center classification branch and a cluster radius regression branch. The cluster centers can be obtained by inputting the lane line feature vector histogram into the trained clustering network and extracting the cluster center classification result from the classification branch, in which cluster center points have pixel value 1 and background points have pixel value 0. A connected domain calibration (connected-component labelling) algorithm is then applied to the classification result to extract the cluster centers. Since the coordinates of each cluster center are then known, the corresponding cluster radius can be indexed from the cluster radius regression branch at those coordinates, so the cluster centers and their radii can be output together. In this embodiment, the connected domain calibration algorithm extracts the cluster centers quickly, and the cluster radius corresponding to each center is indexed quickly from the regression branch.
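A hedged sketch of this extraction step is given below, using scipy.ndimage for the connected-component labelling. The 0.5 threshold, the per-component centroid, and the mapping from histogram-bin coordinates back to feature-space coordinates (with assumed bin range v_min..v_max matching the earlier histogram sketch) are all illustrative assumptions.

```python
# Sketch of cluster-center extraction and radius indexing from the two branch outputs.
import numpy as np
from scipy import ndimage

def extract_centers_and_radii(center_prob, radius_map, v_min=-4.0, v_max=4.0):
    """center_prob, radius_map: (H, W) outputs of the two branches, in histogram coordinates."""
    labels, n = ndimage.label(center_prob > 0.5)           # connected components of center points
    centers_px, radii = [], []
    for k in range(1, n + 1):
        cy, cx = ndimage.center_of_mass(labels == k)        # one representative bin per component
        cy, cx = int(round(cy)), int(round(cx))
        centers_px.append((cy, cx))
        radii.append(float(radius_map[cy, cx]))             # index the radius at the center's coordinates
    H, W = center_prob.shape
    # Map histogram-bin coordinates back to feature-space coordinates (bin range assumed).
    centers = np.array([[v_min + (cy + 0.5) * (v_max - v_min) / H,
                         v_min + (cx + 0.5) * (v_max - v_min) / W] for cy, cx in centers_px])
    return centers, np.array(radii)
```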
In one embodiment, as shown in fig. 4, performing distance judgment on the lane line feature vectors based on the cluster centers and cluster radii to obtain the corresponding cluster identifiers includes: step 228, obtaining the cluster identifier corresponding to each cluster center, and, when the pixel point corresponding to the current lane line feature vector is within the target range, adding the cluster identifier of the current cluster center to the current lane line feature vector, where the target range is the region formed by the current cluster center and its corresponding cluster radius.
In implementation, a cluster identifier such as a lane line ID is first assigned to each cluster center. It is then determined whether the pixel point corresponding to the current lane line feature vector lies within a target range, i.e. whether the lane line two-dimensional feature vector falls inside the circular region formed by a cluster center and its cluster radius. If it falls inside such a circle, the pixel is given the lane line ID of the cluster center of that circle. In this way each lane line two-dimensional feature vector corresponds to a unique lane line ID, and the clustering result of the two-dimensional feature vector space is obtained. In this embodiment, comparing the two-dimensional feature vectors with the cluster centers and radii and attaching the corresponding identifiers facilitates the later instance segmentation.
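Putting the inference path together, the following hedged composition chains the illustrative helpers sketched above (lane_feature_histogram, ClusterNet/net, extract_centers_and_radii, instance_segmentation); binary_mask and embedding stand for the detection network's outputs from step 202, and none of these names come from the patent itself.

```python
# Illustrative end-to-end composition of the sketches above.
# binary_mask: (H, W) numpy array, embedding: (2, H, W) numpy array from the detection network.
import torch

hist = lane_feature_histogram(binary_mask, embedding)                               # step 204
with torch.no_grad():
    center_logits, radius_out = net(torch.from_numpy(hist).float()[None, None])     # step 206
center_prob = torch.softmax(center_logits, dim=1)[0, 1].numpy()                     # P(cluster center)
centers, radii = extract_centers_and_radii(center_prob, radius_out[0, 0].numpy())
instances = instance_segmentation(binary_mask, embedding, centers, radii)           # steps 208-210
```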
In one embodiment, as shown in fig. 5, before inputting the lane line feature vector histogram into the trained clustering network to obtain the cluster center and the cluster radius, the method further includes:
step 205, acquiring a historical lane line binary segmentation result and a historical lane line feature vector;
step 225, clustering the historical lane line binary segmentation result and the historical lane line feature vectors with a preset clustering algorithm to obtain cluster centers and cluster radii, and performing histogram statistics on the historical lane line binary segmentation result and feature vectors to obtain a historical lane line feature vector histogram (a ground-truth construction sketch is given after this list);
step 245, constructing a training data set according to the characteristic vector histogram of the historical lane lines, a clustering center and a clustering radius which are obtained by a preset clustering algorithm;
step 265, training the initial clustering network based on the training data set in combination with the preset loss function to obtain a trained clustering network.
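The ground-truth construction of steps 225 and 245 can be sketched as follows, using DBSCAN as one possible preset clustering algorithm. The eps and min_samples values, the choice of the mean feature vector as the cluster center, and the farthest-member distance as the cluster radius are assumptions for illustration; the patent does not fix these definitions.

```python
# Sketch of producing cluster-center / cluster-radius ground truth for one training sample.
import numpy as np
from sklearn.cluster import DBSCAN

def make_cluster_ground_truth(binary_mask, embedding, eps=0.5, min_samples=20):
    ys, xs = np.nonzero(binary_mask)
    feats = embedding[:, ys, xs].T                        # (N, 2) historical feature vectors
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    centers, radii = [], []
    for k in set(labels) - {-1}:                          # -1 marks DBSCAN noise points
        pts = feats[labels == k]
        c = pts.mean(axis=0)                              # assumed center: mean feature vector
        centers.append(c)
        radii.append(np.linalg.norm(pts - c, axis=1).max())  # assumed radius: farthest member
    return np.array(centers), np.array(radii)
```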
In practical application, training the clustering network requires the camera to collect a large amount of driving image data under different illumination conditions, in different scenes and from different viewing angles, i.e. historical lane driving scene images, including view-angle data from both large and small vehicles. After the historical lane driving scene image data are collected, they can be input into a preset, trained lane line detection network for semantic segmentation to obtain historical lane line binary segmentation results and historical lane line two-dimensional feature vectors. Training of the clustering network then begins, and the training process can be as follows. The historical lane line binary segmentation result and two-dimensional feature vectors are fed into a designed post-processing clustering algorithm to obtain cluster centers and cluster radii; the post-processing clustering algorithm may be DBSCAN (Density-Based Spatial Clustering of Applications with Noise), Mean-Shift or a similar algorithm. Two-dimensional histogram statistics are computed from the lane line binary segmentation result and the two-dimensional feature vectors to obtain the corresponding historical lane line feature vector histogram. A large number of historical lane driving scene images are input into the lane line detection network and the above processing is repeated to obtain the historical lane line feature vector histogram, cluster centers and cluster radii for each image, from which the training data set of the clustering network is constructed. The cluster centers and cluster radii output by the post-processing clustering algorithm serve as the ground-truth values of the training data, and the clustering network is trained on this training data set. Specifically, the clustering network may be trained using the loss function of equation (1), whose component terms are given in equations (2) and (3) below:
$$L_1(W) = \lVert y_1 - f_W(x) \rVert_2 \tag{2}$$
$$L_2(W) = -\log\bigl(\mathrm{softmax}(y_2, f_W(x))\bigr) \tag{3}$$
In the above formulas, $L_1(W)$ is the Euclidean distance between the cluster radius output by the cluster radius regression branch and the ground-truth cluster radius, $y_1$ denotes the ground-truth cluster radius, and $f_W(x)$ denotes the output of the clustering network with parameters $W$ to be trained. Only the points corresponding to cluster centers are considered when computing this distance; other points do not participate in training. $L_2(W)$ is the cross-entropy loss of the cluster center classification branch, where $y_2$ denotes the ground-truth cluster center map. The optimization of equation (1) aims to find the optimal network parameters $W$ and the weight coefficients $\sigma_1$ and $\sigma_2$; the final goal can be seen as learning the relative weight of each subtask's output. A large value of $\sigma_2$ lowers the influence of $L_2(W)$, while a small value of $\sigma_2$ increases it, and these coefficients are determined automatically during training without manual setting. Because the output of the post-processing clustering algorithm is used as the ground truth for the cluster radii and cluster centers, no manual data labelling is required, which facilitates industrial deployment of the algorithm.
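A hedged PyTorch sketch of these losses is given below. The masked squared-error form of L1, the per-pixel cross-entropy form of L2, and especially the uncertainty-style combination with learnable sigma_1 and sigma_2 are assumptions for illustration, since the exact form of equation (1) is not reproduced in the text above.

```python
# Sketch of the two task losses and an assumed learnable weighting between them.
import torch
import torch.nn.functional as F

def cluster_net_loss(center_logits, radius_pred, center_gt, radius_gt, log_sigma1, log_sigma2):
    """center_logits: (B, 2, H, W); radius_pred: (B, 1, H, W);
    center_gt: (B, H, W) with 1 at cluster centers; radius_gt: (B, H, W)."""
    mask = center_gt.float()
    # L1(W): radius regression error, counted only at cluster-center points.
    l1 = ((radius_pred.squeeze(1) - radius_gt) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    # L2(W): cross-entropy (log-softmax) over the center/background classification.
    l2 = F.cross_entropy(center_logits, center_gt.long())
    # Assumed uncertainty-style weighting; sigma_1, sigma_2 are learned alongside the network.
    s1, s2 = torch.exp(log_sigma1), torch.exp(log_sigma2)
    return l1 / (2 * s1 ** 2) + l2 / (2 * s2 ** 2) + torch.log(s1 * s2)
```

In training, log_sigma1 and log_sigma2 would be registered as learnable parameters together with the network weights, so that the relative weights of the two subtasks are determined automatically, as described above.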
It should be understood that, although the steps in the flowcharts of fig. 2, fig. 4 and fig. 5 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 2, fig. 4 and fig. 5 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a lane line example clustering apparatus, including: a data acquisition module 510, a histogram data acquisition module 520, a cluster feature data extraction module 530, a cluster processing module 540, and an instance segmentation module 550, wherein:
The data acquisition module 510 is configured to obtain a lane line binary segmentation result and a lane line feature vector.
The histogram data acquisition module 520 is configured to obtain a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector.
The cluster feature data extraction module 530 is configured to input the lane line feature vector histogram to a trained cluster network to obtain a cluster center and a cluster radius, where the trained cluster network is trained based on the historical lane line feature vector histogram and the cluster center and the cluster radius obtained by a preset clustering algorithm.
The cluster processing module 540 is configured to perform distance judgment on the lane line feature vectors based on the cluster center and the cluster radius, to obtain cluster identifiers corresponding to the lane line feature vectors, where the distance judgment is used to distinguish the cluster center to which the lane line feature vectors belong.
The instance segmentation module 550 is configured to map the lane line feature vector and the cluster identifier corresponding to the lane line feature vector with the lane line binary segmentation result, so as to obtain a lane line instance segmentation result.
In one embodiment, the cluster feature data extraction module 530 is further configured to extract coordinates of cluster centers by using a connected domain calibration algorithm based on the lane line feature vector histogram, and index a cluster radius corresponding to each cluster center according to the coordinates of the cluster centers.
In one embodiment, the cluster feature data extraction module 530 is further configured to input the lane line feature vector histogram to a trained clustering network, extract a cluster center classification result from the cluster center classification branches, extract cluster centers by using a connected domain calibration algorithm based on the cluster center classification result, and index the cluster radii corresponding to each cluster center by a cluster radius regression branch according to the extracted cluster centers.
In one embodiment, the cluster processing module 540 is further configured to obtain a cluster identifier corresponding to a cluster center, and when a pixel point corresponding to a feature vector of a current lane line is within a target range, add the cluster identifier corresponding to the current cluster center to the feature vector of the current lane line, where the target range is a region range formed by the current cluster center and a cluster radius corresponding to the current cluster center.
In one embodiment, as shown in fig. 7, the apparatus further includes a network training module 560, configured to obtain a historical lane line binary segmentation result and a historical lane line feature vector, and perform a clustering process on the historical lane line binary segmentation result and the historical lane line feature vector by using a preset clustering algorithm to obtain a clustering center and a clustering radius; and carrying out histogram statistics on the historical lane line binary segmentation result and the historical lane line feature vector to obtain a historical lane line feature vector histogram, constructing a training data set according to the historical lane line feature vector histogram, a clustering center and a clustering radius which are obtained by a preset clustering algorithm, and training an initial clustering network based on the training data set and a preset loss function to obtain a trained clustering network.
For specific limitation of the lane line instance clustering device, reference may be made to the limitation of the lane line instance clustering method hereinabove, and no further description is given here. The above-described respective modules in the lane line instance clustering apparatus may be implemented in whole or in part by software, hardware, and a combination thereof. The above modules may be embedded in hardware or independent of a processor in the electronic device, or may be stored in software in a memory in the electronic device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, an electronic device is provided, which may be a vehicle-mounted embedded device, and an internal structure thereof may be as shown in fig. 8. The electronic device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the electronic device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a lane line instance clustering method. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the electronic equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the electronic device to which the present application is applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, an electronic device is provided that includes a memory having a computer program stored therein and a processor that when executing the computer program performs the steps of: obtaining a lane line binary segmentation result and a lane line feature vector, obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector, inputting the lane line feature vector histogram into a trained clustering network to obtain a clustering center and a clustering radius, performing distance judgment on the lane line feature vector based on the clustering center and the clustering radius to obtain a clustering identifier corresponding to the lane line feature vector, and performing corresponding mapping on the clustering identifier corresponding to the lane line feature vector and the lane line binary segmentation result to obtain a lane line instance segmentation result, wherein the distance judgment is used for distinguishing which clustering center the lane line two-dimensional feature vector belongs to, and the trained clustering network is obtained by training the clustering center and the clustering radius obtained by a preset clustering algorithm based on the history lane line feature vector histogram.
In one embodiment, the processor when executing the computer program further performs the steps of: and extracting coordinates of the clustering centers by adopting a connected domain calibration algorithm based on the lane line feature vector histogram, and indexing a clustering radius corresponding to each clustering center according to the coordinates of the clustering centers.
In one embodiment, the processor when executing the computer program further performs the steps of: inputting the characteristic vector histogram of the lane line into a trained clustering network, extracting a clustering center classification result from a clustering center classification branch, extracting the clustering centers by adopting a connected domain calibration algorithm based on the clustering center classification result, and indexing the clustering radius corresponding to each clustering center through a clustering radius regression branch according to the extracted clustering centers.
In one embodiment, the processor when executing the computer program further performs the steps of: and obtaining a cluster identifier corresponding to the cluster center, and adding the cluster identifier corresponding to the current cluster center to the current lane line feature vector when the pixel point corresponding to the current lane line feature vector is in a target range, wherein the target range is a region range formed by the current cluster center and a cluster radius corresponding to the current cluster center.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring historical lane driving scene image data, inputting the historical lane driving scene image data into a trained lane line detection network for semantic segmentation to obtain a historical lane line binary segmentation result and a historical lane line feature vector, and clustering the historical lane line binary segmentation result and the historical lane line feature vector by adopting a preset clustering algorithm to obtain a clustering center and a clustering radius; and carrying out histogram statistics on the historical lane line binary segmentation result and the historical lane line feature vector to obtain a historical lane line feature vector histogram, constructing a training data set according to the historical lane line feature vector histogram, a clustering center and a clustering radius which are obtained by a preset clustering algorithm, and training an initial clustering network based on the training data set and a preset loss function to obtain a trained clustering network.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of: obtaining a lane line binary segmentation result and a lane line feature vector, obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector, inputting the lane line feature vector histogram into a trained clustering network to obtain a clustering center and a clustering radius, performing distance judgment on the lane line feature vector based on the clustering center and the clustering radius to obtain a clustering identifier corresponding to the lane line feature vector, and performing corresponding mapping on the lane line feature vector and the clustering identifier corresponding to the lane line feature vector and the lane line binary segmentation result to obtain a lane line instance segmentation result, wherein the distance judgment is used for distinguishing the clustering center to which the lane line two-dimensional feature vector belongs, and the trained clustering network is obtained by training the clustering center and the clustering radius obtained by a preset clustering algorithm based on the history lane line feature vector histogram.
In one embodiment, the computer program when executed by the processor further performs the steps of: and extracting coordinates of the clustering centers by adopting a connected domain calibration algorithm based on the lane line feature vector histogram, and indexing a clustering radius corresponding to each clustering center according to the coordinates of the clustering centers.
In one embodiment, the computer program when executed by the processor further performs the steps of: inputting the characteristic vector histogram of the lane line into a trained clustering network, extracting a clustering center classification result from a clustering center classification branch, extracting the clustering centers by adopting a connected domain calibration algorithm based on the clustering center classification result, and indexing the clustering radius corresponding to each clustering center through a clustering radius regression branch according to the extracted clustering centers.
In one embodiment, the computer program when executed by the processor further performs the steps of: and obtaining a cluster identifier corresponding to the cluster center, and adding the cluster identifier corresponding to the current cluster center to the current lane line feature vector when the pixel point corresponding to the current lane line feature vector is in a target range, wherein the target range is a region range formed by the current cluster center and a cluster radius corresponding to the current cluster center.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a historical lane line binary segmentation result and a historical lane line feature vector, and carrying out clustering treatment on the historical lane line binary segmentation result and the historical lane line feature vector by adopting a preset clustering algorithm to obtain a clustering center and a clustering radius; and carrying out histogram statistics on the historical lane line binary segmentation result and the historical lane line feature vector to obtain a historical lane line feature vector histogram, constructing a training data set according to the historical lane line feature vector histogram, a clustering center and a clustering radius which are obtained by a preset clustering algorithm, and training an initial clustering network based on the training data set and a preset loss function to obtain a trained clustering network.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may include the procedures of the method embodiments described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (10)

1. A lane-line instance clustering method, the method comprising:
obtaining a lane line binary segmentation result and a lane line characteristic vector;
obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
inputting the lane line feature vector histogram to a trained clustering network to obtain a clustering center and a clustering radius;
based on the clustering center and the clustering radius, performing distance judgment on the lane line feature vector to obtain a clustering identifier corresponding to the lane line feature vector;
correspondingly mapping the lane line feature vector and the cluster mark corresponding to the lane line feature vector with the lane line binary segmentation result to obtain a lane line instance segmentation result;
the distance judgment is used for distinguishing a clustering center to which the lane line feature vector belongs, the trained clustering network is obtained by training based on a historical lane line feature vector histogram and the clustering center and the clustering radius obtained by a preset clustering algorithm, and the trained clustering network comprises a clustering center classification branch and a clustering radius regression branch;
inputting the lane line feature vector histogram to a trained clustering network, and obtaining a clustering center and a clustering radius comprises: and classifying branches through the cluster centers to obtain cluster centers, and regressing branches through the cluster radii to obtain the cluster radii corresponding to the cluster centers.
2. The method of claim 1, wherein inputting the lane line feature vector histogram into a trained clustering network to obtain a cluster center and a cluster radius comprises:
inputting the lane line feature vector histogram to the trained clustering network to obtain a cluster center classification result;
extracting the cluster centers from the cluster center classification result by using a connected domain calibration algorithm;
and indexing the cluster radius corresponding to each cluster center according to the extracted cluster centers.
3. The method of claim 1, wherein inputting the lane line feature vector histogram into a trained clustering network to obtain a cluster center and a cluster radius comprises:
inputting the lane line feature vector histogram to the trained clustering network, and extracting a cluster center classification result from the cluster center classification branch;
extracting the cluster centers from the cluster center classification result by using a connected domain calibration algorithm;
and indexing the cluster radius corresponding to each cluster center through the cluster radius regression branch according to the extracted cluster centers.
4. The method of claim 1, wherein performing distance judgment on the lane line feature vector based on the cluster center and the cluster radius to obtain the cluster identifier corresponding to the lane line feature vector comprises:
obtaining a cluster identifier corresponding to each cluster center;
and when the pixel point corresponding to the current lane line feature vector falls within a target range, assigning the cluster identifier of the current cluster center to the current lane line feature vector, wherein the target range is the region formed by the current cluster center and the cluster radius corresponding to the current cluster center.
5. The method of claim 1, wherein before inputting the lane line feature vector histogram to the trained clustering network to obtain the cluster center and the cluster radius, the method further comprises:
acquiring a historical lane line binary segmentation result and a historical lane line feature vector;
clustering the historical lane line binary segmentation result and the historical lane line feature vector with a preset clustering algorithm to obtain a cluster center and a cluster radius, and performing histogram statistics on the historical lane line binary segmentation result and the historical lane line feature vector to obtain a historical lane line feature vector histogram;
constructing a training data set from the historical lane line feature vector histogram and from the cluster center and the cluster radius obtained by the preset clustering algorithm;
and training an initial clustering network based on the training data set in combination with a preset loss function to obtain the trained clustering network.
6. The method of claim 1, wherein the lane line binary segmentation result and the lane line feature vector are derived from semantic segmentation of a lane driving scene image by a trained lane line detection network.
7. The method according to any one of claims 1 to 5, wherein the trained clustering network comprises a backbone network and a multi-task output network.
8. A lane-line instance clustering apparatus, the apparatus comprising:
the data acquisition module is used for acquiring a lane line binary segmentation result and a lane line characteristic vector;
the histogram data acquisition module is used for acquiring a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
the cluster feature data extraction module is used for inputting the lane line feature vector histogram into a trained cluster network to obtain a cluster center and a cluster radius;
the clustering processing module is used for performing distance judgment on the lane line feature vector based on the cluster center and the cluster radius to obtain a cluster identifier corresponding to the lane line feature vector;
the instance segmentation module is used for mapping the lane line feature vector and the cluster identifier corresponding to the lane line feature vector onto the lane line binary segmentation result to obtain a lane line instance segmentation result;
wherein the distance judgment is used to determine the cluster center to which the lane line feature vector belongs, the trained clustering network is obtained by training based on a historical lane line feature vector histogram together with the cluster center and the cluster radius obtained by a preset clustering algorithm, and the trained clustering network comprises a cluster center classification branch and a cluster radius regression branch;
wherein inputting the lane line feature vector histogram to the trained clustering network to obtain the cluster center and the cluster radius comprises: obtaining the cluster centers through the cluster center classification branch, and obtaining the cluster radii corresponding to the cluster centers through the cluster radius regression branch.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
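Purely as an illustration of how the steps of claims 1 to 4 could fit together at inference time, the sketch below assumes the trained clustering network has already produced a thresholded (binary) cluster center classification map and a cluster radius regression map over the same histogram grid; scipy.ndimage.label is used as a stand-in for the connected domain calibration algorithm, and all names, shapes, and the choice of a 2-D embedding are assumptions made for the example.

    import numpy as np
    from scipy import ndimage  # ndimage.label stands in for the connected domain calibration algorithm

    def cluster_lane_instances(binary_seg, feature_map, center_map, radius_map, xedges, yedges):
        # center_map : (bins, bins) binary output of the cluster center classification branch.
        # radius_map : (bins, bins) output of the cluster radius regression branch.
        # xedges, yedges: bin edges used when the lane line feature vector histogram was built.

        # Connected-domain step: each connected region of the classification map is one cluster center.
        labeled, n_centers = ndimage.label(center_map)
        centers, radii = [], []
        for k in range(1, n_centers + 1):
            ii, jj = np.nonzero(labeled == k)                    # histogram grid cells of this center
            cx = 0.5 * (xedges[ii] + xedges[ii + 1]).mean()      # grid cell -> feature-space coordinate
            cy = 0.5 * (yedges[jj] + yedges[jj + 1]).mean()
            centers.append(np.array([cx, cy]))
            radii.append(radius_map[labeled == k].mean())        # index the regressed radius at the center

        # Distance judgment: a lane line pixel joins cluster k if its feature
        # vector falls inside the circle (center_k, radius_k).
        instance_map = np.zeros(binary_seg.shape, dtype=np.int32)
        for y, x in zip(*np.nonzero(binary_seg)):
            vec = feature_map[y, x]
            for k, (c, r) in enumerate(zip(centers, radii), start=1):
                if np.linalg.norm(vec - c) <= r:
                    instance_map[y, x] = k                       # cluster identifier
                    break
        return instance_map

The returned instance_map plays the role of the lane line instance segmentation result: pixels sharing a cluster identifier belong to the same lane line instance, and pixels whose feature vectors fall outside every cluster remain unassigned.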
CN202010972488.XA 2020-06-08 2020-09-16 Lane line instance clustering method and device, electronic equipment and storage medium Active CN112084988B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020105133575 2020-06-08
CN202010513357 2020-06-08

Publications (2)

Publication Number Publication Date
CN112084988A CN112084988A (en) 2020-12-15
CN112084988B true CN112084988B (en) 2024-01-05

Family

ID=73738004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010972488.XA Active CN112084988B (en) 2020-06-08 2020-09-16 Lane line instance clustering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112084988B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507977B (en) * 2021-01-21 2021-12-07 国汽智控(北京)科技有限公司 Lane line positioning method and device and electronic equipment
CN112906551A (en) * 2021-02-09 2021-06-04 北京有竹居网络技术有限公司 Video processing method and device, storage medium and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770577A (en) * 2010-01-18 2010-07-07 浙江林学院 Method for extracting information on expansion and elimination of dead wood of pine wilt disease in air photos of unmanned aerial vehicle
CN109214428A (en) * 2018-08-13 2019-01-15 平安科技(深圳)有限公司 Image partition method, device, computer equipment and computer storage medium
CN109740609A (en) * 2019-01-09 2019-05-10 银河水滴科技(北京)有限公司 A kind of gauge detection method and device
CN110400322A (en) * 2019-07-30 2019-11-01 江南大学 Fruit point cloud segmentation method based on color and three-dimensional geometric information
CN110866527A (en) * 2018-12-28 2020-03-06 北京安天网络安全技术有限公司 Image segmentation method and device, electronic equipment and readable storage medium
CN111178245A (en) * 2019-12-27 2020-05-19 深圳佑驾创新科技有限公司 Lane line detection method, lane line detection device, computer device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7020329B2 (en) * 2001-08-31 2006-03-28 Massachusetts Institute Of Technology Color image segmentation in an object recognition system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770577A (en) * 2010-01-18 2010-07-07 浙江林学院 Method for extracting information on expansion and elimination of dead wood of pine wilt disease in air photos of unmanned aerial vehicle
CN109214428A (en) * 2018-08-13 2019-01-15 平安科技(深圳)有限公司 Image partition method, device, computer equipment and computer storage medium
CN110866527A (en) * 2018-12-28 2020-03-06 北京安天网络安全技术有限公司 Image segmentation method and device, electronic equipment and readable storage medium
CN109740609A (en) * 2019-01-09 2019-05-10 银河水滴科技(北京)有限公司 A kind of gauge detection method and device
CN110400322A (en) * 2019-07-30 2019-11-01 江南大学 Fruit point cloud segmentation method based on color and three-dimensional geometric information
CN111178245A (en) * 2019-12-27 2020-05-19 深圳佑驾创新科技有限公司 Lane line detection method, lane line detection device, computer device, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image segmentation by histogram thresholding using hierarchical cluster analysis; Agus Zainal Arifin, Akira Asano; Pattern Recognition Letters; full text *
Improved FCM clustering algorithm for medical ultrasound image segmentation; Shi Zhengang et al.; Journal of Shenyang Ligong University; full text *

Also Published As

Publication number Publication date
CN112084988A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
Yi et al. ASSD: Attentive single shot multibox detector
CN112446270B (en) Training method of pedestrian re-recognition network, pedestrian re-recognition method and device
Ghamisi et al. Multilevel image segmentation based on fractional-order Darwinian particle swarm optimization
CN106228185B (en) A kind of general image classifying and identifying system neural network based and method
CN107909039B (en) High-resolution remote sensing image earth surface coverage classification method based on parallel algorithm
CN111353512B (en) Obstacle classification method, obstacle classification device, storage medium and computer equipment
CN110175615B (en) Model training method, domain-adaptive visual position identification method and device
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN104036244B (en) Checkerboard pattern corner point detecting method and device applicable to low-quality images
CN110222718B (en) Image processing method and device
KR101618996B1 (en) Sampling method and image processing apparatus for estimating homography
CN112084988B (en) Lane line instance clustering method and device, electronic equipment and storage medium
CN109948457B (en) Real-time target recognition method based on convolutional neural network and CUDA acceleration
Xiang et al. Lightweight fully convolutional network for license plate detection
CN112528845B (en) Physical circuit diagram identification method based on deep learning and application thereof
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
CN112001378B (en) Lane line processing method and device based on feature space, vehicle-mounted terminal and medium
CN112347970A (en) Remote sensing image ground object identification method based on graph convolution neural network
CN111539910B (en) Rust area detection method and terminal equipment
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112132145A (en) Image classification method and system based on model extended convolutional neural network
CN110942473A (en) Moving target tracking detection method based on characteristic point gridding matching
Alsanad et al. Real-time fuel truck detection algorithm based on deep convolutional neural network
CN112883827B (en) Method and device for identifying specified target in image, electronic equipment and storage medium
CN111382638A (en) Image detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230428

Address after: No. 103-63, Xiaojunshan Community Commercial Building, Junshan Street, Wuhan Economic and Technological Development Zone, Wuhan City, Hubei Province, 430119

Applicant after: Wuhan Youjia Innovation Technology Co.,Ltd.

Address before: 518051 1101, west block, Skyworth semiconductor design building, 18 Gaoxin South 4th Road, Gaoxin community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant