CN106650814B - Outdoor road self-adaptive classifier generation method based on vehicle-mounted monocular vision - Google Patents


Info

Publication number
CN106650814B
CN106650814B (application CN201611227291.3A; publication CN106650814A)
Authority
CN
China
Prior art keywords
classifier
image
color
vehicle
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611227291.3A
Other languages
Chinese (zh)
Other versions
CN106650814A (en)
Inventor
杜勇志
闫飞
庄严
于海晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201611227291.3A
Publication of CN106650814A
Application granted
Publication of CN106650814B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Abstract

The invention belongs to the technical field of autonomous environment perception for robots and discloses a method for generating an outdoor road adaptive classifier based on vehicle-mounted monocular vision. The invention introduces a sample pool whose entries are the features of different road types, and realizes the adaptive generation of the outdoor road recognition classifier through similarity matching of adjacent images. The method extracts features from adjacent images and computes their similarity as the criterion for detecting an abrupt scene change. When adjacent images are similar, the classifier is updated from the recognition result of the previous classifier; when they are dissimilar, the classifier is updated from the result of matching against the sample pool. The generated classifier adapts well to changes in weather, season, illumination, and the like, accurately identifies roads under such changes, removes the dependence on large numbers of training samples and on additional sensors, has good adaptability, and can provide accurate auxiliary information for the driving of unmanned vehicles.

Description

Outdoor road self-adaptive classifier generation method based on vehicle-mounted monocular vision
Technical Field
The invention belongs to the technical field of autonomous environment perception for robots, relates to scene understanding of image data acquired by a mobile robot system, and particularly relates to a method for generating an outdoor road adaptive classifier based on vehicle-mounted monocular vision.
Background
Vision is one of the most important means of environment perception for intelligent robots and intelligent systems. Vision-based natural scene understanding is a basic precondition for a mobile robot working in a natural environment to adapt autonomously to that environment. As a typical mobile robot, the unmanned vehicle has played an increasingly important role across industries in recent years, and understanding of outdoor environments, road recognition in particular, is key to achieving unmanned driving.
Vision-based scene understanding is the classification of image data acquired by a vision sensor, i.e., the assignment of corresponding labels to the different categories in an image. Since the working environment of an unmanned vehicle is mostly an outdoor, unstructured environment, the diversity, randomness, complexity, and variability of outdoor scenes demand a scene understanding system with high adaptivity. At present, understanding of complex outdoor environments is usually realized by designing a classifier to perform the image classification: the classifier constructs a classification function or model from the features of a data set, and the model maps samples of unknown class to one of the given classes. This is currently the most effective image classification method.
At present, classifiers in the field of image classification are generated in two ways: offline and online. The offline classifier is the most common: a fixed classifier is trained for a specific data set with a corresponding machine learning algorithm. First, training samples, i.e., data carrying different labels, are prepared from the data set to be recognized, and then training is carried out with a suitable machine learning algorithm. Common machine learning algorithms include the Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Bayesian classifiers, and the Back Propagation (BP) neural network. The reference (K. Rebai, N. Achour, and O. Azouaoui, "Hierarchical SVM classifier for road intersection detection and retrieval," IEEE Conference on Open Systems, 2013: 100-) is a representative example of this offline approach.
Image recognition with an online classifier is a more challenging method: the classifier is updated continuously while it recognizes images. In the reference (Xu Wenhao, Unstructured road detection based on visual and laser data fusion [D], Dalian University of Technology, 2014), new samples are generated from pictures acquired in real time within the road identification algorithm, so the classifier is updated online while classification runs without interruption. When the road changes drastically, the previous classifier can no longer identify the road correctly and therefore cannot drive the update; at that point, the geometric attributes of the laser data alone are relied upon to determine the feasible region in front of the vehicle and to complete the online update of the classifier. The advantage of this method is that the classifier adapts well to changes in factors such as weather, season, and illumination, which guarantees the robustness of the algorithm. However, it does not truly realize adaptive updating of the classifier: when a new road sample appears, the online update can only be completed with the help of the laser sensor. A laser sensor is not standard equipment on every robot and is expensive, so the method is not universal.
Disclosure of Invention
Aiming at the shortcomings of the above methods, the invention provides an adaptive classifier generation method. A sample pool is introduced whose entries are the features of different road types. Similarity is computed from the color histograms of adjacent images, and the training samples of the classifier are then selected according to the result: when the adjacent images are similar, the training samples of the classifier are updated to the recognition result of the previous classifier; when they are not similar, the training samples are updated to the result of matching against the sample pool. In this way the adaptive generation of the classifier is realized.
The technical scheme of the invention is as follows:
1) Constructing the sample pool
The sample pool is a tool introduced by the present invention; it is essentially an m × n feature matrix, where m is the number of road types of different forms contained in the sample pool and n is the feature dimension of each road.
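For illustration, a minimal sketch of such a sample pool follows (Python is used for all sketches in this description). The class name, the Euclidean nearest-neighbor match, and the threshold value are assumptions for demonstration only; the invention fixes neither the matching metric nor the values of m and n.

```python
import numpy as np

class SamplePool:
    """Sample pool: an (m, n) matrix, one row per known road type."""

    def __init__(self, road_features: np.ndarray):
        self.road_features = road_features  # shape (m, n)

    def match(self, feature: np.ndarray, threshold: float = 1.0) -> bool:
        # Hypothetical nearest-neighbor match: the feature counts as a
        # road if it lies close to any stored road sample.
        dists = np.linalg.norm(self.road_features - feature, axis=1)
        return bool(dists.min() < threshold)
```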
2) Calculating the similarity of adjacent images
The adaptive generation of the classifier when new samples appear relies on the result of a similarity calculation over adjacent images. First, the color histogram of each image is extracted:

H(P) = [h(x_1), h(x_2), \ldots, h(x_n)]    (1)

h(x_i) = \frac{S(x_i)}{\sum_{j} S(x_j)}    (2)

where S(x_i) is the number of pixels of the i-th color appearing in the image and \sum_{j} S(x_j) is the total number of pixels.
For example, a gray histogram is a function of gray level that gives the number of pixels in an image at each gray level, reflecting how frequently each gray level occurs. A color histogram is the higher-dimensional analogue: it counts the frequency of occurrence of each color, i.e., the probability distribution of colors. Histograms normalize well, the similarity of two images of different resolutions can be computed directly from their histograms, and the computation is cheap.
The histogram similarity can be calculated with the correlation, the chi-square statistic, the intersection coefficient, or the Bhattacharyya distance.

Preferably, the Bhattacharyya distance is used to calculate the similarity. For example, given image A and image B, first compute the histograms H_A and H_B of the two images, then the Bhattacharyya distance of the two histograms:

d(H_A, H_B) = \sqrt{1 - \frac{1}{\sqrt{\bar{H}_A \bar{H}_B\, n^2}} \sum_{i} \sqrt{H_A(i)\, H_B(i)}}    (3)

where H_A(i) and H_B(i) represent the i-th color histogram bins of images A and B respectively, \bar{H}_A and \bar{H}_B are the histogram means, and n is the number of bins in the histogram; d ∈ [0, 1], and a smaller distance indicates higher similarity.
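A minimal sketch of this similarity test, assuming OpenCV for the histogram computation (the invention names no particular implementation; the 32 bins per channel and the BGR value ranges are illustrative choices):

```python
import cv2
import numpy as np

def histogram_distance(img_a: np.ndarray, img_b: np.ndarray,
                       bins: int = 32) -> float:
    """Bhattacharyya distance between the color histograms of two
    BGR images; returns d in [0, 1], smaller means more similar."""
    hists = []
    for img in (img_a, img_b):
        h = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3,
                         [0, 256, 0, 256, 0, 256])
        cv2.normalize(h, h)  # put both histograms on a comparable scale
        hists.append(h)
    # HISTCMP_BHATTACHARYYA implements formula (3)
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_BHATTACHARYYA)
```

Two adjacent frames would then be judged similar when the returned distance falls below the chosen threshold (0.5 in the embodiment described later).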
3) Segmenting images and extracting image features
Superpixel segmentation is applied to the image: in the preprocessing stage, pixels that are adjacent in spatial position and similar in features such as color and texture are aggregated into superpixel blocks, and the image is then processed further with the superpixel as the unit.
Image feature extraction uses a computer to extract image information and decide whether each pixel or pixel block belongs to a given image feature. The image features are one or more of color features, texture features, shape features, and spatial relationship features.
The image features preferably combine color features and spatial relationship features, using the L*a*b* color space: the L component expresses the lightness of a pixel, with range [0, 100] running from pure black to pure white; a represents the axis from red to green, with range [127, -128]; and b represents the axis from yellow to blue, with range [127, -128]. A five-dimensional feature vector is extracted per pixel, namely the three L, a, b components and the (x, y) coordinates of the pixel; the mean and variance of the pixels contained in each superpixel block are then computed to form a ten-dimensional feature vector:
E_m = \frac{1}{n} \sum_{i=1}^{n} p_i    (4)

V_m = \frac{1}{n} \sum_{i=1}^{n} (p_i - E_m)^2    (5)

where E_m and V_m are respectively the mean and variance of each feature component, p_i is the feature value of the i-th pixel, and n is the number of pixels contained in the superpixel block.
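A sketch of this feature extraction, assuming scikit-image's SLIC for the superpixel segmentation and its L*a*b* conversion (the invention does not prescribe a particular segmentation algorithm or library; n_segments and compactness are illustrative parameters):

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def superpixel_features(rgb_image: np.ndarray, n_segments: int = 200):
    """Return superpixel labels and a (num_superpixels, 10) feature
    matrix: mean and variance of (L, a, b, x, y) per block, following
    formulas (4) and (5)."""
    lab = rgb2lab(rgb_image)
    labels = slic(rgb_image, n_segments=n_segments, compactness=10)
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # per-pixel 5-D feature: L, a, b, x, y
    feats = np.dstack([lab, xs, ys]).reshape(-1, 5)
    flat = labels.ravel()
    rows = []
    for sp in np.unique(flat):
        block = feats[flat == sp]
        rows.append(np.concatenate([block.mean(axis=0), block.var(axis=0)]))
    return labels, np.asarray(rows)
```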
4) Training the classifier
Because the classifier must be updated online, the number of training samples must be kept as small as possible to guarantee real-time recognition, and recognition accuracy must then be guaranteed on the premise of few training samples.
The classifier is trained on the basis of step 3) according to the calculation result of step 2). When the result of step 2) indicates similarity, the training samples of the classifier are updated to the recognition result of the previous classification; when it indicates dissimilarity, the training samples are updated to the result of matching against the sample pool constructed in step 1). Given the features and attribute labels of the superpixel blocks, a boosting algorithm is adopted. Its core idea is to train a sequence of different classifiers, i.e., weak classifiers, on the same training set, updating the sample weights in each round of weak classification: if a sample is classified correctly, its weight is decreased appropriately; otherwise it is increased. Finally the weak classifiers are combined to construct a stronger final classifier. The specific algorithm flow is as follows:
Given a training data set T = {(x_1, y_1), (x_2, y_2), \ldots, (x_i, y_i), \ldots, (x_N, y_N)}, where x_i is an input training sample, y_i belongs to the label set {-1, +1}, and N is the number of training samples.

Step 1: Initialize the weight distribution of the training data, giving every training sample the same initial weight 1/N:

D_1 = (w_{1,1}, w_{1,2}, \ldots, w_{1,N}), \quad w_{1,i} = \frac{1}{N}
Step 2: Iterate for m = 1, 2, \ldots, M:

a) Learn a basic classifier G_m(x) from the training data set weighted by the distribution D_m.

b) Calculate the classification error rate of G_m(x) on the training data set:

e_m = \sum_{i=1}^{N} w_{m,i}\, I(G_m(x_i) \neq y_i)

c) Calculate the coefficient \alpha_m of G_m(x), which expresses the importance of G_m(x) in the final classifier (purpose: obtain the weight this basic classifier takes in the final classifier):

\alpha_m = \frac{1}{2} \ln \frac{1 - e_m}{e_m}

d) Update the weight distribution of the training data set, D_{m+1} = (w_{m+1,1}, w_{m+1,2}, \ldots, w_{m+1,i}, \ldots, w_{m+1,N}):

w_{m+1,i} = \frac{w_{m,i}}{Z_m} \exp(-\alpha_m y_i G_m(x_i))

where Z_m is the normalization factor that makes D_{m+1} a probability distribution.
Step 3: Combine the weak classifiers into the final classifier:

G(x) = \operatorname{sign}\left(\sum_{m=1}^{M} \alpha_m G_m(x)\right)
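What follows is a minimal, self-contained sketch of Steps 1-3, using one-dimensional threshold stumps as the weak classifiers G_m(x) (an illustrative choice; the invention does not fix the form of the weak learner):

```python
import numpy as np

def adaboost_train(X: np.ndarray, y: np.ndarray, M: int = 20):
    """Steps 1-3 above; y holds labels in {-1, +1}."""
    N = len(y)
    w = np.full(N, 1.0 / N)                  # Step 1: uniform weights
    stumps, alphas = [], []
    for _ in range(M):                       # Step 2
        best = None
        for d in range(X.shape[1]):          # search every feature dim
            for t in np.unique(X[:, d]):     # candidate thresholds
                for s in (1.0, -1.0):        # stump polarity
                    pred = s * np.sign(X[:, d] - t)
                    pred[pred == 0] = s
                    err = w[pred != y].sum()        # error rate e_m
                    if best is None or err < best[0]:
                        best = (err, d, t, s, pred)
        err, d, t, s, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # coefficient
        w *= np.exp(-alpha * y * pred)       # re-weight the samples
        w /= w.sum()                         # Z_m normalization
        stumps.append((d, t, s))
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X: np.ndarray, stumps, alphas) -> np.ndarray:
    """Final classifier G(x) = sign(sum_m alpha_m G_m(x)) of Step 3."""
    score = np.zeros(len(X))
    for (d, t, s), a in zip(stumps, alphas):
        pred = s * np.sign(X[:, d] - t)
        pred[pred == 0] = s
        score += a * pred
    return np.sign(score)
```

Calling adaboost_train(features, labels) and then adaboost_predict(new_features, stumps, alphas) reproduces the strong classifier of Step 3.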
The classifier adaptively selects its training samples according to the Bhattacharyya distance of formula (3): when the Bhattacharyya distance is smaller than a set value, the training samples of the classifier are the image recognition result at time t; when it is larger than the set value, the training samples are the samples in the sample pool matched against the image at time t+1. Cycling in this way realizes the adaptive generation of the classifier.
The method has the advantage that the generated classifier adapts well to changes in weather, season, illumination, and the like, and accurately identifies roads under such changes; it removes the dependence on large numbers of training samples and on additional sensors, has good robustness, and can provide accurate auxiliary information for the driving of unmanned vehicles.
Drawings
FIG. 1 is a schematic diagram of the road samples contained in the sample pool.
Fig. 2 is a road sequence image acquired by an unmanned vehicle.
In the figure, (a)(b)(c), (d)(e)(f), and (g)(h)(i) are road images acquired on three different road segments.
FIG. 3 is a color histogram of (a) (b) (c) (d) (e) (f) (g) (h) (i) in FIG. 2.
FIG. 4 shows the classifier identification results of (a) (b) (c) (d) (e) (f) (g) (h) (i) in FIG. 2.
Detailed Description
To verify the effectiveness of the invention, the specific embodiment comprises two parts: acquiring the image data, and selecting the corresponding training samples according to the result of histogram matching between adjacent images, thereby completing the adaptive generation of the classifier.
Image data are acquired automatically by the monocular camera and then transmitted to the computer. Fig. 2 shows the images collected along the route driven in this embodiment, where (a)(b)(c), (d)(e)(f), and (g)(h)(i) were collected on three different road segments passed during driving. The adaptive generation of the classifier is illustrated with these nine images. First, superpixel segmentation is performed on the first image (Fig. 2(a)), and the feature of each superpixel block is matched against the sample pool; a block that matches successfully is assigned the label +1, and one that does not is assigned the label -1. A classifier G_a is then trained with these labeled training samples. At the same time the color histograms of Fig. 2(a) and Fig. 2(b) (i.e., Fig. 3(a) and Fig. 3(b)) are extracted and their similarity is calculated with the Bhattacharyya distance, giving d = 0.406. In this embodiment, two images are considered similar when d ∈ [0, 0.5) and dissimilar when d ∈ [0.5, 1].
Since Fig. 2(a) is thus similar to Fig. 2(b), classifier G_b depends on the recognition result of Fig. 2(a), and that recognition result, carrying its labels, is used as the training sample of G_b.
Classifier G_c is generated by the same method. Next, the color histograms of Fig. 2(c) and Fig. 2(d) (Fig. 3(c) and Fig. 3(d)) are extracted and their similarity is calculated with the Bhattacharyya distance, giving d = 0.781, so Fig. 2(c) and Fig. 2(d) are not similar. The generation of classifier G_d therefore depends on the sample pool: superpixel segmentation is performed on Fig. 2(d), the feature of each superpixel block is matched against the sample pool, a successful match is labeled +1 and a failure -1, and finally classifier G_d is trained with the labeled samples. By the same reasoning, classifiers G_e, G_f, G_h, and G_i depend on the recognition result of the previous classifier, while classifier G_g must depend on the sample pool. Fig. 4 shows the classifier recognition results for this traverse. Cycling in this way realizes the adaptive generation of the classifier.
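For illustration, hypothetical glue code for one step of this adaptive loop, reusing the sketches given earlier (SamplePool, histogram_distance, superpixel_features, adaboost_train, and adaboost_predict are the assumed names from those sketches; 0.5 is the similarity threshold of this embodiment):

```python
import numpy as np

D_THRESHOLD = 0.5  # similarity cut-off used in this embodiment

def adaptive_step(prev_img, curr_img, prev_classifier, pool):
    """Produce the classifier for curr_img from either the previous
    classifier's recognition result or a sample-pool match."""
    labels, feats = superpixel_features(curr_img)
    if prev_classifier is not None and \
            histogram_distance(prev_img, curr_img) < D_THRESHOLD:
        # similar scene: the previous classifier labels the superpixels
        y = adaboost_predict(feats, *prev_classifier)
    else:
        # scene changed (or first frame): label by sample-pool matching
        y = np.array([1 if pool.match(f) else -1 for f in feats])
    return adaboost_train(feats, y)  # classifier for the next frame
```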

Claims (8)

1. A vehicle-mounted monocular vision-based outdoor road self-adaptive classifier generation method is characterized by comprising the following steps:
1) constructing the sample pool
the sample pool is an m′ × n feature matrix, where m′ is the number of road types of different forms contained in the sample pool and n is the feature dimension of each road;
2) calculating the similarity of adjacent images
Firstly, extracting a color histogram of an adjacent image:
H(P) = [h(x_1), h(x_2), \ldots, h(x_n)]    (1)

h(x_i) = \frac{S(x_i)}{\sum_{j} S(x_j)}    (2)

wherein S(x_i) is the number of pixels of the i-th color appearing in the image and \sum_{j} S(x_j) is the total number of pixels;
calculating the similarity of color histograms of adjacent images, and judging whether the adjacent images are similar according to the calculated value;
3) segmenting images and extracting image features
after step 2) is finished, first performing superpixel segmentation on the image, and then extracting the image features of each superpixel block for the subsequent classifier training and road recognition;
4) training classifier
training a classifier on the basis of step 3) according to the calculation result of step 2); when the results of step 2) are similar: updating the training samples of the classifier to be the recognition result of the previous classification; when the results of step 2) are not similar: updating the training samples of the classifier to be the result of matching against the sample pool constructed in step 1), namely matching the features of each superpixel block with the sample pool of step 1), setting the label of a successfully matched block to +1 and the label of an unmatched block to -1, and finally training the classifier with the labeled samples; according to the features and attribute labels of each superpixel block, a boosting algorithm is adopted to train a plurality of weak classifiers, and the weak classifiers are combined to construct a stronger final classifier:
G(x) = \operatorname{sign}\left(\sum_{m=1}^{M} \alpha_m G_m(x)\right)

wherein G_m(x) is each weak classifier, \alpha_m is the weight of each weak classifier, and m indexes the iterations, i.e., the weak classifiers.
2. The method for generating an outdoor road adaptive classifier based on vehicle-mounted monocular vision according to claim 1, wherein the histogram similarity calculation method of step 2) is the correlation, the chi-square statistic, the intersection coefficient, or the Bhattacharyya distance.
3. The method for generating the outdoor road adaptive classifier based on vehicle-mounted monocular vision according to claim 1 or 2, wherein the histogram similarity calculation method of step 2) selects the Bhattacharyya distance:

d(H_A, H_B) = \sqrt{1 - \frac{1}{\sqrt{\bar{H}_A \bar{H}_B\, n^2}} \sum_{i} \sqrt{H_A(i)\, H_B(i)}}    (3)

wherein H_A(i) and H_B(i) represent the i-th color histogram bins of images A and B respectively, \bar{H}_A and \bar{H}_B are the histogram means, n is the number of bins in the histogram, and d ∈ [0, 1].
4. The method for generating the outdoor road adaptive classifier based on vehicle-mounted monocular vision according to claim 1 or 2, wherein the image feature of step 3) is one or more of a color feature, a texture feature, a shape feature, and a spatial relationship feature.
5. The method as claimed in claim 3, wherein the image feature of step 3) is one or more of a color feature, a texture feature, a shape feature or a spatial relationship feature.
6. The method for generating the outdoor road adaptive classifier based on vehicle-mounted monocular vision according to claim 1,2 or 5, wherein the image features in step 3) are color features and spatial relationship features, and the representation method is as follows:
E_m = \frac{1}{n} \sum_{i=1}^{n} p_i    (4)

V_m = \frac{1}{n} \sum_{i=1}^{n} (p_i - E_m)^2    (5)

wherein E_m and V_m are respectively the mean and variance over the three components of the L*a*b* color space and the position coordinates (x, y) of the pixels, p_i is the feature value of the i-th pixel, and n is the number of pixels contained in the superpixel block.
7. The method for generating the outdoor road adaptive classifier based on vehicle-mounted monocular vision according to claim 3, wherein the image features in step 3) are color features and spatial relationship features, and the representation method is as follows:
E_m = \frac{1}{n} \sum_{i=1}^{n} p_i    (4)

V_m = \frac{1}{n} \sum_{i=1}^{n} (p_i - E_m)^2    (5)

wherein E_m and V_m are respectively the mean and variance over the three components of the L*a*b* color space and the position coordinates (x, y) of the pixels, p_i is the feature value of the i-th pixel, and n is the number of pixels contained in the superpixel block.
8. The method for generating the adaptive classifier based on the on-vehicle monocular vision outdoor road according to claim 4, wherein the image features in step 3) are color features and spatial relationship features, and the representation method is as follows:
E_m = \frac{1}{n} \sum_{i=1}^{n} p_i    (4)

V_m = \frac{1}{n} \sum_{i=1}^{n} (p_i - E_m)^2    (5)

wherein E_m and V_m are respectively the mean and variance over the three components of the L*a*b* color space and the position coordinates (x, y) of the pixels, p_i is the feature value of the i-th pixel, and n is the number of pixels contained in the superpixel block.
CN201611227291.3A 2016-12-27 2016-12-27 Outdoor road self-adaptive classifier generation method based on vehicle-mounted monocular vision Active CN106650814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611227291.3A CN106650814B (en) 2016-12-27 2016-12-27 Outdoor road self-adaptive classifier generation method based on vehicle-mounted monocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611227291.3A CN106650814B (en) 2016-12-27 2016-12-27 Outdoor road self-adaptive classifier generation method based on vehicle-mounted monocular vision

Publications (2)

Publication Number Publication Date
CN106650814A CN106650814A (en) 2017-05-10
CN106650814B true CN106650814B (en) 2020-07-14

Family

ID=58832678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611227291.3A Active CN106650814B (en) 2016-12-27 2016-12-27 Outdoor road self-adaptive classifier generation method based on vehicle-mounted monocular vision

Country Status (1)

Country Link
CN (1) CN106650814B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392252A (en) * 2017-07-26 2017-11-24 上海城诗信息科技有限公司 Computer deep learning characteristics of image and the method for quantifying perceptibility
CN108694848A (en) * 2018-05-30 2018-10-23 深圳众厉电力科技有限公司 A kind of vehicle communication and navigation system
TWI696144B (en) * 2018-12-19 2020-06-11 財團法人工業技術研究院 Training method of image generator

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024030A (en) * 2010-11-30 2011-04-20 辽宁师范大学 Multi-classifier integration method based on maximum expected parameter estimation
CN104835196A (en) * 2015-05-12 2015-08-12 东华大学 Vehicular infrared image colorization and three-dimensional reconstruction method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8311287B2 (en) * 2008-09-25 2012-11-13 Microsoft Corporation Validation and correction of map data using oblique images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024030A (en) * 2010-11-30 2011-04-20 辽宁师范大学 Multi-classifier integration method based on maximum expected parameter estimation
CN104835196A (en) * 2015-05-12 2015-08-12 东华大学 Vehicular infrared image colorization and three-dimensional reconstruction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Image-based traffic scene understanding; Zhao Yedong; China Master's Theses Full-text Database, Engineering Science and Technology II; 2014-05-15; chapters 2-3 *
Research on moving vehicle detection and tracking algorithms based on video images; Han Yi; China Master's Theses Full-text Database, Information Science and Technology; 2016-02-15; pp. 32-33 *

Also Published As

Publication number Publication date
CN106650814A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
CN111626217B (en) Target detection and tracking method based on two-dimensional picture and three-dimensional point cloud fusion
Chabot et al. Deep manta: A coarse-to-fine many-task network for joint 2d and 3d vehicle analysis from monocular image
US10733755B2 (en) Learning geometric differentials for matching 3D models to objects in a 2D image
CN107563372B (en) License plate positioning method based on deep learning SSD frame
CN108304873B (en) Target detection method and system based on high-resolution optical satellite remote sensing image
CN108875608B (en) Motor vehicle traffic signal identification method based on deep learning
Kim et al. An efficient color space for deep-learning based traffic light recognition
CN107239730B (en) Quaternion deep neural network model method for intelligent automobile traffic sign recognition
CN111639564B (en) Video pedestrian re-identification method based on multi-attention heterogeneous network
CN110633632A (en) Weak supervision combined target detection and semantic segmentation method based on loop guidance
Monteiro et al. Tracking and classification of dynamic obstacles using laser range finder and vision
Wang et al. An overview of 3d object detection
Zelener et al. Cnn-based object segmentation in urban lidar with missing points
CN106650814B (en) Outdoor road self-adaptive classifier generation method based on vehicle-mounted monocular vision
Rabiee et al. IV-SLAM: Introspective vision for simultaneous localization and mapping
CN117058646B (en) Complex road target detection method based on multi-mode fusion aerial view
CN113408584A (en) RGB-D multi-modal feature fusion 3D target detection method
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN115620393A (en) Fine-grained pedestrian behavior recognition method and system oriented to automatic driving
Barodi et al. An enhanced artificial intelligence-based approach applied to vehicular traffic signs detection and road safety enhancement
Ghahremannezhad et al. Robust road region extraction in video under various illumination and weather conditions
Ghahremannezhad et al. Automatic road detection in traffic videos
CN117237884A (en) Interactive inspection robot based on berth positioning
CN116664851A (en) Automatic driving data extraction method based on artificial intelligence
CN114359493B (en) Method and system for generating three-dimensional semantic map for unmanned ship

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant