CN108229316A - Vehicle contour extraction method based on superpixel segmentation - Google Patents
- Publication number: CN108229316A (application CN201711219942.9A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- super
- vehicle
- image
- represent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a vehicle contour detection method based on superpixel segmentation, which realizes vehicle contour extraction at a small processor overhead and improves the accuracy of detailed vehicle information extraction. For the contour extraction problem, the method combines superpixel background differencing with region-affinity classification to achieve fast vehicle contour extraction. It offers high accuracy and high speed, avoids the large processing cost of conventional contour extraction, and thus solves the problem of fast vehicle contour extraction.
Description
Technical field
The present invention relates to a vehicle contour detection method based on superpixel segmentation, and in particular to a method that uses image recognition technology to process vehicle images in road scenes to obtain vehicle contours.
Background technology
Vehicle detection is the basis of vehicle violation detection systems, and vehicle contour extraction is in turn the basis of vehicle-detail judgments such as vehicle type and vehicle color. With the continuous rise of domestic car ownership, it has become more difficult for public security to locate suspect vehicles, so the requirements on violation detection systems have risen accordingly. Classifying vehicles by color and type can narrow the range of suspect vehicles and greatly reduce the workload of manual inspection and comparison. However, existing contour extraction methods occupy a large amount of system processing power and seriously affect the original functions of violation systems. To extract vehicle contours efficiently, a vehicle contour extraction method based on superpixel segmentation is proposed.
To solve the vehicle contour extraction problem, academia and industry at home and abroad have proposed many schemes. Those closest to the present invention include the following. Wu Guowei (Vehicle contour edge detection algorithm based on the Sobel operator [J]. Journal of Henan University of Science and Technology: Natural Science Edition, 2009, 30(6): 38-41) combines the actual characteristics of vehicles and, on the basis of the traditional Sobel operator, adds templates in six directions and detects vehicle edges together with a denoising algorithm; this gives better edge extraction than the traditional Sobel operator with a small amount of computation and fast speed, but it extracts all vehicle edges and cannot determine whether an edge is the outer contour of the vehicle, so its use in real scenes is limited. Ji Xiaopeng (Vehicle tracking method based on contour features and extended Kalman filtering [J]. Journal of Image and Graphics, 2011, 16(2): 267-272) segments multiple vehicles by matching sequences of contour-feature inflection points and tracks vehicles with a Kalman filter; this method meets real-time requirements well, but the segmented contours are coarse and vehicle shadows are not separated, which degrades vehicle color and type judgments, so it cannot be applied well in violation detection systems. A third method (Vehicle boundary contour extraction algorithm based on motion segmentation of image sequences [J]. Journal of Communication and Transportation Engineering, 2009(3): 117-121) can obtain the real vehicle region and its boundary contour, but it only applies when vehicles travel along a straight road and keep sufficient distance from one another, so it is unsuitable for traffic violation systems, and it needs parallel computation to meet real-time requirements.
In summary, current vehicle contour extraction schemes have the following shortcomings:
(1) operational efficiency in practical engineering is not considered; contour extraction algorithms occupy a large amount of CPU processing power, and their efficiency cannot meet actual demand;
(2) most methods that meet real-time requirements cannot extract accurate vehicle contours;
(3) some methods place high demands on the detection scene and generalize poorly.
Vehicle contour extraction is the basis of vehicle-detail judgments and determines the accuracy of vehicle color and type judgments. However, traffic scenes are complex and changeable, and in actual deployments this function cannot occupy excessive processor performance, so a fast and accurate vehicle contour extraction algorithm has always been a difficulty. The vehicle contour extraction strategy based on superpixel segmentation proposed in the present invention works well for moving-vehicle contour extraction in complex road scenes while occupying only a small amount of CPU processing power, and thus meets the operational demands of actual scenes.
Summary of the invention
To solve the vehicle contour extraction problem, the present invention provides a vehicle contour extraction method based on superpixel segmentation. The scheme adopted to solve the technical problem comprises the following steps:
Step 1: Extract the background image of the current scene, denoted F_BG;
Step 2: Acquire the image of the current traffic scene through a camera, denoted F;
Step 3: Obtain the vehicle region R using the vehicle detection method based on HOG+SVM;
Step 4: Copy the sub-image of region R from image F, denoted F_1; copy the sub-image of region R from image F_BG, denoted F_bg;
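Steps 1 to 4 amount to cropping the detected region R out of both the current frame and the background frame. A minimal NumPy sketch, with hypothetical stand-in arrays for F and F_BG and an assumed (x, y, w, h) box format for R:

```python
import numpy as np

# Hypothetical stand-ins: F is the current frame, F_BG the background frame.
F = np.arange(100, dtype=np.uint8).reshape(10, 10)
F_BG = np.zeros((10, 10), dtype=np.uint8)

# Region R from the HOG+SVM detector of step 3, assumed as an (x, y, w, h) box.
x, y, w, h = 2, 3, 4, 5

# Step 4: copy the sub-images of region R from both frames.
F1 = F[y:y + h, x:x + w].copy()
Fbg = F_BG[y:y + h, x:x + w].copy()
```

The `.copy()` keeps the crops independent of the full frames, so later per-pixel processing cannot alter F or F_BG.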
Step 5: Perform superpixel segmentation on image F_1 by the SLIC method, obtaining the superpixel region set S = {R_i | i = 1, 2, 3, ..., n}, where R_i denotes the i-th superpixel region in set S and n denotes the number of superpixels in set S;
Step 6: Compute the superpixel difference mean diff_i of region R_i according to formulas (1)~(3), obtaining the superpixel difference mean set D_sal = {diff_i | i = 1, 2, 3, ..., n}, specifically:
diff_i = (1/N_i) * Σ_{(x_p, y_q) ∈ R_i} d_pq (1)
d_pq = F_1.f_Lab(x_p, y_q) - F_bg.f_Lab(x_p, y_q) (2)
f_Lab(x_p, y_q) = L(x_p, y_q) + a(x_p, y_q) + b(x_p, y_q) (3)
where N_i denotes the number of pixels in region R_i, d_pq denotes the difference value of pixel (x_p, y_q) between image F_1 and image F_bg, x_p and y_q denote the abscissa and ordinate of a pixel in R_i, f_Lab(x_p, y_q) denotes the sum of the channel values in the Lab color space of the pixel at coordinate (x_p, y_q), and L(x_p, y_q), a(x_p, y_q), and b(x_p, y_q) denote the values of point (x_p, y_q) on the respective Lab channels;
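Step 6 can be sketched as a per-superpixel mean of the Lab-sum difference. The label map below is a hypothetical stand-in for a real SLIC result (e.g. `skimage.segmentation.slic`), and formula (1) is read as the per-region mean of d_pq, an assumption consistent with the stated definitions of N_i and d_pq:

```python
import numpy as np

# f_Lab values (L + a + b per pixel, formula (3)) for the vehicle crop F_1
# and the background crop F_bg -- small hypothetical arrays.
f_lab_F1 = np.array([[10.0, 10.0, 50.0, 50.0]] * 4)
f_lab_Fbg = np.full((4, 4), 10.0)

# Hypothetical SLIC output: label 0 covers the left half, label 1 the right.
labels = np.array([[0, 0, 1, 1]] * 4)

d = f_lab_F1 - f_lab_Fbg                  # d_pq, formula (2)
# diff_i: per-region mean of d_pq (the reading of formula (1) assumed here)
diff = np.array([d[labels == i].mean() for i in range(labels.max() + 1)])
```

Here the left superpixel matches the background (diff near 0) while the right one differs strongly, which is exactly the signal the later foreground thresholding relies on.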
Step 7: Compute the foreground difference value set S_froe = {s_i | i = 1, 2, 3, ..., n} according to formula (4), where s_i denotes the foreground difference value of region R_i, and λ denotes a foreground judgment threshold given in advance;
Step 8: Compute the adjacency matrix M_adj of the superpixels in image F_1, and count the superpixels adjacent to each region R_i in F_1, denoted as the superpixel adjacency count set N_adj = {adj_i | i = 1, 2, 3, ..., n}, where adj_i denotes the number of superpixels adjacent to R_i; then count the superpixels adjacent to R_i whose foreground difference value exceeds s_i, obtaining num_i and the adjacent saliency count set N_sal = {num_i | i = 1, 2, 3, ..., n};
Step 9: Compute the saliency value t_i of region R_i according to formulas (5) and (6), obtaining the saliency value set S_sal = {t_i | i = 1, 2, 3, ..., n}, where s_avg denotes the difference mean of image F_1;
Step 10: Compute the set of superpixels on the border of image F_1, S_bd = {R_j | j = 1, 2, 3, ..., n_bd}, where n_bd denotes the number of superpixels on the border of F_1;
Step 11: Compute the superpixel background adjacency matrix M_bd-adj of image F_1 according to formulas (7)~(9):
M_bd-adj = M_adj + M_bd (7)
M_bd = [a_ij]_(n×n) (8)
where M_bd denotes the superpixel background correlation matrix of F_1, and a_ij denotes the correlation between regions R_i and R_j;
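Steps 10 and 11 can be sketched as follows. The patent text does not reproduce formula (9) for a_ij, so as an illustrative assumption the background matrix M_bd below simply links every pair of distinct border superpixels, and M_adj is stubbed to zeros:

```python
import numpy as np

# Hypothetical 4x4 label map; region 3 is interior, regions 0-2 touch the border.
labels = np.array([[0, 0, 1, 1],
                   [0, 3, 3, 1],
                   [0, 3, 3, 1],
                   [2, 2, 2, 1]])
n = labels.max() + 1

# Step 10: superpixels appearing on the image border form S_bd.
border = np.concatenate([labels[0], labels[-1], labels[:, 0], labels[:, -1]])
S_bd = sorted(set(border.tolist()))

# Step 11, formulas (7)-(8): M_bd-adj = M_adj + M_bd. The a_ij of formula (9)
# is not reproduced in the text, so this M_bd is an assumed placeholder that
# links every pair of distinct border superpixels.
M_bd = np.zeros((n, n), dtype=int)
for i in S_bd:
    for j in S_bd:
        if i != j:
            M_bd[i, j] = 1
M_adj = np.zeros((n, n), dtype=int)       # adjacency from step 8, stubbed to zeros
M_bd_adj = M_adj + M_bd                   # formula (7)
```

The interior region (label 3) gets no background links, matching the intuition that border superpixels are the likeliest background.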
Step 12: Compute the affinity matrix M_affi between superpixels according to formulas (10)~(13):
M_affi = [v_ij]_(n×n) (10)
dis_pq = f_Lab(x_p, y_q) (13)
where v_ij denotes the affinity between regions R_i and R_j, f(R_i) and f(R_j) denote the color means of regions R_i and R_j in Lab space, N_i denotes the number of pixels in R_i, and dis_pq denotes the sum of the channel values in the Lab color space of the pixel at coordinate (x_p, y_q), which can be obtained by formula (3);
Step 13: According to the affinity matrix M_affi, sort and classify the superpixels using the Manifold Ranking method, obtaining the vehicle saliency map F_sal;
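The patent names Manifold Ranking in step 13 but does not spell out its equations; the sketch below uses the standard closed form f = (I - alpha*S)^(-1) y with the symmetrically normalized affinity S = D^(-1/2) W D^(-1/2), which is an assumption about the intended variant:

```python
import numpy as np

# Toy affinity matrix W standing in for M_affi (v_ij between three superpixels).
W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
y = np.array([1.0, 0.0, 0.0])   # query vector: superpixel R_0 seeded as foreground
alpha = 0.99                    # propagation weight, a common default

# Closed-form manifold ranking: f = (I - alpha * S)^(-1) y,
# with S = D^(-1/2) W D^(-1/2) and D the degree matrix of W.
D_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
S = D_inv_sqrt @ W @ D_inv_sqrt
f = np.linalg.solve(np.eye(len(W)) - alpha * S, y)  # ranking scores t_i
```

Scores propagate along strong affinities: the superpixel tightly linked to the seeded query ranks above the weakly linked one, which is what turns pairwise affinities into a saliency map.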
Step 14: Apply OTSU binarization to F_sal, then perform the morphological operations of dilation and erosion, obtaining the vehicle contour map F_car.
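Step 14's OTSU binarization can be sketched in pure NumPy; in practice one would likely call OpenCV's `cv2.threshold` with `THRESH_OTSU` followed by `cv2.dilate` and `cv2.erode`, and the morphological step is omitted from this sketch:

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold that maximizes between-class variance (OTSU)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Hypothetical saliency map: a bright vehicle blob on a dark background.
F_sal = np.zeros((6, 6), dtype=np.uint8)
F_sal[1:5, 1:5] = 200
t = otsu_threshold(F_sal)
F_car = (F_sal >= t).astype(np.uint8)     # binarized contour mask
```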
Advantages of the present invention
The invention realizes vehicle contour extraction at a small processor overhead and improves the accuracy of detailed vehicle information extraction. For the contour extraction problem, it combines superpixel background differencing with region-affinity classification to achieve fast vehicle contour extraction, with high accuracy and high speed, avoiding the large processing cost of conventional contour extraction and solving the problem of fast vehicle contour extraction.
Description of the drawings
Fig. 1 is the traffic scene image chosen for the embodiment of the present invention.
Fig. 2 is the vehicle region sub-image obtained in step 4 of the present invention.
Fig. 3 is the regional background sub-image obtained in step 4 of the present invention.
Fig. 4 is the vehicle saliency map obtained in step 13 of the present invention.
Fig. 5 is the vehicle contour map obtained in step 14 of the present invention.
Specific embodiment
The specific embodiment of the vehicle contour detection method based on superpixel segmentation according to the present invention is elaborated below with reference to an example. The steps are as follows:
Step 1: Extract the background image of the current scene, denoted F_BG;
Step 2: Acquire the image of the current traffic scene through a camera, denoted F;
Step 3: Obtain the vehicle region R using the vehicle detection method based on HOG+SVM;
Step 4: Copy the sub-image of region R from image F, denoted F_1; copy the sub-image of region R from image F_BG, denoted F_bg;
Step 5: Perform superpixel segmentation on image F_1 by the SLIC method, obtaining the superpixel region set S = {R_i | i = 1, 2, 3, ..., n}, where R_i denotes the i-th superpixel region in set S and n denotes the number of superpixels in set S;
Step 6: Compute the superpixel difference mean diff_i of region R_i according to formulas (1)~(3), obtaining the superpixel difference mean set D_sal = {diff_i | i = 1, 2, 3, ..., n}, specifically:
diff_i = (1/N_i) * Σ_{(x_p, y_q) ∈ R_i} d_pq (1)
d_pq = F_1.f_Lab(x_p, y_q) - F_bg.f_Lab(x_p, y_q) (2)
f_Lab(x_p, y_q) = L(x_p, y_q) + a(x_p, y_q) + b(x_p, y_q) (3)
where N_i denotes the number of pixels in R_i, d_pq denotes the difference value of pixel (x_p, y_q) between image F_1 and image F_bg, x_p and y_q denote the abscissa and ordinate of a pixel in R_i, f_Lab(x_p, y_q) denotes the sum of the channel values in the Lab color space of the pixel at coordinate (x_p, y_q), and L(x_p, y_q), a(x_p, y_q), and b(x_p, y_q) denote the values of point (x_p, y_q) on the respective Lab channels;
Step 7: Compute the foreground difference value set S_froe = {s_i | i = 1, 2, 3, ..., n} according to formula (4), where s_i denotes the foreground difference value of region R_i, and λ denotes a foreground judgment threshold given in advance; in this example, λ = 50;
Step 8: Compute the adjacency matrix M_adj of the superpixels in image F_1, and count the superpixels adjacent to each region R_i in F_1, denoted as the superpixel adjacency count set N_adj = {adj_i | i = 1, 2, 3, ..., n}, where adj_i denotes the number of superpixels adjacent to R_i; then count the superpixels adjacent to R_i whose foreground difference value exceeds s_i, obtaining num_i and the adjacent saliency count set N_sal = {num_i | i = 1, 2, 3, ..., n};
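With the embodiment's λ = 50, step 7 can be read as zeroing superpixels whose difference mean does not exceed the threshold; since formula (4) is not reproduced in the text, this reading is an assumption:

```python
import numpy as np

lam = 50.0                                  # the embodiment's threshold
diff = np.array([12.0, 73.5, 50.0, 180.2])  # hypothetical diff_i values
s = np.where(diff > lam, diff, 0.0)         # s_i: kept only above the threshold
```

Note the strict comparison: a superpixel sitting exactly at the threshold is treated as background under this assumed reading.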
Step 9: Compute the saliency value t_i of region R_i according to formulas (5) and (6), obtaining the saliency value set S_sal = {t_i | i = 1, 2, 3, ..., n}, where s_avg denotes the difference mean of image F_1;
Step 10: Compute the set of superpixels on the border of image F_1, S_bd = {R_j | j = 1, 2, 3, ..., n_bd}, where n_bd denotes the number of superpixels on the border of F_1;
Step 11: Compute the superpixel background adjacency matrix M_bd-adj of image F_1 according to formulas (7)~(9):
M_bd-adj = M_adj + M_bd (7)
M_bd = [a_ij]_(n×n) (8)
where M_bd denotes the superpixel background correlation matrix of F_1, and a_ij denotes the correlation between regions R_i and R_j;
Step 12: Compute the affinity matrix M_affi between superpixels according to formulas (10)~(13):
M_affi = [v_ij]_(n×n) (10)
dis_pq = f_Lab(x_p, y_q) (13)
where v_ij denotes the affinity between regions R_i and R_j, f(R_i) and f(R_j) denote the color means of regions R_i and R_j in Lab space, N_i denotes the number of pixels in R_i, and dis_pq denotes the sum of the channel values in the Lab color space of the pixel at coordinate (x_p, y_q), which can be obtained by formula (3);
Step 13: According to the affinity matrix M_affi, sort and classify the superpixels using the Manifold Ranking method, obtaining the vehicle saliency map F_sal;
Step 14: Apply OTSU binarization to image F_sal, then perform the morphological operations of dilation and erosion, obtaining the vehicle contour map F_car.
The content described in the embodiments of this specification is merely an enumeration of the forms in which the inventive concept may be realized. The protection scope of the present invention shall not be regarded as limited to the specific forms stated in the embodiments; it also covers equivalent technical means conceivable by those skilled in the art based on the inventive concept.
Claims (1)
1. A vehicle contour detection method based on superpixel segmentation, comprising the following steps:
Step 1: Extract the background image of the current scene, denoted F_BG;
Step 2: Acquire the image of the current traffic scene through a camera, denoted F;
Step 3: Obtain the vehicle region R using the vehicle detection method based on HOG+SVM;
Step 4: Copy the sub-image of region R from image F, denoted F_1; copy the sub-image of region R from image F_BG, denoted F_bg;
Step 5: Perform superpixel segmentation on image F_1 by the SLIC method, obtaining the superpixel region set S = {R_i | i = 1, 2, 3, ..., n}, where R_i denotes the i-th superpixel region in set S and n denotes the number of superpixels in set S;
Step 6: Compute the superpixel difference mean diff_i of region R_i according to formulas (1)~(3), obtaining the superpixel difference mean set D_sal = {diff_i | i = 1, 2, 3, ..., n}, specifically:
diff_i = (1/N_i) * Σ_{(x_p, y_q) ∈ R_i} d_pq (1)
d_pq = F_1.f_Lab(x_p, y_q) - F_bg.f_Lab(x_p, y_q) (2)
f_Lab(x_p, y_q) = L(x_p, y_q) + a(x_p, y_q) + b(x_p, y_q) (3)
where N_i denotes the number of pixels in R_i, d_pq denotes the difference value of pixel (x_p, y_q) between image F_1 and image F_bg, x_p and y_q denote the abscissa and ordinate of a pixel in R_i, f_Lab(x_p, y_q) denotes the sum of the channel values in the Lab color space of the pixel at coordinate (x_p, y_q), and L(x_p, y_q), a(x_p, y_q), and b(x_p, y_q) denote the values of point (x_p, y_q) on the respective Lab channels;
Step 7: Compute the foreground difference value set S_froe = {s_i | i = 1, 2, 3, ..., n} according to formula (4), where s_i denotes the foreground difference value of region R_i, and λ denotes a foreground judgment threshold given in advance;
Step 8: Compute the adjacency matrix M_adj of the superpixels in image F_1, and count the superpixels adjacent to each region R_i in F_1, denoted as the superpixel adjacency count set N_adj = {adj_i | i = 1, 2, 3, ..., n}, where adj_i denotes the number of superpixels adjacent to R_i; then count the superpixels adjacent to R_i whose foreground difference value exceeds s_i, obtaining num_i and the adjacent saliency count set N_sal = {num_i | i = 1, 2, 3, ..., n};
Step 9: Compute the saliency value t_i of region R_i according to formulas (5) and (6), obtaining the saliency value set S_sal = {t_i | i = 1, 2, 3, ..., n}, where s_avg denotes the difference mean of image F_1;
Step 10: Compute the set of superpixels on the border of image F_1, S_bd = {R_j | j = 1, 2, 3, ..., n_bd}, where n_bd denotes the number of superpixels on the border of F_1;
Step 11: Compute the superpixel background adjacency matrix M_bd-adj of image F_1 according to formulas (7)~(9):
M_bd-adj = M_adj + M_bd (7)
M_bd = [a_ij]_(n×n) (8)
where M_bd denotes the superpixel background correlation matrix of F_1, and a_ij denotes the correlation between regions R_i and R_j;
Step 12: Compute the affinity matrix M_affi between superpixels according to formulas (10)~(13):
M_affi = [v_ij]_(n×n) (10)
dis_pq = f_Lab(x_p, y_q) (13)
where v_ij denotes the affinity between regions R_i and R_j, f(R_i) and f(R_j) denote the color means of regions R_i and R_j in Lab space, N_i denotes the number of pixels in R_i, and dis_pq denotes the sum of the channel values in the Lab color space of the pixel at coordinate (x_p, y_q), which can be obtained by formula (3);
Step 13: According to the affinity matrix M_affi, sort and classify the superpixels using the Manifold Ranking method, obtaining the vehicle saliency map F_sal;
Step 14: Apply OTSU binarization to image F_sal, then perform the morphological operations of dilation and erosion, obtaining the vehicle contour map F_car.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711219942.9A CN108229316B (en) | 2017-11-28 | 2017-11-28 | Vehicle contour extraction method based on superpixel segmentation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711219942.9A CN108229316B (en) | 2017-11-28 | 2017-11-28 | Vehicle contour extraction method based on superpixel segmentation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108229316A true CN108229316A (en) | 2018-06-29 |
CN108229316B CN108229316B (en) | 2020-05-12 |
Family
ID=62653048
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711219942.9A Active CN108229316B (en) | 2017-11-28 | 2017-11-28 | Vehicle contour extraction method based on superpixel segmentation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108229316B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109409208A (en) * | 2018-09-10 | 2019-03-01 | 东南大学 | A kind of vehicle characteristics extraction and matching process based on video |
CN111862152A (en) * | 2020-06-30 | 2020-10-30 | 西安工程大学 | Moving target detection method based on interframe difference and super-pixel segmentation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103679719A (en) * | 2013-12-06 | 2014-03-26 | 河海大学 | Image segmentation method |
CN104036500A (en) * | 2014-05-30 | 2014-09-10 | 西安交通大学 | Fast double-cycle level set method for narrowband background detection |
CN105760886A (en) * | 2016-02-23 | 2016-07-13 | 北京联合大学 | Image scene multi-object segmentation method based on target identification and saliency detection |
CN105787481A (en) * | 2016-04-05 | 2016-07-20 | 湖南人文科技学院 | Target detection algorithm based on targeted potential areas analysis and application thereof |
US20170178336A1 (en) * | 2015-12-16 | 2017-06-22 | General Electric Company | Systems and methods for hair segmentation |
- 2017-11-28: application CN201711219942.9A filed; granted as CN108229316B (active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103679719A (en) * | 2013-12-06 | 2014-03-26 | 河海大学 | Image segmentation method |
CN104036500A (en) * | 2014-05-30 | 2014-09-10 | 西安交通大学 | Fast double-cycle level set method for narrowband background detection |
US20170178336A1 (en) * | 2015-12-16 | 2017-06-22 | General Electric Company | Systems and methods for hair segmentation |
CN105760886A (en) * | 2016-02-23 | 2016-07-13 | 北京联合大学 | Image scene multi-object segmentation method based on target identification and saliency detection |
CN105787481A (en) * | 2016-04-05 | 2016-07-20 | 湖南人文科技学院 | Target detection algorithm based on targeted potential areas analysis and application thereof |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109409208A (en) * | 2018-09-10 | 2019-03-01 | 东南大学 | A kind of vehicle characteristics extraction and matching process based on video |
CN111862152A (en) * | 2020-06-30 | 2020-10-30 | 西安工程大学 | Moving target detection method based on interframe difference and super-pixel segmentation |
CN111862152B (en) * | 2020-06-30 | 2024-04-05 | 西安工程大学 | Moving target detection method based on inter-frame difference and super-pixel segmentation |
Also Published As
Publication number | Publication date |
---|---|
CN108229316B (en) | 2020-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107767383B (en) | Road image segmentation method based on superpixels | |
CN103871079A (en) | Vehicle tracking method based on machine learning and optical flow | |
CN104036246A (en) | Lane line positioning method based on multi-feature fusion and polymorphism mean value | |
CN104850850A (en) | Binocular stereoscopic vision image feature extraction method combining shape and color | |
CN102842037A (en) | Method for removing vehicle shadow based on multi-feature fusion | |
CN103136528A (en) | Double-edge detection based vehicle license plate identification method | |
CN103106409A (en) | Composite character extraction method aiming at head shoulder detection | |
CN108229316A (en) | A kind of vehicle's contour extracting method based on super-pixel segmentation | |
Zhang et al. | Road marking segmentation based on siamese attention module and maximum stable external region | |
Wei et al. | Detection of lane line based on Robert operator | |
Wang et al. | Lane detection based on two-stage noise features filtering and clustering | |
CN105184293B (en) | Vehicle-logo location method based on salient region detection | |
CN102509308A (en) | Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection | |
Wang et al. | Lane detection algorithm based on density clustering and RANSAC | |
CN114708560B (en) | YOLOX algorithm-based illegal parking detection method and system | |
CN104573703A (en) | Method for quickly identifying power transmission line based on partial derivative distribution and boundary strategy | |
Deng et al. | UMiT-Net: a U-shaped mix-transformer network for extracting precise roads using remote sensing images | |
CN103679156A (en) | Automatic identification and tracking method for various kinds of moving objects | |
Yingyong et al. | Research on algorithm for automatic license plate recognition system | |
Shiru et al. | Research on multi-feature front vehicle detection algorithm based on video image | |
Kapileswar et al. | Automatic traffic monitoring system using lane centre edges | |
Li et al. | Progressive probabilistic hough transform based nighttime lane line detection for micro-traffic road | |
Pan et al. | A new method of vehicle license plate location under complex scenes | |
Lan et al. | Research on the license plate recognition based on image processing | |
CN101377853B (en) | Method for extracting vehicle from colorful video image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||