CN106023270A - Video vehicle detection method based on locally symmetric features - Google Patents
- Publication number
- CN106023270A (application CN201610338718.0A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- point
- shadow
- characteristic
- corner
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features › G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/20—Special algorithmic details › G06T2207/20081—Training; Learning
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/20—Special algorithmic details › G06T2207/20092—Interactive image processing based on input by user › G06T2207/20104—Interactive definition of region of interest [ROI]
Abstract
The invention discloses a video vehicle detection method based on locally symmetric features. The method describes the vehicle target with a set of local invariant features, which effectively avoids the segmentation problem. Compared with the prior art, the symmetric features of the vehicle are used as an important clue for clustering the local features, so that the center position of the vehicle can be located while avoiding the high algorithmic complexity of conventional clustering algorithms. The method offers high detection accuracy, a simple processing flow, and good real-time performance, and can effectively detect and identify vehicle targets in both static images and video; it therefore has broad application prospects.
Description
Technical Field
The invention belongs to the technical field of video detection, and particularly relates to a video vehicle detection method based on locally symmetric features.
Background
With rapid economic development and the rapid growth in the number of vehicles, both developed and developing countries are beset by traffic problems. How to acquire traffic information, guide traffic effectively, relieve congestion, and reduce traffic accidents has therefore become increasingly important. Video-based vehicle detection, an important research topic in the fields of computer vision and image processing, has attracted growing attention for its convenience and speed.
To detect and identify a vehicle target, the target usually must first be extracted. A common extraction method separates the target from the traffic background using the motion characteristics of the vehicle; however, this approach is easily affected by changes in external conditions such as lighting, weather, and vehicle occlusion, and separating a moving target from a complex background is very difficult. Another method segments the vehicle using its low-level features (such as edge, color, and shape features), but it is difficult to obtain an ideal segmentation in this way. Both methods rely mainly on the global characteristics of the vehicle and inevitably segment suspected vehicle targets, and the accuracy of the segmentation directly affects the recognition result. Moreover, in real traffic scenes, moving vehicles are easily affected by factors such as weather and illumination changes, congestion, occlusion, and shadows, which makes the target vehicle even harder to segment.
Disclosure of Invention
To address the defects and shortcomings of the prior art, the invention provides a video vehicle detection method based on locally symmetric features.
To achieve this object, the invention adopts the following technical solution: a video vehicle detection method based on locally symmetric features, comprising the following steps:
Step one: manually set an ROI (region of interest) along the lane line on the initial frame and record the coordinates of each pixel point on the boundary of the ROI;
Step two: inside the ROI, compute the feature corner points p_i of the video image in the current frame and record the position coordinates of each p_i;
Step three: taking each feature corner point obtained in step two as the center, construct a square region for each and use a feature description operator to construct the feature vector of each feature corner point;
Step four: construct the horizontally symmetric corner point q_i of each feature corner point and its feature vector;
Step five: for each feature corner point p_i, calculate the distance between its feature vector and the feature vectors of the horizontally symmetric corner points q_j of all other feature corner points; denote the minimum distance Min(p_i) and the next-smallest distance MinSec(p_i), where Min(p_i) is calculated as follows:
wherein V_i denotes the feature vector of the i-th feature corner point, V_{q_j} denotes the feature vector of the j-th horizontally symmetric corner point q_j, and q_j is the horizontally symmetric corner point of feature corner point p_j;
Step six: if feature corner point p_i satisfies the matching condition, then p_i and p_j form a symmetric corner pair, where p_j is the feature corner point corresponding to the horizontally symmetric corner point q_j at the minimum distance;
Step seven: traverse all feature corner points p_i, repeating steps five and six until all symmetric corner pairs are found;
Step eight: denote each symmetric corner pair and obtain the x coordinate of the center position of each pair; compute the statistical histogram of these center positions, take its peak as the initial position x_vehicle of the candidate vehicle center line, and calculate the variance of the statistical histogram;
Step nine: judge whether each symmetric corner pair belongs to the same vehicle: if the x coordinate of the center position of the pair satisfies formula (1.3), the pair is retained; otherwise it is deleted;
Step ten: compute the mean μ_vehicle of the center positions of all symmetric corner pairs retained in step nine; μ_vehicle is the center-line position of the candidate vehicle;
Step eleven: select several frames of the current video, manually select the shadow region at the bottom of the vehicle, model it using the geometric characteristics, brightness, and color information of the shadow, and train the mean μ_shadow and variance σ_shadow of the shadow samples, together with the number l of pixels in the x direction and the number h of pixels in the y direction of the shadow region;
Step twelve: use a Gaussian mixture model to test the pixels of the suspected vehicle region on both sides of the center line, where the suspected vehicle region is x ∈ [μ_vehicle − l, μ_vehicle + l], as shown in formula (1.4), in which p_i denotes a measured pixel in the image and T_shadow is the mean of G_shadow(p_i) over the shadow sample set. If a measured pixel satisfies formula (1.5), it is judged to be a shadow point. When the shadow points are continuous and satisfy |N_x − l| < 0.2·l and |N_y − h| < 0.1·h, where N_x and N_y are the numbers of consecutive shadow points in the x and y directions respectively, the center line corresponding to the shadow region is judged to be the center line of a vehicle; otherwise it is not, and detection ends;
G_shadow(p_i) > T_shadow    (1.5)
Step thirteen: assume the vehicle target region can be represented as R = (l_R, r_R, u_R, b_R), where l_R and r_R are the left and right boundary values of the target region centered on μ_vehicle, and u_R and b_R are its upper and lower boundary values. The values of l_R, r_R, u_R, and b_R are determined as follows:
Calculation of b_R: count the shadow-region pixels at the bottom of the vehicle row by row and take the row with the maximum count as the width of b_R; with the width of b_R as the boundary, use a vertical Sobel operator to extract the image edges in the ROI above the vehicle-bottom shadow region, and take the center line between the vehicle rear bumper and the vehicle-bottom shadow region as the position of b_R;
Calculation of r_R and l_R: r_R and l_R are determined by the position of the vehicle center line and the width of b_R, as shown in formula (1.6);
l_R = μ_vehicle − width(b_R)/2,  r_R = μ_vehicle + width(b_R)/2    (1.6)
Calculation of u_R: assume the vehicle height is h; the height and width satisfy the proportional relationship h = γ(r_R − l_R), where γ is obtained by training on a set of vehicle data. Use a vertical Sobel operator to extract the horizontal edges of the image in the range [l_R, r_R] and compute the horizontal projection of the gray-level image; letting P(y) denote the horizontal-projection histogram distribution, the maximum of P(y) in the range y ∈ [b_R + 0.5h, b_R + 1.5h] gives the value of u_R;
By applying the above method, the values of b_R, l_R, r_R, and u_R are obtained in turn, thereby determining the target vehicle.
Further, the feature corner extraction method in step two is the Harris corner extraction algorithm.
Further, the feature vector V_i of each feature corner point in step three is constructed as follows: use a Haar wavelet template to calculate the response values of the square-region image, obtaining the Haar wavelet responses along the X and Y directions respectively, and count the responses ∑dx, ∑dy, ∑|dx|, and ∑|dy| to obtain the feature vector of each feature corner point p_i.
Further, the square region constructed in step three has size 5 × 5.
Further, the Haar wavelet template in step three has size 2 × 2.
Further, the horizontally symmetric corner point q_i of each feature corner point in step four and its feature vector are constructed as follows:
Compared with the prior art, the invention describes the vehicle target with a set of local invariant features, which effectively avoids the segmentation problem.
Drawings
Fig. 1 shows the ROI manually set along the lane line.
Fig. 2 is a schematic structural diagram of an image region and a Haar wavelet template for constructing a Harris corner feature vector, where (a) is a 5 × 5 image region centered on a Harris corner, and (b) and (c) are Haar wavelet response templates in the X direction and the Y direction, respectively.
Fig. 3 is a schematic diagram of parameters of a vehicle target area, where (a) is an original image of a vehicle, and (b) is a horizontal edge extraction and horizontal projection image of the vehicle.
Fig. 4 shows a result of detection of a vehicle target.
Detailed Description
The invention is further described with reference to the following figures and detailed description.
A video vehicle detection method based on local symmetric features comprises the following steps:
Step one: as shown in Fig. 1, manually set the ROI (the red frame in Fig. 1) along the lane line on the initial frame and record the coordinates of each pixel point on the boundary of the ROI;
Step two: inside the ROI, use the Harris corner extraction algorithm to compute the feature corner points p_i of the video image in the current frame and record the position coordinates of each p_i;
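The corner extraction of step two can be sketched in pure NumPy as below. The window size, k value, and relative threshold are illustrative choices, not parameters specified by the patent, and no non-maximum suppression is applied.

```python
import numpy as np

def harris_corners(gray, k=0.04, win=3, thresh_rel=0.01):
    """Minimal Harris corner detector (a sketch of step two).

    gray: 2-D array (the ROI of the current frame).
    Returns (row, col) positions whose Harris response exceeds
    thresh_rel * max(response)."""
    Iy, Ix = np.gradient(gray.astype(float))        # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):                                     # win x win box filter
        pad = win // 2
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dr in range(win):
            for dc in range(win):
                out += ap[dr:dr + a.shape[0], dc:dc + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    # Harris response R = det(M) - k * trace(M)^2
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    return np.argwhere(R > thresh_rel * R.max())
```

In practice a non-maximum suppression step would thin the cluster of responses around each true corner; a tuned library implementation such as OpenCV's cornerHarris would typically replace this sketch.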
Step three: as shown in Fig. 2, taking each feature corner point obtained in step two as the center, construct a square region of size 5 × 5 (Fig. 2a); use a 2 × 2 Haar wavelet template to calculate the response values of the square-region image, obtaining the Haar wavelet responses along the X direction (Fig. 2b) and the Y direction (Fig. 2c); count the responses ∑dx, ∑dy, ∑|dx|, and ∑|dy| to obtain the feature vector of each feature corner point p_i;
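The four-component descriptor of step three (sums of signed and absolute Haar responses, as in SURF) can be sketched as follows; the function name and argument layout are illustrative.

```python
import numpy as np

def haar_descriptor(gray, r, c, half=2):
    """SURF-style 4-D descriptor for the corner at (r, c): a sketch of
    step three.  A (2*half+1)-square patch (5 x 5 for half=2) is scanned
    with 2 x 2 Haar templates: dx is the right-minus-left response,
    dy the bottom-minus-top response."""
    patch = gray[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    # 2 x 2 Haar responses at every 2 x 2 window inside the patch
    dx = (patch[:-1, 1:] + patch[1:, 1:]) - (patch[:-1, :-1] + patch[1:, :-1])
    dy = (patch[1:, :-1] + patch[1:, 1:]) - (patch[:-1, :-1] + patch[:-1, 1:])
    # feature vector (sum dx, sum dy, sum |dx|, sum |dy|)
    return np.array([dx.sum(), dy.sum(), np.abs(dx).sum(), np.abs(dy).sum()])
```

On a patch whose intensity increases only left to right, ∑dx and ∑|dx| are positive while ∑dy and ∑|dy| vanish, which is the behaviour the descriptor relies on to distinguish edge orientations.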
Step four: construct the horizontally symmetric corner point q_i of each feature corner point and its feature vector;
Step five: for each feature corner point p_i, calculate the distance between its feature vector and the feature vectors of the horizontally symmetric corner points q_j of all other feature corner points; denote the minimum distance Min(p_i) and the next-smallest distance MinSec(p_i), where Min(p_i) is calculated as follows:
wherein V_i denotes the feature vector of the i-th feature corner point, V_{q_j} denotes the feature vector of the j-th horizontally symmetric corner point q_j, and q_j is the horizontally symmetric corner point of feature corner point p_j;
Step six: if feature corner point p_i satisfies the matching condition, then p_i and p_j form a symmetric corner pair, where p_j is the feature corner point corresponding to the horizontally symmetric corner point q_j at the minimum distance;
Step seven: traverse all feature corner points p_i, repeating steps five and six until all symmetric corner pairs are found;
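Steps five to seven can be sketched as the following matching loop. The patent's exact acceptance formula is not reproduced in this text, so a Lowe-style ratio test Min/MinSec < ratio is assumed here; the ratio value 0.6 is illustrative.

```python
import numpy as np

def match_symmetric_pairs(vecs, mirrored_vecs, ratio=0.6):
    """Sketch of steps five to seven: for each corner i, compute the
    distances between its feature vector and the feature vectors of the
    horizontally mirrored corners q_j (j != i), keep the minimum and
    next-smallest, and accept the pair when the two are well separated."""
    pairs = []
    n = len(vecs)
    for i in range(n):
        d = np.array([np.linalg.norm(vecs[i] - mirrored_vecs[j])
                      for j in range(n)])
        d[i] = np.inf                    # exclude the corner itself
        j_min = int(np.argmin(d))
        d_min = d[j_min]                 # Min(p_i)
        d[j_min] = np.inf
        d_sec = float(d.min())           # MinSec(p_i)
        if d_sec > 0 and d_min / d_sec < ratio:
            pairs.append((i, j_min))     # (p_i, p_j) symmetric corner pair
    return pairs
```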
Step eight: denote each symmetric corner pair and obtain the x coordinate of the center position of each pair; compute the statistical histogram of these center positions, take its peak as the initial position x_vehicle of the candidate vehicle center line, and calculate the variance of the statistical histogram;
Step nine: judge whether each symmetric corner pair belongs to the same vehicle: if the x coordinate of the center position of the pair satisfies formula (1.3), the pair is retained; otherwise it is deleted;
Step ten: compute the mean μ_vehicle of the center positions of all symmetric corner pairs retained in step nine; μ_vehicle is the center-line position of the candidate vehicle;
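Steps eight to ten can be sketched as below. Formula (1.3) is not given in this text, so a gate |x − x_vehicle| < 2σ around the histogram peak is assumed; the bin count is an illustrative parameter.

```python
import numpy as np

def candidate_centerline(pair_xs, bins=16):
    """Sketch of steps eight to ten: histogram the x coordinates of the
    symmetric-pair centre positions, take the peak as the initial centre
    line x_vehicle, gate out pairs far from the peak, and return the
    mean mu_vehicle of the survivors."""
    xs = np.asarray(pair_xs, dtype=float)
    hist, edges = np.histogram(xs, bins=bins)
    k = int(np.argmax(hist))
    x_vehicle = 0.5 * (edges[k] + edges[k + 1])   # peak bin centre
    sigma = xs.std()                              # spread of the histogram
    kept = xs[np.abs(xs - x_vehicle) < 2 * sigma]
    return kept.mean() if kept.size else x_vehicle
```

With midpoints clustered near one vehicle plus a stray pair from the background, the gate drops the outlier and the returned mean stays on the cluster.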
Step eleven: select several frames of the current video, manually select the shadow region at the bottom of the vehicle, model it using the geometric characteristics, brightness, and color information of the shadow, and train the mean μ_shadow and variance σ_shadow of the shadow samples, together with the number l of pixels in the x direction and the number h of pixels in the y direction of the shadow region;
Step twelve: use a Gaussian mixture model to test the pixels of the suspected vehicle region on both sides of the center line, where the suspected vehicle region is x ∈ [μ_vehicle − l, μ_vehicle + l], as shown in formula (1.4), in which p_i denotes a measured pixel in the image and T_shadow is the mean of G_shadow(p_i) over the shadow sample set. If a measured pixel satisfies formula (1.5), it is judged to be a shadow point. When the shadow points are continuous and satisfy |N_x − l| < 0.2·l and |N_y − h| < 0.1·h, where N_x and N_y are the numbers of consecutive shadow points in the x and y directions respectively, the center line corresponding to the shadow region is judged to be the center line of a vehicle; otherwise it is not, and detection ends;
G_shadow(p_i) > T_shadow    (1.5)
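The shadow test of step twelve can be sketched as below. The patent uses a Gaussian mixture model; a single-Gaussian likelihood is assumed here as a one-component simplification, and all names are illustrative.

```python
import math

def gaussian_likelihood(x, mu, sigma):
    """Single-Gaussian likelihood of an intensity x under the trained
    shadow model (mu_shadow, sigma_shadow)."""
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))

def is_shadow_pixel(intensity, mu_shadow, sigma_shadow, t_shadow):
    """Formula (1.5): a pixel is a shadow point when its likelihood
    under the trained shadow model exceeds the threshold T_shadow."""
    return gaussian_likelihood(intensity, mu_shadow, sigma_shadow) > t_shadow

def shadow_confirms_centerline(n_x, n_y, l, h):
    """Size check of step twelve: the runs of consecutive shadow points
    (N_x, N_y) must match the trained shadow extent (l, h)."""
    return abs(n_x - l) < 0.2 * l and abs(n_y - h) < 0.1 * h
```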
Step thirteen: as shown in Fig. 3, assume the vehicle target region can be represented as R = (l_R, r_R, u_R, b_R), where l_R and r_R are the left and right boundary values of the target region centered on μ_vehicle, and u_R and b_R are its upper and lower boundary values. The values of l_R, r_R, u_R, and b_R are determined as follows:
Calculation of b_R: count the shadow-region pixels at the bottom of the vehicle row by row and take the row with the maximum count as the width of b_R; with the width of b_R as the boundary, use a vertical Sobel operator to extract the image edges in the ROI above the vehicle-bottom shadow region, and take the center line between the vehicle rear bumper and the vehicle-bottom shadow region as the position of b_R;
Calculation of r_R and l_R: r_R and l_R are determined by the position of the vehicle center line and the width of b_R, as shown in formula (1.6);
l_R = μ_vehicle − width(b_R)/2,  r_R = μ_vehicle + width(b_R)/2    (1.6)
Calculation of u_R: as shown in Fig. 3, assume the vehicle height is h; the height and width satisfy the proportional relationship h = γ(r_R − l_R), where γ is obtained by training on a set of vehicle data. Use a vertical Sobel operator to extract the horizontal edges of the image in the range [l_R, r_R] (as shown in Fig. 3b) and compute the horizontal projection of its gray-level image; letting P(y) denote the horizontal-projection histogram distribution, the maximum of P(y) in the range y ∈ [b_R + 0.5h, b_R + 1.5h] gives the value of u_R;
By applying the above method, the values of b_R, l_R, r_R, and u_R are obtained in turn, thereby determining the target vehicle, as shown in Fig. 4.
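The bounding-box construction of step thirteen, given the centre line, the bottom boundary, and the horizontal projection P(y), can be sketched as below. The gamma value 0.8 is an illustrative stand-in for the trained height/width ratio, not a figure from the patent, and rows are simply indexed by increasing y.

```python
import numpy as np

def vehicle_box(mu_vehicle, b_r, shadow_width, edge_proj, gamma=0.8):
    """Sketch of step thirteen.  mu_vehicle: centre-line x; b_r: bottom
    boundary row; shadow_width: width(b_R); edge_proj: P(y), the
    horizontal projection of the horizontal-edge image indexed by row."""
    l_r = mu_vehicle - shadow_width / 2.0        # formula (1.6)
    r_r = mu_vehicle + shadow_width / 2.0
    h = gamma * (r_r - l_r)                      # h = gamma * (r_R - l_R)
    lo, hi = int(b_r + 0.5 * h), int(b_r + 1.5 * h)
    window = edge_proj[lo:hi + 1]
    u_r = lo + int(np.argmax(window))            # peak of P(y) in the range
    return (l_r, r_r, u_r, b_r)
```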
Claims (6)
1. A video vehicle detection method based on local symmetric features is characterized by comprising the following steps:
Step one: manually set an ROI (region of interest) along the lane line on the initial frame and record the coordinates of each pixel point on the boundary of the ROI;
Step two: inside the ROI, compute the feature corner points p_i of the video image in the current frame and record the position coordinates of each p_i;
Step three: taking each feature corner point obtained in step two as the center, construct a square region for each and use a feature description operator to construct the feature vector of each feature corner point;
Step four: construct the horizontally symmetric corner point q_i of each feature corner point and its feature vector;
Step five: for each feature corner point p_i, calculate the distance between its feature vector and the feature vectors of the horizontally symmetric corner points q_j of all other feature corner points; denote the minimum distance Min(p_i) and the next-smallest distance MinSec(p_i), where Min(p_i) is calculated as follows:
wherein V_i denotes the feature vector of the i-th feature corner point, V_{q_j} denotes the feature vector of the j-th horizontally symmetric corner point q_j, and q_j is the horizontally symmetric corner point of feature corner point p_j;
Step six: if feature corner point p_i satisfies the matching condition, then p_i and p_j form a symmetric corner pair, where p_j is the feature corner point corresponding to the horizontally symmetric corner point q_j at the minimum distance;
Step seven: traverse all feature corner points p_i, repeating steps five and six until all symmetric corner pairs are found;
Step eight: denote each symmetric corner pair and obtain the x coordinate of the center position of each pair; compute the statistical histogram of these center positions, take its peak as the initial position x_vehicle of the candidate vehicle center line, and calculate the variance of the statistical histogram;
Step nine: judge whether each symmetric corner pair belongs to the same vehicle: if the x coordinate of the center position of the pair satisfies formula (1.3), the pair is retained; otherwise it is deleted;
Step ten: compute the mean μ_vehicle of the center positions of all symmetric corner pairs retained in step nine; μ_vehicle is the center-line position of the candidate vehicle;
Step eleven: select several frames of the current video, manually select the shadow region at the bottom of the vehicle, model it using the geometric characteristics, brightness, and color information of the shadow, and train the mean μ_shadow and variance σ_shadow of the shadow samples, together with the number l of pixels in the x direction and the number h of pixels in the y direction of the shadow region;
Step twelve: use a Gaussian mixture model to test the pixels of the suspected vehicle region on both sides of the center line, where the suspected vehicle region is x ∈ [μ_vehicle − l, μ_vehicle + l], as shown in formula (1.4), in which p_i denotes a measured pixel in the image and T_shadow is the mean of G_shadow(p_i) over the shadow sample set. If a measured pixel satisfies formula (1.5), it is judged to be a shadow point. When the shadow points are continuous and satisfy |N_x − l| < 0.2·l and |N_y − h| < 0.1·h, where N_x and N_y are the numbers of consecutive shadow points in the x and y directions respectively, the center line corresponding to the shadow region is judged to be the center line of a vehicle; otherwise it is not, and detection ends;
G_shadow(p_i) > T_shadow    (1.5)
Step thirteen: assume the vehicle target region can be represented as R = (l_R, r_R, u_R, b_R), where l_R and r_R are the left and right boundary values of the target region centered on μ_vehicle, and u_R and b_R are its upper and lower boundary values. The values of l_R, r_R, u_R, and b_R are determined as follows:
Calculation of b_R: count the shadow-region pixels at the bottom of the vehicle row by row and take the row with the maximum count as the width of b_R; with the width of b_R as the boundary, use a vertical Sobel operator to extract the image edges in the ROI above the vehicle-bottom shadow region, and take the center line between the vehicle rear bumper and the vehicle-bottom shadow region as the position of b_R;
Calculation of r_R and l_R: r_R and l_R are determined by the position of the vehicle center line and the width of b_R, as shown in formula (1.6);
l_R = μ_vehicle − width(b_R)/2,  r_R = μ_vehicle + width(b_R)/2    (1.6)
Calculation of u_R: assume the vehicle height is h; the height and width satisfy the proportional relationship h = γ(r_R − l_R), where γ is obtained by training on a set of vehicle data. Use a vertical Sobel operator to extract the horizontal edges of the image in the range [l_R, r_R] and compute the horizontal projection of the gray-level image; letting P(y) denote the horizontal-projection histogram distribution, the maximum of P(y) in the range y ∈ [b_R + 0.5h, b_R + 1.5h] gives the value of u_R;
By applying the above method, the values of b_R, l_R, r_R, and u_R are obtained in turn, thereby determining the target vehicle.
2. The video vehicle detection method based on locally symmetric features according to claim 1, wherein the feature corner extraction method in step two is the Harris corner extraction algorithm.
3. The video vehicle detection method based on locally symmetric features according to claim 1, wherein the feature vector V_i of each feature corner point in step three is constructed as follows: use a Haar wavelet template to calculate the response values of the square-region image, obtaining the Haar wavelet responses along the X and Y directions respectively, and count the responses ∑dx, ∑dy, ∑|dx|, and ∑|dy| to obtain the feature vector of each feature corner point p_i.
4. The video vehicle detection method based on locally symmetric features according to claim 1, wherein the square region constructed in step three has size 5 × 5.
5. The video vehicle detection method based on locally symmetric features according to claim 1, wherein the Haar wavelet template in step three has size 2 × 2.
6. The video vehicle detection method based on locally symmetric features according to claim 1, wherein the horizontally symmetric corner point q_i of each feature corner point in step four and its feature vector are constructed as follows:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610338718.0A CN106023270A (en) | 2016-05-19 | 2016-05-19 | Video vehicle detection method based on locally symmetric features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106023270A true CN106023270A (en) | 2016-10-12 |
Family
ID=57096134
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610338718.0A Pending CN106023270A (en) | 2016-05-19 | 2016-05-19 | Video vehicle detection method based on locally symmetric features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106023270A (en) |
Non-Patent Citations (4)
Title |
---|
B.F. Momin et al., "Vehicle Detection in Video Surveillance System using Symmetrical SURF", IEEE International Conference on Electrical, Computer and Communication Technologies |
Lu Shengnan et al., "Tunnel traffic background extraction algorithm based on block-matching confidence" (《基于块匹配置信度的隧道交通背景提取算法》), Video Engineering (《电视技术》) |
Lu Shengnan et al., "Design and implementation of a traffic flow detection algorithm based on virtual detection windows" (《基于虚拟检测窗口的车流量检测算法设计与实现》), Computer Knowledge and Technology (《电脑知识与技术》) |
Zhang Yajuan, "Research on image and video stitching technology based on SURF features" (《基于SURF特征的图像与视频拼接技术的研究》), China Master's Theses Full-text Database |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304750A (en) * | 2017-01-13 | 2018-07-20 | BYD Company Limited | Front vehicle recognition method, device, and vehicle |
CN108304750B (en) * | 2017-01-13 | 2020-11-06 | BYD Company Limited | Front vehicle recognition method, device, and vehicle |
CN108319910A (en) * | 2018-01-30 | 2018-07-24 | Hisense Group Co., Ltd. | Vehicle identification method, device, and terminal |
CN108319910B (en) * | 2018-01-30 | 2021-11-16 | Hisense Group Co., Ltd. | Vehicle identification method, device, and terminal |
CN108846395A (en) * | 2018-04-13 | 2018-11-20 | Xizang Minzu University | Vehicle detection method based on fusion of vehicle local symmetry and shadow features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20161012 |