CN109409208A - Video-based vehicle feature extraction and matching method - Google Patents

Video-based vehicle feature extraction and matching method

Info

Publication number
CN109409208A
CN109409208A (application CN201811052077.8A)
Authority
CN
China
Prior art keywords
image
vehicle
point
value
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811052077.8A
Other languages
Chinese (zh)
Inventor
路小波 (Lu Xiaobo)
夏雪 (Xia Xue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201811052077.8A
Publication of CN109409208A
Legal status: Pending


Classifications

    • G06V 20/40 — Scenes; scene-specific elements in video content
    • G06F 18/24147 — Classification techniques; distances to closest patterns, e.g. nearest-neighbour classification
    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/254 — Analysis of motion involving subtraction of images
    • G06V 10/446 — Local feature extraction by matching or filtering using Haar-like filters, e.g. using integral image techniques
    • G06V 10/462 — Salient features, e.g. scale-invariant feature transforms [SIFT]
    • G06V 10/757 — Matching configurations of points or features
    • G06V 20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06T 2207/10016 — Video; image sequence
    • G06T 2207/20224 — Image subtraction
    • G06V 2201/08 — Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video-based vehicle feature extraction and matching method comprising the following steps: read in a video file; build a background model for the video images; obtain the foreground region containing moving targets by background subtraction; detect vehicles by extracting vehicle contours; extract the pHash feature of the vehicle image; extract the ISIFT feature of the vehicle image; and combine the pHash and ISIFT algorithms, screening the ISIFT feature matching points with the BBF search algorithm and the RANSAC algorithm, to realise vehicle matching. The invention performs video-based vehicle detection, vehicle feature extraction and vehicle matching: the detected vehicle contours are very accurate, the vehicle feature extraction is precise, and the computation is fast enough to meet real-time requirements at low cost; it can also detect vehicles travelling in different directions and has a wide range of applications.

Description

Video-based vehicle feature extraction and matching method
Technical field
The invention belongs to the fields of image processing and traffic video detection, and relates to a video-based vehicle feature extraction and matching method, mainly used for video-based vehicle detection, vehicle feature extraction and vehicle matching.
Background technique
Extracting traffic information from video is inexpensive: only the capture cameras at traffic checkpoints are needed. Extracting the various attributes of a vehicle involves computer vision and image processing, and the continuous growth of computing power has made accurate extraction of vehicle information from digital video feasible. The approach still faces many difficulties, however. Checkpoint cameras are mounted outdoors, where illumination and environmental factors have a large influence; traffic scenes are complex and changeable, so robustness and accuracy are hard to guarantee when vehicles are dense or in unexpected situations; and there is the real-time problem: the newest algorithms tend to be accurate but computationally heavy and time-consuming, making them hard to put into practice.
Current methods for vehicle retrieval in video fall mainly into two classes. The first retrieves by vehicle attributes, for example searching by colour, vehicle model, licence plate and similar information; its accuracy depends on the precision of the attribute extraction. The second retrieves by vehicle image features and generally has two stages: first a low-level feature vector is extracted from the vehicle image, then the extracted features are matched by similarity, much as in general image retrieval. Existing retrieval methods usually require sample collection, and often also a pure background image containing no moving targets, demanding a large amount of preparatory work.
The present invention updates the video background dynamically, guaranteeing real-time operation while improving detection accuracy.
Summary of the invention
To solve the above problems, the invention discloses a video-based vehicle feature extraction and matching method. It focuses on vehicle image feature extraction, detects moving targets by detecting their contours, and realises vehicle retrieval based on image matching.
In order to achieve the above object, the invention provides the following technical scheme:
A video-based vehicle feature extraction and matching method comprises the following steps:
Step 1: read in the video file:
Read the video file from a traffic checkpoint camera and take one full-size frame as a colour image, denoted G, whose width and height are W and H respectively;
Step 2: build a background model for the video images, keeping the background accurate, stable and up to date:
If image G is the first frame of the video file, initialise the background model; otherwise, update it;
The background model is initialised as follows:
Frames are sampled at intervals and their per-pixel average is taken as the video background picture. The colour image is first separated into its R, G and B channels, each represented by an 8-bit binary number with pixel values in the range 0–255. The three channels are each sampled once every p frames, p being a constant, and the N frames obtained after N samplings are averaged. Initialisation yields Sum0(x, y), the accumulated pixel-value sum of the N frames at (x, y), computed per channel as

Sum0(x, y) = Σ_{i=1}^{N} Gc,p·i(x, y), for each channel c ∈ {r, g, b}

where Grpi(x, y), Ggpi(x, y) and Gbpi(x, y) denote the pixel values at (x, y) of frame p×i in the R, G and B channels respectively, p is the sampling interval in frames, and N is the total number of frames used to initialise the background;
B0(x, y) denotes the pixel value of the initial background image at (x, y):
B0(x, y) = Sum0(x, y)/N.
The background model is updated as follows:
Sumt(x, y) denotes the pixel-value sum at (x, y) over the N sampled frames accumulated around frame t; it is maintained as a sliding window by adding the newest sampled frame and subtracting the oldest:

Sumt(x, y) = Sum_{t−1}(x, y) + Gnew(x, y) − Gold(x, y)

Bt(x, y) denotes the background pixel value at (x, y) for frame t:
Bt(x, y) = Sumt(x, y)/N.
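The averaging initialisation and sliding-window update of step 2 can be sketched in a few lines of NumPy; this is a minimal illustration, with function names, the uint8 frame layout, and the per-channel float accumulator all being assumptions of the sketch rather than details from the patent:

```python
import numpy as np

def init_background(sampled_frames):
    """Per-pixel average of N frames sampled every p frames:
    B0 = Sum0 / N, computed separately for each colour channel."""
    acc = np.zeros(sampled_frames[0].shape, dtype=np.float64)
    for f in sampled_frames:
        acc += f  # accumulate Sum0
    return acc, acc / len(sampled_frames)  # (Sum0, B0)

def update_background(sum_img, newest, oldest, n):
    """Sliding-window update: Sumt = Sum(t-1) + newest - oldest,
    Bt = Sumt / N, so the background keeps tracking scene changes."""
    sum_img = sum_img + newest.astype(np.float64) - oldest.astype(np.float64)
    return sum_img, sum_img / n  # (Sumt, Bt)
```

Keeping the running sum makes each update O(1) per pixel instead of re-averaging all N frames.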
Step 3: obtain the foreground region containing moving targets by background subtraction:
First mark out the detection zone manually and mask the non-detection region to reduce interference. Then convert the current frame and the background image to 8-bit grayscale images and compute their difference to obtain the foreground image;
Here Gt(x, y) and Bt(x, y) denote the pixel values at (x, y) of the frame-t video image and background image, and Grayt(x, y) and Bgt(x, y) denote their gray values at (x, y) after grayscale conversion. The foreground image Fgt(x, y) is then:
Fgt(x, y) = |Grayt(x, y) − Bgt(x, y)|
where Fgt(x, y) is the gray value of the frame-t foreground image at (x, y);
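The background-difference step reduces to an absolute grayscale difference per pixel. A short sketch follows; the equal-weight grayscale conversion is an assumption of the sketch, since the patent does not specify which conversion it uses:

```python
import numpy as np

def to_gray(img):
    """8-bit colour image -> grayscale as the plain channel mean
    (equal weighting assumed; a luma-weighted mean would also work)."""
    return img.astype(np.float64).mean(axis=2)

def foreground(frame, background):
    """Fg_t(x, y) = |Gray_t(x, y) - Bg_t(x, y)|: the absolute
    difference between the grayscaled frame and background."""
    return np.abs(to_gray(frame) - to_gray(background))
```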
Step 4: vehicle detection:
Vehicles are detected by extracting their contours, as follows:
Step 4.1: binarise the foreground image Fgt(x, y) to obtain the binary foreground image Tt(x, y):

Tt(x, y) = 1 if Fgt(x, y) ≥ T, otherwise 0

where T denotes a threshold: if the value of Fgt(x, y) is greater than or equal to the set threshold T, the point is set as a foreground point, otherwise as a background point;
Step 4.2: apply a morphological transform to the binary image: a closing operation is performed on the binary foreground image Tt(x, y);
Step 4.3: extract the contours of the foreground regions in the binary foreground image by Freeman chain-code following, discarding anything that is clearly not a contour; then obtain the convex hull of each foreground contour by the Graham scan and take the foreground convex hull as the vehicle contour, finally yielding the vehicle contour;
Step 4.4: perform vehicle detection:
First define a record table that stores the information of recently detected vehicles and from which outdated entries are continually deleted. Detection is performed once every N frames, and each detected vehicle is processed in turn: it is compared with every vehicle in the record table, and region overlap and centre distance determine whether the record already contains it. If it does, the vehicle's position, time, path and other information in the record are updated; otherwise the new vehicle is added to the record. When a vehicle has been detected more than a certain number of times within a short period and its relative position in the video is suitable, its information is extracted and entered into the database;
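The record-table bookkeeping of step 4.4 can be sketched as follows; the box format (x, y, w, h), the distance threshold, and the dictionary fields are assumptions chosen for this illustration:

```python
import math

def overlap(a, b):
    """Intersection area of two (x, y, w, h) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    return max(0, x2 - x1) * max(0, y2 - y1)

def center_dist(a, b):
    """Distance between box centres."""
    ca = (a[0] + a[2] / 2, a[1] + a[3] / 2)
    cb = (b[0] + b[2] / 2, b[1] + b[3] / 2)
    return math.hypot(ca[0] - cb[0], ca[1] - cb[1])

def update_records(records, box, t, dist_thresh=30.0):
    """Match a detected box against the record table by region overlap
    and centre distance; update the matching record, or append a new
    vehicle if none matches."""
    for r in records:
        if overlap(r["box"], box) > 0 or center_dist(r["box"], box) < dist_thresh:
            r["box"], r["last_seen"] = box, t
            r["hits"] += 1
            return r
    rec = {"box": box, "last_seen": t, "hits": 1}
    records.append(rec)
    return rec
```

A vehicle whose `hits` count exceeds a threshold within a short period would then have its information extracted, as the step describes.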
Step 5: extract the pHash feature of the vehicle image:
The pHash algorithm generates a fingerprint for each image and then compares the fingerprints of different pictures: the smaller the distance, the more similar the two pictures. The Hamming distance between a compressed image and the original is very small, and the Hamming distance between identical pictures is zero;
Step 6: extract the ISIFT feature of the vehicle image, as follows:
Step 6.1: scale-space extremum detection:
Image positions are searched over all scales, and interest points potentially invariant to scale and rotation are identified with a difference-of-Gaussian function. Adjacent layers within each octave of the Gaussian pyramid are first subtracted to obtain difference-of-Gaussian (DoG) images. Keypoints are formed from the local extrema of the DoG space, so the first step is to find those extrema: each pixel is compared with all of its neighbours in both the image domain and the scale domain, to see whether it is larger or smaller than all of them. Edge responses and weak extrema are then excluded;
Step 6.2: keypoint localisation and orientation:
The position and scale of each keypoint are determined, and its orientation is assigned from the local gradient directions;
The keypoint position is first located to sub-pixel accuracy by interpolation: the scale-space DoG function is fitted with its Taylor expansion, and unstable edge-response points are rejected by computing the principal curvatures from the Hessian matrix at the feature point. For each keypoint detected in the DoG pyramid, the gradient magnitudes and orientations of the pixels in a neighbourhood window of its Gaussian-pyramid image are collected. For a two-dimensional image, the gradient magnitude m(x, y) and orientation θ(x, y) are defined as

m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]
θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]

where L is taken at the scale of each keypoint. The gradients and orientations of the pixels in the keypoint's neighbourhood are accumulated; when the magnitude m(x, y) is computed, a Gaussian weighting is applied so that pixels closer to the keypoint centre carry greater weight;
After the angle and magnitude of every pixel in the keypoint region have been computed, a histogram accumulates the gradient orientations of the neighbourhood pixels. The gradient histogram divides the 0–360 degree range into 36 bins of 10 degrees each. The main peak of the histogram gives the keypoint's principal orientation, and any peak reaching 80% of the main peak is retained as an auxiliary orientation;
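The 36-bin orientation histogram can be sketched as below; this simplified version omits the Gaussian weighting that the step applies, and the central-difference gradients and return format are assumptions of the sketch:

```python
import numpy as np

def orientation_histogram(patch):
    """36-bin (10 degrees each) gradient-orientation histogram of a
    grayscale patch; returns the histogram, the main-peak direction in
    degrees, and auxiliary directions whose bin reaches 80% of the
    main peak. Gaussian weighting is omitted for brevity."""
    dx = patch[1:-1, 2:] - patch[1:-1, :-2]   # central difference in x
    dy = patch[2:, 1:-1] - patch[:-2, 1:-1]   # central difference in y
    mag = np.sqrt(dx ** 2 + dy ** 2)
    ang = (np.degrees(np.arctan2(dy, dx)) + 360.0) % 360.0
    hist = np.zeros(36)
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // 10) % 36] += m          # magnitude-weighted vote
    main = int(np.argmax(hist))
    aux = [b * 10 for b in range(36)
           if b != main and hist[b] >= 0.8 * hist[main]]
    return hist, main * 10, aux
```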
Step 6.3: build a descriptor for each keypoint, describing it with a set of vectors:
First the keypoint neighbourhood is determined and the coordinate axes within it are rotated so that the x-axis coincides with the keypoint's principal orientation. The coordinates of the pixels in the neighbourhood are rotated according to

(x′, y′)ᵀ = [cos θ  −sin θ; sin θ  cos θ] · (x, y)ᵀ

The rotated image is then divided into d × d subregions; the gradient magnitudes and orientations in each subregion are computed, and the gradients are assigned to 8 directions. With d = 4, the feature vector has 128 dimensions in total, each subregion's 0–360° range of gradient orientations being divided into 8 parts of 45° each;
All keypoints whose principal orientation lies on the left side of the y-axis are "flipped" as a whole to the right side, and only keypoints whose principal orientation lies on the right side of the y-axis are counted. The ISIFT feature d′ is computed for every keypoint of the image and of its flipped copy, satisfying

d′ = d, when cos θ ≥ 0

where θ is the angle of the keypoint's principal orientation, d is the SIFT feature of the original and flipped images, and d′ is the modified ISIFT feature; that is, the new feature ISIFT retains only the features of keypoints whose principal orientation lies on the right side of the y-axis;
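The ISIFT selection rule d′ = d for cos θ ≥ 0 amounts to pooling the keypoints of the image and its flipped copy and keeping only those oriented to the right of the y-axis. A minimal sketch, in which the keypoint dictionary layout is an assumption of the illustration:

```python
import math

def isift_features(orig_kps, flipped_kps):
    """ISIFT rule: pool the SIFT keypoints of the image and of its
    horizontally flipped copy, then keep only descriptors whose
    principal orientation theta (radians) satisfies cos(theta) >= 0,
    i.e. lies on the right side of the y-axis."""
    pooled = list(orig_kps) + list(flipped_kps)
    return [kp["desc"] for kp in pooled if math.cos(kp["theta"]) >= 0]
```

Because every left-side keypoint of the original reappears as a right-side keypoint of the flipped copy, the pooled-and-filtered set is the same for an image and its mirror, which is what gives the feature its symmetry invariance.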
Step 7: perform vehicle matching, combining the ISIFT and pHash algorithms. Brute-force matching with the pHash feature is performed first, searching the picture library for the most similar picture; when the Hamming distance between the query image and a similar picture is below a threshold, the match is considered complete. When the smallest Hamming distance exceeds the threshold, precise matching is performed with the ISIFT algorithm, combined with the BBF search algorithm and the RANSAC algorithm. The detailed process is:
Step 7.1: perform brute-force matching with the pHash feature, searching the picture library for the most similar picture; when the Hamming distance between the query image and a similar picture is below the threshold, the match is considered complete; otherwise go to step 7.2;
Step 7.2: perform precise matching with the ISIFT algorithm, screening the ISIFT feature matching points with the BBF search algorithm and the RANSAC algorithm:
Step 7.2.1: BBF search: BBF (best bin first) is an improved k-d tree nearest-neighbour search algorithm. To eliminate some of the false matches, K = 2 is taken, finding each feature point's nearest-neighbour and second-nearest-neighbour matching points. Suppose the distance from a 128-dimensional vector to its nearest-neighbour feature is d1 and to its second-nearest neighbour is d2; the ratio of d1 to d2 is computed, and the match is rejected unless

d1/d2 < ratio

where ratio is a constant between 0 and 1;
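The K = 2 nearest-neighbour ratio test can be sketched as below. This sketch substitutes a brute-force scan for the BBF k-d tree search (the accept/reject logic is the same; only the search strategy differs), and the function name and default ratio are assumptions:

```python
import math

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """For each descriptor in desc_a, find the nearest (d1) and
    second-nearest (d2) descriptors in desc_b by Euclidean distance
    and accept the match only if d1/d2 < ratio. Brute-force stand-in
    for the BBF k-d tree search."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        (d1, j1), (d2, _) = dists[0], dists[1]
        if d2 > 0 and d1 / d2 < ratio:
            matches.append((i, j1))
    return matches
```

When the two closest candidates are nearly equidistant, the ratio is near 1 and the ambiguous match is discarded, which is the point of the test.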
Step 7.2.2: RANSAC: the SIFT feature matches are further screened with the RANSAC algorithm. RANSAC takes a set of observations as input, usually containing some "outliers", and iteratively finds the optimal parametric model; points that fit the model are "inliers" and the rest are defined as "outliers". RANSAC assumes that corresponding points in images taken from two different viewpoints are related by a projective transform, i.e. X1 = H·X2, written as

s·(x′, y′, 1)ᵀ = H·(x, y, 1)ᵀ

where s is a scale factor and (x, y) and (x′, y′) are the coordinates in the two images. The goal is the optimal homography matrix H, a 3 × 3 matrix that requires 4 non-collinear point pairs in total to solve. RANSAC computes a homography H from the data set and then tests all the data against the model, choosing the model for which the error value D of the tested points, taken as the reprojection error D = ‖X1 − H·X2‖, is minimal.
Further, the rejection rule when step 4.3 discards images is: contours whose perimeter or area is below a certain threshold, or whose shape is anomalous, are rejected.
Further, step 5 comprises the following sub-steps:
Step 5.1: shrink the picture: the picture is reduced to 32 × 32 (from which an 8 × 8 block of DCT coefficients will later be kept);
Step 5.2: simplify the colours: the picture is converted to grayscale;
Step 5.3: compute the DCT: the image undergoes a DCT transform, yielding a 32 × 32 DCT matrix and taking the image from the pixel domain to the frequency domain; only the low-frequency part of the image is kept, i.e. the 8 × 8 top-left corner of the DCT matrix;
Step 5.4: compute the hash value: the mean of the retained DCT coefficients is computed; in the 8 × 8 DCT matrix, entries greater than the mean are set to 1 and those less than the mean to 0, forming a 64-bit integer that is the image's hash fingerprint;
Step 5.5: compute the Hamming distance: during image retrieval, the other images' hash values are computed in the same way and compared with each image fingerprint in the database; a smaller Hamming distance indicates a closer match.
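The pHash pipeline of steps 5.3–5.5 can be sketched as below, starting from an already-resized 32 × 32 grayscale image; the DCT here is an orthonormal 2-D DCT-II built from the cosine basis matrix, and the strict ">" mean-threshold is an assumption (the step does not say how ties are handled):

```python
import numpy as np

def dct2(a):
    """Orthonormal 2-D DCT-II: C @ a @ C.T with the cosine basis C."""
    n = a.shape[0]
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)  # DC row
    return c @ a @ c.T

def phash(gray32):
    """64-bit pHash of a 32x32 grayscale image: DCT, keep the 8x8
    low-frequency corner, threshold each coefficient at the mean."""
    d = dct2(gray32.astype(np.float64))
    low = d[:8, :8]
    return (low > low.mean()).astype(np.uint8).ravel()  # 64 bits

def hamming(b1, b2):
    """Number of differing bits between two fingerprints."""
    return int(np.count_nonzero(b1 != b2))
```

Retrieval then scans the database for the fingerprint with the smallest `hamming` distance to the query, exactly as step 5.5 describes.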
Further, in step 6.3, when cos θ < 0 but cos θ ≈ 0 (the principal orientation is nearly vertical), the new feature ISIFT still retains the feature d.
Further, the RANSAC algorithm in step 7.2.2 proceeds as follows:
1. iterate K times, repeatedly executing the steps below;
2. randomly draw 4 non-collinear sample pairs from the data set and compute the matrix H;
3. with a threshold T set, test all the data, computing the value D by the formula above; if D < T, the point is added to the inlier set;
4. if the size of the inlier set exceeds the best record so far, update that record.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention improves on the traditional frame-averaging method of generating and updating the video background: after the initial background is obtained, the background continues to be updated, which strengthens robustness and improves the accuracy of vehicle feature extraction.
2. The invention uses ISIFT, a new feature that improves on SIFT: it gives SIFT symmetry invariance while fully retaining the rotation invariance and scale invariance of the original SIFT feature, enabling matching between mirror-symmetric vehicle pictures. Because the structure of the SIFT feature is preserved completely, ISIFT keeps scale invariance, rotation invariance and good robustness to illumination change. As for matching time, ISIFT likewise has 128-dimensional features and roughly the same number of feature points as SIFT, so its matching time is almost identical to SIFT's.
3. The method needs only the capture cameras of traffic checkpoints: it requires no sample collection and no pure background image free of moving targets, and a single piece of video-processing software written with the method can detect the vehicles in a video, at low cost.
4. The vehicle contours detected with the method are very accurate, which benefits the subsequent extraction of vehicle features.
5. The invention combines the ISIFT and pHash algorithms, which substantially accelerates picture retrieval in some cases. In vehicle retrieval, searches using an original image or its thumbnail are very frequent, and in those cases the algorithm greatly reduces retrieval time; the overall speed is fast and essentially meets real-time requirements.
6. Because the invention uses full-frame detection, it suits checkpoint videos shot from different angles, has a wide range of applications, and can detect vehicles travelling in different directions. With some software parameters adjusted according to the captured video, the method achieves good vehicle detection and retrieval, and can therefore be applied to vehicle detection and retrieval under many different background environments.
Detailed description of the invention
Fig. 1 is the flow chart of the video-based vehicle feature extraction and matching method provided by the invention.
Specific embodiment
The technical solution provided by the invention is described in detail below with reference to a specific embodiment; it should be understood that the specific embodiment only illustrates the invention and does not limit its scope.
The video-based vehicle feature extraction and matching method provided by the invention proceeds as shown in Fig. 1 and comprises the following steps:
Step 1: read in the video file:
Read the video file from a traffic checkpoint camera and take one full-size frame as a colour image, denoted G, whose width and height are W and H respectively;
Step 2: build a background model for the video images, keeping the background accurate, stable and up to date:
If image G is the first frame of the video file, initialise the background model; otherwise, update it;
The background model is initialised as follows:
Frames are sampled at intervals and their per-pixel average is taken as the video background picture. The colour image is first separated into its R, G and B channels, each represented by an 8-bit binary number with pixel values in the range 0–255. The three channels are each sampled once every p frames, p being a constant, and the N frames obtained after N samplings are averaged. Initialisation yields Sum0(x, y), the accumulated pixel-value sum of the N frames at (x, y), computed per channel as

Sum0(x, y) = Σ_{i=1}^{N} Gc,p·i(x, y), for each channel c ∈ {r, g, b}

where Grpi(x, y), Ggpi(x, y) and Gbpi(x, y) denote the pixel values at (x, y) of frame p×i in the R, G and B channels respectively, p is the sampling interval in frames, and N is the total number of frames used to initialise the background;
B0(x, y) denotes the pixel value of the initial background image at (x, y):
B0(x, y) = Sum0(x, y)/N.
The background model is updated as follows:
Sumt(x, y) denotes the pixel-value sum at (x, y) over the N sampled frames accumulated around frame t; it is maintained as a sliding window by adding the newest sampled frame and subtracting the oldest:

Sumt(x, y) = Sum_{t−1}(x, y) + Gnew(x, y) − Gold(x, y)

Bt(x, y) denotes the background pixel value at (x, y) for frame t:
Bt(x, y) = Sumt(x, y)/N.
Step 3: obtain the foreground region containing moving targets by background subtraction:
First mark out the detection zone manually and mask the non-detection region to reduce interference. Then convert the current frame and the background image to 8-bit grayscale images and compute their difference to obtain the foreground image;
Here Gt(x, y) and Bt(x, y) denote the pixel values at (x, y) of the frame-t video image and background image, and Grayt(x, y) and Bgt(x, y) denote their gray values at (x, y) after grayscale conversion. The foreground image Fgt(x, y) is then:
Fgt(x, y) = |Grayt(x, y) − Bgt(x, y)|
where Fgt(x, y) is the gray value of the frame-t foreground image at (x, y);
Step 4: vehicle detection:
Vehicles are detected by extracting their contours, as follows:
Step 4.1: binarise the foreground image Fgt(x, y) to obtain the binary foreground image Tt(x, y):

Tt(x, y) = 1 if Fgt(x, y) ≥ T, otherwise 0

where T denotes a threshold: if the value of Fgt(x, y) is greater than or equal to the set threshold T, the point is set as a foreground point, otherwise as a background point;
Step 4.2: apply a morphological transform to the binary image: a closing operation is performed on the binary foreground image Tt(x, y), so that hollow parts inside the moving targets are largely filled and the moving targets gain connectivity.
Step 4.3: extract the contours of the foreground regions in the binary foreground image by Freeman chain-code following, discarding anything that is clearly not a contour: contours whose perimeter or area is below a certain threshold, or whose shape is anomalous, are rejected. The convex hull of each foreground contour is then obtained by the Graham scan, and the foreground convex hull is taken as the vehicle contour, finally yielding the vehicle contour.
Step 4.4: perform vehicle detection:
First define a record table that stores the information of recently detected vehicles and from which outdated entries are continually deleted. Detection is performed once every N frames, and each detected vehicle is processed in turn: it is compared with every vehicle in the record table, and region overlap and centre distance determine whether the record already contains it. If it does, the vehicle's position, time, path and other information in the record are updated; otherwise the new vehicle is added to the record. When a vehicle has been detected more than a certain number of times within a short period and its relative position in the video is suitable, its information is extracted and entered into the database. With this vehicle detection method the detected vehicle contours are very accurate, which benefits the subsequent extraction of vehicle features; vehicles travelling in different directions can be detected at the same time, and the computation is fast enough to essentially meet real-time requirements.
Step 5: extract the pHash feature of the vehicle image:
The pHash algorithm generates a fingerprint for each image; the fingerprints of different images are then compared, with a smaller distance indicating greater similarity. The Hamming distance between a compressed image and the original is very small, and the Hamming distance between identical images is zero. The detailed process is as follows:
Step 5.1: shrink the image: the image is reduced to 32 × 32; the purpose is mainly to simplify the DCT computation and accelerate processing.
Step 5.2: simplify color: convert the image to grayscale to further reduce the amount of computation.
Step 5.3: compute the DCT: apply the DCT to the image to obtain a 32 × 32 DCT matrix, transforming the image from the pixel domain to the frequency domain. To enhance the robustness of the algorithm, only the low-frequency part of the image is taken, retaining the 8 × 8 block in the upper-left corner of the DCT matrix.
Step 5.4: compute the hash value: compute the mean of the 8 × 8 DCT block; entries greater than the mean are set to 1 and entries less than the mean to 0. This yields a 64-bit integer, the hash fingerprint of the image.
Step 5.5: compute the Hamming distance: during image retrieval, the hash value of each candidate image is computed in the same way and compared with every fingerprint in the database; a smaller Hamming distance indicates a closer match.
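Steps 5.1–5.5 can be sketched in pure Python as follows, assuming the input has already been shrunk to a 32 × 32 grayscale matrix (steps 5.1–5.2); for simplicity the DC coefficient is included in the mean, which some pHash implementations exclude:

```python
import math

def phash(img):
    """pHash sketch: 32x32 grayscale matrix -> 64-bit fingerprint string.
    Computes only the low-frequency 8x8 upper-left block of the 2-D DCT-II,
    then thresholds each coefficient against the block mean (steps 5.3-5.4)."""
    N = 32
    dct = [[sum(img[y][x]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
            for u in range(8)] for v in range(8)]
    flat = [c for row in dct for c in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if c > mean else '0' for c in flat)

def hamming(a, b):
    """Step 5.5: Hamming distance between two equal-length fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 32x32 grayscale image.
img = [[(x * y) % 256 for x in range(32)] for y in range(32)]
fp = phash(img)
```

Retrieval then amounts to computing `hamming(fp_query, fp_db)` against every fingerprint in the database and keeping the smallest distance.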
Step 6: extract the ISIFT features of the vehicle image. The detailed process is as follows:
Step 6.1: scale-space extremum detection:
Search image positions over all scales, and identify, via a difference-of-Gaussian function, interest points that are potentially invariant to scale and rotation. First, adjacent layers within each octave of the Gaussian pyramid are subtracted to obtain difference-of-Gaussian images. Key points are the local extrema of the difference-of-Gaussian space, so the first step is to find the local extrema of the DoG space: each pixel is compared with all of its neighbors to see whether it is larger or smaller than its neighboring points in both the image domain and the scale domain. Edge effects and points with weak responses are then excluded.
Step 6.2: key-point localization and orientation:
Determine the position and scale of each key point, and assign its orientation based on gradient directions. The key-point position is first refined by sub-pixel interpolation. To improve key-point stability, the scale-space DoG function is curve-fitted using a Taylor expansion. The DoG operator produces strong edge responses, so unstable edge-response points must be rejected: a poorly defined difference-of-Gaussian extremum has a large principal curvature across the edge and a small principal curvature perpendicular to it. The principal curvatures are therefore computed from the Hessian matrix at the feature point, and unstable edge-response points are weeded out. Next, the orientation of each key point is defined. For each key point detected in the DoG pyramid, the gradient magnitude and orientation distribution of the pixels in a neighborhood window of its Gaussian-pyramid image are computed. For a two-dimensional image, the gradient magnitude m(x, y) and orientation θ(x, y) are defined as:
m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)
θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))
Here L denotes the Gaussian-smoothed image at the scale of the key point. The gradients and orientations of the pixels in the key-point neighborhood are accumulated; when computing the magnitude m(x, y), a Gaussian weighting is applied so that pixels closer to the key-point center receive larger weights.
After the angle and magnitude of each pixel in the key-point region have been computed, a histogram is used to accumulate the gradient orientations of the pixels in the neighborhood. The gradient histogram divides the 0–360 degree range into 36 bins of 10 degrees each. The main peak of the histogram gives the dominant orientation of the key point; any peak reaching 80% or more of the main peak is retained as an auxiliary orientation.
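The 36-bin orientation histogram and the 80% auxiliary-peak rule can be sketched as follows; the sample angles and magnitudes are hypothetical, and a full SIFT implementation would also interpolate the peak position within its bin:

```python
def dominant_orientations(angles, magnitudes):
    """Accumulate gradient orientations (degrees) weighted by magnitude into
    36 bins of 10 degrees; return the starting angle of the main peak and of
    every auxiliary peak at >= 80% of the main peak."""
    bins = [0.0] * 36
    for ang, mag in zip(angles, magnitudes):
        bins[int(ang % 360) // 10] += mag
    peak = max(bins)
    return [i * 10 for i, v in enumerate(bins) if v >= 0.8 * peak]

# Hypothetical gradient samples around one key point.
angles = [12, 15, 18, 200, 205, 14]
mags = [1.0, 1.0, 1.0, 1.2, 1.3, 1.0]
orients = dominant_orientations(angles, mags)
```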
Step 6.3: build a descriptor for each key point, describing the key point with a group of vectors:
First, the key-point neighborhood is determined and the coordinate axes within it are rotated so that the x-axis coincides with the dominant orientation of the key point. The coordinates of the pixels in the neighborhood satisfy the following rotation:
x' = x·cos θ − y·sin θ
y' = x·sin θ + y·cos θ
The rotated neighborhood is usually divided into d × d subregions; in each subregion the gradient magnitudes and orientations are computed, and the gradients are accumulated into 8 orientation bins. With d = 4 the feature vector has 128 dimensions in total, the gradient orientations of each subregion being divided into 8 parts of 45° each over 0–360°.
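The 4 × 4 × 8 = 128-dimensional binning can be sketched as below; the input is a hypothetical pre-computed grid of (angle, magnitude) gradient samples per subregion, and the Gaussian weighting and trilinear interpolation of full SIFT are omitted:

```python
def descriptor(grid):
    """128-dim SIFT-style descriptor sketch: a 4x4 grid of subregions, each
    holding (angle_deg, magnitude) gradient samples accumulated into 8 bins
    of 45 degrees -> 4 * 4 * 8 = 128 values."""
    vec = []
    for row in grid:          # 4 rows of subregions
        for cell in row:      # each cell: list of (angle, magnitude) pairs
            bins = [0.0] * 8
            for ang, mag in cell:
                bins[int(ang % 360) // 45] += mag
            vec.extend(bins)
    return vec

# One unit-magnitude gradient at 90 degrees in every subregion (hypothetical).
grid = [[[(90.0, 1.0)] for _ in range(4)] for _ in range(4)]
vec = descriptor(grid)
```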
Key points whose dominant orientation lies to the left of the y-axis are "flipped" as a whole to the right side, and only key points whose dominant orientation lies to the right of the y-axis are counted. We compute the ISIFT feature d' over all key points of the image and of its flipped version, satisfying the following formula:
d' = d,  cos θ ≥ 0
where θ denotes the angle of the key-point dominant orientation, d denotes the SIFT feature of the original image or the flipped image, and d' denotes the improved ISIFT feature; that is, the new ISIFT feature retains only the features of key points whose dominant orientation lies to the right of the y-axis. Because of computational error, a corresponding feature point may not exist after the image is flipped, and searching for corresponding points in the flipped image would increase the complexity of the algorithm; ISIFT therefore does not search for corresponding points, and simply keeps, as final feature points, the key points in the original and flipped images whose dominant orientation lies to the right of the y-axis, which simplifies the computation. Moreover, the angle computation may likewise contain error, so in practice the feature d is still kept when cos θ < 0 and cos θ ≈ 0.
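The keep rule d' = d when cos θ ≥ 0 can be sketched as a simple filter; the (angle, feature) pair representation is an assumption, and a small eps implements the "cos θ ≈ 0 is still kept" tolerance mentioned above:

```python
import math

def isift_filter(keypoints, eps=1e-6):
    """ISIFT rule d' = d when cos(theta) >= 0: keep only key points whose
    dominant orientation lies to the right of the y-axis. The eps tolerance
    also keeps points with cos(theta) ~ 0, per the angle-error note."""
    return [(theta, d) for theta, d in keypoints
            if math.cos(math.radians(theta)) >= -eps]

# Hypothetical (orientation_degrees, descriptor) pairs.
kps = [(60.0, 'a'), (120.0, 'b'), (270.0, 'c')]
kept = isift_filter(kps)
```

Here 120° is dropped (cos θ < 0), while 270° survives via the eps tolerance since cos 270° ≈ 0.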
Step 7: perform vehicle matching: this method combines the ISIFT and pHash algorithms. Brute-force matching is first performed with the pHash features, searching the image library for the most similar image; when the Hamming distance between the query image and the similar image is less than a threshold, the match is considered complete, and when the smallest Hamming distance exceeds the threshold, precise matching is performed with the ISIFT algorithm. In addition, the BBF search algorithm and the RANSAC algorithm greatly reduce the number of incorrect ISIFT matches while also accelerating matching. The detailed process is as follows:
Step 7.1: perform brute-force matching with the pHash features, searching the image library for the most similar image. When the Hamming distance between the query image and the similar image is less than the threshold, the match is considered complete; otherwise go to step 7.2.
Step 7.2: perform precise matching with the ISIFT algorithm, and screen the ISIFT matching points with the BBF search algorithm and the RANSAC algorithm:
Step 7.2.1: BBF search: BBF is an improved k-d tree nearest-neighbor search algorithm. To eliminate some of the false matches, K = 2 is used to find the nearest-neighbor and second-nearest-neighbor matching points of each feature point. Let d1 be the distance between the 128-dimensional vector and its nearest-neighbor feature and d2 the distance to its second-nearest neighbor; the ratio of d1 to d2 is computed, and the match is rejected when the following condition is not satisfied:
d1 / d2 < ratio
where ratio is a constant between 0 and 1. A smaller ratio means a stricter test and causes some correct matches to be removed as well; this method uses the empirical value ratio = 0.8.
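The ratio test applied after the nearest-neighbor search can be sketched as below; short 2-D vectors stand in for the 128-dimensional SIFT descriptors, and a real implementation would obtain d1 and d2 from the BBF/k-d tree search rather than by brute force:

```python
import math

def ratio_test(query, candidates, ratio=0.8):
    """Accept the nearest-neighbor match only if d1/d2 < ratio, where d1 and
    d2 are the distances to the nearest and second-nearest candidates."""
    dists = sorted(math.dist(query, c) for c in candidates)
    d1, d2 = dists[0], dists[1]
    return d1 / d2 < ratio

q = (0.0, 0.0)
good = ratio_test(q, [(0.1, 0.0), (5.0, 0.0), (6.0, 0.0)])  # distinct nearest
bad = ratio_test(q, [(1.0, 0.0), (1.1, 0.0)])               # ambiguous nearest
```

A distinctly closer nearest neighbor passes; two nearly equidistant neighbors fail, which removes most false matches at the cost of some correct ones.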
Step 7.2.2: RANSAC: the SIFT feature matches are further screened with the random sample consensus (RANSAC) algorithm. RANSAC takes a set of observations as input, usually containing some "outliers", and iteratively searches for the best parametric model; points that fit the model are "inliers" and the rest are defined as "outliers". RANSAC assumes that points in two images taken from different viewpoints are related by a projective transformation, i.e. X1 = H·X2, that is:
s·(x', y', 1)ᵀ = H·(x, y, 1)ᵀ
where s denotes a scale factor and (x, y) and (x', y') respectively denote the coordinates in the left and right images. RANSAC searches for the optimal homography matrix H, a 3 × 3 matrix; in total, 4 non-collinear point pairs are needed to solve for H. RANSAC computes a homography matrix H from the data set and then tests all the data against the model, so that the value D in the following formula is minimized:
D = Σᵢ ‖ X1ᵢ − H·X2ᵢ ‖²
The RANSAC algorithm steps are as follows:
1. Iterate K times, repeatedly executing the following steps.
2. Randomly draw 4 non-collinear sample pairs from the data set and compute the matrix H.
3. Set a threshold T, test all the data, and compute the value D according to the formula above; if D < T, add the point to the inlier set.
4. If the size of the inlier set exceeds the best record, update the record.
The vehicle detection algorithm proposed herein was tested on two videos. Video 1 was shot by the authors, with a resolution of 640 × 480 and a frame rate of 50 frames/second. Video 2 was shot at a real checkpoint, with a resolution of 3576 × 2008 and a frame rate of 25 frames/second. Video 1 contains 114 vehicles in total and video 2 contains 358 vehicles in total; 112 vehicles were detected in video 1, a detection rate of 98.2%, and 356 vehicles were successfully detected in the practical checkpoint video 2, a detection rate of 99.4%.
Test video 2 was shot at a real checkpoint, with a resolution of 3576 × 2008 and a frame rate of 25 frames/second. A background-extraction time below 40 ms is considered to meet the real-time requirement. For video 2, shot by a high-definition checkpoint camera on a real road, the single-Gaussian background model and the mixture-of-Gaussians background model fall far short of the real-time requirement: the mixture-of-Gaussians model needs 659.3 ms to obtain one background frame, far from the 40 ms requirement. The ViBe algorithm takes less time than the Gaussian models but still fails to meet the real-time requirement, whereas the background-extraction algorithm of the present invention has a smaller computational load and satisfies the real-time requirement.
The technical means disclosed in the embodiments of the present invention are not limited to the technical means disclosed in the above embodiments, and also include technical solutions consisting of any combination of the above technical features. It should be pointed out that, for those skilled in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications are also considered to fall within the protection scope of the present invention.

Claims (5)

1. A video-based vehicle feature extraction and matching method, characterized by comprising the following steps:
Step 1: read in the video file:
The video file is read in from a traffic checkpoint camera, and one full-size color image frame is taken, denoted G; the width and height of the color image are W and H respectively;
Step 2: perform background modeling on the video image, ensuring that the background is accurate, stable, and obtained in real time:
If the image G corresponds to the first frame of the video file, the background model is initialized; otherwise, the background model is updated;
The initialization procedure of the background model is as follows:
Multiple frames are sampled at intervals, and the average of the sampled frames is taken as the video background image. The color image is first separated into its R, G, B channels, each represented by an 8-bit binary number, so the pixel values of the R, G, B channels range from 0 to 255; the three channels are each sampled once every p frames, p being a constant, and after N samplings the average of the resulting N frames is computed. Sum_0(x, y), obtained during background initialization, denotes the accumulated pixel-value sum of the N frames at (x, y), and is expressed as:
Sum_0(x, y) = Σ_{i=1}^{N} [Gr_{p·i}(x, y) + Gg_{p·i}(x, y) + Gb_{p·i}(x, y)] / 3
where Gr_{p·i}(x, y), Gg_{p·i}(x, y), Gb_{p·i}(x, y) respectively denote the pixel values at (x, y) of the (p × i)-th frame in the R, G, B channels, p means sampling once every p frames, and N denotes the total number of frames participating in background initialization;
B_0(x, y) denotes the pixel value of the initial background image at (x, y), expressed as
B_0(x, y) = Sum_0(x, y) / N
The update procedure of the background model is as follows:
Sum_t(x, y) denotes the accumulated pixel-value sum at (x, y) of the N frames centered on frame t;
B_t(x, y) denotes the background pixel value at (x, y) for frame t, expressed as
B_t(x, y) = Sum_t(x, y) / N
Step 3: obtain the foreground region containing the moving target using background subtraction:
The detection zone is first marked manually, and the non-detection region is filtered out to reduce interference; the current frame and the background image are then converted to 8-bit grayscale images, and a difference operation is performed to obtain the grayscale foreground image;
where G_t(x, y) and B_t(x, y) respectively denote the pixel values at (x, y) of the t-th video frame and of the background image, and Gray_t(x, y) and Bg_t(x, y) respectively denote the gray values at (x, y) of the t-th video frame and of the background image after grayscale conversion; the grayscale foreground image Fg_t(x, y) is then expressed as:
Fg_t(x, y) = | Gray_t(x, y) − Bg_t(x, y) |
where Fg_t(x, y) denotes the gray value of the t-th foreground frame at (x, y);
Step 4: vehicle detection:
Vehicle detection is carried out by obtaining the vehicle contour; the detailed process is as follows:
Step 4.1: binarize the foreground image Fg_t(x, y) to obtain the binarized foreground image T_t(x, y), expressed as
T_t(x, y) = 1 if Fg_t(x, y) ≥ T, and 0 otherwise
where T denotes the threshold; if the value of Fg_t(x, y) is greater than or equal to the set threshold T, the point is set as a foreground point, otherwise it is a background point;
Step 4.2: perform a morphological transformation on the binary image, applying a closing operation to the binary foreground image T_t(x, y);
Step 4.3: extract the contours of the foreground regions in the binary foreground image using Freeman chain-code following, while rejecting points that clearly do not belong to a contour; then compute the convex hull of each foreground contour using the Graham scan, take the convex hull as the vehicle contour, and finally obtain the vehicle contour;
Step 4.4: carry out vehicle detection:
A record table is first defined to store information about recently detected vehicles, with outdated entries continually deleted; detection is performed once every N frames, and each detected vehicle is processed in turn; any detected vehicle must be compared against all vehicles in the record, with region overlap and center distance determining whether the vehicle is already in the record; if it is, its entry in the record (position, time, path, etc.) is updated, otherwise the new vehicle is added to the record; when a vehicle has been detected more than a certain number of times within a short period and its relative position in the video is suitable, its information is extracted and entered into the database;
Step 5: extract the pHash feature of the vehicle image:
The pHash algorithm generates a fingerprint for each image; the fingerprints of different images are then compared, a smaller distance indicating greater similarity, where the Hamming distance between a compressed image and the original is very small and the Hamming distance between identical images is zero;
Step 6: extract the ISIFT features of the vehicle image; the detailed process is as follows:
Step 6.1: scale-space extremum detection:
Search image positions over all scales, and identify, via a difference-of-Gaussian function, interest points that are potentially invariant to scale and rotation; first, adjacent layers within each octave of the Gaussian pyramid are subtracted to obtain difference-of-Gaussian images; key points are composed of the local extrema of the difference-of-Gaussian space, and the first step is to find the local extrema of the DoG space, comparing each pixel with all of its neighbors to see whether it is larger or smaller than its neighboring points in both the image domain and the scale domain; edge effects and points with weak responses are then excluded;
Step 6.2: key-point localization and orientation:
Determine the position and scale of each key point, and assign its orientation based on gradient directions;
The key-point position is first refined by sub-pixel interpolation; the scale-space DoG function is curve-fitted with a Taylor expansion, and unstable edge-response points are rejected; the principal curvatures are computed from the Hessian matrix at the feature point in order to weed out unstable edge-response points; for each key point detected in the DoG pyramid, the gradient magnitude and orientation distribution of the pixels in a neighborhood window of its Gaussian-pyramid image are computed; for a two-dimensional image, the gradient magnitude m(x, y) and orientation θ(x, y) are defined as:
m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)
θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))
where L denotes the Gaussian-smoothed image at the scale of the key point; the gradients and orientations of the pixels in the key-point neighborhood are accumulated, and when computing the magnitude m(x, y) a Gaussian weighting is applied, so that pixels closer to the key-point center receive larger weights;
after the angle and magnitude of each pixel in the key-point region have been computed, a histogram is used to accumulate the gradient orientations of the pixels in the neighborhood; the gradient histogram divides the 0–360 degree range into 36 bins of 10 degrees each; the main peak of the histogram gives the dominant orientation of the key point, and any peak reaching 80% or more of the main peak is retained as an auxiliary orientation;
Step 6.3: build a descriptor for each key point, describing the key point with a group of vectors:
First, the key-point neighborhood is determined and the coordinate axes within it are rotated so that the x-axis coincides with the dominant orientation of the key point; the coordinates of the pixels in the neighborhood satisfy the following rotation:
x' = x·cos θ − y·sin θ
y' = x·sin θ + y·cos θ
The rotated neighborhood is usually divided into d × d subregions; in each subregion the gradient magnitudes and orientations are computed, and the gradients are accumulated into 8 orientation bins; with d = 4 the feature vector has 128 dimensions in total, the gradient orientations of each subregion being divided into 8 parts of 45° each over 0–360°;
Key points whose dominant orientation lies to the left of the y-axis are "flipped" as a whole to the right side, and only key points whose dominant orientation lies to the right of the y-axis are counted; the ISIFT feature d' is computed over all key points of the image and of its flipped version, satisfying the following formula:
d' = d,  cos θ ≥ 0
where θ denotes the angle of the key-point dominant orientation, d denotes the SIFT feature of the original image or the flipped image, and d' denotes the improved ISIFT feature, i.e. the new ISIFT feature retains only the features of key points whose dominant orientation lies to the right of the y-axis;
Step 7: perform vehicle matching: the ISIFT and pHash algorithms are combined; brute-force matching is first performed with the pHash features, searching the image library for the most similar image; when the Hamming distance between the query image and the similar image is less than a threshold, the match is considered complete, and when the smallest Hamming distance exceeds the threshold, precise matching is performed with the ISIFT algorithm, in combination with the BBF search algorithm and the RANSAC algorithm; the detailed process is as follows:
Step 7.1: perform brute-force matching with the pHash features, searching the image library for the most similar image; when the Hamming distance between the query image and the similar image is less than the threshold, the match is considered complete, otherwise go to step 7.2;
Step 7.2: perform precise matching with the ISIFT algorithm, and screen the ISIFT matching points with the BBF search algorithm and the RANSAC algorithm:
Step 7.2.1: BBF search: BBF is an improved k-d tree nearest-neighbor search algorithm; to eliminate some of the false matches, K = 2 is used to find the nearest-neighbor and second-nearest-neighbor matching points of each feature point; let d1 be the distance between the 128-dimensional vector and its nearest-neighbor feature and d2 the distance to its second-nearest neighbor; the ratio of d1 to d2 is computed, and the match is rejected when the following condition is not satisfied:
d1 / d2 < ratio
where ratio is a constant between 0 and 1;
Step 7.2.2: RANSAC: the SIFT feature matches are further screened with the random sample consensus (RANSAC) algorithm; RANSAC takes a set of observations as input, usually containing some "outliers", and iteratively searches for the best parametric model; points that fit the model are "inliers" and the rest are defined as "outliers"; RANSAC assumes that points in two images taken from different viewpoints are related by a projective transformation, i.e. X1 = H·X2, that is:
s·(x', y', 1)ᵀ = H·(x, y, 1)ᵀ
where s denotes a scale factor and (x, y) and (x', y') respectively denote the coordinates in the left and right images; the optimal homography matrix H is sought, H being a 3 × 3 matrix requiring 4 non-collinear point pairs in total to solve; the RANSAC algorithm computes a homography matrix H from the data set and then tests all the data against the model, so that the value D in the following formula is minimized:
D = Σᵢ ‖ X1ᵢ − H·X2ᵢ ‖².
2. The video-based vehicle feature extraction and matching method according to claim 1, characterized in that in step 4.3, when contours are rejected, the rejection rule is: a contour is rejected when its perimeter or area is below a certain threshold or its shape is anomalous.
3. The video-based vehicle feature extraction and matching method according to claim 1, characterized in that step 5 specifically comprises the following sub-steps:
Step 5.1: shrink the image: the image is reduced to 32 × 32;
Step 5.2: simplify color: convert the image to grayscale;
Step 5.3: compute the DCT: apply the DCT to the image to obtain a 32 × 32 DCT matrix, transforming the image from the pixel domain to the frequency domain; only the low-frequency part of the image is taken, retaining the 8 × 8 block in the upper-left corner of the DCT matrix;
Step 5.4: compute the hash value: compute the mean of the 8 × 8 DCT block; entries greater than the mean are set to 1 and entries less than the mean to 0, yielding a 64-bit integer, the hash fingerprint of the image;
Step 5.5: compute the Hamming distance: during image retrieval, the hash value of each candidate image is computed in the same way and compared with every fingerprint in the database; a smaller Hamming distance indicates a closer match.
4. The video-based vehicle feature extraction and matching method according to claim 1, characterized in that in step 6.3, the new ISIFT feature still keeps the feature d when cos θ < 0 and cos θ ≈ 0.
5. The video-based vehicle feature extraction and matching method according to claim 1, characterized in that the RANSAC algorithm steps in step 7.2.2 are as follows:
1. Iterate K times, repeatedly executing the following steps;
2. Randomly draw 4 non-collinear sample pairs from the data set and compute the matrix H;
3. Set a threshold T, test all the data, and compute the value D according to the formula above; if D < T, add the point to the inlier set;
4. If the size of the inlier set exceeds the best record, update the record.
CN201811052077.8A 2018-09-10 2018-09-10 A kind of vehicle characteristics extraction and matching process based on video Pending CN109409208A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811052077.8A CN109409208A (en) 2018-09-10 2018-09-10 A kind of vehicle characteristics extraction and matching process based on video


Publications (1)

Publication Number Publication Date
CN109409208A true CN109409208A (en) 2019-03-01

Family

ID=65464620


Country Status (1)

Country Link
CN (1) CN109409208A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110069666A (en) * 2019-04-03 2019-07-30 清华大学 The Hash learning method and device kept based on Near-neighbor Structure
CN110083740A (en) * 2019-05-07 2019-08-02 深圳市网心科技有限公司 Video finger print extracts and video retrieval method, device, terminal and storage medium
CN110110608A (en) * 2019-04-12 2019-08-09 国网浙江省电力有限公司嘉兴供电公司 The fork truck speed monitoring method and system of view-based access control model under a kind of overall view monitoring
CN110287783A (en) * 2019-05-18 2019-09-27 天嗣智能信息科技(上海)有限公司 A kind of video monitoring image human figure identification method
CN110516528A (en) * 2019-07-08 2019-11-29 杭州电子科技大学 A kind of moving-target detection and tracking method based under movement background
CN111311766A (en) * 2020-02-24 2020-06-19 电子科技大学 Roadside parking intelligent charging system and method based on license plate recognition and tracking technology
CN112200765A (en) * 2020-09-04 2021-01-08 浙江大华技术股份有限公司 Method and device for determining false-detected key points in vehicle
CN112818989A (en) * 2021-02-04 2021-05-18 成都工业学院 Image matching method based on gradient amplitude random sampling
CN113033435A (en) * 2021-03-31 2021-06-25 苏州车泊特智能科技有限公司 Whole vehicle chassis detection method based on multi-view vision fusion
CN113657378A (en) * 2021-07-28 2021-11-16 讯飞智元信息科技有限公司 Vehicle tracking method, vehicle tracking system and computing device
CN114219836A (en) * 2021-12-15 2022-03-22 北京建筑大学 Unmanned aerial vehicle video vehicle tracking method based on space-time information assistance
CN116189083A (en) * 2023-01-11 2023-05-30 广东汇通信息科技股份有限公司 Dangerous goods identification method for community security inspection assistance
CN116681695A (en) * 2023-07-27 2023-09-01 山东阁林板建材科技有限公司 Quality detection method for anti-deformation template end face
CN117422714A (en) * 2023-12-18 2024-01-19 大陆汽车电子(济南)有限公司 Assembly inspection method, apparatus, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005778A (en) * 2015-08-14 2015-10-28 东南大学 Expressway vehicle detection method based on visual background extraction
JP2015207211A (en) * 2014-04-22 2015-11-19 サクサ株式会社 Vehicle detection device and system, and program
CN105740809A (en) * 2016-01-28 2016-07-06 东南大学 Expressway lane line detection method based on onboard camera
JP2017102634A (en) * 2015-12-01 2017-06-08 三菱電機株式会社 Image processing apparatus and image processing system
CN108229316A (en) * 2017-11-28 2018-06-29 浙江工业大学 A kind of vehicle's contour extracting method based on super-pixel segmentation


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHEN CONG, ET AL.: "An Effective Vehicle Logo Recognition Method for Road Surveillance Images", 《2016 2ND IEEE INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATIONS (ICCC)》 *
SONG JIAJI,ET AL.: "ISIFT: Improving the Performance of SIFT for Mirror Images", 《2016 2ND IEEE INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATIONS(ICCC)》 *
SONG JIAJI: "Research on Video-based Vehicle Information Extraction and Retrieval Technology", China Master's Theses Full-text Database (electronic journal) *


Similar Documents

Publication Publication Date Title
CN109409208A (en) A kind of vehicle characteristics extraction and matching process based on video
CN109949340A (en) Target scale adaptive tracking method based on OpenCV
CN104978567B (en) Vehicle checking method based on scene classification
CN103699905B (en) Method and device for positioning license plate
CN112686812B (en) Bank card inclination correction detection method and device, readable storage medium and terminal
CN104134222B (en) Traffic flow monitoring image detecting and tracking system and method based on multi-feature fusion
CN106683119B (en) Moving vehicle detection method based on aerial video image
CN103035013B (en) A kind of precise motion shadow detection method based on multi-feature fusion
Wang et al. An effective method for plate number recognition
Zhang et al. Detecting and extracting the photo composites using planar homography and graph cut
CN106815583B (en) Method for positioning license plate of vehicle at night based on combination of MSER and SWT
CN105469046B (en) Based on the cascade vehicle model recognizing method of PCA and SURF features
Zhang et al. License plate localization in unconstrained scenes using a two-stage CNN-RNN
CN104182973A (en) Image copying and pasting detection method based on circular description operator CSIFT (Colored scale invariant feature transform)
CN111340855A (en) Road moving target detection method based on track prediction
CN107862319B (en) Heterogeneous high-light optical image matching error eliminating method based on neighborhood voting
CN108491498A (en) A kind of bayonet image object searching method based on multiple features detection
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
CN107644227A (en) A kind of affine invariant descriptor of fusion various visual angles for commodity image search
CN110443295A (en) Improved images match and error hiding reject algorithm
Forczmański et al. Stamps detection and classification using simple features ensemble
CN114463619B (en) Infrared dim target detection method based on integrated fusion features
CN110516527B (en) Visual SLAM loop detection improvement method based on instance segmentation
CN103336964A (en) SIFT image matching method based on module value difference mirror image invariant property
Jin et al. Registration of UAV images using improved structural shape similarity based on mathematical morphology and phase congruency

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-03-01