CN106682087A - Method for retrieving vehicles on basis of sparse codes of features of vehicular ornaments - Google Patents


Info

Publication number
CN106682087A
CN106682087A (application CN201611063148.5A; publication CN 106682087 A)
Authority
CN
China
Prior art keywords
vehicle
image
dictionary
front windshield
sparse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611063148.5A
Other languages
Chinese (zh)
Inventor
赵池航
林盛梅
陈爱伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201611063148.5A priority Critical patent/CN106682087A/en
Publication of CN106682087A publication Critical patent/CN106682087A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/28 Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for retrieving vehicles on the basis of sparse coding of vehicle-ornament features. The method acquires vehicle image data and detects the vehicle regions in the images using gray-level co-occurrence matrix features and support vector machines; locates the ornament-region images within the vehicle images; builds an over-complete dictionary of the ornament-region images; solves the sparse vector of the ornament-region image of the vehicle to be queried; reconstructs the queried ornament-region image and computes the Euclidean distance between each reconstruction and the queried image; and then updates the sparse vector from these distances, sorts its elements by magnitude, and applies a threshold to obtain the retrieval result. Compared with the prior art, retrieving vehicles by ornament features greatly improves retrieval precision and reliability, and provides an important basis for effectively tracking vehicles involved in illegal activity.

Description

Vehicle retrieval method based on vehicle-mounted decoration feature sparse coding
Technical Field
The invention relates to the field of intelligent traffic research, in particular to research on a vehicle image retrieval method.
Background
To address the problems of urban development and achieve sustainable cities, the construction of "smart cities" has become an irreversible trend of urban development in China. Meanwhile, to meet the needs of urban public-security prevention and control and urban management, the Central Political and Legal Affairs Commission, together with the Ministry of Public Security and the Ministry of Industry and Information Technology, launched the "Skynet" project. Skynet uses image acquisition, transmission, control and display equipment together with control software to monitor fixed areas in real time and record information, providing technical support for collecting evidence of vehicle-related crimes in cities. The main problem during evidence collection, however, is that searching the massive stored surveillance data for an offending vehicle is currently done manually; this consumes large amounts of manpower, material and financial resources, and manual searching is highly unreliable. At present, automatic retrieval of suspect-vehicle information from video or picture data relies mainly on automatic recognition of the vehicle's intrinsic attributes: the license-plate number, brand and color. In real cases involving vehicles, however, the vehicles often carry fake or cloned plates, so searching by plate number is useless; likewise, retrieving suspect vehicles by brand and color does little to reduce the workload. Retrieval based on intrinsic vehicle attributes therefore fails to achieve the expected effect on fake-plate vehicles.
An effective solution to this problem is to retrieve the suspect vehicle by the features of its vehicle-mounted ornaments, such as decorations and annual-inspection labels on the windshield.
Disclosure of Invention
The invention aims to effectively solve the problem of vehicle image retrieval. In order to solve the technical problems, the invention adopts the following technical scheme:
a vehicle retrieval method based on vehicle-mounted decoration feature sparse coding comprises the following steps:
1) acquiring vehicle image data, and detecting a vehicle region in an image by adopting a gray level co-occurrence matrix characteristic and a support vector machine;
2) positioning the vehicle-mounted decoration area image in the vehicle image obtained in the step 1) according to the relative position of the vehicle front windshield and the vehicle;
3) constructing an over-complete dictionary of the vehicle-mounted ornament region image by adopting a K-singular value decomposition method;
4) solving the sparse vector of the ornament-region image corresponding to the vehicle to be queried by a sparsity-adaptive matching pursuit algorithm;
5) reconstructing a vehicle-mounted ornament region image to be inquired according to the overcomplete dictionary of the vehicle-mounted ornament region image obtained in the step 3) and the sparse vector obtained in the step 4), and calculating the Euclidean distance between the reconstructed image and the vehicle-mounted ornament region image to be inquired;
6) updating the sparse vector of the vehicle-mounted ornament region image to be inquired according to the Euclidean distance obtained in the step 5), sequencing elements of the sparse vector according to the size, setting a threshold value, obtaining a retrieval result of the vehicle-mounted ornament region image, and obtaining the vehicle to be inquired in the image database.
The vehicle image data in step 1) are captured by 3-megapixel checkpoint ("bayonet") cameras installed at urban road intersections during the period 8:00-18:00 each day; the image data set covers different weather, illumination and vehicle-angle conditions, each picture contains a complete vehicle image, and the original image size is 2048 × 1536 pixels.
The method for detecting the vehicle region in the image by adopting the gray level co-occurrence matrix characteristics and the support vector machine in the step 1) comprises the following specific steps:
11) selecting a picture only containing a vehicle positive sample and a picture only containing a background negative sample, and training to obtain a detected feature library;
12) graying the vehicle image and gridding it into M × N sub-images of size W × H; computing the gray-level co-occurrence matrix features of each grid window, using the feature library trained in step 11) to predict whether each grid belongs to a vehicle part, and averaging the grid-center coordinates of all grids identified as vehicle to obtain the mean center;
13) computing the Euclidean distance between each identified grid center and the mean center, and setting a threshold; grids whose distance exceeds the threshold are treated as background noise; the detected vehicle image is then obtained from the mean coordinates.
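The grid-center filtering of steps 12)-13) can be sketched as follows; the threshold value used here is an assumption (the patent says only "a certain threshold"):

```python
import numpy as np

def filter_vehicle_grids(centers, threshold):
    """Keep grid centers close to the mean center; distant ones are
    treated as background noise (a sketch of step 13)."""
    centers = np.asarray(centers, dtype=float)     # shape (n, 2)
    mean = centers.mean(axis=0)                    # mean grid-center coordinate
    dist = np.linalg.norm(centers - mean, axis=1)  # Euclidean distances to mean
    return centers[dist <= threshold], mean

# Hypothetical grid centers; the last one is an outlier (background noise).
centers = [(10, 10), (12, 11), (11, 9), (90, 95)]
kept, mean = filter_vehicle_grids(centers, threshold=40.0)
```

With these values, only the three clustered centers survive; the detected vehicle region would then be cut around their mean.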
In the step 2), the vehicle-mounted decoration is positioned in a front windshield area of the vehicle, namely the front windshield area of the vehicle is an interested area; and positioning the vehicle front windshield area according to the relative position of the vehicle front windshield area and the whole vehicle.
The vehicle front windshield area positioning specifically comprises:
21) obtaining a coarsely positioned front-windshield image from the relative position relation, then preprocessing it; the preprocessing comprises graying, edge detection with a Canny operator, and projection computation in the transverse and longitudinal directions of the image;
22) finding the left, right, upper and lower boundaries of a front windshield area according to the statistical transverse and longitudinal projection histograms, performing straight line fitting on an edge image of the vehicle front windshield by adopting Hough transformation, calculating the rotation angle of the image, and rotating the original image of the vehicle front windshield area to obtain a horizontal image of the vehicle front windshield area.
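The projection-histogram boundary search of step 22) can be sketched on a binary edge map as below; the Hough-based rotation correction is omitted, and the simple "first/last nonzero projection bin" rule is an assumption standing in for the abrupt-change detection described in the text:

```python
import numpy as np

def projection_bounds(edges):
    """Find left/right and top/bottom windshield boundaries from the
    per-column (transverse) and per-row (longitudinal) projections of
    a binary edge image."""
    col_proj = edges.sum(axis=0)      # transverse projection histogram
    row_proj = edges.sum(axis=1)      # longitudinal projection histogram
    cols = np.nonzero(col_proj)[0]
    rows = np.nonzero(row_proj)[0]
    return int(cols[0]), int(cols[-1]), int(rows[0]), int(rows[-1])

# Synthetic edge map with a rectangular edge region.
edges = np.zeros((10, 12), dtype=int)
edges[2:7, 3:9] = 1
print(projection_bounds(edges))  # (3, 8, 2, 6)
```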
The step 3) of constructing the overcomplete dictionary of the vehicle-mounted ornament region image by adopting a K-singular value decomposition method specifically comprises the following steps:
31) initializing a dictionary D by using a training sample set, and solving an optimal coefficient vector of an input sample by using an orthogonal matching pursuit algorithm;
assuming that the set of all training samples is denoted $Y$, the dictionary $D$ is solved for as the unknown, $x_i$ denotes the sparse vector to be solved, and $T_0$ denotes the sparsity constraint parameter, dictionary learning can be converted into the following problem, namely

$\min_{D,X} \|Y - DX\|_F^2 \quad \text{subject to} \quad \forall i,\ \|x_i\|_0 \le T_0$
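The inner sparse-coding solver of step 31) can be sketched as a generic orthogonal matching pursuit in NumPy; unit-norm dictionary columns are assumed, and this is an illustrative implementation rather than the patent's exact one:

```python
import numpy as np

def omp(D, y, T0):
    """Orthogonal matching pursuit: greedily pick the dictionary column
    most correlated with the residual, then re-fit the selected atoms
    by least squares. T0 is the sparsity constraint."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(T0):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit and update residual
    x[support] = coef
    return x

# With an orthonormal dictionary, OMP recovers an exactly sparse signal.
D = np.eye(4)
y = np.array([0., 2., 0., 3.])
x = omp(D, y, T0=2)
```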
32) Update the dictionary: keep the other columns of $D$ fixed and update column $d_k$ by singular value decomposition, together with its coefficient row $x_T^k$. Assuming the sparse matrix $X$ and the dictionary $D$ otherwise fixed, the penalty term of the objective for a column $d_k$ of the dictionary and the coefficients corresponding to it is

$\|Y - DX\|_F^2 = \Big\|Y - \sum_{j \ne k} d_j x_T^j - d_k x_T^k\Big\|_F^2 = \|E_k - d_k x_T^k\|_F^2$

where the matrix $E_k = Y - \sum_{j \ne k} d_j x_T^j$ denotes the error of all samples with the $d_k$ component removed. Define $\omega_k$ as the set of indices of the samples that actually use the updated column $d_k$, i.e. the positions where the coefficient row $x_T^k$ is nonzero:

$\omega_k = \{\, i \mid 1 \le i \le N,\ x_T^k(i) \ne 0 \,\}$

Define the matrix $\Omega_k$ of size $N \times |\omega_k|$ with ones at $(\omega_k(i), i)$ and zeros elsewhere, and let $E_k^R = E_k \Omega_k$ and $x_R^k = x_T^k \Omega_k$; the penalty term of the objective is then expressed as

$\|E_k \Omega_k - d_k x_T^k \Omega_k\|_F^2 = \|E_k^R - d_k x_R^k\|_F^2$

Now apply singular value decomposition directly to $E_k^R$, obtaining $E_k^R = U \Delta V^T$, where the first column of $U$ is the updated $d_k$, and the updated coefficient vector $x_R^k$ is the first column of $V$ multiplied by $\Delta(1,1)$.
33) Repeat step 32), iterating until the maximum number of iterations defined at initialization is reached, to obtain the final dictionary.
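The dictionary-update sweep of steps 32)-33) can be sketched in NumPy as follows; the restriction matrix $\Omega_k$ is realized by column indexing, and the OMP sparse-coding stage is assumed to have already produced $X$:

```python
import numpy as np

def ksvd_update(D, X, Y):
    """One K-SVD dictionary-update sweep. For each atom d_k used by at
    least one sample, SVD the restricted error matrix E_k^R; the first
    left singular vector replaces d_k, and the first right singular
    vector scaled by the top singular value replaces the coefficients."""
    D, X = D.copy(), X.copy()
    for k in range(D.shape[1]):
        omega = np.nonzero(X[k, :])[0]                 # samples using atom k
        if omega.size == 0:
            continue
        E = Y - D @ X + np.outer(D[:, k], X[k, :])     # error without atom k
        E_R = E[:, omega]                              # restrict to omega_k
        U, s, Vt = np.linalg.svd(E_R, full_matrices=False)
        D[:, k] = U[:, 0]                              # updated atom
        X[k, omega] = s[0] * Vt[0, :]                  # updated coefficients
    return D, X

# On exactly representable data the update leaves the residual at zero.
rng = np.random.default_rng(0)
D0 = np.linalg.qr(rng.standard_normal((8, 4)))[0]      # orthonormal atoms
X0 = np.zeros((4, 10)); X0[0, :5] = 1.0; X0[2, 5:] = 2.0
Y = D0 @ X0
D1, X1 = ksvd_update(D0, X0, Y)
```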
The process of solving the sparse vector of the queried ornament-region image by the sparsity-adaptive matching pursuit algorithm in step 4) is specifically as follows:
In the algorithm, $L$ denotes the initial sparsity, $r_t$ the signal residual, $t$ the iteration count, $\Lambda_t$ the index set after $t$ iterations, $\alpha_j$ the $j$-th column of the dictionary $A$, $A_t$ the set of $L_t$ columns of $A$ selected by the index set $\Lambda_t$, and $\theta_t$ an $L_t \times 1$ column vector; the inputs of the algorithm are the signal $y$ and the step size $S$.
a) initialize: residual $r_0 = y$, $\Lambda_0 = \varnothing$, $L = S$, $t = 1$;
b) let $h = \langle r_{t-1}, A \rangle$; take the $L$ largest-magnitude elements of $h$ and collect their column indices in the dictionary $A$ into the set $S_k$;
c) let $C_k = \Lambda_{t-1} \cup S_k$ and $A_t = \{\alpha_j \mid j \in C_k\}$;
d) find the least-squares solution of $y = A_t \theta_t$, i.e. $\theta_t = (A_t^T A_t)^{-1} A_t^T y$;
e) from $\theta_t$ keep the $L$ entries with the largest absolute values, denoted $\theta_{tL}$; the corresponding $L$ columns of $A_t$ are $A_{tL}$, and the corresponding column indices in $A$ are $\Lambda_{tL}$;
f) update the residual: $r_{new} = y - A_{tL}(A_{tL}^T A_{tL})^{-1} A_{tL}^T y$;
g) if $r_{new} = 0$, go to the next step; if $\|r_{new}\|_2 \ge \|r_{t-1}\|_2$, update the step to $L = L + S$ and return to b) to continue iterating; otherwise set $\Lambda_t = \Lambda_{tL}$, $r_t = r_{new}$, $t = t + 1$ and return to b);
h) the final least-squares solution $\theta_{tL}$ gives the nonzero entries of the sparse vector at the indices $\Lambda_{tL}$; this is the desired sparse vector.
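A minimal NumPy sketch of steps a)-h) follows; the stopping tolerance `tol` and iteration cap `max_iter` are assumptions added for practicality, and the candidate-merge/prune/backtrack logic follows the standard sparsity-adaptive matching pursuit reading of the steps:

```python
import numpy as np

def samp(A, y, step, tol=1e-6, max_iter=50):
    """Sparsity-adaptive matching pursuit: merge the previous support
    with the L best new atoms, least-squares fit, keep the L largest
    coefficients, and grow L by `step` when the residual stops shrinking."""
    n = A.shape[1]
    residual, support, L = y.copy(), np.array([], dtype=int), step
    for _ in range(max_iter):
        h = np.abs(A.T @ residual)
        Sk = np.argsort(h)[-L:]                        # L best-matching columns
        Ck = np.union1d(support, Sk)                   # candidate set
        theta, *_ = np.linalg.lstsq(A[:, Ck], y, rcond=None)
        keep = Ck[np.argsort(np.abs(theta))[-L:]]      # prune to L entries
        theta_L, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        r_new = y - A[:, keep] @ theta_L
        if np.linalg.norm(r_new) < tol:                # residual ~ 0: done
            support = keep
            break
        if np.linalg.norm(r_new) >= np.linalg.norm(residual):
            L += step                                  # sparsity underestimated
        else:
            support, residual = keep, r_new            # accept and iterate
    x = np.zeros(n)
    theta_L, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x[support] = theta_L
    return x

# With an identity dictionary, SAMP recovers a 2-sparse signal exactly.
A = np.eye(6)
y = np.zeros(6); y[1], y[4] = 3.0, -2.0
x = samp(A, y, step=1)
```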
In step 5), the constructed over-complete dictionary and sparse vector are partitioned into $k$ classes, namely

$A = [A_1, A_2, \ldots, A_j, \ldots, A_n] = [D_1, D_2, \ldots, D_k]$

and the image is reconstructed from the segmented elements as $C_k = D_k x_0$.
In step 6), the Euclidean distance between the reconstructed image $C_k$ and the image to be queried $S_k$ is computed; for two $M \times N$ images the distance is

$dist(C_k, S_k) = \sqrt{\sum_{d=1}^{M \times N} \big(C_k^d - S_k^d\big)^2}$

where $C_k^d$ and $S_k^d$ denote the $d$-th pixel of the images $C_k$ and $S_k$ respectively. The sparse vector is then updated from the computed Euclidean distances between the reconstructed images $C_k$ and the image to be queried.
And sequencing according to the updated sparse vectors, and setting a threshold value to obtain a retrieval result.
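Steps 5) and 6) together can be sketched as below; the inverse-distance scoring used to "update" the sparse vector is an assumption, since the text states only that the sparse vector is updated from the Euclidean distances:

```python
import numpy as np

def retrieve(D_blocks, x_blocks, query, top=1):
    """Reconstruct the query from each class block (C_k = D_k x_k),
    compute the Euclidean distance to the query, score each class
    inversely by its distance, and rank the classes."""
    dists = []
    for D_k, x_k in zip(D_blocks, x_blocks):
        C_k = D_k @ x_k                        # class-wise reconstruction
        dists.append(np.linalg.norm(C_k - query))
    dists = np.array(dists)
    scores = 1.0 / (dists + 1e-12)             # smaller distance, higher score
    order = np.argsort(scores)[::-1]           # rank classes by score
    return order[:top], dists

# Two hypothetical class blocks; the first reconstructs the query exactly.
D1, D2 = np.eye(3), np.eye(3)
q = np.array([1., 2., 3.])
best, d = retrieve([D1, D2], [q, np.zeros(3)], q)
```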
First, pictures containing only vehicle positive samples and pictures containing only background negative samples are selected for gray-level co-occurrence matrix feature extraction, and a detection feature library is trained. Second, the image gray levels are quantized into 16 levels, so the gray-level co-occurrence matrix is 16 × 16-dimensional; the matrix is computed from the probability distribution of gray-value pairs at a chosen relative pixel offset. The input image is grayed and gridded into M × N sub-images of size W × H; the co-occurrence-matrix features of each grid window are computed, and the trained feature library predicts whether each grid belongs to a vehicle part. The grid-center coordinates of all identified vehicle grids are averaged to obtain the mean center; the Euclidean distance between each grid center and the mean is computed and a threshold is set; grids whose distance exceeds the threshold are treated as background noise; finally, the detected vehicle image is obtained from the mean coordinates.
Compared with the prior art, the technical scheme carries out vehicle retrieval based on the characteristics of the vehicle-mounted ornaments, greatly increases the retrieval precision and reliability, and provides important basis for effectively solving the problem of illegal vehicle tracking.
Drawings
FIG. 1 is a process for vehicle front windshield ROI localization according to the present invention.
Detailed Description
The technical solution is further explained below with reference to specific embodiments:
the method comprises the steps of firstly training a feature library, selecting a picture only containing a vehicle positive sample and a picture only containing a background negative sample to perform gray level co-occurrence matrix feature extraction, training to obtain a detected feature library, quantizing the gray level of an image, wherein the quantized grade is divided into 16 grades, so that the dimension of a gray level co-occurrence matrix is 16 × 16 dimensions, and selectingFirstly, carrying out graying processing on an input image, gridding the image into M × N subgraphs with the size of W × H, then, calculating the gray level co-occurrence matrix characteristic of each grid window image, utilizing the trained characteristic library to predict whether the grid image belongs to a vehicle part, carrying out statistics on the coordinates of the grid centers of all the identified vehicle areas to obtain the mean value of the grid center coordinates, then, carrying out Euclidean distance calculation according to the grid center coordinates and the mean value of each identified vehicle area, setting a certain threshold value, if the calculated distance is greater than the set mean value, determining the grid area as background noise, and finally, obtaining the detected vehicle image according to the mean value of the coordinates.
The second step locates the vehicle-mounted ornaments according to the relative position of the front windshield within the whole vehicle. Ornaments such as decorations and labels are mainly located in the front-windshield region, so after the vehicle is detected the front-windshield region must be located; this is the region of interest (ROI) in this patent, and it is positioned by its position relative to the vehicle as a whole. For example, as shown in fig. 1(a), for a detected vehicle picture with the origin at the lower-left corner, let the picture width be Width and its height be Height; the front windshield starts 0.4 × Height from the bottom of the picture, and the whole region of interest is 0.35 × Height high and 0.85 × Width wide. The coarsely positioned windshield obtained from this relative position relation is shown in fig. 1(b). After coarse positioning, the region is positioned precisely, which requires a series of preprocessing steps: 1) gray the windshield region; 2) apply a Canny operator for edge detection; 3) compute the projections in the transverse and longitudinal directions to find the left/right and top/bottom boundaries. Because the windshield's boundaries produce distinct edges, the projection histograms change abruptly there, so the left/right and top/bottom boundaries of the windshield can be read directly from the statistical transverse and longitudinal projection histograms.
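The coarse ROI arithmetic described above (0.4 × Height offset from the bottom, a region 0.35 × Height by 0.85 × Width) can be sketched as a NumPy crop; horizontal centring of the region is an assumption, as the text does not give the horizontal offset:

```python
import numpy as np

def windshield_roi(img):
    """Coarse front-windshield ROI from relative positions: the region
    starts 0.4*Height above the image bottom and spans 0.35*Height in
    height and 0.85*Width in width (centred horizontally)."""
    H, W = img.shape[:2]
    bottom = int(0.4 * H)             # offset of ROI bottom from image bottom
    top = int(0.4 * H + 0.35 * H)     # offset of ROI top from image bottom
    roi_w = int(0.85 * W)
    x0 = (W - roi_w) // 2
    # NumPy rows count from the top, so convert the bottom-based offsets.
    return img[H - top : H - bottom, x0 : x0 + roi_w]

# On the stated 2048 x 1536 capture size the ROI is about 538 x 1740.
img = np.zeros((1536, 2048))
roi = windshield_roi(img)
```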
In some front-windshield images, however, the windshield region found by coordinate positioning is rotated by some angle, so the windshield sub-image must be corrected back to a horizontal position. To correct the image, the inclination angle of the windshield is computed: Hough-transform line fitting is applied to the windshield edge image, the rotation angle is calculated, and the original windshield image is rotated to obtain a horizontal windshield image.
The third step: constructing a dictionary of sparse representations is an important prerequisite for sparse coding implementations. This patent adopts K Singular Value Decomposition (KSVD) as the dictionary updating algorithm. The main processes of the KSVD algorithm are as follows:
a) initializing a dictionary D by using a training sample set;
b) solving a sparse coefficient X by using an orthogonal matching pursuit algorithm based on an over-complete dictionary D;
c) define the matrix $\Omega_k$ and restrict the penalty term to the samples that use the current atom;
d) apply singular value decomposition to $E_k^R$, updating the over-complete dictionary column and its sparse coefficients;
e) repeat steps b) to d), iterating until the maximum number of iterations defined at initialization is reached, to obtain the final dictionary.
The sparsely representing dictionary is obtained by training on known samples; the dictionary A is formed by arranging the sample images by rows or columns and is assumed known. When solving for the sparse vectors, the images must be unfolded into one-dimensional vectors by rows or columns; the unfolded image signals have high dimension and the solution is computationally expensive, so the first step of sparse coding is to choose a dictionary-learning method. The dictionary-learning algorithm selected in this patent is the commonly used K-singular value decomposition (K-SVD) algorithm.
The main processes of the KSVD algorithm are as follows:
(1) initializing a dictionary D by using a training sample set, and solving an optimal coefficient vector of an input sample by using an orthogonal matching pursuit algorithm;
Assuming that the set of all training samples is denoted $Y$, the dictionary $D$ is now solved for as the unknown, $x_i$ denotes the sparse vector to be solved, and $T_0$ denotes the sparsity constraint parameter; dictionary learning can be converted into the following problem, namely

$\min_{D,X} \|Y - DX\|_F^2 \quad \text{subject to} \quad \forall i,\ \|x_i\|_0 \le T_0 \qquad (1)$

(2) Update the dictionary: keep the other columns of $D$ fixed and update column $d_k$ by singular value decomposition, together with its coefficient row $x_T^k$. Assuming the sparse matrix $X$ and the dictionary $D$ otherwise fixed, the object of interest is a certain column $d_k$ of the dictionary and the coefficients corresponding to it; the penalty term of the objective can be expressed as

$\|Y - DX\|_F^2 = \Big\|Y - \sum_{j \ne k} d_j x_T^j - d_k x_T^k\Big\|_F^2 = \|E_k - d_k x_T^k\|_F^2 \qquad (2)$

where the matrix $E_k = Y - \sum_{j \ne k} d_j x_T^j$ denotes the error of all samples with the $d_k$ component removed. However, the coefficient row obtained directly at this point would not be sparse, so the dictionary is not updated that way; instead define $\omega_k$ as the set of indices of the samples that actually use the updated column $d_k$, i.e. the positions where the coefficient row $x_T^k$ is nonzero:

$\omega_k = \{\, i \mid 1 \le i \le N,\ x_T^k(i) \ne 0 \,\}$

Define the matrix $\Omega_k$ of size $N \times |\omega_k|$ with ones at $(\omega_k(i), i)$ and zeros elsewhere, and let $E_k^R = E_k \Omega_k$ and $x_R^k = x_T^k \Omega_k$; formula (2) can then be expressed as

$\|E_k \Omega_k - d_k x_T^k \Omega_k\|_F^2 = \|E_k^R - d_k x_R^k\|_F^2$

Now apply singular value decomposition directly to $E_k^R$, obtaining $E_k^R = U \Delta V^T$, where the first column of $U$ is the updated $d_k$, and the updated coefficient vector $x_R^k$ is the first column of $V$ multiplied by $\Delta(1,1)$.
(3) Repeat step (2), iterating until the maximum number of iterations defined at initialization is reached, to obtain the final dictionary.
The fourth step: the sparsity-adaptive matching pursuit algorithm does not require the sparsity as a prerequisite; instead it estimates and updates the sparsity during processing. At the start of the algorithm the sparsity is initialized to L, and the sparsity and the source image are estimated continuously so as to reconstruct the image.
In the algorithm, $r_t$ denotes the signal residual, $t$ the iteration count, $\Lambda_t$ the index set after $t$ iterations, $\alpha_j$ the $j$-th column of the dictionary $A$, $A_t$ the set of $L_t$ columns of $A$ selected by the index set $\Lambda_t$, and $\theta_t$ an $L_t \times 1$ column vector. The inputs of the algorithm are the signal $y$, the built dictionary $A$, and the step size $S$. The sparsity-adaptive matching pursuit proceeds as follows:
a) initialize: residual $r_0 = y$, $\Lambda_0 = \varnothing$, $L = S$, $t = 1$;
b) let $h = \langle r_{t-1}, A \rangle$; take the $L$ largest-magnitude elements of $h$ and collect their column indices in the dictionary $A$ into the set $S_k$;
c) let $C_k = \Lambda_{t-1} \cup S_k$ and $A_t = \{\alpha_j \mid j \in C_k\}$;
d) find the least-squares solution of $y = A_t \theta_t$, i.e. $\theta_t = (A_t^T A_t)^{-1} A_t^T y$;
e) from $\theta_t$ keep the $L$ entries with the largest absolute values, denoted $\theta_{tL}$; the corresponding $L$ columns of $A_t$ are $A_{tL}$, and the corresponding column indices in $A$ are $\Lambda_{tL}$;
f) update the residual: $r_{new} = y - A_{tL}(A_{tL}^T A_{tL})^{-1} A_{tL}^T y$;
g) if $r_{new} = 0$, go to the next step; if $\|r_{new}\|_2 \ge \|r_{t-1}\|_2$, update the step to $L = L + S$ and return to b) to continue iterating; otherwise set $\Lambda_t = \Lambda_{tL}$, $r_t = r_{new}$, $t = t + 1$ and return to b);
h) the final least-squares solution $\theta_{tL}$ gives the nonzero entries of the sparse vector at the indices $\Lambda_{tL}$; this is the desired sparse vector.
The fifth step: the built overcomplete dictionary and sparse vectors are classified into k classes, i.e.
$A = [A_1, A_2, \ldots, A_j, \ldots, A_n] = [D_1, D_2, \ldots, D_k] \qquad (6)$

and the image is reconstructed from the segmented elements as $C_k = D_k x_0$.
The sixth step: compute the Euclidean distance between the reconstructed image $C_k$ and the image to be queried $S_k$ according to the Euclidean distance formula; for two $M \times N$ images $C_k$ and $S_k$ the distance is

$dist(C_k, S_k) = \sqrt{\sum_{d=1}^{M \times N} \big(C_k^d - S_k^d\big)^2}$

where $C_k^d$ and $S_k^d$ denote the $d$-th pixel of the images $C_k$ and $S_k$ respectively.
The sparse vector is then updated from the computed Euclidean distances between the reconstructed images $C_k$ and the image to be queried.
And sequencing according to the updated sparse vectors, and setting a threshold value to obtain a retrieval result.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A vehicle retrieval method based on vehicle-mounted decoration feature sparse coding is characterized by comprising the following steps: the method comprises the following steps:
1) acquiring vehicle image data, and detecting a vehicle region in an image by adopting a gray level co-occurrence matrix characteristic and a support vector machine;
2) positioning the vehicle-mounted decoration area image in the vehicle image obtained in the step 1) according to the relative position of the vehicle front windshield and the vehicle;
3) constructing an over-complete dictionary of the vehicle-mounted ornament region image by adopting a K-singular value decomposition method;
4) solving the sparse vector of the ornament-region image corresponding to the vehicle to be queried by a sparsity-adaptive matching pursuit algorithm;
5) reconstructing a vehicle-mounted ornament region image to be inquired according to the overcomplete dictionary of the vehicle-mounted ornament region image obtained in the step 3) and the sparse vector obtained in the step 4), and calculating the Euclidean distance between the reconstructed image and the vehicle-mounted ornament region image to be inquired;
6) updating the sparse vector of the vehicle-mounted ornament region image to be inquired according to the Euclidean distance obtained in the step 5), sequencing elements of the sparse vector according to the size, setting a threshold value, obtaining a retrieval result of the vehicle-mounted ornament region image, and obtaining the vehicle to be inquired in the image database.
2. The vehicle retrieval method according to claim 1, characterized in that: the vehicle image data in step 1) are captured by 3-megapixel checkpoint ("bayonet") cameras installed at urban road intersections during the period 8:00-18:00 each day; the image data set covers different weather, lighting and vehicle-angle conditions, and each picture contains a complete vehicle image, with the original image size being 2048 × 1536 pixels.
3. The vehicle retrieval method according to claim 1, characterized in that: the method for detecting the vehicle region in the image by adopting the gray level co-occurrence matrix characteristics and the support vector machine in the step 1) comprises the following specific steps:
11) selecting a picture only containing a vehicle positive sample and a picture only containing a background negative sample, and training to obtain a detected feature library;
12) graying the vehicle image and gridding it into M × N sub-images of size W × H; computing the gray-level co-occurrence matrix features of each grid window, using the feature library trained in step 11) to predict whether each grid belongs to a vehicle part, and averaging the grid-center coordinates of all grids identified as vehicle to obtain the mean center;
13) computing the Euclidean distance between each identified grid center and the mean center, and setting a threshold; grids whose distance exceeds the threshold are treated as background noise; the detected vehicle image is then obtained from the mean coordinates.
4. The vehicle retrieval method according to claim 1, characterized in that: in the step 2), the vehicle-mounted decoration is positioned in a front windshield area of the vehicle, namely the front windshield area of the vehicle is an interested area; and positioning the vehicle front windshield area according to the relative position of the vehicle front windshield area and the whole vehicle.
5. The vehicle retrieval method according to claim 4, characterized in that: the vehicle front windshield area positioning specifically comprises:
21) obtain an image of the coarsely located vehicle front windshield area from the relative position relation, and preprocess it: the preprocessing comprises grayscale conversion, edge detection with the Canny operator, and projection calculation along the transverse and longitudinal directions of the image;
22) find the left, right, upper and lower boundaries of the front windshield area from the computed transverse and longitudinal projection histograms, fit straight lines to the edge image of the vehicle front windshield using the Hough transform, compute the rotation angle of the image, and rotate the original windshield image accordingly to obtain a horizontal image of the vehicle front windshield area.
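The boundary search and deskewing of step 22) can be illustrated with a short numpy sketch. The threshold fraction `frac`, the function names, and the least-squares line fit (standing in for the claimed Hough-transform fit) are my own assumptions:

```python
import numpy as np

def windshield_bounds(edges, frac=0.3):
    """Boundary search of step 22): read the left/right/top/bottom borders
    of the windshield off the column (transverse) and row (longitudinal)
    projection histograms of a binary edge map.  Taking the outermost bins
    that reach `frac` of the histogram maximum is an assumption; the claim
    only states that boundaries are found from the histograms."""
    col_proj = edges.sum(axis=0)
    row_proj = edges.sum(axis=1)
    cols = np.flatnonzero(col_proj >= frac * col_proj.max())
    rows = np.flatnonzero(row_proj >= frac * row_proj.max())
    return cols[0], cols[-1], rows[0], rows[-1]   # left, right, top, bottom

def skew_angle_deg(edge_points):
    """Rotation-angle estimate from the windshield's lower-edge pixels via
    least-squares line fitting; the claim uses a Hough-transform line fit
    instead, which is more robust to outliers."""
    ys, xs = np.asarray(edge_points, dtype=np.float64).T
    slope = np.polyfit(xs, ys, 1)[0]
    return float(np.degrees(np.arctan(slope)))
```

The returned angle would then be passed to an image-rotation routine (e.g. an affine warp) to level the windshield region.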
6. The vehicle retrieval method according to claim 1, characterized in that: the step 3) of constructing the overcomplete dictionary of the vehicle ornament region image by the K-singular value decomposition (K-SVD) method specifically comprises the following steps:
31) initializing a dictionary D by using a training sample set, and solving an optimal coefficient vector of an input sample by using an orthogonal matching pursuit algorithm;
Let Y denote the set of all training samples, D the dictionary to be solved (the unknown quantity), X the sparse coefficient matrix to be solved (with i-th column x_i), and T_0 the sparsity constraint parameter. Dictionary learning can then be converted into the following optimization problem:
$$\min_{D,X}\ \|Y - DX\|_F^2 \quad \text{subject to} \quad \forall i,\ \|x_i\|_0 \le T_0$$
32) Update the dictionary: keeping the other columns of D unchanged, update one column d_k at a time by singular value decomposition, together with its corresponding coefficient row x_T^k. Assuming the sparse matrix X and the remaining columns of D are fixed, the penalty term of the target signal for column d_k is
$$\|Y - DX\|_F^2 = \Big\|Y - \sum_{j=1}^{K} d_j x_T^j\Big\|_F^2 = \Big\|\Big(Y - \sum_{j \neq k} d_j x_T^j\Big) - d_k x_T^k\Big\|_F^2 = \big\|E_k - d_k x_T^k\big\|_F^2$$
where the matrix E_k denotes the error of all samples after the contribution of the vector d_k has been removed. Define ω_k as the set of indices of the samples that use the dictionary column d_k, i.e. the positions at which the coefficient row x_T^k is nonzero:
$$\omega_k = \{\, i \mid 1 \le i \le N,\ x_T^k(i) \ne 0 \,\}$$
Define Ω_k as an N × |ω_k| matrix with ones at the positions (ω_k(i), i) and zeros elsewhere, and let E_k^R = E_k Ω_k and x_R^k = x_T^k Ω_k. The penalty term of the target signal is then expressed as
$$\big\|E_k \Omega_k - d_k x_T^k \Omega_k\big\|_F^2 = \big\|E_k^R - d_k x_R^k\big\|_F^2$$
At this point, perform singular value decomposition directly on E_k^R, obtaining E_k^R = UΔV^T; the first column of U is the updated dictionary column d_k, and the first column of V multiplied by Δ(1,1) is the updated coefficient vector x_R^k.
33) Repeat step 32), iterating until the maximum number of iterations defined at initialization is reached, to obtain the final dictionary.
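Steps 31)-33) amount to the standard K-SVD loop: sparse-code all samples with orthogonal matching pursuit, then update each dictionary column (and its coefficient row) by a rank-1 SVD of the restricted error E_k^R. A minimal numpy sketch follows; the hyperparameters (atom count, iteration count, initialization from random training samples) are chosen for illustration rather than taken from the patent:

```python
import numpy as np

def omp(D, y, T0):
    """Orthogonal matching pursuit: greedy T0-sparse coding of y over D."""
    r, idx = y.astype(np.float64).copy(), []
    coef = np.zeros(0)
    for _ in range(T0):
        k = int(np.argmax(np.abs(D.T @ r)))
        if k in idx:
            break                      # atom already selected; residual stuck
        idx.append(k)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, T0, n_iter=15, seed=0):
    """Minimal K-SVD (steps 31-33): alternate OMP sparse coding with a
    rank-1 SVD update of each dictionary column and its coefficient row."""
    rng = np.random.default_rng(seed)
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(np.float64)
    D /= np.linalg.norm(D, axis=0) + 1e-12        # normalized initial atoms
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iter):
        X = np.column_stack([omp(D, y, T0) for y in Y.T])   # step 31)
        for k in range(n_atoms):                            # step 32)
            omega = np.flatnonzero(X[k])          # samples that use atom k
            if omega.size == 0:
                continue
            # restricted error E_k^R: error with atom k's contribution removed
            Ek = Y[:, omega] - D @ X[:, omega] + np.outer(D[:, k], X[k, omega])
            U, S, Vt = np.linalg.svd(Ek, full_matrices=False)
            D[:, k] = U[:, 0]                     # updated atom d_k
            X[k, omega] = S[0] * Vt[0]            # updated coefficient row
    return D, X
```

On noiseless synthetic data drawn from a true sparse model, a few iterations of this loop typically reduce the representation error substantially.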
7. The vehicle retrieval method according to claim 1, characterized in that: the process of solving the sparse vector of the vehicle ornament region image to be queried by the sparsity-adaptive matching pursuit (SAMP) algorithm in the step 4) is specifically as follows:
in the algorithm, L denotes the initial sparsity level, r_t the signal residual, t the number of iterations, Λ_t the index set after t iterations, a_j the j-th column of the dictionary A, A_t the set of columns selected from A by the index set Λ_t (the number of columns being L_t), and θ_t an L_t × 1 column vector; the inputs of the algorithm are the signal y and the step size S;
a) initialize the signal residual r_0 = y, Λ_0 = ∅, L = S, t = 1;
b) compute h = ⟨r_{t−1}, A⟩ = A^T r_{t−1}, take the L largest-magnitude elements of h, and record their column indices in the dictionary A as the set S_k;
c) let C_k = Λ_{t−1} ∪ S_k, and A_t = {a_j}, where j runs over all indices in the set C_k;
d) find the least-squares solution of y = A_t θ_t, i.e. θ̂_t = (A_t^T A_t)^{−1} A_t^T y;
e) take the L entries of θ̂_t with the largest absolute values, denoted θ̂_{tL}; the corresponding L columns of A_t form A_{tL}, and the corresponding L column indices in A form Λ_{tL};
f) update the residual: r_new = y − A_{tL}(A_{tL}^T A_{tL})^{−1} A_{tL}^T y;
g) if r_new = 0, go to the next step; if ‖r_new‖_2 ≥ ‖r_{t−1}‖_2, update the sparsity level to L + S and return to b) to continue iterating; otherwise set Λ_t = Λ_{tL}, r_t = r_new, t = t + 1, and return to b);
h) obtain the final least-squares solution θ̂_t: the recovered vector has nonzero entries at the indices Λ_{tL}, with the values θ̂_{tL} obtained in the last iteration; this is the desired sparse vector.
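Steps a)-h) describe the standard SAMP algorithm. A compact numpy sketch follows; the numerical stopping tolerance and iteration cap are my own additions, since the claim only specifies the exact-zero residual test:

```python
import numpy as np

def samp(A, y, step, tol=1e-6, max_iter=100):
    """Sparsity-adaptive matching pursuit (steps a-h): recovers a sparse
    vector without knowing its sparsity in advance, enlarging the support
    size L by `step` whenever the residual stops shrinking."""
    n = A.shape[1]
    L = step
    support = np.array([], dtype=int)
    r = y.astype(np.float64).copy()
    for _ in range(max_iter):
        Sk = np.argsort(-np.abs(A.T @ r))[:L]        # b) candidate columns
        Ck = np.union1d(support, Sk)                 # c) merged index set
        theta, *_ = np.linalg.lstsq(A[:, Ck], y, rcond=None)   # d)
        F = Ck[np.argsort(-np.abs(theta))[:L]]       # e) keep L largest
        theta_F, *_ = np.linalg.lstsq(A[:, F], y, rcond=None)
        r_new = y - A[:, F] @ theta_F                # f) residual update
        if np.linalg.norm(r_new) < tol:              # g) converged
            support = F
            break
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            L += step                                # stage switch: grow L
        else:
            support, r = F, r_new                    # accept new support
    x = np.zeros(n)                                  # h) final solution
    if support.size:
        theta_F, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[support] = theta_F
    return x
```

For a signal that truly is sparse over A, the residual collapses once the correct support size is reached, which is what makes the method sparsity-adaptive.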
8. The vehicle retrieval method according to claim 1, characterized in that: in the step 5), the constructed overcomplete dictionary and the solved sparse vector are partitioned into k classes, namely
$$A = [A_1, A_2, \ldots, A_j, \ldots, A_n] = [D_1, D_2, \ldots, D_k]$$
and the image is reconstructed from the partitioned elements class by class: C_k = D_k x_{0,k}, where x_{0,k} denotes the sparse coefficients corresponding to the class-k dictionary block D_k.
9. The vehicle retrieval method according to claim 1, characterized in that: in the step 6), the distance between the reconstructed image C_k and the image to be queried S_k is computed by the Euclidean distance formula; for two M × N-dimensional images C_k and S_k it is calculated as
$$ED(C_k, S_k) = \Big[\sum_{d=1}^{MN} \big(C_k^d - S_k^d\big)^2\Big]^{1/2}$$
where C_k^d and S_k^d denote the d-th pixel of the images C_k and S_k, respectively.
The sparse vector is then updated with the computed Euclidean distances between each reconstructed image C_k and the image to be queried, i.e.
$$x_0' = \Big[\frac{|x_{0,1}|}{ED_1};\ \frac{|x_{0,2}|}{ED_2};\ \ldots;\ \frac{|x_{0,k}|}{ED_k}\Big]$$
Sort according to the updated sparse vector and set a threshold to obtain the retrieval result.
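Steps 5)-6) — class-wise reconstruction, Euclidean-distance reweighting of the sparse coefficients, and ranking — can be sketched as below. Aggregating each class's reweighted coefficients into a single score by their sum is my interpretation; the claim only says the updated sparse vector is sorted against a threshold:

```python
import numpy as np

def rerank_by_reconstruction(D_blocks, x_blocks, s):
    """Steps 5)-6): reconstruct C_k = D_k x_{0,k} from each class's
    dictionary block, compute the Euclidean distance ED_k between C_k and
    the query image s, and reweight each class's coefficients as
    |x_{0,k}| / ED_k.  Summing the reweighted coefficients into one score
    per class (larger = better match) is an assumed ranking rule."""
    scores = []
    for D_k, x_k in zip(D_blocks, x_blocks):
        C_k = D_k @ x_k                            # per-class reconstruction
        ed = max(np.linalg.norm(C_k - s), 1e-12)   # Euclidean distance ED_k
        scores.append(np.sum(np.abs(x_k)) / ed)
    scores = np.array(scores)
    order = np.argsort(-scores)                    # best-matching class first
    return scores, order
```

A class whose dictionary block reconstructs the query closely gets a small ED_k and therefore a large score, so it rises to the top of the retrieval ranking.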
CN201611063148.5A 2016-11-28 2016-11-28 Method for retrieving vehicles on basis of sparse codes of features of vehicular ornaments Pending CN106682087A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611063148.5A CN106682087A (en) 2016-11-28 2016-11-28 Method for retrieving vehicles on basis of sparse codes of features of vehicular ornaments


Publications (1)

Publication Number Publication Date
CN106682087A true CN106682087A (en) 2017-05-17

Family

ID=58866773


Country Status (1)

Country Link
CN (1) CN106682087A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103314A (en) * 2017-06-15 2017-08-29 浙江南自智能科技股份有限公司 A kind of fake license plate vehicle retrieval system based on machine vision
CN108182712A (en) * 2017-12-07 2018-06-19 西安万像电子科技有限公司 Image processing method, apparatus and system
CN111612087A (en) * 2020-05-28 2020-09-01 北京益嘉阳光科技发展有限公司 Generation method of image feature dictionary of TEDS (train test data System) of motor train unit
CN112104869A (en) * 2020-11-10 2020-12-18 光谷技术股份公司 Video big data storage and transcoding optimization system
CN112699829A (en) * 2021-01-05 2021-04-23 山东交通学院 Vehicle weight identification method and system based on depth feature and sparse measurement projection
CN113657378A (en) * 2021-07-28 2021-11-16 讯飞智元信息科技有限公司 Vehicle tracking method, vehicle tracking system and computing device
CN114860976A (en) * 2022-04-29 2022-08-05 南通智慧交通科技有限公司 Image data query method and system based on big data
CN117390812A (en) * 2023-12-11 2024-01-12 江西少科智能建造科技有限公司 CAD drawing warm ventilation pipe structured information extraction method and system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103020647A (en) * 2013-01-08 2013-04-03 西安电子科技大学 Image classification method based on hierarchical SIFT (scale-invariant feature transform) features and sparse coding
CN103345511A (en) * 2013-07-04 2013-10-09 西北工业大学 Remote sensing image searching method based on sparse representation
CN103530366A (en) * 2013-10-12 2014-01-22 湖北微模式科技发展有限公司 Vehicle searching method and system based on user-defined features
CN105930812A (en) * 2016-04-27 2016-09-07 东南大学 Vehicle brand type identification method based on fusion feature sparse coding model


Non-Patent Citations (5)

Title
ANDYJEE: "A Brief Discussion of Compressed Sensing (27): Sparsity Adaptive Matching Pursuit (SAMP), a Compressed-Sensing Reconstruction Algorithm", <https://www.cnblogs.com/andyjee/p/5137674.html> *
MICHAL AHARON, ET AL: "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation", IEEE TRANSACTIONS ON SIGNAL PROCESSING *
LIU Yu, et al: "Sparse Representation: Fundamental Theory and Typical Applications", 31 October 2014 *
Academic Affairs Office of Taizhou University: "Selected Outstanding Undergraduate Graduation Projects (Theses) of the Class of 2009, Volume II", 31 October 2009 *
LI Tao, et al: "Applications and Practice of Data Mining: Case Studies in the Big Data Era", 31 October 2013 *

Cited By (14)

Publication number Priority date Publication date Assignee Title
CN107103314A (en) * 2017-06-15 2017-08-29 浙江南自智能科技股份有限公司 A kind of fake license plate vehicle retrieval system based on machine vision
CN108182712B (en) * 2017-12-07 2021-06-04 西安万像电子科技有限公司 Image processing method, device and system
CN108182712A (en) * 2017-12-07 2018-06-19 西安万像电子科技有限公司 Image processing method, apparatus and system
CN111612087A (en) * 2020-05-28 2020-09-01 北京益嘉阳光科技发展有限公司 Generation method of image feature dictionary of TEDS (train test data System) of motor train unit
CN111612087B (en) * 2020-05-28 2023-07-14 北京益嘉阳光科技发展有限公司 Method for generating image feature dictionary of EMUs TEDS system
CN112104869A (en) * 2020-11-10 2020-12-18 光谷技术股份公司 Video big data storage and transcoding optimization system
CN112104869B (en) * 2020-11-10 2021-02-02 光谷技术有限公司 Video big data storage and transcoding optimization system
CN112699829A (en) * 2021-01-05 2021-04-23 山东交通学院 Vehicle weight identification method and system based on depth feature and sparse measurement projection
CN112699829B (en) * 2021-01-05 2022-08-30 山东交通学院 Vehicle weight identification method and system based on depth feature and sparse measurement projection
CN113657378A (en) * 2021-07-28 2021-11-16 讯飞智元信息科技有限公司 Vehicle tracking method, vehicle tracking system and computing device
CN113657378B (en) * 2021-07-28 2024-04-26 讯飞智元信息科技有限公司 Vehicle tracking method, vehicle tracking system and computing device
CN114860976A (en) * 2022-04-29 2022-08-05 南通智慧交通科技有限公司 Image data query method and system based on big data
CN117390812A (en) * 2023-12-11 2024-01-12 江西少科智能建造科技有限公司 CAD drawing warm ventilation pipe structured information extraction method and system
CN117390812B (en) * 2023-12-11 2024-03-08 江西少科智能建造科技有限公司 CAD drawing warm ventilation pipe structured information extraction method and system

Similar Documents

Publication Publication Date Title
CN106682087A (en) Method for retrieving vehicles on basis of sparse codes of features of vehicular ornaments
Bang et al. Encoder–decoder network for pixel‐level road crack detection in black‐box images
Guan et al. Robust traffic-sign detection and classification using mobile LiDAR data with digital images
CN103034863B (en) The remote sensing image road acquisition methods of a kind of syncaryon Fisher and multiple dimensioned extraction
CN101551809B (en) Search method of SAR images classified based on Gauss hybrid model
CN114120102A (en) Boundary-optimized remote sensing image semantic segmentation method, device, equipment and medium
CN106650731B (en) Robust license plate and vehicle logo recognition method
CN107992819B (en) Method and device for determining vehicle attribute structural features
CN103400151A (en) Optical remote-sensing image, GIS automatic registration and water body extraction integrated method
CN110263635B (en) Marker detection and identification method based on structural forest and PCANet
CN111179152A (en) Road sign identification method and device, medium and terminal
CN109635733B (en) Parking lot and vehicle target detection method based on visual saliency and queue correction
CN113486886B (en) License plate recognition method and device in natural scene
CN109635789B (en) High-resolution SAR image classification method based on intensity ratio and spatial structure feature extraction
CN103955913A (en) SAR image segmentation method based on line segment co-occurrence matrix characteristics and regional maps
CN106845496B (en) Fine target identification method and system
CN104077782A (en) Satellite-borne remote sense image matching method
CN114596500A (en) Remote sensing image semantic segmentation method based on channel-space attention and DeeplabV3plus
CN113762278B (en) Asphalt pavement damage identification method based on target detection
CN103353941B (en) Natural marker registration method based on viewpoint classification
CN103871062A (en) Lunar surface rock detection method based on super-pixel description
CN106407959A (en) Low-illumination complicated background license plate positioning method based on wavelet transform and SVM
Hu et al. Automatic extraction of main road centerlines from high resolution satellite imagery using hierarchical grouping
CN113496221B (en) Point supervision remote sensing image semantic segmentation method and system based on depth bilateral filtering
CN107292268A (en) The SAR image semantic segmentation method of quick ridge ripple deconvolution Structure learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170517