CN106778777B - Vehicle matching method and system - Google Patents


Info

Publication number
CN106778777B
Authority
CN
China
Prior art keywords
image
target image
vehicle
feature
sample
Prior art date
Legal status
Active
Application number
CN201611080118.5A
Other languages
Chinese (zh)
Other versions
CN106778777A (en)
Inventor
谷瑞翔
高体红
毛河
龙学军
Current Assignee
Chengdu Topplusvision Science & Technology Co ltd
Original Assignee
Chengdu Topplusvision Science & Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Topplusvision Science & Technology Co ltd
Priority to CN201611080118.5A
Publication of CN106778777A
Application granted
Publication of CN106778777B
Status: Active

Classifications

    • G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/23213: Non-hierarchical clustering using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis
    • G06V10/56: Extraction of image or video features relating to colour
    • G06V10/758: Image or video pattern matching involving statistics of pixels or of feature values, e.g. histogram matching
    • G06V2201/08: Detecting or categorising vehicles

Abstract

The application discloses a vehicle matching method, comprising: acquiring a first target image and a second target image; extracting feature information from the first target image and the second target image respectively, to correspondingly obtain first image feature information and second image feature information, each of which includes a visual dictionary, a vehicle edge feature, a grayscale histogram feature, and a color feature of the corresponding target image; performing feature differentiation processing on the first image feature information and the second image feature information to obtain a corresponding feature vector; and inputting the feature vector into a pre-established vehicle matching degree estimation model to obtain a matching result, output by the model, that represents the matching degree between the first vehicle to be matched and the second vehicle to be matched. The disclosed technical scheme improves vehicle matching accuracy and matching efficiency and reduces labor cost. In addition, the application correspondingly discloses a vehicle matching system.

Description

Vehicle matching method and system
Technical Field
The invention relates to the technical field of vehicle supervision, in particular to a vehicle matching method and system.
Background
Currently, with the development of the national economy, the number of automobiles in society continues to increase. While this makes travel more convenient, it also brings many vehicle supervision problems. For example, in order to detect possible illegal vehicle use, road supervision departments and related law enforcement departments often need to identify and match the vehicles in two pictures taken on an expressway to determine whether the two pictures show the same vehicle.
However, when current road supervision departments and related law enforcement departments match vehicles across different pictures, they usually rely on manual identification to judge whether the vehicles are the same. This greatly increases labor cost, and it is also very inefficient and cannot keep up with the increasingly heavy vehicle supervision workload. Moreover, manual identification is prone to serious subjective errors, because key information in the pictures is difficult to grasp objectively and comprehensively, which easily leads to matching errors.
In summary, how to improve vehicle matching accuracy and matching efficiency while reducing labor cost is a problem that urgently needs to be solved.
Disclosure of Invention
In view of this, the present invention provides a vehicle matching method and system, which improve vehicle matching accuracy and matching efficiency and reduce labor cost. The specific scheme is as follows:
a vehicle matching method, comprising:
acquiring a first target image and a second target image; the first target image is an image obtained by image acquisition of a first vehicle to be matched, and the second target image is an image obtained by image acquisition of a second vehicle to be matched;
respectively extracting the feature information of the first target image and the second target image to correspondingly obtain first image feature information and second image feature information; wherein the first image feature information and the second image feature information each include a visual dictionary, a vehicle edge feature, a grayscale histogram feature, and a color feature of the respective target image;
performing feature differentiation processing on the first image feature information and the second image feature information to obtain corresponding feature vectors;
inputting the feature vector into a pre-established vehicle matching degree estimation model to obtain a matching result which is output by the vehicle matching degree estimation model and is used for representing the matching degree between the first vehicle to be matched and the second vehicle to be matched; the vehicle matching degree estimation model is a model obtained based on a machine learning algorithm.
Preferably, the process of extracting the visual dictionary of any one of the first target image and the second target image includes:
extracting the feature points of the target image by using a preset image feature point extraction algorithm to obtain a corresponding vehicle visual vocabulary set;
and mapping each vocabulary in the vehicle visual vocabulary set to a corresponding clustering center in a pre-established visual dictionary model respectively to obtain a visual dictionary corresponding to the target image.
Preferably, the process of creating the visual dictionary model includes:
acquiring an image sample set; the image sample set comprises images obtained after image acquisition is carried out on different types of vehicles under different shooting parameters and different shooting environments respectively; the shooting parameters comprise a shooting visual angle and a shooting time period; the shooting environment comprises a foggy environment, a rain and snow environment, a sunny environment and a sand and dust environment;
performing feature point extraction processing on each image sample in the image sample set by using the image feature point extraction algorithm to correspondingly obtain a vehicle visual vocabulary set corresponding to each image sample;
and sequentially clustering the vehicle visual vocabulary set corresponding to each image sample by using a K-means clustering algorithm to obtain the visual dictionary model.
Preferably, before the process of extracting the feature information of the first target image and the feature information of the second target image, the method further includes:
and carrying out matting processing on image areas corresponding to a vehicle windshield from the first target image and the second target image respectively.
Preferably, before the process of extracting the feature information of the first target image and the feature information of the second target image, the method further includes:
and respectively carrying out illumination compensation pretreatment on the first target image and the second target image.
Preferably, the process of creating the vehicle matching degree estimation model includes:
acquiring a training set; wherein the training set comprises a positive sample training set and a negative sample training set; the positive sample training set comprises N groups of positive sample images, the negative sample training set comprises M groups of negative sample images, and both N and M are positive integers; any group of positive sample images in the positive sample training set comprises sample images obtained after image acquisition is carried out on the same vehicle under different shooting parameters and different shooting environments; any group of negative sample images in the negative sample training set comprises sample images obtained after image acquisition is carried out on different vehicles under the same shooting parameters and the same shooting environment;
respectively determining the feature vector of each group of sample images in the training set to obtain a corresponding feature vector set;
performing learning training on each feature vector in the feature vector set by using the machine learning algorithm to obtain the vehicle matching degree estimation model;
the process of determining the feature vector of any group of sample images in the training set comprises the following steps: respectively extracting the feature information of each sample image in the group of sample images to obtain a corresponding image feature information set, and then respectively performing feature differentiation processing on every two pieces of image feature information in the image feature information set to obtain the feature vector corresponding to the group of sample images.
The invention also correspondingly discloses a vehicle matching system, which comprises:
the image acquisition module is used for acquiring a first target image and a second target image; the first target image is an image obtained by image acquisition of a first vehicle to be matched, and the second target image is an image obtained by image acquisition of a second vehicle to be matched;
the feature extraction module is used for respectively extracting the feature information of the first target image and the second target image, and correspondingly obtaining first image feature information and second image feature information; wherein the first image feature information and the second image feature information each include a visual dictionary, a vehicle edge feature, a grayscale histogram feature, and a color feature of the respective target image;
the feature processing module is used for performing feature differentiation processing on the first image feature information and the second image feature information to obtain corresponding feature vectors;
the model creating module is used for creating a vehicle matching degree estimation model in advance based on a machine learning algorithm;
and the vehicle matching module is used for inputting the feature vector into the vehicle matching degree estimation model to obtain a matching result which is output by the vehicle matching degree estimation model and is used for representing the matching degree between the first vehicle to be matched and the second vehicle to be matched.
Preferably, the vehicle matching system further includes:
and the region matting module is used for matting and removing the image region corresponding to the vehicle windshield from the first target image and the second target image respectively before the feature extraction module extracts the feature information of the first target image and the second target image.
Preferably, the vehicle matching system further includes:
and the illumination compensation preprocessing module is used for respectively carrying out illumination compensation preprocessing on the first target image and the second target image before the feature extraction module extracts the feature information of the first target image and the second target image.
Preferably, the model creation module includes:
a training set acquisition unit for acquiring a training set; wherein the training set comprises a positive sample training set and a negative sample training set; the positive sample training set comprises N groups of positive sample images, the negative sample training set comprises M groups of negative sample images, and both N and M are positive integers; any group of positive sample images in the positive sample training set comprises sample images obtained after image acquisition is carried out on the same vehicle under different shooting parameters and different shooting environments; any group of negative sample images in the negative sample training set comprises sample images obtained after image acquisition is carried out on different vehicles under the same shooting parameters and the same shooting environment;
the feature vector determination unit is used for respectively determining the feature vectors of each group of sample images in the training set to obtain a corresponding feature vector set;
the training unit is used for performing learning training on each feature vector in the feature vector set by using the machine learning algorithm to obtain the vehicle matching degree estimation model;
wherein the process of determining the feature vector of any group of sample images in the training set by the feature vector determination unit comprises: respectively extracting the feature information of each sample image in the group of sample images to obtain a corresponding image feature information set, and then respectively performing feature differentiation processing on every two pieces of image feature information in the image feature information set to obtain the feature vector corresponding to the group of sample images.
In the invention, the vehicle matching method comprises the following steps: acquiring a first target image and a second target image; respectively extracting the feature information of the first target image and the second target image, and correspondingly obtaining first image feature information and second image feature information, which respectively comprise a visual dictionary, a vehicle edge feature, a grayscale histogram feature and a color feature of the corresponding target image; performing feature differentiation processing on the first image feature information and the second image feature information to obtain a corresponding feature vector; and inputting the feature vector into a pre-established vehicle matching degree estimation model to obtain a matching result which is output by the model and represents the matching degree between a first vehicle to be matched and a second vehicle to be matched; the vehicle matching degree estimation model is a model obtained based on a machine learning algorithm.
It can be seen that, when vehicles in different images need to be identified and matched, the present invention extracts feature information such as the visual dictionary, vehicle edge features, grayscale histogram features and color features from those images to obtain the image feature information corresponding to each image, and then inputs this information into a vehicle matching degree estimation model obtained in advance by a machine learning algorithm, so as to obtain the matching degree between the vehicles to be matched. Thus, no manual matching is required: matching identification is performed automatically by extracting feature information from the images and applying the machine-learned estimation model, which improves matching efficiency and reduces labor cost. In addition, because the extracted feature information comprises the visual dictionary of the image, vehicle edge features, grayscale histogram features and color features, it reflects the inherent characteristics of the vehicle objectively and comprehensively, so the final matching result is highly accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a flow chart of a vehicle matching method disclosed in an embodiment of the present invention;
FIG. 2 is a flowchart of a visual dictionary model creation method disclosed in an embodiment of the present invention;
FIG. 3 is a flowchart of a vehicle matching degree estimation model creation method disclosed in an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vehicle matching system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a vehicle matching method which, as shown in FIG. 1, comprises the following steps:
step S11: acquiring a first target image and a second target image; the first target image is an image obtained after image acquisition is carried out on a first vehicle to be matched, and the second target image is an image obtained after image sampling is carried out on a second vehicle to be matched.
It can be understood that the technical solution in this embodiment can be applied to various application scenarios, such as vehicle matching in a parking lot, vehicle matching at a highway intersection, or vehicle matching management around other important places.
In addition, any one of the target images in the present embodiment may specifically be an image of the appearance of the vehicle captured by a camera provided in a parking lot, on an expressway, or around another important place.
Step S12: respectively extracting the characteristic information of the first target image and the second target image, and correspondingly obtaining the characteristic information of the first image and the characteristic information of the second image; the first image feature information and the second image feature information respectively comprise a visual dictionary, a vehicle edge feature, a gray histogram feature and a color feature of the corresponding target image.
In this embodiment, an Overlap (overlapping-block) image segmentation strategy may specifically be adopted to obtain the grayscale histogram features corresponding to the first target image and the second target image respectively.
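As an illustration, the following Python sketch computes an overlapping-block grayscale histogram feature. The patent only names the Overlap strategy; the block size, stride, and bin count used here are assumptions for demonstration, not values fixed by the embodiment.

```python
import cv2
import numpy as np

def overlap_gray_histogram(image_bgr, block=64, stride=32, bins=32):
    """Grayscale histogram feature over overlapping blocks (parameters assumed)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    feats = []
    for y in range(0, gray.shape[0] - block + 1, stride):
        for x in range(0, gray.shape[1] - block + 1, stride):
            patch = gray[y:y + block, x:x + block]
            hist = cv2.calcHist([patch], [0], None, [bins], [0, 256]).ravel()
            feats.append(hist / (hist.sum() + 1e-9))  # normalize each block
    return np.concatenate(feats)  # concatenated per-block histograms
```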
In addition, in this embodiment, the color features corresponding to the first target image and the second target image may be obtained by extracting an H-channel histogram of the target image in an HSV color space.
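A minimal sketch of this color feature, assuming a normalized hue histogram; the bin count is an assumption, and OpenCV stores hue in the range 0 to 179 for 8-bit images:

```python
import cv2
import numpy as np

def h_channel_histogram(image_bgr, bins=36):
    """Color feature: normalized histogram of the H channel in HSV space."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
    return hist / (hist.sum() + 1e-9)
```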
Step S13: and performing feature differentiation processing on the first image feature information and the second image feature information to obtain corresponding feature vectors.
That is, the present embodiment performs difference processing on the first image feature information and the second image feature information to obtain the feature vector.
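The embodiment does not fix the exact form of this difference processing; a minimal sketch, assuming an element-wise absolute difference of two equal-length feature vectors:

```python
import numpy as np

def feature_difference(feat1, feat2):
    # Assumed form of the feature differentiation processing:
    # element-wise absolute difference of the two feature vectors.
    return np.abs(np.asarray(feat1, float) - np.asarray(feat2, float))
```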
Step S14: inputting the characteristic vector into a pre-established vehicle matching degree estimation model to obtain a matching result which is output by the vehicle matching degree estimation model and used for representing the matching degree between a first vehicle to be matched and a second vehicle to be matched; the vehicle matching degree estimation model is a model obtained based on a machine learning algorithm.
In this embodiment, after the matching result is obtained, it may be compared with a preset threshold: when the matching degree reflected by the matching result is greater than or equal to the threshold, the first vehicle to be matched and the second vehicle to be matched may be determined to be the same vehicle; otherwise, they may be determined to be different vehicles. In addition, this embodiment may also provide the user with a threshold changing interface, through which a value input by the user can be acquired and adopted as the current threshold. It will be appreciated that when a high matching accuracy is required, the threshold should be adjusted upward accordingly through this interface.
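The decision rule described above can be sketched as follows; the default threshold is purely illustrative and would in practice be set through the threshold changing interface:

```python
def same_vehicle(match_score, threshold=0.5):
    """Return True if the matching degree meets the (user-adjustable) threshold."""
    return match_score >= threshold
```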
It can be seen that, in the embodiment of the present invention, when vehicles in different images need to be identified and matched, feature information such as the visual dictionary, vehicle edge features, grayscale histogram features and color features is extracted from those images to obtain the image feature information corresponding to each image, and this information is then input into a vehicle matching degree estimation model obtained in advance by a machine learning algorithm, so as to obtain the matching degree between the vehicles to be matched. Thus, no manual matching is required: matching identification is performed automatically by extracting feature information from the images and applying the machine-learned estimation model, which improves matching efficiency and reduces labor cost. In addition, because the extracted feature information comprises the visual dictionary of the image, vehicle edge features, grayscale histogram features and color features, it reflects the inherent characteristics of the vehicle objectively and comprehensively, so the final matching result is highly accurate.
The embodiment of the invention discloses a specific vehicle matching method, and compared with the previous embodiment, the embodiment further explains and optimizes the technical scheme. Specifically, the method comprises the following steps:
in step S12 of the previous embodiment, feature information of the first target image and the second target image needs to be extracted, where the feature information in the target images includes a visual dictionary, a vehicle edge feature, a grayscale histogram feature, and a color feature.
In this embodiment, the process of extracting the visual dictionary of any one of the first target image and the second target image may specifically include the following steps S1201 and S1202:
step S1201: and extracting the feature points of any target image by using a preset image feature point extraction algorithm to obtain a corresponding vehicle vision vocabulary set.
It is understood that the feature points of a target image constitute its visual vocabulary, and all the feature points of the target image together form the vehicle visual vocabulary set corresponding to that image.
Step S1202: and mapping each vocabulary in the vehicle visual vocabulary set to a corresponding clustering center in a pre-established visual dictionary model respectively to obtain a visual dictionary corresponding to the target image.
That is, each vocabulary in the vehicle visual vocabulary set is classified according to the cluster centers in the visual dictionary model, and the vocabulary frequency of each class is then counted, so as to obtain the visual dictionary corresponding to the target image.
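A minimal sketch of this mapping step, assuming Euclidean nearest-center assignment over descriptors and cluster centers produced by the dictionary model described below:

```python
import numpy as np

def visual_dictionary(descriptors, centers):
    """Bag-of-visual-words histogram: map each visual word (descriptor)
    to its nearest cluster center and count the word frequencies."""
    descriptors = np.asarray(descriptors, float)
    centers = np.asarray(centers, float)
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)  # nearest center for each word
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / (hist.sum() + 1e-9)  # normalized word frequencies
```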
In addition, referring to fig. 2, in this embodiment, the process of creating the visual dictionary model specifically includes the following steps S21 to S23:
step S21: acquiring an image sample set; the image sample set comprises images obtained after image acquisition is carried out on different types of vehicles under different shooting parameters and different shooting environments respectively; in the present embodiment, the shooting parameters include, but are not limited to, a shooting angle of view and a shooting time period; the shooting environment includes, but is not limited to, fog environment, rain and snow environment, sunny environment and sand and dust environment;
step S22: performing feature point extraction processing on each image sample in the image sample set by using the image feature point extraction algorithm to correspondingly obtain a vehicle visual vocabulary set corresponding to each image sample;
step S23: and sequentially clustering the vehicle vision vocabulary set corresponding to each image sample by using a K-means clustering algorithm to obtain a vision dictionary model.
In this embodiment, the image Feature point extraction algorithm may specifically be a SIFT algorithm (Scale-Invariant Feature Transform) or a SURF algorithm (Speeded Up Robust Features).
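A sketch of steps S21 to S23, using SIFT (the embodiment also permits SURF) and scikit-learn's K-means; the number of cluster centers k is an assumption, as the patent does not specify it:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_visual_dictionary(sample_images_bgr, k=256):
    """Extract feature points from every image sample and cluster all
    descriptors with K-means; the cluster centers form the dictionary model."""
    sift = cv2.SIFT_create()
    all_desc = []
    for img in sample_images_bgr:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, desc = sift.detectAndCompute(gray, None)
        if desc is not None:
            all_desc.append(desc)
    all_desc = np.vstack(all_desc)
    return KMeans(n_clusters=k, n_init=10).fit(all_desc).cluster_centers_
```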
In addition, considering that the number of feature points may differ between images, this embodiment may normalize the features obtained in the above feature point extraction process. For example, when the SURF algorithm is used to extract feature points from an image sample, the normalization may be performed with the following formula:
f_norm = (f * N) / N_num

where f_norm denotes the SURF feature after normalization, f the SURF feature before normalization, and N_num the total number of SURF feature points in the corresponding image sample; N is a constant of the same order of magnitude as N_num, introduced only to preserve the numerical precision of f, and has no physical meaning.
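A direct transcription of this formula; the value of the constant N below is an assumption, chosen only to match the typical order of magnitude of feature-point counts:

```python
import numpy as np

def normalize_features(desc, N=1000):
    """Apply f_norm = (f * N) / N_num to a matrix of feature descriptors."""
    n_num = desc.shape[0]  # total number of feature points in the image
    return desc.astype(np.float32) * N / n_num
```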
Further, in this embodiment, the process of extracting the vehicle edge features of any one of the first target image and the second target image may specifically include: performing edge feature extraction on the target image by using a first-order or second-order difference operator to obtain the vehicle edge features of the target image.
Specifically, in this embodiment, a Sobel operator may be adopted to calculate edge information of the target image to obtain an edge image of the vehicle in the target image, and then the edge image is projected in the horizontal and vertical directions to obtain corresponding vehicle edge features.
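A minimal sketch of this Sobel-plus-projection scheme; the kernel size and the final normalization are assumptions:

```python
import cv2
import numpy as np

def vehicle_edge_feature(image_bgr):
    """Sobel edge image, then horizontal and vertical projections."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)
    row_proj = edges.sum(axis=1)  # projection in the horizontal direction
    col_proj = edges.sum(axis=0)  # projection in the vertical direction
    feat = np.concatenate([row_proj, col_proj])
    return feat / (feat.max() + 1e-9)
```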
In addition, since the image area where the vehicle windshield is located can adversely affect subsequent processing, in this embodiment, before the process of extracting the feature information of the first target image and the second target image respectively, the method may further include: matting out the image area corresponding to the vehicle windshield from each of the first target image and the second target image. Moreover, this embodiment may further matte out the image area above the vehicle windshield, so that the vehicle region of the target image retains only the bonnet and the image area below it. Because this area contains most of the distinguishing features of the vehicle, such matting does not impair the expressiveness of the vehicle features in the image, and it also greatly reduces the amount of information to be processed, which helps speed up subsequent processing.
Further, since image sizes may differ between different target images, in order to reduce the adverse effect of this inconsistency on subsequent processing, in this embodiment, before the process of extracting the feature information of the first target image and the second target image respectively, the method further includes: scaling the first target image and the second target image to the same size.
In order to reduce the influence of factors such as illumination and thereby improve the subsequent matching accuracy and robustness, in this embodiment, before the process of extracting the feature information of the first target image and the second target image respectively, the method may further include: performing illumination compensation preprocessing on the first target image and the second target image respectively. Specifically, this embodiment may adopt the MSRCR algorithm (Multi-Scale Retinex with Color Restoration) to remove the low-frequency information of the image in the spatial domain and enhance its high-frequency information, obtaining a target image free of illumination influence.
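A simplified multi-scale Retinex sketch of this preprocessing step; the full MSRCR adds a color-restoration term that is omitted here, and the Gaussian scales are conventional assumed values:

```python
import cv2
import numpy as np

def multi_scale_retinex(image_bgr, sigmas=(15, 80, 250)):
    """Remove low-frequency (illumination) content and keep high-frequency detail."""
    img = image_bgr.astype(np.float32) + 1.0
    log_img = np.log(img)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blur = cv2.GaussianBlur(img, (0, 0), sigma)  # low-frequency estimate
        msr += log_img - np.log(blur + 1.0)          # retain high-frequency detail
    msr /= len(sigmas)
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-9)  # rescale to [0, 1]
    return (msr * 255).astype(np.uint8)
```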
In step S14 of the previous embodiment, the feature vector obtained in step S13 is input into the pre-created vehicle matching degree estimation model, which then outputs a matching result indicating the matching degree between the corresponding vehicles. Referring to FIG. 3, in the present embodiment, the process of creating the vehicle matching degree estimation model may specifically include the following steps S31 to S33:
step S31: acquiring a training set; wherein the training set comprises a positive sample training set and a negative sample training set; the positive sample training set comprises N groups of positive sample images, the negative sample training set comprises M groups of negative sample images, and N and M are positive integers; any group of positive sample images in the positive sample training set comprises sample images obtained after image acquisition is carried out on the same vehicle under different shooting parameters and different shooting environments; any group of negative sample images in the negative sample training set comprises sample images obtained after image acquisition is carried out on different vehicles under the same shooting parameters and the same shooting environment.
In order to avoid data skew, this embodiment may keep the numbers of groups of sample images in the positive sample training set and the negative sample training set the same or substantially the same, i.e., the values of N and M should be equal or nearly equal.
Step S32: respectively determining the feature vector of each group of sample images in the training set to obtain a corresponding feature vector set;
step S33: learning and training each feature vector in the feature vector set by using a machine learning algorithm to obtain a vehicle matching degree estimation model;
specifically, in the step S32, the process of determining the feature vector of any group of sample images in the training set may include the following steps S321 and S322:
step S321: respectively extracting the characteristic information of each sample image in the group of sample images to obtain a corresponding image characteristic information set;
step S322: and respectively carrying out feature differentiation processing on every two pieces of image feature information in the image feature information set to obtain feature vectors corresponding to the group of sample images.
It is understood that the specific process of extracting the feature information of the sample image in step S321 is similar to the specific process of extracting the feature information of the target image in step S12 in the previous embodiment, and is not repeated herein. Similarly, the feature differentiation processing procedure in step S322 is similar to the feature differentiation processing procedure in step S13 in the previous embodiment, and is not repeated herein.
In this embodiment, the machine learning algorithm in step S33 preferably adopts an SVM (Support Vector Machine) algorithm.
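A sketch of the training step under these choices; the kernel and its parameters are assumptions, and the SVM's probability output is used here as the matching degree:

```python
import numpy as np
from sklearn.svm import SVC

def train_match_model(pos_vectors, neg_vectors):
    """Train the matching degree estimation model on positive/negative
    feature vectors (steps S31 to S33), using an SVM as preferred above."""
    X = np.vstack([pos_vectors, neg_vectors])
    y = np.concatenate([np.ones(len(pos_vectors)), np.zeros(len(neg_vectors))])
    return SVC(kernel="rbf", probability=True).fit(X, y)

# Usage sketch: matching degree for a new feature vector fv
# model = train_match_model(pos_vectors, neg_vectors)
# score = model.predict_proba([fv])[0, 1]
```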
Correspondingly, the embodiment of the invention also discloses a vehicle matching system, which is shown in fig. 4 and comprises:
an image acquisition module 41, configured to acquire a first target image and a second target image; the first target image is an image obtained by image acquisition of a first vehicle to be matched, and the second target image is an image obtained by image acquisition of a second vehicle to be matched;
a feature extraction module 42, configured to extract feature information of the first target image and the second target image, respectively, and accordingly obtain first image feature information and second image feature information; the first image feature information and the second image feature information respectively comprise a visual dictionary, a vehicle edge feature, a gray histogram feature and a color feature of a corresponding target image;
the feature processing module 43 is configured to perform feature differentiation processing on the first image feature information and the second image feature information to obtain corresponding feature vectors;
the model creating module 44 is used for creating a vehicle matching degree estimation model based on a machine learning algorithm in advance;
and the vehicle matching module 45 is used for inputting the feature vector into the vehicle matching degree estimation model to obtain a matching result which is output by the vehicle matching degree estimation model and is used for indicating the matching degree between the first vehicle to be matched and the second vehicle to be matched.
It can be seen that, in the embodiment of the present invention, when vehicles in different images need to be identified and matched, feature information such as the visual dictionary, vehicle edge features, grayscale histogram features and color features is extracted from those images to obtain the image feature information corresponding to each image, and this information is then input into a vehicle matching degree estimation model obtained in advance by a machine learning algorithm, so as to obtain the matching degree between the vehicles to be matched. Thus, no manual matching is required: matching identification is performed automatically by extracting feature information from the images and applying the machine-learned estimation model, which improves matching efficiency and reduces labor cost. In addition, because the extracted feature information comprises the visual dictionary of the image, vehicle edge features, grayscale histogram features and color features, it reflects the inherent characteristics of the vehicle objectively and comprehensively, so the final matching result is highly accurate.
Specifically, the feature extraction module may include a visual vocabulary acquisition sub-module, a dictionary model creation sub-module, and a visual dictionary generation sub-module; wherein:
the visual vocabulary acquisition submodule is used for extracting the characteristic points of the target image by utilizing a preset image characteristic point extraction algorithm to obtain a corresponding vehicle visual vocabulary set;
the dictionary model creating submodule is used for creating a visual dictionary model in advance;
and the visual dictionary generation submodule is used for mapping each vocabulary in the vehicle visual vocabulary set to the corresponding clustering center in the visual dictionary model respectively to obtain a visual dictionary corresponding to the corresponding target image.
The dictionary model creation sub-module comprises a sample set acquisition unit, a feature point extraction unit, and a vocabulary clustering unit; wherein:
a sample set acquisition unit for acquiring an image sample set; the image sample set comprises images obtained after image acquisition is carried out on different types of vehicles under different shooting parameters and different shooting environments respectively; the shooting parameters comprise a shooting visual angle and a shooting time period; the shooting environment comprises a foggy environment, a rain and snow environment, a sunny environment and a sand and dust environment;
the feature point extraction unit is used for extracting feature points from each image sample in the image sample set by using the image feature point extraction algorithm, correspondingly obtaining the vehicle visual vocabulary set corresponding to each image sample;
and the vocabulary clustering unit is used for sequentially clustering the vehicle visual vocabulary set corresponding to each image sample by using a K-means clustering algorithm to obtain a visual dictionary model.
In this embodiment, the image feature point extraction algorithm may specifically be a SIFT algorithm or a SURF algorithm.
In addition, the feature extraction module further includes an edge feature extraction sub-module, which is used for performing edge feature extraction on the target image by using a first-order or second-order difference operator to obtain the vehicle edge features of the target image.
In this embodiment, the edge feature extraction sub-module may specifically calculate edge information of the target image by using a Sobel operator to obtain an edge image of a vehicle in the target image, and then project the edge image in horizontal and vertical directions to obtain corresponding vehicle edge features.
In addition, in view of the adverse effect on the subsequent processing process caused by the image area where the vehicle windshield is located in the target image, in this embodiment, the vehicle matching system may further include:
and the region matting module is used for matting and removing the image region corresponding to the vehicle windshield from the first target image and the second target image respectively before the characteristic extraction module extracts the characteristic information of the first target image and the second target image.
Secondly, the above region matting module may further matte out the image area above the vehicle windshield from the target image, so that the vehicle region of the target image retains only the bonnet and the image area below it. Because this area contains most of the distinguishing features of the vehicle, such matting does not impair the expressiveness of the vehicle features in the image, and it also greatly reduces the amount of information to be processed, which helps speed up subsequent processing.
Further, since image sizes may differ between different target images, in order to reduce the adverse effect of this inconsistency on subsequent processing, the vehicle matching system in this embodiment may further include:
and the size scaling module is used for scaling the first target image and the second target image to the same size before the process of respectively extracting the feature information of the first target image and the second target image by the feature extraction module.
In order to reduce the influence of factors such as illumination and the like and improve the subsequent matching accuracy and robustness, the vehicle matching system of the embodiment further includes:
and the illumination compensation preprocessing module is used for respectively carrying out illumination compensation preprocessing on the first target image and the second target image before the characteristic extraction module extracts the characteristic information of the first target image and the second target image.
Specifically, the illumination compensation preprocessing module may adopt the MSRCR algorithm to remove the low-frequency information of the image in the spatial domain and enhance its high-frequency information, obtaining a target image free of illumination influence.
Further, in this embodiment, the model creation module specifically includes a training set acquisition unit, a feature vector determination unit, and a training unit; wherein:
a training set acquisition unit for acquiring a training set; wherein the training set comprises a positive sample training set and a negative sample training set; the positive sample training set comprises N groups of positive sample images, the negative sample training set comprises M groups of negative sample images, and N and M are positive integers; any group of positive sample images in the positive sample training set comprises sample images obtained after image acquisition is carried out on the same vehicle under different shooting parameters and different shooting environments; any group of negative sample images in the negative sample training set comprises sample images obtained after image acquisition is carried out on different vehicles under the same shooting parameters and the same shooting environment;
the feature vector determination unit is used for respectively determining the feature vectors of each group of sample images in the training set to obtain a corresponding feature vector set;
the training unit is used for performing learning training on each feature vector in the feature vector set by using a machine learning algorithm to obtain a vehicle matching degree estimation model;
the process of determining the feature vector of any group of sample images in the training set by the feature vector determination unit includes: respectively extracting the characteristic information of each sample image in the group of sample images to obtain a corresponding image characteristic information set, and then respectively carrying out characteristic differentiation processing on every two pieces of image characteristic information in the image characteristic information set to obtain a characteristic vector corresponding to the group of sample images.
In this embodiment, the machine learning algorithm preferably adopts an SVM algorithm.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The vehicle matching method and system provided by the invention are described in detail above, and the principle and the implementation of the invention are explained in the text by applying specific examples, and the description of the above examples is only used for helping understanding the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (9)

1. A vehicle matching method, characterized by comprising:
acquiring a first target image and a second target image; the first target image is an image obtained by image acquisition of a first vehicle to be matched, and the second target image is an image obtained by image acquisition of a second vehicle to be matched;
respectively extracting the feature information of the first target image and the second target image to correspondingly obtain first image feature information and second image feature information; wherein the first image feature information and the second image feature information each include a visual dictionary, a vehicle edge feature, a grayscale histogram feature, and a color feature of the respective target image;
performing feature differentiation processing on the first image feature information and the second image feature information to obtain corresponding feature vectors;
inputting the feature vector into a pre-established vehicle matching degree estimation model to obtain a matching result which is output by the vehicle matching degree estimation model and is used for representing the matching degree between the first vehicle to be matched and the second vehicle to be matched; the vehicle matching degree estimation model is a model obtained based on a machine learning algorithm;
wherein the process of extracting the visual dictionary of any one of the first target image and the second target image comprises:
extracting the feature points of the target image by using a preset image feature point extraction algorithm to obtain a corresponding vehicle visual vocabulary set;
mapping each vocabulary in the vehicle visual vocabulary set to a corresponding clustering center in a pre-established visual dictionary model respectively to obtain a visual dictionary corresponding to the target image;
calculating edge information of a target image by using a Sobel operator to obtain edge images of related vehicles in the target image, and then projecting the edge images in horizontal and vertical directions respectively to obtain corresponding vehicle edge characteristics;
acquiring corresponding gray histogram features in the first target image and the second target image by adopting an Overlap image segmentation strategy;
acquiring color features corresponding to the first target image and the second target image respectively by extracting an H channel histogram of the target image in an HSV color space;
the process of extracting the feature points of the target image by using a preset image feature point extraction algorithm to obtain a corresponding vehicle visual vocabulary set comprises the following steps: and performing normalization processing on the extracted features by adopting the following formula: f. ofnorm=(f*N)/Nnum(ii) a In the formula (f)normRepresenting features after normalization, f representing features before normalization, NnumRepresenting the total number of feature points in the corresponding image sample, N being equal to NnumBelonging to the same order of magnitude and used for ensuring the precision of f.
2. The vehicle matching method according to claim 1, wherein the creation process of the visual dictionary model includes:
acquiring an image sample set; the image sample set comprises images obtained after image acquisition is carried out on different types of vehicles under different shooting parameters and different shooting environments respectively; the shooting parameters comprise a shooting visual angle and a shooting time period; the shooting environment comprises a foggy environment, a rain and snow environment, a sunny environment and a sand and dust environment;
performing feature point extraction processing on each image sample in the image sample set by using the image feature point extraction algorithm to correspondingly obtain a vehicle visual vocabulary set corresponding to each image sample;
and sequentially clustering the vehicle visual vocabulary set corresponding to each image sample by using a K-means clustering algorithm to obtain the visual dictionary model.
3. The vehicle matching method according to claim 1, wherein the process of extracting the feature information of the first target image and the second target image, respectively, is preceded by:
and carrying out matting processing on image areas corresponding to a vehicle windshield from the first target image and the second target image respectively.
4. The vehicle matching method according to claim 1, wherein the process of extracting the feature information of the first target image and the second target image, respectively, is preceded by:
and respectively carrying out illumination compensation pretreatment on the first target image and the second target image.
5. The vehicle matching method according to any one of claims 1 to 4, wherein the creation process of the vehicle matching degree estimation model comprises:
acquiring a training set; wherein the training set comprises a positive sample training set and a negative sample training set; the positive sample training set comprises N groups of positive sample images, the negative sample training set comprises M groups of negative sample images, and both N and M are positive integers; any group of positive sample images in the positive sample training set comprises sample images obtained after image acquisition is carried out on the same vehicle under different shooting parameters and different shooting environments; any group of negative sample images in the negative sample training set comprises sample images obtained after image acquisition is carried out on different vehicles under the same shooting parameters and the same shooting environment;
respectively determining the feature vector of each group of sample images in the training set to obtain a corresponding feature vector set;
performing learning training on each feature vector in the feature vector set by using the machine learning algorithm to obtain the vehicle matching degree estimation model;
the process of determining the feature vector of any group of sample images in the training set comprises the following steps: respectively extracting the characteristic information of each sample image in the group of sample images to obtain a corresponding image characteristic information set, and then respectively carrying out characteristic differentiation processing on every two pieces of image characteristic information in the image characteristic information set to obtain a characteristic vector corresponding to the group of sample images.
6. A vehicle matching system, comprising:
the image acquisition module is used for acquiring a first target image and a second target image; the first target image is an image obtained by image acquisition of a first vehicle to be matched, and the second target image is an image obtained by image acquisition of a second vehicle to be matched;
the feature extraction module is used for respectively extracting the feature information of the first target image and the second target image, and correspondingly obtaining first image feature information and second image feature information; wherein the first image feature information and the second image feature information each include a visual dictionary, a vehicle edge feature, a grayscale histogram feature, and a color feature of the respective target image;
the feature processing module is used for performing feature differentiation processing on the first image feature information and the second image feature information to obtain corresponding feature vectors;
the model creating module is used for creating a vehicle matching degree estimation model in advance based on a machine learning algorithm;
the vehicle matching module is used for inputting the feature vector into the vehicle matching degree estimation model to obtain a matching result which is output by the vehicle matching degree estimation model and is used for representing the matching degree between the first vehicle to be matched and the second vehicle to be matched;
wherein the process of extracting the visual dictionary of any one of the first target image and the second target image comprises:
extracting the feature points of the target image by using a preset image feature point extraction algorithm to obtain a corresponding vehicle vision vocabulary set;
mapping each visual word in the vehicle visual vocabulary set to its corresponding clustering center in a pre-established visual dictionary model to obtain the visual dictionary corresponding to the target image;
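A minimal sketch of this visual-dictionary step, assuming ORB as the preset feature point extraction algorithm and k-means as the pre-established dictionary model (the claim names neither):

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_visual_dictionary(training_images, n_words=256):
    """Cluster ORB descriptors from training images into n_words centers;
    this plays the role of the pre-established visual dictionary model."""
    orb = cv2.ORB_create()
    descriptors = []
    for img in training_images:
        _, des = orb.detectAndCompute(img, None)
        if des is not None:
            descriptors.append(des.astype(np.float32))
    return KMeans(n_clusters=n_words).fit(np.vstack(descriptors))

def visual_dictionary_feature(image, dictionary):
    """Map each descriptor (visual word) of the image to its nearest cluster
    center and return the normalized word-frequency histogram; assumes the
    image yields at least one keypoint."""
    orb = cv2.ORB_create()
    _, des = orb.detectAndCompute(image, None)
    words = dictionary.predict(des.astype(np.float32))
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(np.float32)
    return hist / max(hist.sum(), 1.0)
```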
calculating edge information of the target image by using a Sobel operator to obtain an edge image of the vehicle concerned in the target image, and then projecting the edge image in the horizontal and vertical directions respectively to obtain the corresponding vehicle edge features;
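A sketch of this edge-feature step, assuming the target images have been resized to a common size so the projection vectors are comparable across images:

```python
import cv2
import numpy as np

def vehicle_edge_feature(gray):
    """gray: single-channel vehicle image (assumed pre-resized).
    Returns the concatenated row and column projections of the
    Sobel edge magnitude, normalized to unit sum."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)
    row_proj = edges.sum(axis=1)  # per-row sums: horizontal projection
    col_proj = edges.sum(axis=0)  # per-column sums: vertical projection
    feat = np.concatenate([row_proj, col_proj])
    return feat / max(feat.sum(), 1e-6)
```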
acquiring the corresponding grayscale histogram features of the first target image and the second target image by adopting an Overlap image segmentation strategy;
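The Overlap strategy is sketched below under the assumption that it means overlapping image blocks (a step smaller than the block size), with per-block histograms concatenated; the block, step, and bin counts are illustrative values, not from the patent:

```python
import numpy as np

def overlap_gray_histogram(gray, block=64, step=32, bins=32):
    """Slide a block x block window with stride step (overlapping, since
    step < block) and concatenate the normalized per-block histograms."""
    feats = []
    h, w = gray.shape
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            patch = gray[y:y + block, x:x + block]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(feats)
```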
acquiring color features corresponding to the first target image and the second target image respectively by extracting an H channel histogram of the target image in an HSV color space;
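A sketch of this color-feature step; the bin count is an illustrative choice:

```python
import cv2
import numpy as np

def hue_histogram_feature(bgr, bins=36):
    """Convert a BGR image to HSV and return the normalized histogram of
    the H (hue) channel only, per the claim."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h_channel = hsv[:, :, 0]  # OpenCV 8-bit hue range is 0-179
    hist, _ = np.histogram(h_channel, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1.0)
```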
wherein the process of extracting the feature points of the target image by using a preset image feature point extraction algorithm to obtain a corresponding vehicle visual vocabulary set comprises: performing normalization processing on the extracted features by adopting the following formula:

f_norm = (f × N) / N_num

where f_norm represents the feature after normalization, f represents the feature before normalization, N_num represents the total number of feature points in the corresponding image sample, and N is a constant of the same order of magnitude as N_num, used to preserve the numerical precision of f.
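Written out directly as code, the formula is:

```python
def normalize_feature(f, n_num, n):
    """f: feature value before normalization; n_num: total number of feature
    points in the image sample; n: a constant of the same order of magnitude
    as n_num, chosen so the scaled value keeps the precision of f."""
    return (f * n) / n_num
```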
7. The vehicle matching system of claim 6, further comprising:
the region matting module is used for matting out and removing the image region corresponding to the vehicle windshield from each of the first target image and the second target image before the feature extraction module extracts the feature information of the first target image and the second target image.
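A minimal sketch of this matting step. The patent does not specify how the windshield region is located, so a caller-supplied rectangle is assumed here as a placeholder:

```python
import numpy as np

def remove_windshield(image, rect):
    """rect: (x, y, w, h) of the windshield region (assumed to be supplied
    by some upstream detector). The region is zeroed out so it contributes
    nothing to the subsequent feature extraction."""
    out = image.copy()
    x, y, w, h = rect
    out[y:y + h, x:x + w] = 0
    return out
```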
8. The vehicle matching system of claim 6, further comprising:
the illumination compensation preprocessing module is used for performing illumination compensation preprocessing on the first target image and the second target image respectively before the feature extraction module extracts the feature information of the first target image and the second target image.
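The patent does not name a compensation method; contrast-limited adaptive histogram equalization (CLAHE) on the luminance channel is one common choice and is assumed in this sketch:

```python
import cv2

def illumination_compensate(bgr):
    """Equalize the L channel in LAB space with CLAHE, leaving the color
    channels untouched, then convert back to BGR."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```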
9. The vehicle matching system of any of claims 6-8, wherein the model creation module comprises:
a training set acquisition unit for acquiring a training set, wherein the training set comprises a positive sample training set and a negative sample training set; the positive sample training set comprises N groups of positive sample images, the negative sample training set comprises M groups of negative sample images, and both N and M are positive integers; any group of positive sample images in the positive sample training set comprises sample images of the same vehicle captured under different shooting parameters and different shooting environments; and any group of negative sample images in the negative sample training set comprises sample images of different vehicles captured under the same shooting parameters and the same shooting environment;
the feature vector determining unit is used for respectively determining the feature vector of each group of sample images in the training set to obtain a corresponding feature vector set;
the training unit is used for training the machine learning algorithm on the feature vectors in the feature vector set to obtain the vehicle matching degree estimation model;
wherein the process of determining the feature vector of any group of sample images in the training set by the feature vector determining unit comprises: extracting the feature information of each sample image in the group respectively to obtain a corresponding image feature information set, and then performing feature differentiation processing on each pair of pieces of image feature information in the set to obtain the feature vector corresponding to the group of sample images.
Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant