CN109740478B - Vehicle detection and identification method, device, computer equipment and readable storage medium - Google Patents


Info

Publication number
CN109740478B
CN109740478B (application CN201811596741.5A)
Authority
CN
China
Prior art keywords
vehicle
detected
image
local
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811596741.5A
Other languages
Chinese (zh)
Other versions
CN109740478A (en)
Inventor
杨先明 (Yang Xianming)
王海涛 (Wang Haitao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yang Xianming
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201811596741.5A
Publication of CN109740478A
Application granted
Publication of CN109740478B
Active legal status
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention belongs to the technical field of image processing and provides a vehicle detection and identification method, an apparatus, computer equipment, and a readable storage medium. The method comprises the following steps: collecting a current video image; acquiring position information of the vehicle to be detected from the video image, and cropping an image of the vehicle to be detected; acquiring, from the cropped vehicle image and based on a sliding-window mechanism, the features within the sliding-window region, and determining the overall appearance information of the vehicle in combination with a preset data set; acquiring local features of the vehicle from the features in the sliding-window region, and classifying and identifying those local features; and analyzing the cropped vehicle image with an HSV model to determine the body color of the vehicle. The invention can determine the specific brand, model, and color of a vehicle, effectively reduces the workload of existing manual processing, and achieves more efficient automated vehicle management.

Description

Vehicle detection and identification method, device, computer equipment and readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a vehicle detection and identification method, apparatus, computer device, and readable storage medium.
Background
With the rapid development of China's economy in recent years, annual vehicle sales and vehicle ownership in China rank first in the world. This explosive growth, however, has also led to frequent vehicle-related criminal cases and poses new challenges to public safety. With urban development and growing public-safety demands, traffic checkpoints that use video monitoring are appearing in large numbers. Such video monitoring has very important practical significance and broad application prospects for violation inspection, pursuit of hit-and-run vehicles, distributed monitoring of suspect vehicles, identification of vehicles with cloned license plates, traffic management, and improving investigation and case-handling efficiency.
At present, however, checkpoint monitoring still relies mainly on manual interpretation of the required vehicle characteristics in the captured images, which is labor-intensive and inefficient. Alternatively, relying solely on license plate recognition algorithms yields low detection efficiency and cannot effectively detect vehicles without a license plate, with an occluded plate, or with a deliberately covered plate, leading to missed detections and increased difficulty of vehicle management.
Disclosure of Invention
The embodiment of the invention provides a vehicle detection and identification method, which aims to solve the above technical problems.
The embodiment of the invention is realized in such a way that a vehicle detection and identification method comprises the following steps:
in response to a request to identify a vehicle to be detected, collecting a current video image;
acquiring position information of the vehicle to be detected from the video image, and cropping an image of the vehicle to be detected;
acquiring, from the cropped vehicle image and based on a sliding-window mechanism, the features within the sliding-window region, and determining overall appearance information of the vehicle in combination with a preset data set;
acquiring local features of the vehicle from the features in the sliding-window region, and classifying and identifying those local features;
analyzing the cropped vehicle image with an HSV model to determine the body color of the vehicle; and
matching the overall appearance information, local features, and body color of the vehicle against the preset data set, and outputting the identification result of the vehicle to be detected.
The embodiment of the invention also provides a vehicle detection and identification device, which comprises:
a video acquisition unit, configured to collect a current video image in response to a request to identify a vehicle to be detected;
a position determining unit, configured to acquire position information of the vehicle to be detected from the video image and to crop an image of the vehicle to be detected;
an appearance determining unit, configured to acquire, from the cropped vehicle image and based on a sliding-window mechanism, the features within the sliding-window region, and to determine overall appearance information of the vehicle in combination with a preset data set;
a feature classification and identification unit, configured to acquire local features of the vehicle from the features in the sliding-window region and to classify and identify those local features;
a color recognition unit, configured to analyze the cropped vehicle image with an HSV model to determine the body color of the vehicle; and
an output unit, configured to match the overall appearance information, local features, and body color of the vehicle against the preset data set and to output the identification result of the vehicle to be detected.
The embodiment of the invention further provides computer equipment comprising a processor, wherein the processor implements the steps of the vehicle detection and identification method above when executing a computer program stored in a memory.
The embodiment of the invention further provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the vehicle detection and identification method above.
According to the embodiment of the invention, the position information of the vehicle to be detected is obtained from the current video image, and the image of the vehicle to be detected is cropped; features within a sliding-window region of the cropped image are acquired based on a sliding-window mechanism, and the overall appearance information of the vehicle is determined in combination with a preset data set; local features of the vehicle are obtained from the features in the sliding-window region and are classified and identified; the body color of the vehicle is determined by analyzing the cropped image with an HSV model; finally, the overall appearance information, local features, and body color are matched against the preset data set, and the identification result is output. Compared with prior-art vehicle identification technology, the invention can determine the specific brand, model, and color of a vehicle, improves the speed and precision of existing algorithms, effectively reduces the workload of manual processing, lowers operating costs, raises the level of automation and economic benefit, and achieves more efficient automated vehicle management.
Drawings
FIG. 1 is a flowchart of a method for detecting and identifying a vehicle according to an embodiment of the present invention;
FIG. 2 is a flowchart of another method for detecting and identifying a vehicle according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a pyramid sampling flow provided by an embodiment of the present invention;
FIG. 4 is a schematic representation of a regression of vehicle position provided by an embodiment of the present invention;
FIG. 5 is a flowchart of another method for detecting and identifying a vehicle according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a Faster-RCNN model in accordance with an embodiment of the present invention;
FIG. 7 is a flowchart of another method for detecting and identifying a vehicle according to an embodiment of the present invention;
FIG. 8 is a flowchart of an implementation of an optimized vehicle detection and identification method provided by an embodiment of the present invention;
FIG. 9 is a flow chart of another implementation of an optimized vehicle detection and identification method provided by an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a vehicle detecting and identifying device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific embodiments in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
In order to further describe the technical means and effects adopted by the present invention for achieving the intended purpose, specific embodiments, structures, features, and effects of the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
In real life, the known clues are often quite clear: the type, color, and occupant characteristics of an offending vehicle may be known, while license plate clues are easily lost (for example, when an incident occurs, an ordinary witness rarely manages to consciously memorize the license plate, and many offenders deliberately cover their plates or use forged or cloned plates beforehand; the appearance and color of the vehicle, however, cannot easily be changed). The embodiment of the invention therefore directly uses a computer to mine the existing image information quickly and efficiently, realizing retrieval and identification of the vehicle's type and features, i.e., determining the brand, color, specific model, and even other detail features of the vehicle. This has very important practical significance and very broad application prospects for violation inspection, pursuit of hit-and-run vehicles, distributed monitoring of suspect vehicles, identification of cloned-plate vehicles, and accelerating criminal investigation and case resolution. Specifically, starting from the actual requirements of intelligent traffic systems and traffic checkpoint monitoring, the invention studies automatic vehicle feature detection and identification technology based on computer vision, thereby reducing the manual processing burden, improving processing efficiency, and providing a technical foundation and support for security control and other applications in intelligent traffic.
According to the vehicle detection and identification method provided by the embodiment of the invention, the position information of the vehicle to be detected is obtained from the acquired video image; the overall appearance information, local features, and body color of the vehicle are then determined, matched against a preset data set, and identification results such as the model and color of the vehicle are output. Compared with prior-art vehicle identification technology, the invention can determine the specific brand, model, and color of a vehicle, improves the speed and precision of existing algorithms, effectively reduces the workload of manual processing, lowers operating costs, raises the level of automation and economic benefit, and achieves more efficient automated vehicle management.
Fig. 1 shows the implementation flow of a vehicle detection and identification method according to an embodiment of the present invention; for convenience of explanation, only the portions relevant to the embodiment of the invention are shown, and they are described in detail below:
in step S101, a current video image is acquired in response to a vehicle identification request to be detected.
In the embodiment of the invention, the video image may be captured by a monitor, video camera, or other camera installed on the current road or in areas such as parking lots and squares. Collecting a current video image in response to the identification request specifically means: in response to a request to identify a vehicle to be detected, collecting the current video image when the vehicle to be detected enters the video monitoring area.
In step S102, position information of a vehicle to be detected is acquired according to the video image, and an image of the vehicle to be detected is intercepted.
In the embodiment of the invention, the video image can be divided into multiple frames, and the position and size of the vehicle inevitably differ between frames; that is, the vehicle to be detected may appear at any position in an image, and its size within the image may be arbitrary.
In the embodiment of the present invention, as shown in fig. 2, the step S102 specifically includes the following steps:
In step S201, each frame of the video image is reduced in length and width, level by level, by a preset ratio.
In actual vehicle positioning, the target vehicle may appear at any position in the image to be detected and may be of any size; a pyramid-style multi-scale exhaustive search is therefore generally adopted.
In the embodiment of the invention, input images vary greatly: the number, size, and position of the vehicles in each image are different. To allow the system to locate and identify vehicles of different sizes in any image, a pyramid-based image sampling method is used. First, a color image is converted to a 256-level grayscale image (if the input is already grayscale, this step is skipped); grayscale conversion preserves most of the texture and spatial information of the image while greatly reducing the amount of computation. As shown in the pyramid sampling flow chart in fig. 3, during pyramid sampling the input image is repeatedly reduced in length and width by a fixed ratio until the reduced dimensions are smaller than the fixed detection window size.
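The grayscale conversion and pyramid reduction described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 0.8 reduction ratio, the 64 × 64 window size, and the nearest-neighbour resampling are assumptions chosen for simplicity.

```python
import numpy as np

def grayscale(img_rgb):
    """Convert an RGB image (H, W, 3) to a 256-level grayscale image
    using standard luminance weights."""
    return (img_rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def image_pyramid(img, scale=0.8, min_size=(64, 64)):
    """Repeatedly shrink `img` by `scale` until either side would drop
    below the fixed detection-window size `min_size`."""
    levels = [img]
    while True:
        h, w = levels[-1].shape[:2]
        nh, nw = int(h * scale), int(w * scale)
        if nh < min_size[0] or nw < min_size[1]:
            break
        # Nearest-neighbour resampling keeps the sketch dependency-free.
        rows = (np.arange(nh) / scale).astype(int)
        cols = (np.arange(nw) / scale).astype(int)
        levels.append(levels[-1][rows][:, cols])
    return levels
```

Scanning each level with a fixed-size window then amounts to a multi-scale search of the original image.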
In step S202, the processed video images are sequentially subjected to sliding scanning processing through a preset detection window, and the position coordinate information of the vehicle to be detected is obtained.
In the embodiment of the invention, the size of the preset detection window can be set freely according to the actual image size; it is only necessary to ensure that the input image is proportionally reduced in length and width until it is smaller than the detection window. Performing sliding scanning on the processed video images through the preset detection window and acquiring the position coordinates of the vehicle to be detected comprises: in the image at each scale, sliding a detection window of constant size across the image in sequence and recording the window position at each step; classifying each extracted window as vehicle or non-vehicle while scanning, by feeding the window into a deep neural network for discrimination; recording every window identified as a vehicle together with its position to facilitate subsequent vehicle positioning; and merging all windows identified as vehicles to obtain the final recognition result, positioning the vehicle according to the position information of the associated windows.
In step S203, a nonlinear function of the vehicle position information to be detected is fitted according to the vehicle position coordinate information to be detected, and an image of the vehicle to be detected is intercepted.
In the embodiment of the invention, after the video stream is acquired, any single frame is taken; the vehicle to be detected may appear at any position in that frame and at any size. The vehicle-position regression is therefore fitted as a nonlinear function by scaling the aspect ratio of the detection window and adjusting the network loss parameters.
As shown in the vehicle position regression diagram in fig. 4, convolutional features are first extracted from the original image, which is then scanned with a 3 × 3 rectangular frame; each 3 × 3 patch of the feature map corresponds to a relatively large fixed region of the original image. Scaling the length and width of this region generates 9 different rectangular regions, as shown in the right half of the figure. Object prediction and fine adjustment of the rectangle's position and size are performed for each of these regions, and the optimal position coordinates are output, which addresses the deformation of objects caused by different shooting positions.
After the vehicle position is found, the time interval between two consecutive frames of the acquired video is very small, so the movement of the vehicle between the two frames can be considered very slow and the vehicle can be assumed to move uniformly. A Kalman filter can therefore be adopted, with the state model:
X_t = A · X_{t−1} + W_t
where X_t is the state of the vehicle at time t, X_{t−1} is the state of the vehicle at time t−1, A is the system matrix, and W_t is Gaussian white noise satisfying:
P(W_t) ~ N(0, Q)
where Q is the covariance matrix of the motion (process) noise.
Define X_t = (x_v, dx_v, y_v, dy_v)^T, where x_v, dx_v and y_v, dy_v are, respectively, the measured position and velocity of the vehicle center in the x direction and in the y direction while the vehicle is moving. Under the uniform-motion assumption, the system matrix is:
A = [[1, Δt, 0, 0], [0, 1, 0, 0], [0, 0, 1, Δt], [0, 0, 0, 1]]
where Δt is the acquisition processing time of the sensor data.
The measurement model of the Kalman filter is:
Z_t = C · X_t + V_t
where Z_t is the observation vector, C is the observation matrix, and V_t is Gaussian white noise satisfying:
P(V_t) ~ N(0, R)
where R is the covariance matrix of the measurement noise.
The observation vector of the system is defined as Z_t = (x_v, y_v)^T, i.e., only the position of the vehicle center is observed, so the observation matrix is initialized as:
C = [[1, 0, 0, 0], [0, 0, 1, 0]]
The initial state-estimation error covariance matrix is given as P_0 = I_{4×4}, the covariance matrix of the initial process noise as Q_0 = I_{4×4}, and the covariance matrix of the initial measurement noise as R_0 = I_{2×2} (a 2 × 2 identity, since the measurement is two-dimensional), where I is the identity matrix.
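The state and measurement models above can be combined into a minimal constant-velocity Kalman filter, sketched below. The predict/update equations are the standard Kalman recursions (not spelled out in the text); Δt = 1 frame and the identity initializations follow the description.

```python
import numpy as np

class ConstantVelocityKF:
    """Track the vehicle center (x, y) between frames under the
    uniform-motion assumption, using the matrices given in the text."""
    def __init__(self, dt=1.0):
        self.A = np.array([[1, dt, 0, 0],
                           [0, 1,  0, 0],
                           [0, 0,  1, dt],
                           [0, 0,  0, 1]], dtype=float)  # state transition
        self.C = np.array([[1, 0, 0, 0],
                           [0, 0, 1, 0]], dtype=float)   # observe positions only
        self.P = np.eye(4)    # initial estimation-error covariance P0
        self.Q = np.eye(4)    # process-noise covariance Q0
        self.R = np.eye(2)    # measurement-noise covariance R0
        self.x = np.zeros(4)  # state (x_v, dx_v, y_v, dy_v)

    def predict(self):
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x

    def update(self, z):
        S = self.C @ self.P @ self.C.T + self.R
        K = self.P @ self.C.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ (np.asarray(z, float) - self.C @ self.x)
        self.P = (np.eye(4) - K @ self.C) @ self.P
        return self.x
```

Calling `predict()` once per frame and `update()` with each detected center keeps the track smooth between detections.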
In step S103, according to the image of the vehicle to be detected, based on a sliding window mechanism, features in the sliding window area are acquired, and the overall appearance information of the vehicle to be detected is determined in combination with a preset data set.
In the embodiment of the invention, the preset data set extends existing vehicle-family data sets to establish a more targeted and complete data set for vehicle detection and automatic identification, mainly realized in the following ways:
1) The most complete data set currently available in China is the CompCars data set established by Xiaoou Tang's team at the Chinese University of Hong Kong. It contains images of 163 vehicle brands, covering 1,716 vehicle models, 136,727 whole-vehicle pictures, and 27,618 vehicle-part pictures. In addition, it contains pictures of vehicles from various viewing angles as well as interior and exterior features, and can be used to learn vehicle detail features such as the number of doors and windows. All data in the data set were acquired from the web and from surveillance video, and can be used for vehicle model training, vehicle brand training, vehicle attribute training, and so on.
2) Network resources are downloaded automatically by crawler software. Because such resources are not necessarily accurate, the downloaded pictures are not guaranteed to be pictures of cars, so the data must be cleaned, i.e., each picture is checked manually.
3) Vehicle images are captured in scenes such as residential communities and roads.
In the embodiment of the invention, the relevance and rationality of the data set directly affect the accuracy of the detection and identification algorithm; the larger the number and the greater the variety of pictures in the training set, the higher the accuracy and robustness of the algorithm.
In an embodiment of the present invention, as shown in fig. 5, the step S103 specifically includes:
in step S501, gray scale transformation and normalization processing are performed on the vehicle image to be detected.
In the embodiment of the invention, normalization specifically comprises size normalization, gray-distribution normalization (histogram equalization), and orientation normalization. Normalization is needed because vehicles differ in size, wear, position, and other characteristics, and the camera distance, angle, and ambient illumination differ from sample to sample, so the captured vehicle images vary; a normalization preprocessing step is therefore required before sample features are extracted and identified.
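The gray-distribution and size normalization steps can be sketched in pure NumPy as follows. This is a minimal illustration, not the patent's implementation: the 128 × 128 output size and the nearest-neighbour resampling are assumptions, and the equalizer assumes a non-constant input image.

```python
import numpy as np

def equalize_hist(gray):
    """Gray-distribution normalization: remap the 256-level histogram so
    intensities span the full 0-255 range (standard histogram equalization)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

def normalize_size(gray, out=(128, 128)):
    """Size normalization by nearest-neighbour resampling to a fixed shape."""
    h, w = gray.shape
    rows = (np.arange(out[0]) * h / out[0]).astype(int)
    cols = (np.arange(out[1]) * w / out[1]).astype(int)
    return gray[rows][:, cols]
```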
In step S502, according to the processed vehicle image to be detected, based on a sliding window mechanism, the features in the sliding window area and the regression coordinate values of the corresponding area positions are obtained.
In the embodiment of the invention, acquiring the features in the sliding-window region and the regression coordinate values of the corresponding region positions from the processed vehicle image, based on a sliding-window mechanism, is done as follows: the vehicle image is processed with a Faster R-CNN network. The Faster R-CNN model is built on the convolutional neural network framework Caffe (Convolutional Architecture for Fast Feature Embedding) and mainly comprises two modules: an RPN candidate-box extraction module and a Fast R-CNN target (vehicle) detection module, as shown in the model block diagram in fig. 6. The RPN is a fully convolutional neural network used to extract high-quality candidate boxes, i.e., to find in advance the positions where vehicles may appear in the image; Fast R-CNN then detects and identifies the vehicles within the candidate boxes extracted by the RPN. The RPN is constructed by adding one convolutional layer and two parallel fully connected layers after the final convolutional feature map of the ImageNet-pretrained ZF model. It takes the convolutional feature matrix extracted from the original image as input and outputs a series of rectangular candidate boxes together with a score indicating whether each rectangular region contains the target; the structure of the RPN is shown in fig. 6.
The RPN adopts a sliding-window mechanism: a small sliding window is moved over the feature map of the last shared convolutional layer of the convolutional neural network. The window is fully connected to an n × n region of the input feature map, and each window position is mapped to a low-dimensional vector. This vector is fed into two parallel fully connected layers: one outputs whether the features in the window region belong to the image background or to the target, and the other outputs the regression coordinate values of the region position.
In step S503, a feature vector is formed from the features in the sliding-window region and the regression coordinate values of the corresponding region position, and the feature vector is reduced in dimension with a principal component analysis algorithm.
In the embodiment of the present invention, principal component analysis (PCA) is a commonly used data analysis method. Through a linear transformation, PCA converts the raw data into a set of linearly independent components along each dimension; it can be used to extract the main feature components of the data and is commonly used to reduce the dimensionality of high-dimensional data.
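The PCA dimension reduction can be sketched via the SVD of the centered data matrix; this is a generic illustration of the technique, not the patent's specific configuration (the target dimension k is an input).

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X (n_samples, n_features) onto the k principal
    components of largest variance, computed from the SVD of centered X."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # (n_samples, k) reduced feature vectors
```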
In the embodiment of the invention, methods such as Gabor wavelet transforms and Faster R-CNN deep learning are combined to extract local features of the vehicle body (Haar features, CNN features, LBP features, HOG features, and so on); block-wise statistical histograms are computed, and the histogram sequences are concatenated into a feature vector describing the sample image; finally, a PCA algorithm reduces the dimensionality of the feature vector. As described above, when the n × n sliding window slides over the convolutional feature map, each sliding position corresponds to k different anchor boxes (hand-designed candidate regions in the original image). One fully connected layer therefore outputs a 2k-dimensional vector holding the target and background scores of the k anchor boxes, and the other outputs a 4k-dimensional vector holding the transformation parameters from the k anchor boxes to the real target box. Each anchor box is centered at the center of the current sliding window and corresponds to one scale and one aspect ratio.
In the embodiment of the invention, the loss function for one image is defined as:

L({p_i}, {t_i}) = (1/N_cls) · Σ_i L_cls(p_i, p_i*) + λ · (1/N_reg) · Σ_i p_i* · L_reg(t_i, t_i*)

where i is the index of a candidate box selected in one batch, and p_i is the probability that candidate box i is the target. If the candidate box carries a positive label, the corresponding ground-truth region label p_i* is 1; otherwise p_i* is 0. t_i is the vector of 4 parameterized coordinates of the predicted bounding box, and t_i* is the coordinate vector of the corresponding ground-truth bounding box. The classification loss L_cls is the log loss over the two categories (target and non-target), defined as:

L_cls(p_i, p_i*) = −log[ p_i* · p_i + (1 − p_i*) · (1 − p_i) ]

The regression loss is defined as:

L_reg(t_i, t_i*) = smooth_L1(t_i − t_i*)

where smooth_L1(x) is:

smooth_L1(x) = 0.5 · x²  if |x| < 1;  |x| − 0.5  otherwise

For regression, the following parameterization of the 4 coordinates is used:

t_x = (x − x_a) / w_a,  t_y = (y − y_a) / h_a,  t_w = log(w / w_a),  t_h = log(h / h_a)
t_x* = (x* − x_a) / w_a,  t_y* = (y* − y_a) / h_a,  t_w* = log(w* / w_a),  t_h* = log(h* / h_a)

where (x, y) are the center coordinates of the predicted bounding box; (x_a, y_a) are the coordinates of the candidate (anchor) box; (x*, y*) are the coordinates of the ground-truth bounding box; w and h are the width and height of the bounding box; N_cls and N_reg are normalization parameters; and λ is a balance factor.
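The loss terms above can be sketched directly for a single candidate box. The functions mirror the formulas in the text; the reduction over a batch and the normalization constants N_cls, N_reg are left as parameters.

```python
import numpy as np

def smooth_l1(x):
    """smooth_L1: 0.5 x^2 for |x| < 1, |x| - 0.5 otherwise (elementwise)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)

def bbox_to_deltas(box, anchor):
    """Parameterize a box (cx, cy, w, h) against an anchor (cx_a, cy_a, w_a, h_a):
    t_x = (x - x_a)/w_a, t_y = (y - y_a)/h_a, t_w = log(w/w_a), t_h = log(h/h_a)."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa, (y - ya) / ha, np.log(w / wa), np.log(h / ha)])

def rpn_loss(p, p_star, t, t_star, lam=1.0, n_cls=1, n_reg=1):
    """Single-box loss: two-class log loss plus smooth-L1 regression loss,
    the latter counted only for positive boxes (p_star = 1)."""
    l_cls = -np.log(p_star * p + (1 - p_star) * (1 - p))
    l_reg = smooth_l1(np.asarray(t) - np.asarray(t_star)).sum()
    return l_cls / n_cls + lam * p_star * l_reg / n_reg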
In step S504, according to the feature vector, the overall appearance information of the vehicle to be detected is determined in combination with a preset data set.
In the embodiment of the invention, determining the overall appearance information of the vehicle from the feature vector, in combination with a preset data set, specifically means: matching the feature vector against the preset data set and judging the similarity. The preset data set is the same as described above and is not repeated here.
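The matching step can be sketched as a nearest-neighbour search over the data set's feature vectors. Cosine similarity is an assumption here; the patent only states that similarity is judged, without naming a metric.

```python
import numpy as np

def best_match(query, dataset_vecs, labels):
    """Match a query feature vector against preset data-set vectors by
    cosine similarity; return the most similar label and its score."""
    q = query / np.linalg.norm(query)
    D = dataset_vecs / np.linalg.norm(dataset_vecs, axis=1, keepdims=True)
    sims = D @ q
    i = int(np.argmax(sims))
    return labels[i], float(sims[i])
```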
In step S104, local features of the vehicle to be detected are obtained according to the features in the sliding window area, and the local features of the vehicle to be detected are classified and identified.
In the embodiment of the present invention, as shown in fig. 7, the step S104 specifically includes the following steps:
in step S701, a local feature image of the vehicle to be detected is acquired according to the features in the sliding window area.
In embodiments of the present invention, the local features extracted from the local feature image generally include texture features, SIFT (Scale-Invariant Feature Transform) features, HOG (Histogram of Oriented Gradients) features, and the like.
In step S702, the local feature image is preprocessed based on the Faster R-CNN network, and the local features of the vehicle to be detected are acquired.
In step S703, the local features of the vehicle to be detected are classified and identified by using the support vector machine as a classifier and combining with a preset vehicle local feature database.
In the embodiment of the invention, the support vector machine (Support Vector Machine, SVM) is a machine learning method whose classification capability is markedly better than that of some traditional learning methods. It can be used to solve problems such as classification, outlier detection, regression and clustering; the algorithm shows particular advantages in small-sample, nonlinear and high-dimensional pattern recognition problems and is widely applied to various pattern recognition tasks. By adopting an SVM classifier, any M-class problem can be decomposed into M−1 two-class identification problems.
In the embodiment of the invention, the SVM classification algorithm is essentially developed from a linear classifier. The linear classifier is characterized in that a linear function is used as a judgment boundary to distinguish two types of samples, and the classification method is simple in model and easy to calculate. The linear discriminant function may be written as follows:
w·x+b=0
w determines the direction of the decision-boundary hyperplane and b determines its position. Since many straight lines satisfy the separation condition, an algorithm is needed to find the optimal classification line, and this is where the support vectors come into play. The goal is to find an optimal classification hyperplane separating the sample sets; more than one hyperplane may separate them, so the optimal hyperplane should be the one that not only correctly classifies the two classes of sample points but also maximizes the distance (margin) between the separated sample points and the hyperplane. To maximize the margin, two parallel hyperplanes are defined:
w·x+b=1
w·x+b=-1
To ensure that no training samples fall between these two hyperplanes, every sample point x_i with label y_i ∈ {+1, −1} must satisfy the following inequality:

y_i(w·x_i + b) ≥ 1

The distance between the two hyperplanes is 2/‖w‖; to maximize the margin it is therefore necessary to minimize ‖w‖ while satisfying the above constraint. This is in fact a constrained quadratic programming problem; the problem is convex, so there is no merely local optimal solution. It can be solved with the Lagrange multiplier method, defining the Lagrange objective function as:

L(w, b, a) = (1/2)‖w‖² − Σ_i a_i [y_i(w·x_i + b) − 1]

wherein a_i is a Lagrange multiplier. For L to be minimized with respect to w and b, the partial derivatives are set to zero:

∂L/∂w = 0  ⇒  w = Σ_i a_i y_i x_i
∂L/∂b = 0  ⇒  Σ_i a_i y_i = 0

Substituting these two formulas back into the original objective yields a function of the sample points x_i, x_j and their corresponding labels y_i, y_j:

L(a) = Σ_i a_i − (1/2) Σ_i Σ_j a_i a_j y_i y_j (x_i·x_j)

From this, the final problem that needs to be optimized is:

max_a  Σ_i a_i − (1/2) Σ_i Σ_j a_i a_j y_i y_j (x_i·x_j)
subject to  a_i ≥ 0  and  Σ_i a_i y_i = 0
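The relation w = Σ_i a_i y_i x_i can be illustrated on toy data. The multipliers a_i below are hand-picked for illustration only; in practice they come from solving the dual quadratic program above:

```python
import numpy as np

# Toy linearly separable data: class +1 around (2,2), class -1 around (-2,-2)
X = np.array([[2.0, 2.0], [3.0, 2.0], [-2.0, -2.0], [-3.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Hand-picked multipliers (hypothetical); note they satisfy sum_i a_i*y_i = 0
a = np.array([0.1, 0.0, 0.1, 0.0])
w = (a * y) @ X                   # w = sum_i a_i * y_i * x_i
b = 1 - w @ X[0]                  # enforce y_i(w.x_i + b) = 1 on a support vector

pred = np.sign(X @ w + b)         # decision function: sign(w.x + b)
print(pred)                       # → [ 1.  1. -1. -1.]
```

All four points end up on the correct side of the hyperplane with margin at least 1, matching the constraint y_i(w·x_i + b) ≥ 1.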
in the embodiment of the invention, the preset vehicle local characteristic database comprises various vehicle local information such as vehicle marks, vehicle lamps, vehicle bumpers and the like.
In the embodiment of the invention, a support vector machine (Support Vector Machine, SVM) is added to classify and identify the local features of the vehicle (such as the logo, lamps, bumper, windshield and the like) on the basis of the Faster R-CNN network-based vehicle feature detection.
In step S105, the vehicle body color to be detected is analyzed and determined by an HSV model according to the vehicle image to be detected.
In an embodiment of the present invention, HSV (Hue, Saturation, Value) is a color space created from the visual properties of colors, also known as the hexagonal cone model. The parameters of a color in this model are: hue (H), saturation (S) and brightness (V). The hue H is measured as an angle in the range 0–360°, with red at 0°, green at 120° and blue at 240°, and their complementary colors yellow at 60°, cyan at 180° and magenta at 300°. The saturation S ranges from 0.0 to 1.0; the larger the value, the more saturated the color. The brightness V ranges from 0 (black) to 255 (white).
In an embodiment of the present invention, as shown in fig. 8, the step S105 specifically includes the following steps:
in step S801, the vehicle image to be detected is subjected to gradation conversion processing.
In the embodiment of the invention, the vehicle image to be detected is subjected to gray-level conversion so that it contains only the brightness information of the image; this saves detection processing time while retaining the original information of the image to the greatest extent. The conversion is as follows:

cmax(x, y) = max(r(x, y), g(x, y), b(x, y))
cmin(x, y) = min(r(x, y), g(x, y), b(x, y))

where cmax(x, y) and cmin(x, y) are the maximum and minimum RGB values at each point of the image.

γ(x, y) = cmax(x, y) − cmin(x, y)

γ(x, y) represents the gray level of the image pixel; the whole gray image is then linearly stretched, which facilitates image segmentation.
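The gray-level conversion and linear stretch described above can be sketched as follows (the toy 2×2 image is illustrative):

```python
import numpy as np

def gray_from_rgb(img):
    """Per-pixel gamma(x,y) = cmax - cmin, then linear stretch to 0..255."""
    cmax = img.max(axis=2).astype(np.int32)   # cmax(x,y): channel-wise maximum
    cmin = img.min(axis=2).astype(np.int32)   # cmin(x,y): channel-wise minimum
    gamma = cmax - cmin                       # gray level of each pixel
    lo, hi = int(gamma.min()), int(gamma.max())
    if hi > lo:                               # linear stretch of the whole gray image
        gamma = (gamma - lo) * 255 // (hi - lo)
    return gamma.astype(np.uint8)

img = np.zeros((2, 2, 3), np.uint8)
img[0, 0] = (200, 50, 50)                     # one reddish pixel: raw gamma = 150
out = gray_from_rgb(img)
print(out)
```

After stretching, the most saturated pixel maps to 255 and the achromatic pixels to 0, which makes the subsequent threshold segmentation easier.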
In step S802, the processed image of the vehicle to be detected is subjected to a segmentation process by a threshold segmentation method, and RGB features of the hood area of the vehicle to be detected are identified.
In the embodiment of the invention, the threshold segmentation method is the maximum inter-class variance method (OTSU). OTSU searches for the maximum inter-class variance according to the gray-level characteristics of the image in order to divide the image into background and target. For an image I(x, y), the segmentation threshold between target and background is denoted T; the proportion of target pixels in the whole image is denoted w0, with average gray level μ0; the proportion of background pixels is denoted w1, with average gray level μ1; the total average gray level of the image is denoted μ; and the inter-class variance is denoted g. From probability statistics:

μ = w0·μ0 + w1·μ1

The inter-class variance g is defined as follows:

g = w0(μ0 − μ)² + w1(μ1 − μ)²

Substituting the expression for μ yields:

g = w0·w1·(μ0 − μ1)²

When the variance g is maximal, the difference between foreground and background can be considered maximal, and the gray level T at that point is the optimal threshold used to perform the image segmentation.
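The OTSU search for the threshold maximizing g = w0·w1·(μ0 − μ1)² can be sketched as an exhaustive scan over gray levels (the bimodal toy image is illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level T maximizing the inter-class variance
    g = w0 * w1 * (mu0 - mu1)^2 over all candidate thresholds."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()                  # gray-level probabilities
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:                # one class empty: skip
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        g = w0 * w1 * (mu0 - mu1) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t

# Bimodal toy image: dark background at gray 20, bright target at gray 200
gray = np.full((10, 10), 20, np.uint8)
gray[3:7, 3:7] = 200
thr = otsu_threshold(gray)
print(thr)
```

Any threshold between the two modes gives the same maximal g here; the scan returns the first such level, which cleanly separates the 16 target pixels from the background.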
The RGB information of the car body is extracted from the segmented area. Taking Cregion as the segmented body region and r(x, y), g(x, y), b(x, y) as the RGB values at point (x, y) of the image, the channel values are aggregated over the region, for example as the means:

r̄ = (1/|Cregion|) Σ_{(x,y)∈Cregion} r(x, y),  and likewise for ḡ and b̄.

The RGB space structure does not accord with people's subjective judgment of color similarity, whereas HSV accords with subjective perception of color, so the RGB color space of the image is converted into HSV color space:

(h(x, y), s(x, y), v(x, y)) = TR(r(x, y), g(x, y), b(x, y))

where TR is the conversion function from RGB to HSV color space.
In step S803, according to the RGB features, a corresponding color is analyzed and identified based on the HSV model, and the color with the largest display frequency is determined as the color of the vehicle body to be detected.
In practical application, since the RGB space structure does not accord with people's subjective judgment of color similarity while HSV does, the RGB color space of the image is converted into HSV color space. The HSV model is then analyzed: the frequency with which each color appears can be displayed with a histogram, the color with the highest frequency is selected as the main color of the image, and this main color serves as the basis for judging the color of the vehicle.
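The dominant-color selection can be sketched as follows, using the standard-library `colorsys` conversion as the TR function and three coarse hue bins as an illustrative simplification (a real system would bin hue more finely and also handle low-saturation black/white/gray pixels):

```python
import colorsys
import numpy as np

def dominant_color(img):
    """Convert RGB pixels to HSV and pick the hue bin with the highest
    frequency as the body color (coarse 3-bin illustration)."""
    bins = {"red": 0, "green": 0, "blue": 0}
    for r, g, b in img.reshape(-1, 3) / 255.0:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)   # h in [0,1) maps to 0-360 degrees
        deg = h * 360
        if deg < 60 or deg >= 300:
            bins["red"] += 1
        elif deg < 180:
            bins["green"] += 1
        else:
            bins["blue"] += 1
    return max(bins, key=bins.get)               # highest-frequency hue bin

img = np.zeros((4, 4, 3), np.uint8)
img[..., 2] = 230                                # toy hood region: all pixels blue
col = dominant_color(img)
print(col)                                       # → blue
```

The histogram of hue bins plays the role of the color-frequency histogram described above.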
In step S106, matching with a preset data set according to the overall appearance information, the local features and the vehicle body color of the vehicle to be detected, and outputting the identification result of the vehicle to be detected.
In the embodiment of the present invention, as shown in fig. 9, the step S106 specifically includes the following steps:
in step S901, a model of the vehicle to be detected is determined according to the sum of the overall appearance information of the vehicle to be detected and the confidence output result of the local feature.
In step S902, according to the model number and the color of the vehicle body to be detected, matching with corresponding information in a preset data set, and outputting a recognition result of the vehicle to be detected.
According to the vehicle detection and identification method provided by the embodiment of the invention, the position information of the vehicle to be detected is obtained from the acquired video image; the overall appearance information, the local features and the body color of the vehicle are then determined, matched with the preset data set, and identification results such as the model and color of the vehicle are output. Compared with prior-art vehicle identification technology, the method can judge the specific brand, model and color of a vehicle. Based on deep learning means such as Faster R-CNN and SVM, the speed and accuracy of existing algorithms are greatly improved, the existing manual processing workload and the operating cost can be effectively reduced, the automation level and economic benefit are improved, and a more efficient automatic vehicle management goal is realized.
Fig. 10 shows a structure of a vehicle detecting and identifying device according to an embodiment of the present invention, and for convenience of explanation, only the portions related to the embodiment of the present invention are shown in detail as follows:
the vehicle detecting and recognizing apparatus includes a video capturing unit 100, a position determining unit 200, an appearance determining unit 300, a feature classifying and recognizing unit 400, a color recognizing unit 500, and an output unit 600.
The video acquisition unit 100 is used for responding to a vehicle identification request to be detected and acquiring a current video image.
In the embodiment of the present invention, the video acquisition unit 100 is configured to acquire a current video image in response to a vehicle identification request to be detected; the video image can be a video image shot by a monitor, a video camera, a camera and the like arranged in the current road or some areas such as a parking lot, a square and the like; the response to the vehicle identification request to be detected acquires a current video image, specifically: responding to a vehicle identification request to be detected, and acquiring a current video image when the vehicle to be detected enters a video monitoring area.
The position determining unit 200 is configured to obtain position information of the vehicle to be detected according to the video image, and intercept an image of the vehicle to be detected.
In the embodiment of the present invention, the position determining unit 200 is configured to obtain, according to the video image, position information of a vehicle to be detected, and intercept an image of the vehicle to be detected; the video image may be divided into a plurality of frames, and the vehicle position or the vehicle size in each frame of video image may inevitably have a difference, that is, the vehicle to be detected may appear in any position of the image, and the size of the vehicle in the image may be any size.
The appearance determining unit 300 is configured to obtain, according to the image of the vehicle to be detected, based on a sliding window mechanism, features in a sliding window area, and determine overall appearance information of the vehicle to be detected in combination with a preset data set.
In the embodiment of the present invention, the appearance determining unit 300 is configured to obtain, according to the image of the vehicle to be detected, based on a sliding window mechanism, features in a sliding window area, and determine overall appearance information of the vehicle to be detected in combination with a preset data set; the preset data set is expanded on the basis of the existing vehicle system data set, and a set of more targeted and complete vehicle detection and automatic identification data set is established; the method for determining the overall appearance information of the vehicle to be detected based on the sliding window mechanism and the preset data set comprises the following steps: carrying out gray level transformation and normalization processing on the vehicle image to be detected; acquiring the characteristics in the sliding window area and the regression coordinate values of the positions of the corresponding areas based on a sliding window mechanism according to the processed vehicle image to be detected; forming a feature vector according to the features in the moving window region and the regression coordinate values of the corresponding region positions, and adopting a principal component analysis algorithm to reduce the dimension of the feature vector; and according to the feature vector, combining a preset data set to determine the overall appearance information of the vehicle to be detected.
The feature classification and identification unit 400 is configured to obtain local features of the vehicle to be detected according to the features in the sliding window area, and classify and identify the local features of the vehicle to be detected.
In the embodiment of the present invention, the feature classification and identification unit 400 is configured to obtain local features of a vehicle to be detected according to features in the sliding window area, and classify and identify the local features of the vehicle to be detected; the method for acquiring the local features of the vehicle to be detected according to the features in the sliding window area, classifying and identifying the local features of the vehicle to be detected specifically comprises the following steps: collecting a local feature image of the vehicle to be detected according to the features in the sliding window area; preprocessing the local feature image based on a Faster R-CNN network, and acquiring local features of a vehicle to be detected; and classifying and identifying the local features of the vehicle to be detected by taking the support vector machine as a classifier and combining a preset vehicle local feature database.
The color recognition unit 500 is configured to analyze and determine the color of the vehicle body to be detected through an HSV model according to the image of the vehicle to be detected.
In the embodiment of the present invention, the color recognition unit 500 is configured to analyze and determine a color of a vehicle body of the vehicle to be detected through an HSV model according to the image of the vehicle to be detected; according to the vehicle image to be detected, analyzing and determining the color of the vehicle body of the vehicle to be detected through an HSV model, and specifically comprising the following steps: carrying out gray level conversion processing on the vehicle image to be detected; dividing the processed vehicle image to be detected by a threshold segmentation method, and identifying RGB features of a vehicle engine hood area to be detected; and according to the RGB features, analyzing and identifying the corresponding color based on the HSV model, and determining the color with the largest display frequency as the color of the vehicle body to be detected.
And the output unit 600 is configured to match with a preset data set according to the overall appearance information, the local features and the vehicle body color of the vehicle to be detected, and output the identification result of the vehicle to be detected.
In the embodiment of the present invention, the output unit 600 is configured to match with a preset data set according to the overall appearance information, the local features, and the vehicle body color of the vehicle to be detected, and output the recognition result of the vehicle to be detected; the method for detecting the vehicle comprises the steps of matching the overall appearance information, the local characteristics and the vehicle body color of the vehicle to be detected with a preset data set, and outputting the identification result of the vehicle to be detected, wherein the method specifically comprises the following steps: determining the model of the vehicle to be detected according to the sum of the overall appearance information of the vehicle to be detected and the confidence output result of the local features; and matching the vehicle model to be detected and the vehicle body color with corresponding information in a preset data set, and outputting the identification result of the vehicle to be detected.
According to the vehicle detection and identification device provided by the embodiment of the invention, the position information of the vehicle to be detected is obtained from the acquired video image; the overall appearance information, the local features and the body color of the vehicle are then determined, matched with the preset data set, and identification results such as the model and color of the vehicle are output. Compared with prior-art vehicle identification technology, the device can judge the specific brand, model and color of a vehicle. Based on deep learning means such as Faster R-CNN and SVM, the speed and precision of existing algorithms are greatly improved, the existing manual processing workload and the operating cost can be effectively reduced, the automation level and economic benefit are improved, and a more efficient automatic vehicle management goal is realized.
The embodiment of the invention also provides a computer device, which comprises a processor, wherein the processor is used for realizing the steps of the vehicle detection and identification method provided by the method embodiments when executing the computer program stored in the memory.
Embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program/instruction which, when executed by the above-mentioned processor, implements the steps of the vehicle detection and identification method provided by the above-mentioned respective method embodiments.
For example, a computer program may be split into one or more modules, one or more modules stored in memory and executed by a processor to perform the present invention. One or more modules may be a series of computer program instruction segments capable of performing particular functions to describe the execution of a computer program in a computer device. For example, the computer program may be divided into the steps of the vehicle detection and identification method provided by the various method embodiments described above.
It will be appreciated by those skilled in the art that the foregoing description of a computer device is merely exemplary and is not intended to be limiting, and that more or fewer components than the foregoing description may be included, or certain components may be combined, or different components may be included, for example, input-output devices, network access devices, buses, etc.
The processor may be a central processing unit (Central Processing Unit, CPU), but may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor; the processor is the control center of the computer device, connecting the various parts of the overall user terminal using various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor may implement the various functions of the computer device by running or executing the computer program and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the handset (such as audio data, a phonebook, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The modules/units integrated with the computer device may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as stand alone products. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (8)

1. A method of vehicle detection and identification, the method comprising: responding to a vehicle identification request to be detected, and collecting a current video image; acquiring position information of a vehicle to be detected according to the video image, and intercepting the image of the vehicle to be detected; according to the vehicle image to be detected, based on a sliding window mechanism, acquiring characteristics in a sliding window area, and determining overall appearance information of the vehicle to be detected by combining a preset data set; according to the characteristics in the sliding window area, local characteristics of the vehicle to be detected are obtained, and the local characteristics of the vehicle to be detected are classified and identified; according to the vehicle image to be detected, analyzing and determining the color of the vehicle body of the vehicle to be detected through an HSV model; matching with a preset data set according to the overall appearance information, the local characteristics and the vehicle body color of the vehicle to be detected, and outputting the identification result of the vehicle to be detected;
The method for acquiring the local features of the vehicle to be detected according to the features in the sliding window area, classifying and identifying the local features of the vehicle to be detected specifically comprises the following steps: collecting a local feature image of the vehicle to be detected according to the features in the sliding window area; preprocessing the local feature image based on a Faster R-CNN network, and acquiring local features of a vehicle to be detected; classifying and identifying the local features of the vehicle to be detected by taking a support vector machine as a classifier and combining a preset vehicle local feature database;
the method for detecting the vehicle comprises the steps of matching the overall appearance information, the local characteristics and the vehicle body color of the vehicle to be detected with a preset data set, and outputting the identification result of the vehicle to be detected, wherein the method specifically comprises the following steps: determining the model of the vehicle to be detected according to the sum of the overall appearance information of the vehicle to be detected and the confidence output result of the local features; and matching the vehicle model to be detected and the vehicle body color with corresponding information in a preset data set, and outputting the identification result of the vehicle to be detected.
2. The vehicle detection and recognition method according to claim 1, wherein the step of acquiring the current video image in response to the vehicle recognition request to be detected comprises: responding to a vehicle identification request to be detected, and judging whether the vehicle to be detected enters a video monitoring area or not; and when the vehicle to be detected enters the video monitoring area, acquiring a current video image.
3. The method for detecting and identifying a vehicle according to claim 1, wherein the step of acquiring the position information of the vehicle to be detected based on the video image and capturing the image of the vehicle to be detected comprises the steps of: performing length-width grading reduction processing on each frame of video image according to a preset proportion; sequentially carrying out sliding scanning treatment on the treated video images through a preset detection window, and acquiring the position coordinate information of the vehicle to be detected; fitting a nonlinear function of the position information of the vehicle to be detected according to the position coordinate information of the vehicle to be detected, and intercepting an image of the vehicle to be detected.
4. The method for detecting and identifying a vehicle according to claim 1, wherein the step of acquiring the features in the sliding window area based on the sliding window mechanism according to the image of the vehicle to be detected, and determining the overall appearance information of the vehicle to be detected in combination with the preset data set specifically comprises: carrying out gray level transformation and normalization processing on the vehicle image to be detected; acquiring the characteristics in the sliding window area and the regression coordinate values of the positions of the corresponding areas based on a sliding window mechanism according to the processed vehicle image to be detected; forming a feature vector according to the features in the moving window region and the regression coordinate values of the corresponding region positions, and adopting a principal component analysis algorithm to reduce the dimension of the feature vector; and according to the feature vector, combining a preset data set to determine the overall appearance information of the vehicle to be detected.
5. The vehicle detection and recognition method according to claim 1, wherein the analyzing and determining the color of the vehicle body to be detected by the HSV model according to the image of the vehicle to be detected specifically includes: carrying out gray level conversion processing on the vehicle image to be detected; dividing the processed vehicle image to be detected by a threshold segmentation method, and identifying RGB features of a vehicle engine hood area to be detected; and according to the RGB features, analyzing and identifying the corresponding color based on the HSV model, and determining the color with the largest display frequency as the color of the vehicle body to be detected.
6. A vehicle detection and identification device, the device comprising:
a video acquisition unit, used for acquiring a current video image in response to a request to identify a vehicle to be detected;
a position determining unit, used for acquiring position information of the vehicle to be detected from the video image and cropping out an image of the vehicle to be detected;
an appearance determining unit, used for acquiring features within a sliding-window region of the image of the vehicle to be detected based on a sliding-window mechanism, and for determining overall appearance information of the vehicle to be detected in combination with a preset data set;
a feature classification and identification unit, used for acquiring local features of the vehicle to be detected from the features within the sliding-window region, and for classifying and identifying the local features of the vehicle to be detected;
a color recognition unit, used for determining the body color of the vehicle to be detected by analyzing the image of the vehicle to be detected with an HSV model;
an output unit, used for matching the overall appearance information, the local features and the body color of the vehicle to be detected against a preset data set and outputting an identification result of the vehicle to be detected;
a local feature image acquisition unit, used for acquiring a local feature image of the vehicle to be detected from the features within the sliding-window region;
a local feature acquisition unit, used for preprocessing the local feature image based on a Faster R-CNN network and acquiring the local features of the vehicle to be detected;
a local feature classification and identification unit, used for classifying and identifying the local features of the vehicle to be detected with a support vector machine as the classifier, in combination with a preset vehicle local feature database;
a vehicle model determining unit, used for determining the model of the vehicle to be detected from the sum of the confidence outputs for the overall appearance information and the local features of the vehicle to be detected; and
a vehicle identification unit, used for matching the model and the color of the vehicle to be detected against the corresponding information in a preset data set and outputting the identification result of the vehicle to be detected.
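The HSV-based body-color analysis in claim 6 can be illustrated with a short sketch. The hue bands, color names, and thresholds below are illustrative assumptions, not values taken from the patent, and the stdlib `colorsys` module stands in for a full image-processing pipeline:

```python
import colorsys

# Illustrative hue bands (degrees) for a few common body colors.
# These thresholds are assumed for the sketch, not taken from the patent.
HUE_BANDS = [
    ("red", 0, 20), ("yellow", 40, 70), ("green", 80, 170),
    ("blue", 180, 260), ("red", 330, 360),
]

def classify_body_color(pixels):
    """Classify the dominant body color of a list of RGB pixels (0-255 ints).

    Low-saturation or very dark pixels are treated as achromatic
    (black, gray, or white) before any hue band is consulted; the
    final answer is a majority vote over all pixels.
    """
    votes = {}
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if v < 0.15:
            name = "black"
        elif s < 0.2:
            name = "white" if v > 0.7 else "gray"
        else:
            deg = h * 360
            name = next((n for n, lo, hi in HUE_BANDS if lo <= deg <= hi),
                        "other")
        votes[name] = votes.get(name, 0) + 1
    return max(votes, key=votes.get)
```

For example, a patch of mostly-red pixels such as `[(200, 20, 20)] * 10` would vote into the 0-20 degree band and come back as `"red"`.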
7. A computer device, characterized in that it comprises a processor which, when executing a computer program stored in a memory, implements the steps of the method according to any one of claims 1-5.
8. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-5.
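Claim 6 determines the vehicle model from the sum of the confidence outputs of the overall-appearance branch and the local-feature branch. A minimal sketch of that late fusion, with made-up class names and scores (the patent does not specify any weighting or normalization, so a plain unweighted sum is assumed):

```python
def fuse_confidences(appearance_conf, local_conf):
    """Sum per-model confidences from the two branches and pick the argmax.

    Both inputs map candidate model names to confidence scores; a model
    missing from one branch simply contributes 0 from that branch.
    """
    models = set(appearance_conf) | set(local_conf)
    totals = {m: appearance_conf.get(m, 0.0) + local_conf.get(m, 0.0)
              for m in models}
    best = max(totals, key=totals.get)
    return best, totals[best]

# Hypothetical scores for three candidate models.
appearance = {"sedan_A": 0.62, "suv_B": 0.55, "hatch_C": 0.30}
local      = {"sedan_A": 0.48, "suv_B": 0.70, "hatch_C": 0.25}
model, score = fuse_confidences(appearance, local)  # suv_B wins with 1.25
```

The matched model would then be looked up, together with the body color, in the preset data set to produce the final identification result.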
CN201811596741.5A 2018-12-26 2018-12-26 Vehicle detection and identification method, device, computer equipment and readable storage medium Active CN109740478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811596741.5A CN109740478B (en) 2018-12-26 2018-12-26 Vehicle detection and identification method, device, computer equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN109740478A CN109740478A (en) 2019-05-10
CN109740478B true CN109740478B (en) 2023-04-28

Family

ID=66359964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811596741.5A Active CN109740478B (en) 2018-12-26 2018-12-26 Vehicle detection and identification method, device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN109740478B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036421A (en) * 2019-05-16 2020-12-04 搜狗(杭州)智能科技有限公司 Image processing method and device and electronic equipment
CN111982415A (en) * 2019-05-24 2020-11-24 杭州海康威视数字技术股份有限公司 Pipeline leakage detection method and device
CN110378381B (en) * 2019-06-17 2024-01-19 华为技术有限公司 Object detection method, device and computer storage medium
CN110322522B (en) * 2019-07-11 2023-06-16 山东领能电子科技有限公司 Vehicle color recognition method based on target recognition area interception
CN110675373B (en) * 2019-09-12 2023-04-07 珠海格力智能装备有限公司 Component installation detection method, device and system
CN110751053B (en) * 2019-09-26 2022-02-22 高新兴科技集团股份有限公司 Vehicle color identification method, device, equipment and storage medium
CN111191539B (en) * 2019-12-20 2021-01-29 江苏常熟农村商业银行股份有限公司 Certificate authenticity verification method and device, computer equipment and storage medium
CN111144372A (en) * 2019-12-31 2020-05-12 上海眼控科技股份有限公司 Vehicle detection method, device, computer equipment and storage medium
CN111340775B (en) * 2020-02-25 2023-09-29 湖南大学 Parallel method, device and computer equipment for acquiring ultrasonic standard section
CN112164223B (en) * 2020-02-27 2022-04-29 浙江恒隆智慧科技集团有限公司 Intelligent traffic information processing method and device based on cloud platform
CN111353444A (en) * 2020-03-04 2020-06-30 上海眼控科技股份有限公司 Marker lamp monitoring method and device, computer equipment and storage medium
CN112016502B (en) * 2020-09-04 2023-12-26 平安国际智慧城市科技股份有限公司 Safety belt detection method, safety belt detection device, computer equipment and storage medium
CN115626159A (en) * 2021-07-01 2023-01-20 信扬科技(佛山)有限公司 Vehicle warning system and method and automobile
CN115880565B (en) * 2022-12-06 2023-09-05 江苏凤火数字科技有限公司 Neural network-based scraped vehicle identification method and system
CN117269180B (en) * 2023-11-24 2024-03-12 成都数之联科技股份有限公司 Vehicle appearance detection method, device, server and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184393A (en) * 2011-06-20 2011-09-14 苏州两江科技有限公司 Method for judging automobile type according to license plate recognition
CN107730905A (en) * 2017-06-13 2018-02-23 银江股份有限公司 Multitask fake license plate vehicle vision detection system and method based on depth convolutional neural networks
CN107798335A (en) * 2017-08-28 2018-03-13 浙江工业大学 A kind of automobile logo identification method for merging sliding window and Faster R-CNN convolutional neural networks
CN108875754A (en) * 2018-05-07 2018-11-23 华侨大学 A kind of vehicle recognition methods again based on more depth characteristic converged network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Research on Vehicle Detection and Automatic Identification Technology at Traffic Checkpoints"; Gu Renshu; China Master's Theses Full-text Database, Information Science and Technology; March 15, 2015 (No. 03); pp. 9-47 *
"Research on Attribute-Based Vehicle Retrieval Algorithms"; Yu Mingyue; China Master's Theses Full-text Database, Information Science and Technology; March 15, 2016 (No. 03); pp. 12, 52-64 *
"Research on Vehicle Localization and Vehicle Model Recognition Based on Deep Learning"; Zhang Feiyun; China Master's Theses Full-text Database, Engineering Science and Technology II; November 15, 2016 (No. 11); pp. 21-31 *

Also Published As

Publication number Publication date
CN109740478A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
CN109740478B (en) Vehicle detection and identification method, device, computer equipment and readable storage medium
WO2020173022A1 (en) Vehicle violation identifying method, server and storage medium
Silva et al. A flexible approach for automatic license plate recognition in unconstrained scenarios
WO2019169816A1 (en) Deep neural network for fine recognition of vehicle attributes, and training method thereof
US9721173B2 (en) Machine learning approach for detecting mobile phone usage by a driver
Alvarez et al. Road detection based on illuminant invariance
Al-Ghaili et al. Vertical-edge-based car-license-plate detection method
Wang et al. An effective method for plate number recognition
Malik et al. Detection and recognition of traffic signs from road scene images
JP2016110635A (en) Adapted vocabularies for matching image signatures with fisher vectors
Prates et al. Brazilian license plate detection using histogram of oriented gradients and sliding windows
Soomro et al. Vehicle number recognition system for automatic toll tax collection
Sugiharto et al. Traffic sign detection based on HOG and PHOG using binary SVM and k-NN
Liu et al. Multi-type road marking recognition using adaboost detection and extreme learning machine classification
CN105787475A (en) Traffic sign detection and identification method under complex environment
Wali et al. Shape matching and color segmentation based traffic sign detection system
Zaklouta et al. Segmentation masks for real-time traffic sign recognition using weighted HOG-based trees
Ahmad et al. A Review of Automatic Number Plate Recognition
Zheng et al. Shadow removal for pedestrian detection and tracking in indoor environments
Jeong et al. Homogeneity patch search method for voting-based efficient vehicle color classification using front-of-vehicle image
Anagnostopoulos et al. Using sliding concentric windows for license plate segmentation and processing
Lashkov et al. Edge-computing-facilitated nighttime vehicle detection investigations with CLAHE-enhanced images
Huu et al. Proposing WPOD-NET combining SVM system for detecting car number plate
Mutholib et al. Optimization of ANPR algorithm on Android mobile phone
Prates et al. An adaptive vehicle license plate detection at higher matching degree

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230410

Address after: No. 17, Unit 1, Building 42, Chenguang Community, Fushan District, Yantai City, Shandong Province, 264006

Applicant after: Yang Xianming

Address before: No. 1 Lanhai Road, High-tech Zone, Yantai City, Shandong Province

Applicant before: SHANDONG CHUANGKE AUTOMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant