CN109359696B - Vehicle model identification method, system and storage medium - Google Patents

Vehicle model identification method, system and storage medium

Info

Publication number
CN109359696B
Authority
CN
China
Prior art keywords
features
vehicle
neural network
layer
global
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811268068.2A
Other languages
Chinese (zh)
Other versions
CN109359696A (en)
Inventor
周曦 (Zhou Xi)
施志祥 (Shi Zhixiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Zhongke Yuncong Technology Co., Ltd.
Original Assignee
Chongqing Zhongke Yuncong Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2018-10-29
Publication date: 2021-04-02
Application filed by Chongqing Zhongke Yuncong Technology Co., Ltd.
Priority to CN201811268068.2A
Publication of CN109359696A
Application granted
Publication of CN109359696B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 - Detecting or categorising vehicles

Abstract

The invention relates to the field of image processing and discloses a vehicle model identification method, system and storage medium. The method comprises the following steps: extracting a global feature and a plurality of local features of a vehicle image by using a multi-granularity neural network; fusing the global feature and the plurality of local features with a feature fusion neural network to obtain category features corresponding to the vehicle image; obtaining, from the category features, the probability that the vehicle image belongs to each vehicle model category by using a classification neural network; and determining the vehicle model with the highest probability as the vehicle model corresponding to the vehicle image. By extracting features with a multi-granularity neural network that mixes global and local branches and then fusing the global and local features, the method can accurately identify the vehicle model shown in a vehicle image.

Description

Vehicle model identification method, system and storage medium
Technical Field
The invention relates to the field of image processing, and in particular to a vehicle model identification method, system and storage medium.
Background
With the rise of artificial intelligence, vehicle tracking and vehicle retrieval technologies are being applied ever more widely in daily life, and have become important means for urban traffic safety, traffic control and the pursuit of offending vehicles. In theory, the license plate number, as the unique identity of a vehicle, solves the tracking and retrieval problems perfectly. In practice, however, a complete and correct plate number often cannot be obtained because of occlusion, illumination, motion blur or deliberate forgery. The vehicle model is another important attribute of a vehicle that largely avoids these adverse factors; combined with attributes such as vehicle color, it enables tracking and retrieval within a certain range of time and space. Vehicle model identification has therefore become increasingly important and is now an essential module of vehicle tracking and retrieval systems.
Disclosure of Invention
The present invention provides a vehicle model identification method, system and storage medium to solve the above problems.
In a first aspect, the present invention provides a vehicle model identification method, including:
extracting a global feature and a plurality of local features of a vehicle image by using a multi-granularity neural network;
fusing the global feature and the plurality of local features with a feature fusion neural network to obtain category features corresponding to the vehicle image;
obtaining, from the category features, the probability that the vehicle image belongs to each vehicle model category by using a classification neural network;
and determining the vehicle model with the highest probability as the vehicle model corresponding to the vehicle image.
Optionally, the multi-granularity neural network includes a first convolutional layer, a max pooling layer, at least one basic feature extraction module and at least three spatial feature extraction modules. The first convolutional layer, the max pooling layer and the at least one basic feature extraction module are connected in sequence, and the output of the at least one basic feature extraction module is connected to each of the at least three spatial feature extraction modules. The basic feature extraction module extracts basic features from the vehicle image, and each spatial feature extraction module learns a corresponding global or local feature. A basic feature extraction module consists of a conv_block and at least one identity_block connected in sequence.
Optionally, each spatial feature extraction module includes an average pooling layer, a fully connected layer and a softmax activation layer, connected in sequence.
Optionally, the feature fusion neural network includes a splicing (concatenation) layer, a dimension transformation layer, and second, third, fourth and fifth convolutional layers, all connected in sequence. The splicing layer concatenates the several outputs of the multi-granularity neural network, and the dimension transformation layer reshapes the output of the splicing layer.
Optionally, the second and fourth convolutional layers share the same convolution kernel, the third and fifth convolutional layers share the same convolution kernel, and the kernels of all four layers are asymmetric convolution kernels.
Optionally, obtaining the probability that the vehicle image belongs to each vehicle model category from the category features by using a classification neural network specifically includes: the classification neural network fuses the features of each spatial branch in the category features through a sixth convolutional layer, and outputs the probability that the vehicle image belongs to each vehicle model category through a softmax activation layer.
Optionally, before extracting the global feature and the plurality of local features of the vehicle image with the multi-granularity neural network, the method further includes: training the multi-granularity neural network with the training samples in a training set until a training end condition is met.
Optionally, after the multi-granularity neural network has been trained until the training end condition is met, the method further includes: fixing the parameters of the multi-granularity neural network, and training the feature fusion neural network and the classification neural network with the training samples.
In a second aspect, the present invention provides a vehicle model identification system, comprising:
a feature extraction module for extracting a global feature and a plurality of local features of a vehicle image through a multi-granularity neural network; a feature fusion module for fusing the global feature and the plurality of local features through a feature fusion neural network to obtain category features corresponding to the vehicle image; a classification module for obtaining, from the category features, the probability that the vehicle image belongs to each vehicle model category through a classification neural network; and a vehicle model determining module for determining the vehicle model with the highest probability as the vehicle model corresponding to the vehicle image.
In a third aspect, the present invention provides an electronic device comprising: a processor and a memory, the memory storing processor-executable machine-readable instructions, which when executed by the processor, perform the method of the first aspect.
In a fourth aspect, the present invention provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the first aspect.
Compared with the prior art, the vehicle model identification method provided by the invention identifies the model of an input vehicle image through a multi-granularity neural network, a feature fusion neural network and a classification neural network connected in sequence: the multi-granularity neural network extracts the global general feature and the local features of the image, the feature fusion neural network fuses them, and the classification neural network determines the probability that the vehicle in the image belongs to each vehicle model category.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, alternative embodiments are described in detail below with reference to the accompanying figures.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
FIG. 1 is a flow chart of the vehicle model identification method provided by the present invention;
FIG. 2 is a block diagram of the multi-granularity neural network provided by the present invention;
FIG. 3 is a detailed block diagram of the multi-granularity neural network provided by the present invention;
FIG. 4 shows the structure of a conv_block and an identity_block;
FIG. 5 is a block diagram of the feature fusion neural network provided by the present invention;
FIG. 6 is a detailed block diagram of the feature fusion neural network and the classification network provided by the present invention;
FIG. 7 is another flow chart of the vehicle model identification method provided by the present invention;
FIG. 8 is a schematic diagram of the vehicle model identification system provided by the present invention.
Reference numerals:
first convolutional layer - 10; max pooling layer - 11; basic feature extraction module - 12; spatial feature extraction module - 13; splicing layer - 20; dimension transformation layer - 21; second convolutional layer - 22; third convolutional layer - 23; fourth convolutional layer - 24; fifth convolutional layer - 25; feature extraction module - 30; feature fusion module - 31; classification module - 32; vehicle model determining module - 33.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
First embodiment
Artificial neural networks are a research hotspot in the field of artificial intelligence at the present stage. They abstract the neuron network of the human brain from the perspective of information processing and, by establishing simple models, form different networks according to different connection modes. This embodiment provides a vehicle model identification method built on several neural networks; for context, the vehicle model identification techniques mainly in use at present are described first.
The first identifies vehicles with a generic object recognition model: a generic recognition framework is trained on a vehicle model data set to obtain the final model. Its accuracy is low, because generic object recognition deals with large differences between categories, whereas the differences between vehicle model categories are small and the same vehicle must be assigned to the same category under different colors and shooting angles, which current generic frameworks cannot handle well.
The second is based on local part recognition: parts such as the lights, windows, wheels, front and rear of the vehicle are detected, the features of each part are extracted and merged, and a classifier performs the recognition. However, when the shooting angle prevents all the specified parts from being detected, the method fails; moreover, every part must be detected before features can be extracted, so the recognition process is complex and slow, making real-time vehicle tracking difficult.
The third is based on 3D model matching: the 3D pose of the vehicle is estimated and matched against projection images under similar poses in a 3D model library. This handles the shooting angle well but not illumination, motion blur and similar problems, so its accuracy in real scenes is low.
In this embodiment, several neural networks jointly extract and fuse the features of an input vehicle image and identify its model. Referring to FIG. 1, the method includes:
Step 101: extracting a global feature and a plurality of local features of the vehicle image by using a multi-granularity neural network.
Step 102: fusing the global feature and the plurality of local features with the feature fusion neural network to obtain the category features corresponding to the vehicle image.
Step 103: obtaining, from the category features, the probability that the vehicle image belongs to each vehicle model category by using a classification neural network.
Step 104: determining the vehicle model with the highest probability as the vehicle model corresponding to the vehicle image.
The multi-granularity neural network, the feature fusion neural network and the classification neural network are connected in sequence. The first-stage multi-granularity neural network abstracts different spatial parts of the input vehicle image and extracts its global feature and local features. The global feature is a general feature of the whole image; the local features are detail features of individual spatial regions. Combining the two represents the input image with more information and detail. If only the global feature were extracted for classification, the detail information of each local region would easily be lost and the final recognition would be unsatisfactory. The mixed multi-granularity network attends to global information and local detail simultaneously, so it can learn more representative information from the image.
The vehicle image input to the multi-granularity neural network may be the vehicle region detected by an upstream vehicle detection model in a traffic management system, or a vehicle picture from a real scene. The multi-granularity neural network extracts the basic features of the input image; different branches then learn the global and local features respectively, each branch attending only to its own features. The outputs of the global and local branches feed the feature fusion neural network, which fuses all the feature maps of the vehicle image; the fused category features are passed to the classification neural network, which outputs the probability that the vehicle in the image belongs to each vehicle model category.
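The following sketch illustrates this three-stage inference flow. It assumes a tf.keras implementation; the function and variable names are hypothetical, since the patent publishes no code.

```python
import tensorflow as tf

def identify_vehicle_model(image, multi_granularity_net, fusion_net,
                           classification_net, class_names):
    """Run one vehicle image through the three sequentially connected networks."""
    x = tf.expand_dims(image, 0)                # add a batch dimension
    branch_outputs = multi_granularity_net(x)   # global + local branch features
    category_features = fusion_net(branch_outputs)
    probabilities = classification_net(category_features)[0]
    best = int(tf.argmax(probabilities))        # highest-probability category
    return class_names[best], float(probabilities[best])
```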
Specifically, referring to FIG. 2, the multi-granularity neural network includes a first convolutional layer 10, a max pooling layer 11, at least one basic feature extraction module 12 and at least three spatial feature extraction modules 13. The first convolutional layer 10, the max pooling layer 11 and the basic feature extraction modules 12 are connected in sequence, and the output of the last basic feature extraction module 12 is connected to each of the spatial feature extraction modules 13. The basic feature extraction modules 12 extract basic features from the input vehicle image, including the contours, edges and textures of each part of the vehicle. The feature map they output is split in the horizontal direction to obtain several spatial branches, and different spatial feature extraction modules 13 extract the features of the corresponding spatial parts.
It should be noted that when the input vehicle image is small and its background simple, a single basic feature extraction module 12 may suffice. In practice, however, input images are usually large and their backgrounds complex, so the number of basic feature extraction modules 12 is increased accordingly to obtain better basic features.
Each basic feature extraction module 12 consists of a conv_block and at least one identity_block connected in sequence, with the same structure as the corresponding blocks of ResNet50. A residual network has two basic blocks. The conv_block reduces the size of the feature map, removing redundant information and extracting the more critical features; because its input and output dimensions differ, its shortcut cannot simply pass the input through. The identity_block learns more representative features from the similar parts of different vehicles against a complex background, and its input and output feature maps have the same size. When the background of the input picture is complex, or many vehicle models look alike, several identity_blocks can be connected in series after the conv_block to extract more representative feature information.
FIG. 3 shows a specific connection structure of the multi-granularity neural network provided in this embodiment, with four basic feature extraction modules: Block2, Block3, Block4 and Block5, connected in sequence. Together these blocks extract the basic features of the vehicle image. Block2 contains one conv_block followed by two identity_blocks in series; Block3 and Block4 each contain one conv_block followed by three identity_blocks; Block5 contains one conv_block followed by two identity_blocks and outputs the basic features of the vehicle image.
Specifically, this embodiment shows the structure of each conv_block and identity_block in Block2 through Block5. A conv_block contains three convolutional layers connected in sequence with kernel sizes 1×1, 3×3 and 1×1; the output of the last layer is added to the output of a separate 1×1 convolution whose input is the conv_block's input, and the sum passes through a ReLU activation layer. An identity_block likewise contains three convolutional layers with kernel sizes 1×1, 3×3 and 1×1, but the output of the last layer is added to the identity_block's own input before the ReLU activation.
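A minimal tf.keras sketch of the two residual blocks as just described; the filter counts are parameters, and the omission of batch normalization is an assumption, since the patent fixes only the kernel sizes and the shortcut structure:

```python
from tensorflow.keras import layers

def conv_block(x, filters, strides=2):
    """Residual block that changes the feature-map size; the shortcut is a
    1x1 convolution on the block input so dimensions match at the add."""
    f1, f2, f3 = filters
    shortcut = layers.Conv2D(f3, 1, strides=strides)(x)
    y = layers.Conv2D(f1, 1, strides=strides, activation='relu')(x)
    y = layers.Conv2D(f2, 3, padding='same', activation='relu')(y)
    y = layers.Conv2D(f3, 1)(y)
    return layers.Activation('relu')(layers.Add()([y, shortcut]))

def identity_block(x, filters):
    """Residual block that keeps the feature-map size; the block input
    (assumed to already have f3 channels) is added to the conv output."""
    f1, f2, f3 = filters
    y = layers.Conv2D(f1, 1, activation='relu')(x)
    y = layers.Conv2D(f2, 3, padding='same', activation='relu')(y)
    y = layers.Conv2D(f3, 1)(y)
    return layers.Activation('relu')(layers.Add()([y, x]))
```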
In the multi-granularity neural network, each spatial feature extraction module consists of an average pooling layer, a fully connected (fc) layer and a softmax activation layer connected in sequence; these continue learning from the features that Block5 outputs to the branches. Of the four branches connected to Block5, three are local branches obtained by splitting Block5's output feature map horizontally, and the fourth is the global branch. With this structure, the three local branches learn the salient features of different regions through their average pooling, fully connected and softmax layers, while the global branch learns the global feature. With the connections of FIG. 3, the feature map output by Block5 is 14×14, and slicing it horizontally can, for example, yield branch feature maps of 14×5, 14×4 and 14×5. The Block5 feature map must be sliced at least once, giving at least two local branches and one global branch, and hence at least three spatial feature extraction modules.
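A sketch of the branch construction under the same tf.keras assumption; the 5/4/5 split of the 14-row feature map follows the example above, and using global average pooling as the average pooling layer is an implementation choice:

```python
from tensorflow.keras import layers

def spatial_branch(feature_map, num_classes):
    """One spatial feature extraction module: average pooling,
    a fully connected layer, then a softmax activation."""
    y = layers.GlobalAveragePooling2D()(feature_map)
    y = layers.Dense(num_classes)(y)
    return layers.Activation('softmax')(y)

def build_branches(block5_output, num_classes):
    """Slice the 14x14 Block5 map into three horizontal strips
    (heights 5, 4, 5) for the local branches; the full map feeds
    the global branch."""
    strips = [block5_output[:, 0:5, :, :],
              block5_output[:, 5:9, :, :],
              block5_output[:, 9:14, :, :]]
    branches = [spatial_branch(s, num_classes) for s in strips]
    branches.append(spatial_branch(block5_output, num_classes))
    return branches  # three local branches + one global branch
```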
This multi-granularity neural network differs from those in common use: the branches of the spatial feature extraction modules are completely independent and extract the features of different parts separately, while the base-layer weights are shared, which reduces model complexity and speeds up recognition. Extracting the global general feature and the local salient features in a mixed global-and-multi-local fashion lets the global and local branches supervise and reinforce each other, so the extracted features are more stable.
Referring to FIG. 5, the feature fusion neural network includes a splicing layer 20, a dimension transformation layer 21, a second convolutional layer 22, a third convolutional layer 23, a fourth convolutional layer 24 and a fifth convolutional layer 25, connected in sequence. The splicing layer 20 concatenates the several outputs of the multi-granularity neural network, and the dimension transformation layer 21 reshapes the output of the splicing layer 20.
FIG. 6 shows the connection structure of the feature fusion neural network and the classification neural network provided in this embodiment. The four branches of the multi-granularity neural network are merged by a concatenate operation in the splicing layer, and a reshape operation transforms the dimensions of its output. With the connection structure of FIG. 3, reshape can produce a tensor of 4 × (number of vehicle model categories) × 1, where the 4 rows represent the 3 local branches and 1 global branch: each row holds the features of a different spatial region, and each column holds the features of a different category, thereby decoupling the features across space and across categories.
It should be noted from FIG. 6 that, unlike the symmetric convolution kernels commonly used for feature fusion, this embodiment fuses the features across spatial branches and categories with two serially connected pairs of asymmetric-kernel convolutions of sizes 4×1 and 1×7. The 4×1 convolutions fuse the features of different spatial branches within the same category, while the 1×7 convolutions fuse the features of similar categories within the same spatial branch, decoupling and then fusing the features across space and categories so that fusion is more efficient and accurate. The kernel sizes of the four convolutional layers can of course be adapted to the preceding multi-granularity network and the actual situation, which this embodiment does not limit; for example, if the preceding network used one global branch and two local branches, the 4×1 kernels would become 3×1.
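A sketch of the fusion network under the same assumptions; the channel width of 8 matches the 4 × (number of categories) × 8 output dimension noted below, while the 'same' padding and ReLU activations are implementation guesses the patent does not specify:

```python
from tensorflow.keras import layers

def build_fusion(branch_outputs, num_classes, width=8):
    """Concatenate the four branch outputs, reshape to 4 x classes x 1,
    then apply two serial pairs of 4x1 and 1x7 asymmetric convolutions."""
    x = layers.Concatenate()(branch_outputs)    # splicing layer
    x = layers.Reshape((4, num_classes, 1))(x)  # dimension transformation layer
    x = layers.Conv2D(width, (4, 1), padding='same', activation='relu')(x)  # across branches
    x = layers.Conv2D(width, (1, 7), padding='same', activation='relu')(x)  # across similar classes
    x = layers.Conv2D(width, (4, 1), padding='same', activation='relu')(x)
    x = layers.Conv2D(width, (1, 7), padding='same', activation='relu')(x)
    return x                                    # shape: (batch, 4, num_classes, 8)
```

Built this way, each 4×1 kernel spans all four branch rows of one category column, while each 1×7 kernel spans seven adjacent category columns of one branch row, which is exactly the decoupled fusion the text describes.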
Because the feature fusion neural network fuses the image features with asymmetric-kernel convolutions, the number of parameters drops sharply, fusion is more efficient, the resulting category features are more salient, and the accuracy of vehicle model identification is effectively improved.
The input of the classification neural network is connected to the output of the feature fusion neural network. With the connection structure of FIG. 3 and FIG. 6, the output dimension of the feature fusion neural network is 4 × (number of vehicle model categories) × 8. In FIG. 6, the classification neural network fuses the features of each spatial branch in those category features through a 4×1 convolution, yielding features of dimension 1 × (number of vehicle model categories) × 1, and outputs the probability that the vehicle in the input image belongs to each vehicle model category through a softmax activation layer.
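The corresponding classification head, sketched under the same assumptions; a single-filter 4×1 convolution with valid padding collapses the four branches, matching the 1 × (number of categories) × 1 dimension stated above:

```python
from tensorflow.keras import layers

def build_classifier(fused_features, num_classes):
    """Sixth convolutional layer (4x1, valid padding) fuses the spatial
    branches; softmax turns the per-category scores into probabilities."""
    x = layers.Conv2D(1, (4, 1))(fused_features)  # -> (batch, 1, num_classes, 1)
    x = layers.Reshape((num_classes,))(x)
    return layers.Activation('softmax')(x)
```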
Because the 1×7 convolutions in the feature fusion neural network fuse the features of similar categories within the same spatial branch, the probabilities output by the classification neural network place vehicle models with similar features next to each other; for example, the probabilities of a series of similar BMW models are output together.
It should be noted that the sizes of the convolution kernels in the detailed structures of the multi-granularity neural network, the feature fusion neural network and the classification network shown in the drawings are only one embodiment, and should not be construed as limiting the invention.
In summary, in the above scheme the multi-granularity neural network extracts the salient features of each local region and the global general feature from the input vehicle picture. The extracted features are divided spatially into several local branches and a global branch, and the same column of each branch represents the features of one category, so all features are decoupled across space and categories. The two-stage serial asymmetric-kernel convolutions of the feature fusion neural network then fuse the features across spatial branches and across categories respectively. Finally, the asymmetric-kernel convolution of the classification neural network fuses the spatial branch features into per-category features, the softmax activation layer yields the probability that the vehicle belongs to each model, and the model with the highest probability is taken as the model of the input image. The whole process is fast, unaffected by vehicle color and shooting angle, and achieves high accuracy under local occlusion, motion blur, dim light and similar factors.
It should be understood that the above describes the usage phase, in which the input vehicle image passes through the multi-granularity neural network, the feature fusion neural network and the classification neural network to determine the vehicle model. Before this phase, each neural network must be trained on a large number of training samples, as shown in FIG. 7:
step 201: and training the multi-granularity neural network by using the training samples on the training set.
And training the multi-granularity neural network on a training set containing a plurality of vehicle images until the training is stable.
Step 202: and fixing parameters in the multi-granularity neural network, and training the characteristic fusion neural network and the classification neural network by using the training samples.
After the multi-granularity neural network is trained, parameters in the multi-granularity neural network are fixed, the multi-granularity neural network, the characteristic fusion neural network and the classification neural network are sequentially connected, training samples concentrated in training are input into the multi-granularity neural network, the characteristic fusion neural network and the classification neural network are trained together until the characteristic fusion neural network and the classification neural network are stably trained, and then all the neural networks can be finely adjusted through a small learning rate, so that a better classification and identification effect is achieved.
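A sketch of this two-stage schedule under the same tf.keras assumption; the optimizers, loss and epoch counts are illustrative rather than taken from the patent, and `backbone`, `fusion_net` and `classifier_net` stand for the three models sketched above:

```python
import tensorflow as tf

def train_two_stage(backbone, fusion_net, classifier_net, train_ds):
    # train_ds is assumed to yield (image, label) batches shaped to match
    # each model's outputs (stage 1 needs one label per branch output).

    # Stage 1: train the multi-granularity network until it is stable.
    backbone.compile(optimizer='adam', loss='categorical_crossentropy')
    backbone.fit(train_ds, epochs=20)

    # Stage 2: freeze the backbone, then train the fusion and
    # classification networks connected behind it.
    backbone.trainable = False
    outputs = classifier_net(fusion_net(backbone(backbone.input)))
    full_model = tf.keras.Model(backbone.input, outputs)
    full_model.compile(optimizer='adam', loss='categorical_crossentropy')
    full_model.fit(train_ds, epochs=20)

    # Optional: unfreeze everything and fine-tune with a small learning rate.
    backbone.trainable = True
    full_model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                       loss='categorical_crossentropy')
    full_model.fit(train_ds, epochs=5)
    return full_model
```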
Second embodiment
This embodiment provides a vehicle model identification system, referring to FIG. 8, including:
a feature extraction module 30 for extracting a global feature and a plurality of local features of the vehicle image through a multi-granularity neural network;
a feature fusion module 31 for fusing the global feature and the plurality of local features through a feature fusion neural network to obtain the category features corresponding to the vehicle image;
a classification module 32 for obtaining, from the category features, the probability that the vehicle image belongs to each vehicle model category through a classification neural network;
and a vehicle model determining module 33 for determining the vehicle model with the highest probability as the vehicle model corresponding to the vehicle image.
Those skilled in the art will appreciate that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method and is not repeated here.
The present embodiment also provides an electronic device, including: a processor and a memory storing processor-executable machine-readable instructions which, when executed by the processor, perform the method described in the first embodiment.
The present embodiment also provides a storage medium having a computer program stored thereon which, when executed by a processor, performs the method described in the first embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. The system embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a notebook computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only an alternative embodiment of the present invention and is not intended to limit the present invention, and various modifications and variations of the present invention may occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A vehicle model identification method, comprising:
extracting a global feature and a plurality of local features of a vehicle image by using a multi-granularity neural network;
wherein the multi-granularity neural network extracts basic features of the input vehicle image and splits them at least once to obtain two local branches and one global branch; the global feature and the local features of the image are learned by different branches, the global feature through the global branch and the local features through the local branches, each branch attending only to its corresponding features during learning;
fusing the global feature and the plurality of local features with a feature fusion neural network to obtain category features corresponding to the vehicle image;
obtaining, from the category features, the probability that the vehicle image belongs to each vehicle model category by using a classification neural network;
and determining the vehicle model with the highest probability as the vehicle model corresponding to the vehicle image.
2. The method according to claim 1, wherein the multi-granularity neural network comprises a first convolutional layer, a max pooling layer, at least one basic feature extraction module and at least three spatial feature extraction modules; the first convolutional layer, the max pooling layer and the at least one basic feature extraction module are connected in sequence, and the output of the at least one basic feature extraction module is connected to each of the at least three spatial feature extraction modules;
the basic feature extraction module extracts the basic features of the vehicle image, each spatial feature extraction module learns the corresponding global or local feature, and the basic feature extraction module comprises a conv_block and at least one identity_block connected in sequence.
3. The method of claim 2, wherein each of the spatial feature extraction modules comprises an average pooling layer, a fully connected layer and a softmax activation layer, connected in sequence.
4. The method of claim 3, wherein the feature fusion neural network comprises a splicing layer, a dimension transformation layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer and a fifth convolutional layer, connected in sequence, wherein the splicing layer concatenates a plurality of outputs of the multi-granularity neural network and the dimension transformation layer performs a dimension transformation on the output of the splicing layer.
5. The method of claim 4, wherein the convolution kernels of the second and fourth convolutional layers are the same, the convolution kernels of the third and fifth convolutional layers are the same, and the convolution kernels of the second, third, fourth and fifth convolutional layers are asymmetric convolution kernels.
6. The method according to claim 5, wherein obtaining the probability that the vehicle image belongs to each vehicle model category from the category features by using a classification neural network specifically comprises:
the classification neural network fusing the features of each spatial branch in the category features through a sixth convolutional layer and outputting the probability that the vehicle image belongs to each vehicle model category through a softmax activation layer.
7. The method according to any one of claims 1-6, wherein before extracting the global feature and the plurality of local features of the vehicle image using the multi-granularity neural network, the method further comprises:
training the multi-granularity neural network with the training samples in a training set until a training end condition is satisfied.
8. The method of claim 7, wherein after training the multi-granularity neural network with the training samples in the training set until the training end condition is satisfied, the method further comprises:
fixing the parameters of the multi-granularity neural network and training the feature fusion neural network and the classification neural network with the training samples.
9. A vehicle model identification system, comprising:
a feature extraction module for extracting a global feature and a plurality of local features of a vehicle image through a multi-granularity neural network, wherein the multi-granularity neural network extracts basic features of the input vehicle image and splits them at least once to obtain two local branches and one global branch, the global and local features being learned by different branches respectively, each branch attending only to its corresponding features during learning;
a feature fusion module for fusing the global feature and the plurality of local features through a feature fusion neural network to obtain category features corresponding to the vehicle image;
a classification module for obtaining, from the category features, the probability that the vehicle image belongs to each vehicle model category through a classification neural network;
and a vehicle model determining module for determining the vehicle model with the highest probability as the vehicle model corresponding to the vehicle image.
10. A storage medium, having stored thereon a computer program which, when executed by a processor, performs the method according to any one of claims 1-8.
CN201811268068.2A (priority and filing date 2018-10-29): Vehicle model identification method, system and storage medium. Status: Active. Granted as CN109359696B.

Priority Applications (1)

Application Number: CN201811268068.2A
Priority date / Filing date: 2018-10-29
Title: Vehicle model identification method, system and storage medium

Publications (2)

Publication Number and Date:
CN109359696A (application publication): 2019-02-19
CN109359696B (granted patent): 2021-04-02

Family

Family ID: 65347074

Family Applications (1)

Application Number: CN201811268068.2A
Title: Vehicle model identification method, system and storage medium
Priority date / Filing date: 2018-10-29
Status: Active

Country Status (1)

CN: CN109359696B (granted)

Citations (2)

* Cited by examiner, † Cited by third party

CN108021933A * (Shenzhen Harzone Technology Co., Ltd.; priority 2017-11-23, published 2018-05-11): Neural network recognition model and recognition method
CN108681707A * (Guilin University of Electronic Technology; priority 2018-05-15, published 2018-10-19): Wide-angle vehicle model recognition method and system based on global and local feature fusion

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information
  Address after: 401122, 5th floor, Block 106, West Jinkai Avenue, Yubei District, Chongqing
  Applicant after: Chongqing Zhongke Yuncong Technology Co., Ltd.
  Address before: 400000, 5th floor, Block 106, West Jinkai Avenue, Yubei District, Chongqing
  Applicant before: Chongqing Zhongke Yuncong Technology Co., Ltd.
GR01: Patent grant