CN110348393A - Vehicle feature extraction model training method, vehicle identification method and equipment - Google Patents

Vehicle feature extraction model training method, vehicle identification method and equipment

Info

Publication number
CN110348393A
CN110348393A (application CN201910632120.6A)
Authority
CN
China
Prior art keywords
sample image
vehicle
image
negative sample
positive sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910632120.6A
Other languages
Chinese (zh)
Other versions
CN110348393B (en)
Inventor
周康明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd
Priority to CN201910632120.6A
Publication of CN110348393A
Application granted
Publication of CN110348393B
Expired - Fee Related
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a vehicle feature extraction model training method, a vehicle identification method, and corresponding equipment. Multiple frames are obtained from a vehicle surveillance video stream; two images of the same vehicle are cropped from two frames and spliced into a positive sample image, and two images of different vehicles are cropped from two frames and spliced into a negative sample image. A portion of the positive and negative sample images is selected as test data. A twin (Siamese) network model with two branches is designed, and each positive or negative sample image in the test data is split back into its two vehicle images. Based on the twin network model, the features of the two vehicle images of each positive or negative sample image in the test data are obtained, and these features are used to judge whether training of the twin network model is complete or should continue. This improves the accuracy of the vehicle image features produced by the twin network model, and thereby the accuracy of vehicle identification based on those features.

Description

Vehicle feature extraction model training method, vehicle identification method and equipment
Technical field
The present invention relates to the field of computing, and in particular to a vehicle feature extraction model training method, a vehicle identification method, and corresponding equipment.
Background technique
Existing vehicle identification methods suffer from low accuracy during forward inference in production. Moreover, when a matching error between vehicles (the same vehicle judged as different, or different vehicles judged as the same) is discovered at inference time, it is difficult to correct the problem promptly.
Summary of the invention
An object of the present invention is to provide a vehicle feature extraction model training method, a vehicle identification method, and corresponding equipment.
According to one aspect of the invention, a vehicle feature extraction model training method is provided, the method comprising:
obtaining multiple frames from a vehicle surveillance video stream, cropping two vehicle images of the same vehicle from two frames and splicing them into a positive sample image, and cropping two vehicle images of different vehicles from two frames and splicing them into a negative sample image;
selecting a portion of the positive and negative sample images as test data;
designing a twin network model containing two branches, and splitting each positive or negative sample image in the test data back into its two vehicle images;
based on the twin network model, obtaining the features of the two vehicle images of each positive or negative sample image in the test data, and judging from the obtained features whether training of the twin network model is complete or should continue.
Further, in the above method, obtaining the features of the two vehicle images of each positive or negative sample image in the test data based on the twin network model, and judging from the obtained features whether training of the twin network model is complete or should continue, comprises:
step S41: inputting the two vehicle images split from a positive or negative sample image in the test data into the current twin network model for feature extraction, to obtain the features of the two vehicle images of that positive or negative sample image;
step S42: calculating the similarity of the features of the two vehicle images of the positive or negative sample image in the test data, judging from the calculated similarity whether the two vehicle images belong to a positive sample image or a negative sample image to obtain a judgment result, and comparing the judgment result with the actual label (positive or negative) of the sample image to which the two vehicle images belong;
step S43: if the comparison is inconsistent, taking the positive and negative sample images that were not selected as test data as training data, using that training data as input, continuing to train the twin network model by fitting the loss function, and then returning to step S41;
step S44: if the comparison is consistent, training of the twin network model ends.
Further, in the above method, calculating the similarity of the features of the two vehicle images of the positive or negative sample image in the test data comprises:
calculating, through a normalization (norm) layer of the caffe framework, the denominator of the cosine similarity between the features of the two vehicle images of the positive or negative sample image in the test data;
calculating, through an element-wise (eltwise) layer of the caffe framework, the numerator of the cosine similarity between the features of the two vehicle images of the positive or negative sample image in the test data;
converting the calculated numerator and denominator of the cosine similarity into a small neural network through a fully connected (InnerProduct) layer of the caffe framework combined with a concat layer, and outputting the result of that neural network through softmax.
Further, in the above method, using the training data as input and continuing to train the twin network model by fitting the loss function comprises:
taking the softmax output of the neural network together with the training data as input, and continuing to train the twin network model by fitting the loss function.
Further, in the above method, obtaining multiple frames from the vehicle surveillance video stream, each time cropping two vehicle images of the same vehicle from two frames and splicing them into a positive sample image, and each time cropping two vehicle images of different vehicles from two frames and splicing them into a negative sample image, comprises:
finding all vehicles in different frames of the vehicle surveillance video stream by a deep learning detection algorithm;
based on the vehicles found, assigning the same color identifier to the same vehicle across different frames;
based on the color identifiers, cropping two vehicle images with the same color identifier from two frames and splicing them into a positive sample image, and cropping two vehicle images with different color identifiers from two frames and splicing them into a negative sample image.
Further, in the above method, splitting each positive or negative sample image in the test data into its two vehicle images comprises:
uniformly resizing the positive and negative sample images in the test data to a preset size;
splitting each resized positive or negative sample image in the test data into two vehicle images through a slice layer of the caffe framework.
Further, in the above method, inputting the two vehicle images split from a positive or negative sample image in the test data into the current twin network model for feature extraction comprises:
inputting each pair of split vehicle images respectively into the two GoogLeNet Inception-V2 branches or the two ResNet50 branches of the twin network model for feature extraction, wherein the two branches share all learnable parameters: the learnable parameters of the first branch are updated by iteration, and the learnable parameters of the second branch are copied directly from the parameters of the first branch after each iteration.
According to another aspect of the present invention, a vehicle identification method is also provided, the method comprising:
inputting the images of two vehicles to be matched into a twin network model trained as described in any of the above embodiments;
extracting, from one branch of the twin network model, the features of the images of the two vehicles;
calculating the similarity of the extracted features of the two vehicle images, and judging, from the calculated similarity and a preset similarity threshold, whether the two vehicles are the same vehicle or different vehicles.
According to another aspect of the present invention, equipment for processing client information in a network device is also provided, the equipment comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the equipment is triggered to execute the vehicle feature extraction model training method of any of the above embodiments.
According to another aspect of the present invention, equipment for processing client information in a network device is also provided, the equipment comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the equipment is triggered to execute the vehicle identification method described above.
Compared with the prior art, the present invention obtains multiple frames from a vehicle surveillance video stream, crops two vehicle images of the same vehicle from two frames and splices them into a positive sample image, and crops two vehicle images of different vehicles from two frames and splices them into a negative sample image; selects a portion of the positive and negative sample images as test data; designs a twin network model containing two branches and splits each positive or negative sample image in the test data back into its two vehicle images; and, based on the twin network model, obtains the features of the two vehicle images of each positive or negative sample image in the test data and judges from the obtained features whether training of the twin network model is complete or should continue. This improves the accuracy of the vehicle image features obtained through the twin network model, and thereby the accuracy of identifying vehicles from those features.
Description of the drawings
Other features, objects, and advantages of the invention will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 shows a flowchart of a vehicle feature extraction model training method according to one embodiment of the invention;
Fig. 2 shows a flowchart of a vehicle feature extraction model training method according to another embodiment of the invention;
Fig. 3 shows a schematic diagram of a positive sample image according to one embodiment of the invention;
Fig. 4 shows a schematic diagram of a negative sample image according to one embodiment of the invention;
Fig. 5 shows a schematic diagram of one embodiment of the invention;
Fig. 6 shows a schematic diagram of an application scenario of one embodiment of the invention;
Fig. 7 shows a flowchart of a vehicle identification method according to one embodiment of the invention.
The same or similar reference signs in the drawings denote the same or similar components.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
In a typical configuration of this application, a terminal, a device of a service network, and a trusted party each include one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, and may be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
As shown in Fig. 1, the present application provides a vehicle feature extraction model training method, the method comprising:
step S1: obtaining multiple frames from a vehicle surveillance video stream, each time cropping two vehicle images of the same vehicle from two frames and splicing them into a positive sample image, and each time cropping two vehicle images of different vehicles from two frames and splicing them into a negative sample image;
Here, a front-end capture device may collect several frames of the vehicle surveillance video stream, and a data sampling tool may be used to collect images of the same or of different vehicles, which are then spliced into positive or negative sample images respectively;
As shown in Fig. 3, two images of the same vehicle may each time be cropped from two frames and spliced into a positive sample image; as shown in Fig. 4, two images of different vehicles may each time be cropped from two frames and spliced into a negative sample image;
step S2: selecting a portion of the positive and negative sample images as test data;
Here, for example, if there are 1000 positive and negative sample images in total, 600 of them may be chosen as training data and the remaining 400 as test data;
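For illustration, the splicing and the train/test split described here can be sketched in Python, assuming OpenCV and NumPy; splice_pair and split_train_test are hypothetical helper names:

```python
import random
import cv2
import numpy as np

def splice_pair(img_a, img_b, size=(200, 200)):
    """Resize two vehicle crops to 200x200 and stack them vertically into one
    400x200 sample image (a positive sample if both crops show the same
    vehicle, a negative sample otherwise)."""
    a = cv2.resize(img_a, size)
    b = cv2.resize(img_b, size)
    return np.vstack([a, b])

def split_train_test(samples, test_ratio=0.4):
    """For example, with 1000 spliced samples, 600 become training data and
    400 become test data."""
    random.shuffle(samples)
    k = int(len(samples) * test_ratio)
    return samples[k:], samples[:k]  # (training data, test data)
```

Splicing the two crops into a single image lets each sample carry both vehicles together with its positive or negative label.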
step S3: designing a twin network model containing two branches, and splitting each positive or negative sample image in the test data back into its two vehicle images;
step S4: based on the twin network model, obtaining the features of the two vehicle images of each positive or negative sample image in the test data, and judging from the obtained features whether training of the twin network model is complete or should continue.
Here, by obtaining the features of the two vehicle images of each positive or negative sample image in the test data based on the twin network model and judging from those features whether to continue training, the present application improves the accuracy of the vehicle image features obtained through the twin network model, and thereby the accuracy of identifying vehicles from those features.
As shown in Fig. 2, in one embodiment of the vehicle feature extraction model training method of the present application, step S4, i.e. obtaining the features of the two vehicle images of each positive or negative sample image in the test data based on the twin network model and judging from the obtained features whether training of the twin network model is complete or should continue, comprises:
step S41: inputting the two vehicle images split from a positive or negative sample image in the test data into the current twin network model for feature extraction, to obtain the features of the two vehicle images of that positive or negative sample image;
step S42: calculating the similarity of the features of the two vehicle images of the positive or negative sample image in the test data, judging from the calculated similarity whether the two vehicle images belong to a positive sample image or a negative sample image to obtain a judgment result, and comparing the judgment result with the actual label (positive or negative) of the sample image to which the two vehicle images belong;
step S43: if the comparison is inconsistent, taking the positive and negative sample images that were not selected as test data as training data, using that training data as input, continuing to train the twin network model by fitting the loss function, and then returning to step S41;
Here, the loss function may be a cross-entropy loss function;
step S44: if the comparison is consistent, training of the twin network model ends.
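For illustration, the loop of steps S41 to S44 can be sketched in Python as follows; extract_features and train_one_round are hypothetical stand-ins for the caffe forward pass and one round of loss fitting, and the 0.5 similarity threshold is only an assumed example:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def evaluate(model, test_pairs, threshold=0.5):
    """test_pairs: list of ((img_a, img_b), is_same_vehicle) tuples built from
    the test data; returns the fraction of pairs judged correctly."""
    correct = 0
    for (img_a, img_b), is_same in test_pairs:
        feat_a = extract_features(model, img_a)  # step S41: feature extraction
        feat_b = extract_features(model, img_b)
        predicted_same = cosine_similarity(feat_a, feat_b) >= threshold  # step S42
        correct += int(predicted_same == is_same)
    return correct / len(test_pairs)

def train_until_consistent(model, train_pairs, test_pairs):
    # steps S43/S44: keep fitting the loss on the training data until the
    # judgments on the test data agree with the actual labels
    while evaluate(model, test_pairs) < 1.0:
        model = train_one_round(model, train_pairs)
    return model
```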
Here, the present application proposes a re-identification (ReID) algorithm that does not depend on a large amount of vehicle data; when a matching error is found during forward inference in production, the mismatched data can be added directly to the training data and the twin network model trained further, so as to obtain a reliable twin network model.
As shown in Fig. 5, in one embodiment of the vehicle feature extraction model training method of the present application, calculating the similarity of the features of the two vehicle images of the positive or negative sample image in the test data in step S42 comprises:
step S411: calculating, through a normalization (norm) layer of the caffe framework, the denominator of the cosine similarity between the features of the two vehicle images of the positive or negative sample image in the test data, i.e. multiplying the norm of the feature vector A of the first vehicle image by the norm of the feature vector B of the second vehicle image, where A and B are the feature vectors produced by the feature extraction of the twin network model;
step S412: calculating, through an element-wise (eltwise) layer of the caffe framework, the numerator of the cosine similarity between the features of the two vehicle images of the positive or negative sample image in the test data, i.e. computing the dot product of feature vector A and feature vector B;
step S413: converting the calculated numerator and denominator of the cosine similarity into a small neural network through a fully connected (InnerProduct) layer of the caffe framework combined with a concat layer, and outputting the result of that neural network through softmax.
Here, a post-processing network for the loss is designed: since cosine similarity is used at inference time to judge whether two vehicles are identical, the eltwise, norm, InnerProduct, and concat layers of the caffe framework are used to implement the cosine similarity calculation.
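For illustration, what this post-processing head computes can be sketched in NumPy as follows (a simplification of the layer wiring, not a prototxt definition); W and b stand for the learned parameters of the InnerProduct layer:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def similarity_head(feat_a, feat_b, W, b):
    """feat_a, feat_b: feature vectors A and B from the two branches;
    W (2x2) and b (2,) play the role of the InnerProduct layer parameters."""
    numerator = float(np.dot(feat_a, feat_b))                             # eltwise: A.B
    denominator = float(np.linalg.norm(feat_a) * np.linalg.norm(feat_b))  # norm: |A|*|B|
    parts = np.array([numerator, denominator])   # concat of the two parts
    logits = W @ parts + b                       # InnerProduct (fully connected)
    return softmax(logits)                       # two-class output: same / different
```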
In one embodiment of the vehicle feature extraction model training method of the present application, using the training data as input and continuing to train the twin network model by fitting the loss function in step S43 comprises:
taking the softmax output of the neural network together with the training data as input, and continuing to train the twin network model by fitting the loss function, so as to obtain a more reliable twin network model.
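For illustration, assuming the softmax output is reduced to a single "same vehicle" probability, the cross-entropy loss mentioned for step S43 can be sketched as:

```python
import numpy as np

def cross_entropy(prob_same, label, eps=1e-12):
    """prob_same: the 'same vehicle' probability taken from the softmax output;
    label: 1 for a positive (same-vehicle) pair, 0 for a negative pair."""
    p = float(np.clip(prob_same, eps, 1.0 - eps))
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))
```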
In one embodiment of the vehicle feature extraction model training method of the present application, step S1, i.e. obtaining multiple frames from the vehicle surveillance video stream, each time cropping two vehicle images of the same vehicle from two frames and splicing them into a positive sample image, and each time cropping two vehicle images of different vehicles from two frames and splicing them into a negative sample image, comprises:
step S11: finding all vehicles in different frames of the vehicle surveillance video stream by a deep learning detection algorithm;
step S12: based on the vehicles found, assigning the same color identifier to the same vehicle across different frames;
step S13: based on the color identifiers, each time cropping two vehicle images with the same color identifier from two frames and splicing them into a positive sample image, and each time cropping two vehicle images with different color identifiers from two frames and splicing them into a negative sample image.
Here, by assigning color identifiers to vehicles, positive and negative sample images can be obtained more reliably and efficiently.
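For illustration, steps S11 to S13 can be sketched as follows; detect_vehicles, match_across_frames, and crop are hypothetical stand-ins for the deep learning detector, the cross-frame association, and the cropping step, and splice_pair is the helper sketched earlier:

```python
def make_pairs(frame1, frame2):
    """Detect vehicles in two frames, give the same vehicle the same identifier
    (rendered as a colour), then splice same-identifier crops into positive
    samples and different-identifier crops into negative samples."""
    boxes1 = detect_vehicles(frame1)            # step S11: detection
    boxes2 = detect_vehicles(frame2)
    ids1 = list(range(len(boxes1)))
    ids2 = match_across_frames(boxes1, boxes2)  # step S12: same vehicle -> same identifier
    positives, negatives = [], []
    for id1, box1 in zip(ids1, boxes1):
        for id2, box2 in zip(ids2, boxes2):
            pair = splice_pair(crop(frame1, box1), crop(frame2, box2))
            (positives if id1 == id2 else negatives).append(pair)  # step S13
    return positives, negatives
```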
In one embodiment of the vehicle feature extraction model training method of the present application, splitting each positive or negative sample image in the test data into its two vehicle images in step S3 comprises:
step S31: uniformly resizing the positive and negative sample images in the test data to a preset size;
Here, the positive and negative sample images in the test data may be uniformly resized to 400*200 pixels;
step S32: splitting each resized positive or negative sample image in the test data into two vehicle images through a slice layer of the caffe framework.
Here, the slice layer of the caffe framework may cut each resized positive or negative sample image in the test data into an upper and a lower image of 200*200 pixels each.
Here, the spliced sample may be split by a slice layer placed in front of the backbone of the training network; a twin network built on a GoogLeNet Inception-V2 or ResNet50 backbone is used, and the two split images are input separately into the twin network model for feature extraction.
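For illustration, the same resize-and-slice operation can be sketched outside the network with OpenCV and NumPy slicing:

```python
import cv2

def split_sample(sample):
    """Resize a spliced sample to 400x200 pixels (height x width) and cut it
    into the upper and lower 200x200 vehicle images, mirroring what the slice
    layer does inside the network."""
    sample = cv2.resize(sample, (200, 400))  # cv2.resize takes (width, height)
    return sample[:200], sample[200:]        # (upper vehicle image, lower vehicle image)
```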
In one embodiment of the vehicle feature extraction model training method of the present application, inputting the two vehicle images split from a positive or negative sample image in the test data into the current twin network model for feature extraction in step S41 comprises:
inputting each pair of split vehicle images respectively into the two GoogLeNet Inception-V2 branches or the two ResNet50 branches of the twin network model for feature extraction, wherein the two branches share all learnable parameters: the learnable parameters of the first branch are updated by iteration, and the learnable parameters of the second branch are copied directly from the parameters of the first branch after each iteration.
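In the caffe framework, such sharing is typically achieved by giving the corresponding layers of both branches identically named parameter blobs. Conceptually, and independent of any particular framework, the arrangement can be sketched as follows, where Backbone and its parameters(), load_parameters(), and forward() methods are hypothetical stand-ins for a GoogLeNet Inception-V2 or ResNet50 branch:

```python
class TwinNetwork:
    """Conceptual sketch of the shared-weight branches: only the first branch's
    parameters are updated by training iterations; the second branch copies
    them, so both branches always compute the same mapping."""

    def __init__(self, backbone_factory):
        self.branch1 = backbone_factory()  # learnable parameters live here
        self.branch2 = backbone_factory()  # mirror of branch1

    def forward(self, img_a, img_b):
        # keep the second branch in sync with the first before each forward pass
        self.branch2.load_parameters(self.branch1.parameters())
        return self.branch1.forward(img_a), self.branch2.forward(img_b)
```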
As shown in Fig. 7, the present application also provides a vehicle identification method, the method comprising:
step S5: inputting the images of two vehicles to be matched into the twin network model trained as described above;
step S6: extracting, from one branch of the twin network model, the features of the images of the two vehicles;
step S7: calculating the similarity of the extracted features of the two vehicle images, and judging, from the calculated similarity and a preset similarity threshold, whether the two vehicles are the same vehicle or different vehicles.
Here, to reduce the burden of the forward network, and because the similarity threshold (for example a cosine similarity threshold) can be adjusted to the scene, only the feature extraction part of one branch of the twin network model needs to be retained; the last layer of the forward network is then an n-dimensional fully connected layer;
At inference time, the twin network model is loaded, the images of the two vehicles to be matched are input, and after feature extraction the cosine similarity is computed against a preset cosine similarity threshold. As shown in the image in Fig. 6, if the cosine similarity is greater than or equal to the threshold, the two vehicles are the same vehicle; if it is below the threshold, they are different vehicles.
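For illustration, the deployment-time flow can be sketched as follows; extract_features stands in for a forward pass of the single retained branch, and the 0.5 threshold is only an assumed example, since the threshold may be tuned per scene as noted above:

```python
import numpy as np

def is_same_vehicle(model, img_a, img_b, threshold=0.5):
    """Run the retained branch on both images and threshold the cosine
    similarity of the two feature vectors."""
    feat_a = extract_features(model, img_a)
    feat_b = extract_features(model, img_b)
    sim = float(np.dot(feat_a, feat_b) /
                (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))
    return sim >= threshold  # >= threshold: same vehicle; otherwise different vehicles
```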
According to another aspect of the present invention, equipment for processing client information in a network device is also provided, the equipment comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the equipment is triggered to execute the vehicle feature extraction model training method of any of the above embodiments.
According to another aspect of the present invention, equipment for processing client information in a network device is also provided, the equipment comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the equipment is triggered to execute the vehicle identification method described above.
For the details of the equipment and storage medium embodiments of the present invention, reference may be made to the corresponding parts of the method embodiments; they are not repeated here.
Obviously, those skilled in the art can make various modifications and variations to the present application without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present application and their technical equivalents, the present application is intended to include them as well.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, using an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Likewise, the software program of the present invention (including related data structures) may be stored in a computer-readable recording medium, for example, RAM, a magnetic or optical drive, a floppy disk, or a similar device. In addition, some steps or functions of the present invention may be implemented in hardware, for example, as circuitry cooperating with a processor to perform each step or function.
In addition, part of the present invention may be applied as a computer program product, such as computer program instructions which, when executed by a computer, may invoke or provide the method and/or technical solution according to the present invention through the operation of that computer. The program instructions invoking the method of the present invention may be stored in a fixed or removable recording medium, transmitted via broadcast or via a data stream in another signal-carrying medium, and/or stored in the working memory of a computer device running according to the program instructions. Here, one embodiment of the present invention includes a device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to run the methods and/or technical solutions of the foregoing embodiments of the present invention.
It is obvious to a person skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the invention may be embodied in other specific forms without departing from its spirit or essential attributes. The embodiments should therefore be regarded in all respects as illustrative and not restrictive, the scope of the invention being defined by the appended claims rather than by the foregoing description, and all changes that fall within the meaning and range of equivalency of the claims are intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claims concerned. Furthermore, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices recited in a device claim may also be implemented by one unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not indicate any particular order.

Claims (10)

1. A vehicle feature extraction model training method, characterized in that the method comprises:
obtaining multiple frames from a vehicle surveillance video stream, cropping two vehicle images of the same vehicle from two frames and splicing them into a positive sample image, and cropping two vehicle images of different vehicles from two frames and splicing them into a negative sample image;
selecting a portion of the positive and negative sample images as test data;
designing a twin network model containing two branches, and splitting each positive or negative sample image in the test data back into its two vehicle images;
based on the twin network model, obtaining the features of the two vehicle images of each positive or negative sample image in the test data, and judging from the obtained features whether training of the twin network model is complete or should continue.
2. The method according to claim 1, characterized in that obtaining the features of the two vehicle images of each positive or negative sample image in the test data based on the twin network model, and judging from the obtained features whether training of the twin network model is complete or should continue, comprises:
step S41: inputting the two vehicle images split from a positive or negative sample image in the test data into the current twin network model for feature extraction, to obtain the features of the two vehicle images of that positive or negative sample image;
step S42: calculating the similarity of the features of the two vehicle images of the positive or negative sample image in the test data, judging from the calculated similarity whether the two vehicle images belong to a positive sample image or a negative sample image to obtain a judgment result, and comparing the judgment result with the actual label (positive or negative) of the sample image to which the two vehicle images belong;
step S43: if the comparison is inconsistent, taking the positive and negative sample images that were not selected as test data as training data, using that training data as input, continuing to train the twin network model by fitting the loss function, and then returning to step S41;
step S44: if the comparison is consistent, training of the twin network model ends.
3. The method according to claim 2, characterized in that calculating the similarity of the features of the two vehicle images of the positive or negative sample image in the test data comprises:
calculating, through a normalization (norm) layer of the caffe framework, the denominator of the cosine similarity between the features of the two vehicle images of the positive or negative sample image in the test data;
calculating, through an element-wise (eltwise) layer of the caffe framework, the numerator of the cosine similarity between the features of the two vehicle images of the positive or negative sample image in the test data;
converting the calculated numerator and denominator of the cosine similarity into a small neural network through a fully connected (InnerProduct) layer of the caffe framework combined with a concat layer, and outputting the result of that neural network through softmax.
4. The method according to claim 3, characterized in that using the training data as input and continuing to train the twin network model by fitting the loss function comprises:
taking the softmax output of the neural network together with the training data as input, and continuing to train the twin network model by fitting the loss function.
5. The method according to claim 1, characterized in that obtaining multiple frames from the vehicle surveillance video stream, each time cropping two vehicle images of the same vehicle from two frames and splicing them into a positive sample image, and each time cropping two vehicle images of different vehicles from two frames and splicing them into a negative sample image, comprises:
finding all vehicles in different frames of the vehicle surveillance video stream by a deep learning detection algorithm;
based on the vehicles found, assigning the same color identifier to the same vehicle across different frames;
based on the color identifiers, cropping two vehicle images with the same color identifier from two frames and splicing them into a positive sample image, and cropping two vehicle images with different color identifiers from two frames and splicing them into a negative sample image.
6. The method according to claim 1, characterized in that splitting each positive or negative sample image in the test data into its two vehicle images comprises:
uniformly resizing the positive and negative sample images in the test data to a preset size;
splitting each resized positive or negative sample image in the test data into two vehicle images through a slice layer of the caffe framework.
7. The method according to claim 2, characterized in that inputting the two vehicle images split from a positive or negative sample image in the test data into the current twin network model for feature extraction comprises:
inputting each pair of split vehicle images respectively into the two GoogLeNet Inception-V2 branches or the two ResNet50 branches of the twin network model for feature extraction, wherein the two branches share all learnable parameters: the learnable parameters of the first branch are updated by iteration, and the learnable parameters of the second branch are copied directly from the parameters of the first branch after each iteration.
8. A vehicle identification method, characterized in that the method comprises:
inputting the images of two vehicles to be matched into a twin network model trained as described in any one of claims 1 to 7;
extracting, from one branch of the twin network model, the features of the images of the two vehicles;
calculating the similarity of the extracted features of the two vehicle images, and judging, from the calculated similarity and a preset similarity threshold, whether the two vehicles are the same vehicle or different vehicles.
9. Equipment for processing client information in a network device, the equipment comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the equipment is triggered to execute the method according to any one of claims 1 to 7.
10. Equipment for processing client information in a network device, the equipment comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the equipment is triggered to execute the method according to claim 8.
CN201910632120.6A 2019-07-12 2019-07-12 Vehicle feature extraction model training method, vehicle identification method and equipment Expired - Fee Related CN110348393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910632120.6A CN110348393B (en) 2019-07-12 2019-07-12 Vehicle feature extraction model training method, vehicle identification method and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910632120.6A CN110348393B (en) 2019-07-12 2019-07-12 Vehicle feature extraction model training method, vehicle identification method and equipment

Publications (2)

Publication Number Publication Date
CN110348393A true CN110348393A (en) 2019-10-18
CN110348393B CN110348393B (en) 2020-11-20

Family

ID=68176088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910632120.6A Expired - Fee Related CN110348393B (en) 2019-07-12 2019-07-12 Vehicle feature extraction model training method, vehicle identification method and equipment

Country Status (1)

Country Link
CN (1) CN110348393B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826484A (en) * 2019-11-05 2020-02-21 上海眼控科技股份有限公司 Vehicle weight recognition method and device, computer equipment and model training method
CN111353580A (en) * 2020-02-03 2020-06-30 中国人民解放军国防科技大学 Training method of target detection network, electronic device and storage medium
CN111612820A (en) * 2020-05-15 2020-09-01 北京百度网讯科技有限公司 Multi-target tracking method, and training method and device of feature extraction model
CN111881791A (en) * 2020-07-16 2020-11-03 北京宙心科技有限公司 Security identification method and system
CN112184640A (en) * 2020-09-15 2021-01-05 中保车服科技服务股份有限公司 Image detection model construction method and device and image detection method and device
CN113256992A (en) * 2021-07-15 2021-08-13 智道网联科技(北京)有限公司 Processing method and device based on vehicle road cloud

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228142A (en) * 2016-07-29 2016-12-14 西安电子科技大学 Face verification method based on convolutional neural networks and Bayesian decision
CN108388888A (en) * 2018-03-23 2018-08-10 腾讯科技(深圳)有限公司 A kind of vehicle identification method, device and storage medium
CN108446612A (en) * 2018-03-07 2018-08-24 腾讯科技(深圳)有限公司 vehicle identification method, device and storage medium
CN108596277A (en) * 2018-05-10 2018-09-28 腾讯科技(深圳)有限公司 A kind of testing vehicle register identification method, apparatus and storage medium
US20190147320A1 (en) * 2017-11-15 2019-05-16 Uber Technologies, Inc. "Matching Adversarial Networks"
CN109767456A (en) * 2019-01-09 2019-05-17 上海大学 A kind of method for tracking target based on SiameseFC frame and PFP neural network
CN109800624A (en) * 2018-11-27 2019-05-24 上海眼控科技股份有限公司 A kind of multi-object tracking method identified again based on pedestrian
CN109886141A (en) * 2019-01-28 2019-06-14 同济大学 A kind of pedestrian based on uncertainty optimization discrimination method again

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228142A (en) * 2016-07-29 2016-12-14 西安电子科技大学 Face verification method based on convolutional neural networks and Bayesian decision
US20190147320A1 (en) * 2017-11-15 2019-05-16 Uber Technologies, Inc. "Matching Adversarial Networks"
CN108446612A (en) * 2018-03-07 2018-08-24 腾讯科技(深圳)有限公司 vehicle identification method, device and storage medium
CN108388888A (en) * 2018-03-23 2018-08-10 腾讯科技(深圳)有限公司 A kind of vehicle identification method, device and storage medium
CN108596277A (en) * 2018-05-10 2018-09-28 腾讯科技(深圳)有限公司 A kind of testing vehicle register identification method, apparatus and storage medium
CN109800624A (en) * 2018-11-27 2019-05-24 上海眼控科技股份有限公司 A kind of multi-object tracking method identified again based on pedestrian
CN109767456A (en) * 2019-01-09 2019-05-17 上海大学 A kind of method for tracking target based on SiameseFC frame and PFP neural network
CN109886141A (en) * 2019-01-28 2019-06-14 同济大学 A kind of pedestrian based on uncertainty optimization discrimination method again

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
QIAN ZHANG ET AL.: "Vehicle Verification Based on Deep Siamese Network with Similarity Metric", Lecture Notes in Computer Science, pp. 773-782, DOI: 10.1007/978-3-319-77380-3_74 *
No. 310 Institute of the Third Academy, China Aerospace Science and Industry Corporation: "Technology Development Report on Autonomous Systems and Artificial Intelligence", National Defense Industry Press, 30 April 2017 *
TANG Yidong: "Research on Vehicle Tracking Technology Based on Information Fusion", China Master's Theses Full-text Database, Engineering Science and Technology II *
LI Jieying: "A Vehicle Consistency Discrimination Method Based on Siamese Convolutional Neural Networks", China Transportation Informatization *
YANG Qingsu: "Research and Implementation of a Vehicle Video Detection and Tracking System", China Master's Theses Full-text Database, Engineering Science and Technology *
GAN Xiaochu: "Research and Implementation of Image-Text Relevance Computation Algorithms Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology *
ZHAO Zuosheng: "Research on Video Vehicle Detection and Tracking Algorithms Based on the HSV Color Space", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826484A (en) * 2019-11-05 2020-02-21 上海眼控科技股份有限公司 Vehicle weight recognition method and device, computer equipment and model training method
CN111353580A (en) * 2020-02-03 2020-06-30 中国人民解放军国防科技大学 Training method of target detection network, electronic device and storage medium
CN111353580B (en) * 2020-02-03 2023-06-20 中国人民解放军国防科技大学 Training method of target detection network, electronic equipment and storage medium
CN111612820A (en) * 2020-05-15 2020-09-01 北京百度网讯科技有限公司 Multi-target tracking method, and training method and device of feature extraction model
CN111612820B (en) * 2020-05-15 2023-10-13 北京百度网讯科技有限公司 Multi-target tracking method, training method and device of feature extraction model
CN111881791A (en) * 2020-07-16 2020-11-03 北京宙心科技有限公司 Security identification method and system
CN111881791B (en) * 2020-07-16 2021-10-15 北京宙心科技有限公司 Security identification method and system
CN112184640A (en) * 2020-09-15 2021-01-05 中保车服科技服务股份有限公司 Image detection model construction method and device and image detection method and device
CN113256992A (en) * 2021-07-15 2021-08-13 智道网联科技(北京)有限公司 Processing method and device based on vehicle road cloud

Also Published As

Publication number Publication date
CN110348393B (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN110348393A (en) Vehicle characteristics extract model training method, vehicle identification method and equipment
CN108846355B (en) Image processing method, face recognition device and computer equipment
EP3690742A1 (en) Method for auto-labeling training images for use in deep learning network to analyze images with high precision, and auto-labeling device using the same
EP3486838A1 (en) System and method for semi-supervised conditional generative modeling using adversarial networks
CN108460415B (en) Language identification method
KR102042168B1 (en) Methods and apparatuses for generating text to video based on time series adversarial neural network
US20120155766A1 (en) Patch description and modeling for image subscene recognition
EP3989158A1 (en) Method, apparatus and device for video similarity detection
CN114241505B (en) Method and device for extracting chemical structure image, storage medium and electronic equipment
CN113177630B (en) Data memory elimination method and device for deep learning model
US20200151458A1 (en) Apparatus and method for video data augmentation
US8204889B2 (en) System, method, and computer-readable medium for seeking representative images in image set
CN110348392B (en) Vehicle matching method and device
Li et al. Image manipulation localization using attentional cross-domain CNN features
CN112906631A (en) Dangerous driving behavior detection method and detection system based on video
CN111461211B (en) Feature extraction method for lightweight target detection and corresponding detection method
CN111241873A (en) Image reproduction detection method, training method of model thereof, payment method and payment device
CN110766077A (en) Method, device and equipment for screening sketch in evidence chain image
CN113298015A (en) Video character social relationship graph generation method based on graph convolution network
CN109522921A (en) Statement similarity method of discrimination and equipment
CN115984949A (en) Low-quality face image recognition method and device with attention mechanism
CN113805977B (en) Test evidence obtaining method, model training method, device, equipment and storage medium
Song et al. Text Siamese network for video textual keyframe detection
CN115620083A (en) Model training method, face image quality evaluation method, device and medium
CN115705758A (en) Living body identification method, living body identification device, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Vehicle feature extraction model training method, vehicle recognition method and equipment

Effective date of registration: 20220211

Granted publication date: 20201120

Pledgee: Shanghai Bianwei Network Technology Co.,Ltd.

Pledgor: SHANGHAI EYE CONTROL TECHNOLOGY Co.,Ltd.

Registration number: Y2022310000023

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201120