CN111611414A - Vehicle retrieval method, device and storage medium - Google Patents

Vehicle retrieval method, device and storage medium

Info

Publication number
CN111611414A
Authority
CN
China
Prior art keywords
vehicle
image
matching
similarity
features
Prior art date
Legal status
Granted
Application number
CN201910134010.7A
Other languages
Chinese (zh)
Other versions
CN111611414B (en)
Inventor
隋煜
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910134010.7A
Publication of CN111611414A
Application granted; publication of CN111611414B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a vehicle retrieval method, device, and storage medium, belonging to the technical field of image retrieval. The method comprises the following steps: acquiring a captured image of a vehicle to be retrieved; calling a target network model, inputting the captured image into the target network model, and outputting a vehicle image feature, where the vehicle image feature describes global information about the vehicle and includes specific dimension segments that describe specific local regions of the vehicle, and the target network model determines the vehicle image feature of any vehicle based on a captured image of that vehicle; and retrieving data associated with the vehicle from a database based on the vehicle image feature, where the database stores a plurality of matching image features, each of which includes a matching local region feature corresponding to a specific local region. The method and device avoid the need to perform feature extraction multiple times and thereby improve retrieval efficiency.

Description

Vehicle retrieval method, device and storage medium
Technical Field
The embodiment of the application relates to the technical field of image retrieval, in particular to a vehicle retrieval method, a vehicle retrieval device and a storage medium.
Background
At present, the image retrieval technology is widely applied to the field of intelligent transportation. For example, in some application scenarios, there may be a need to search for a vehicle, and the search may be implemented by image search based on a captured image of the vehicle.
In the related art, retrieval is generally performed not only on the whole vehicle based on its captured image; a local region image may also be extracted from the captured image for a secondary retrieval. In implementation, vehicle features are extracted, and a plurality of matching images that match those features are retrieved from the database. Then, local region images are extracted from the captured image and from each of the acquired matching images, and region features are extracted from each local region image, so that the matching image whose local region best matches that of the captured image can be determined from the extracted region features, and the information related to the vehicle can then be retrieved from the database based on the determined matching image.
However, in the above implementation, feature extraction needs to be performed many times, so the operation is cumbersome and the retrieval efficiency is low.
Disclosure of Invention
The embodiments of the present application provide a vehicle retrieval method, device, and storage medium that can solve the problem of low retrieval efficiency. The technical scheme is as follows:
in a first aspect, a vehicle retrieval method is provided, the method comprising:
acquiring a shot image of a vehicle to be retrieved;
calling a target network model, inputting the captured image into the target network model, and outputting a vehicle image feature, where the vehicle image feature is used to describe global information about the vehicle and includes a specific dimension segment used to describe a specific local region of the vehicle, and the target network model is used to determine the vehicle image feature of any vehicle based on a captured image of that vehicle;
retrieving data associated with the vehicle from a database storing a plurality of matching image features based on the vehicle image feature, each matching image feature comprising a matching local region feature corresponding to the particular local region.
Optionally, the retrieving data associated with the vehicle from a database based on the vehicle image feature includes:
determining the cosine similarity between the vehicle image feature and each matching image feature in the database to obtain a first similarity score corresponding to each matching image feature;
acquiring, from the database, the matching image features corresponding to the top preset number of first similarity scores, in descending order of the first similarity scores;
determining the matching local region feature corresponding to the specific local region from each acquired matching image feature, to obtain the preset number of matching local region features;
determining the cosine similarity between the features within the specific dimension segment and each of the preset number of matching local region features, to obtain a preset number of second similarity scores;
retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores.
Optionally, each matching image feature has the same data structure as the vehicle image feature, and determining the matching local region feature corresponding to the specific local region from each acquired matching image feature includes:
determining the position of the features within the specific dimension segment in the vehicle image feature;
and acquiring the matching features at that position from each acquired matching image feature, to obtain the matching local region feature corresponding to the specific local region in each matching image feature.
Optionally, when the database stores correspondences between a plurality of matching image features and vehicle information, retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores includes:
performing a weighted summation of each first similarity score in the preset number of first similarity scores and the corresponding second similarity score in the preset number of second similarity scores, to obtain a preset number of third similarity scores;
determining the maximum third similarity score from the preset number of third similarity scores;
determining the matching image feature corresponding to the maximum third similarity score from the preset number of matching image features;
and acquiring the vehicle data corresponding to the determined matching image feature from the correspondences between the plurality of matching image features and the vehicle information in the database, to obtain the data associated with the vehicle.
Optionally, the target network model is obtained by training a network model to be trained based on a plurality of image samples, a vehicle category label of each image sample, and position information of the specific local region.
In a second aspect, a vehicle retrieval apparatus is provided, the apparatus comprising:
the acquisition module is used for acquiring a shot image of a vehicle to be retrieved;
the system comprises a calling module, a target network model and a display module, wherein the calling module is used for calling the target network model, inputting the shot image into the target network model and outputting vehicle image characteristics, the vehicle image characteristics are used for describing vehicle global information and comprise specific dimension segments used for describing specific local areas of vehicles, and the target network model is used for determining the vehicle image characteristics of any vehicle based on the shot image of the any vehicle;
a retrieval module configured to retrieve data associated with the vehicle from a database based on the image features of the vehicle, the database storing a plurality of matching image features, each matching image feature including a matching local region feature corresponding to the specific local region.
Optionally, the retrieval module is configured to:
determining the cosine similarity between the vehicle image feature and each matching image feature in the database to obtain a first similarity score corresponding to each matching image feature;
acquiring, from the database, the matching image features corresponding to the top preset number of first similarity scores, in descending order of the first similarity scores;
determining the matching local region feature corresponding to the specific local region from each acquired matching image feature, to obtain the preset number of matching local region features;
determining the cosine similarity between the features within the specific dimension segment and each of the preset number of matching local region features, to obtain a preset number of second similarity scores;
retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores.
Optionally, the retrieval module is configured to:
determining the position of the features within the specific dimension segment in the vehicle image feature, where each matching image feature has the same data structure as the vehicle image feature;
and acquiring the matching features at that position from each acquired matching image feature, to obtain the matching local region feature corresponding to the specific local region in each matching image feature.
Optionally, the retrieval module is configured to:
performing a weighted summation of each first similarity score in the preset number of first similarity scores and the corresponding second similarity score in the preset number of second similarity scores, to obtain a preset number of third similarity scores;
determining the maximum third similarity score from the preset number of third similarity scores;
determining the matching image feature corresponding to the maximum third similarity score from the preset number of matching image features;
and acquiring the vehicle data corresponding to the determined matching image feature from the correspondences between the plurality of matching image features and the vehicle information in the database, to obtain the data associated with the vehicle.
Optionally, the target network model is obtained by training a network model to be trained based on a plurality of image samples, the vehicle category label of each image sample, and the location information of the specific local area.
In a third aspect, a computer-readable storage medium is provided, the computer-readable storage medium having stored thereon instructions that, when executed by a processor, implement the vehicle retrieval method of the first aspect described above.
In a fourth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the vehicle retrieval method of the first aspect described above.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
A captured image of a vehicle to be retrieved is acquired, a target network model is called, the captured image is input into the target network model, and the vehicle image feature of the vehicle is output. The vehicle image feature describes global information about the vehicle as a whole, while the specific dimension segments included in the vehicle image feature describe specific local regions of the vehicle; that is, features describing both the vehicle as a whole and its specific local regions can be extracted in a single pass through the target network model. Data associated with the vehicle can then be retrieved from the database based on the extracted vehicle image feature, so that feature extraction does not need to be performed multiple times and retrieval efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart illustrating a vehicle retrieval method according to an exemplary embodiment;
FIG. 2 is a schematic illustration of a vehicle shown in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a feature configuration in accordance with an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating the structure of a vehicle retrieval device according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating the structure of a terminal according to an exemplary embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before describing the vehicle retrieval method provided by the embodiment of the present application in detail, the application scenario and the implementation environment related to the embodiment of the present application are briefly described.
First, a brief description is given of an application scenario related to an embodiment of the present application.
In the field of intelligent transportation, there is a need to retrieve vehicles. For example, when a traffic police officer wants to trace the escape route of a hit-and-run vehicle, retrieval can determine at which checkpoints the vehicle has appeared. Furthermore, during retrieval, the search can be refined according to specific local regions of the vehicle with distinctive features, for example a pendant region. At present, vehicle retrieval is generally performed by image search: a captured image of the vehicle is obtained, vehicle features are extracted from the captured image, all vehicle matching features that match the vehicle features are queried from a database, and all matching images of the vehicle are found according to the queried matching features. After that, specific local region features are extracted from the captured image and from all the matching images, and the local region features are matched, thereby retrieving the matching vehicle closest to the vehicle. However, in the current implementation, retrieval efficiency is low because feature extraction must be performed many times. Therefore, the embodiments of the present application provide a vehicle retrieval method that avoids performing feature extraction multiple times and improves retrieval efficiency.
Next, a brief description will be given of an implementation environment related to the embodiments of the present application.
The vehicle retrieval method provided by the embodiments of the present application may be executed by a smart device that is configured with or connected to a camera, so that vehicles can be photographed by the camera. In practice, the smart device may be installed at locations such as checkpoints and electronic toll stations. In one possible implementation, the smart device may also be connected to a server, and the server may be configured with a database that stores vehicle-related data, so that the smart device can retrieve a vehicle from the database based on a captured image of the vehicle.
In some embodiments, the smart device may be an intelligent camera device, or it may be a terminal such as a tablet computer or a portable computer, which is not limited in the embodiments of the present application.
After the application scenarios and implementation environments related to the embodiments of the present application are described, a vehicle retrieval method provided by the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a vehicle retrieval method according to an exemplary embodiment. Taking as an example the case where the method is executed by a smart device, the vehicle retrieval method may include the following steps:
step 101: and acquiring a shot image of the vehicle to be retrieved.
In daily life, cameras are usually installed at scenes such as checkpoints, electronic toll stations, and speed-limited areas, with the shooting range adjusted so that passing vehicles are photographed to obtain captured images.
In some embodiments, the smart device may store captured images taken by the camera. Further, the smart device may acquire the captured image of the vehicle to be retrieved after receiving a retrieval instruction. The retrieval instruction may be triggered by a user through a preset operation. That is, the smart device may provide a search option and an image selection option; when the user wants to search for a certain vehicle, the user may select the captured image of that vehicle via the image selection option and click the search option to trigger the retrieval instruction, at which point the smart device performs the operation of acquiring the captured image.
The preset operation may be a click operation, a slide operation, a shake operation, and the like, which is not limited in the embodiment of the present application.
In a possible implementation manner, after the smart device obtains the shot image, the shot image may be subjected to processing such as denoising, which is not limited in the embodiment of the present application.
Step 102: and calling a target network model, inputting the shot image into the target network model, and outputting vehicle image characteristics, wherein the vehicle image characteristics are used for describing vehicle global information and comprise specific dimension segments used for describing specific local areas of vehicles, and the target network model is used for determining the vehicle image characteristics of any vehicle based on the shot image of the any vehicle.
The target network model is obtained through deep learning training. In a possible implementation manner, the target network model may include an input layer, a convolutional layer, a pooling layer, and an output layer, and after the smart device inputs the captured image into the target network model, the target network model processes the captured image sequentially through the input layer, the convolutional layer, the pooling layer, and the output layer, and outputs the vehicle image feature.
It should be noted that, the above description is only made by taking the target network model as an example including an input layer, a convolutional layer, a pooling layer, and an output layer, in another embodiment, the target network model may further include other network layers, for example, may further include a sampling layer, and the like, which is not limited in this embodiment.
The number of specific local regions of the vehicle may be one or more. In addition, the vehicle image feature also includes a globally associated feature, which may include features of the vehicle other than those of the specific local regions. Further, when there are multiple specific local regions, the globally associated feature also includes information describing the associations among the multiple specific local regions.
The specific local area may be preset, and in some embodiments, referring to fig. 2, the specific local area may include a roof area 1, an annual inspection mark area 2, a left decoration area 3, a right decoration area 4, a pendant area 5, a vehicle body area 6, a left vehicle lamp area 7, and a right vehicle lamp area 8.
In addition, when there are multiple specific local regions, the features of the specific local regions and the globally associated feature may be arranged within the vehicle image feature according to a preset rule, and the data length of the features in each specific dimension segment and of the globally associated feature may be a preset data length. The preset rule may be set according to actual requirements, and the preset data length may be customized by the user or set by default by the smart device, neither of which is limited in the embodiments of the present application.
For example, referring to fig. 3, fig. 3 is a schematic structural diagram of a vehicle image feature according to an exemplary embodiment, in which the feature of each specific local region is described with a data length of 128 and the globally associated feature is described with a data length of 512; with eight specific local regions, the total data length of the vehicle image feature is 8 × 128 + 512 = 1536.
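As an illustrative sketch of this layout (the region names follow fig. 2, and the segment order is an assumption, since the embodiment leaves the arrangement to the preset rule):

```python
# Hypothetical layout of the vehicle image feature: eight specific local
# regions, each described by a 128-length dimension segment, followed by a
# 512-length globally associated segment. Names and order are illustrative.
REGIONS = ["roof", "annual_inspection_mark", "left_decoration",
           "right_decoration", "pendant", "body", "left_lamp", "right_lamp"]
SEGMENT_LEN = 128
GLOBAL_LEN = 512
FEATURE_LEN = len(REGIONS) * SEGMENT_LEN + GLOBAL_LEN  # 8 * 128 + 512 = 1536

def segment_positions(region):
    """Start and end (inclusive) positions of a region's dimension segment."""
    i = REGIONS.index(region)
    return i * SEGMENT_LEN, (i + 1) * SEGMENT_LEN - 1
```

Under this assumed ordering, the first specific dimension segment occupies positions [0, 127] and the globally associated segment occupies positions [1024, 1535].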
It should be noted that, in implementation, different data lengths may be used to describe the characteristics of each specific local area in the plurality of specific local areas. In addition, the arrangement order of the features of the specific local regions and the global correlation features in the vehicle image features may be set according to actual requirements, which is not limited in the embodiment of the present application.
It is worth mentioning that the globally associated feature of the vehicle and the features of the specific local regions are output in a single pass through the target network model, which avoids extracting features multiple times and improves vehicle retrieval efficiency.
Further, the target network model is obtained by training the network model to be trained based on the plurality of image samples, the vehicle category label of each image sample, and the position information of the specific local area.
In implementation, a plurality of image samples may be obtained; in each image sample, the vehicle is divided into regions according to a region division rule, and the category of the vehicle is labeled, yielding for each image sample a vehicle category label and the position information of the specific local regions. The image samples, the vehicle category labels, and the position information of the specific local regions are then input into the network model to be trained for deep training, yielding the target network model, so that the target network model can determine the vehicle image feature of any vehicle based on a captured image of that vehicle.
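The shape of one annotated training sample might be sketched as follows (a minimal illustration; the field names and the bounding-box format are assumptions, not fixed by the embodiment):

```python
def make_training_sample(image, category_label, region_boxes):
    """Bundle an image with its vehicle category label and the position
    information of each specific local region, as assumed here to be
    bounding boxes in (x, y, width, height) form."""
    return {
        "image": image,                 # e.g. a file path or pixel array
        "category": category_label,     # e.g. an integer class id
        "regions": dict(region_boxes),  # region name -> (x, y, w, h)
    }
```

A training set would then be a list of such samples fed to the network model to be trained.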
In a possible implementation manner, the network model to be trained may be a deep convolutional neural network, and further, the network model to be trained may be a google initiation network, a residual error network (ResNet), and the like, which is not limited in this embodiment.
Further, during training, the training sample may include other information besides the location information of the specific local area, which is not limited in this embodiment of the application.
Step 103: based on the vehicle image features, data associated with the vehicle is retrieved from a database storing a plurality of matching image features, each matching image feature including a matching local region feature corresponding to the particular local region.
Specifically, each matching image feature has the same data structure as the vehicle image feature. In some embodiments, retrieving the data associated with the vehicle from the database based on the vehicle image feature may include the following steps:
1031: and determining cosine similarity between the vehicle image characteristics and each matched image characteristic in the database to obtain a first similarity score corresponding to each matched image characteristic.
In this embodiment, vehicle matching may be performed based on the vehicle image feature, that is, the matching image features of matching vehicles that are similar to the vehicle as a whole are determined from the database. In implementation, the smart device determines the cosine similarity between the vehicle image feature and each matching image feature in the database to measure the degree of matching between them, obtaining a first similarity score corresponding to each matching image feature.
For convenience of description, the first similarity score corresponding to each matching image feature determined by the smart device is denoted as s_i, where i ranges over [1, N] and N is the number of matching image features in the database.
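The cosine similarity computation in this step can be sketched in plain Python (a minimal illustration; a real system would use vectorized math and an indexed database):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def first_similarity_scores(vehicle_feature, matching_features):
    """First similarity score s_i for every matching image feature i."""
    return [cosine_similarity(vehicle_feature, m) for m in matching_features]
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, so larger scores mean closer overall matches.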
1032: and according to the sequence of the first similarity scores from large to small, acquiring the matched image features corresponding to the first similarity scores of the previous preset number from the database.
The greater the first similarity score, the more similar the corresponding matching image feature is to the vehicle image feature, and hence the greater the overall similarity between the matching vehicle and the vehicle to be retrieved. Therefore, based on the obtained first similarity scores, the smart device may acquire from the database a preset number of matching image features with the greatest similarity to the vehicle image feature. In implementation, the first similarity scores corresponding to the matching image features may be sorted in descending order. The smart device determines the top preset number of first similarity scores from the sorted scores and then acquires the corresponding matching image features from the database.
The preset number can be set by a user according to actual requirements in a self-defined mode, and can also be set by the intelligent device in a default mode.
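This step amounts to a top-K selection; a minimal sketch, where the preset number k is a free parameter:

```python
def top_k_indices(first_scores, k):
    """Indices of the k largest first similarity scores, in descending
    order, i.e. the matching image features to fetch from the database."""
    order = sorted(range(len(first_scores)),
                   key=lambda i: first_scores[i], reverse=True)
    return order[:k]
```

For large databases, a partial selection (for example `heapq.nlargest`) would avoid sorting all N scores, but the full sort above keeps the sketch simple.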
1033: and determining the matched local area features corresponding to the specific local area from each acquired matched image feature to obtain the preset number of matched local area features.
After the smart device acquires the preset number of most similar matching image features from the database, the similarity between the specific local region of the vehicle to be retrieved and that of each matching vehicle can be further determined based on those features. For this purpose, the smart device determines the matching local region feature corresponding to the specific local region from each acquired matching image feature.
In one possible implementation, each matching image feature has the same data structure as the vehicle image feature. Accordingly, determining the matching local region feature corresponding to the specific local region from each acquired matching image feature may include: determining the position of the features within the specific dimension segment in the vehicle image feature, and acquiring the matching features at that position from each acquired matching image feature, to obtain the matching local region feature corresponding to the specific local region in each matching image feature.
The data structure of each matching image feature is the same as the data structure of the vehicle image feature, that is, the position of the matching local region feature of the specific local region in each matching image feature is the same as the position of the feature in the specific dimension segment of the specific local region in the vehicle image feature, and the data length is the same, for example, the data structure of each matching image feature is as shown in fig. 3.
When the data structure of each matching image feature is the same as that of the vehicle image feature, please refer to fig. 3, assuming that the position of the feature in the specific dimension segment in the vehicle image feature is [0,127], the intelligent device obtains the matching local area feature corresponding to the position [0,127] from each matching image feature obtained in the step 1032, and obtains the matching local area feature corresponding to the specific local area in each matching image feature.
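Because each matching image feature shares the vehicle feature's data structure, extracting the matching local region feature is a positional slice. A sketch, assuming the inclusive positions [0, 127] from the example above:

```python
def matching_local_region_feature(matching_feature, start=0, end=127):
    """Slice a matching image feature at the same inclusive positions that
    the specific dimension segment occupies in the vehicle image feature."""
    return matching_feature[start:end + 1]
```

Applying this to each of the preset number of matching image features yields the preset number of matching local region features used in the next step.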
It should be noted that, when there are multiple specific local regions, the user may specify, according to actual requirements, the specific local region to be matched. In that case, the smart device determines the matching local region feature corresponding to the specified local region from each acquired matching image feature and then retrieves the vehicle in the manner described below.
1034: determining cosine similarity between the features in the specific dimension segment and each of the preset number of matched local region features to obtain a preset number of second similarity scores.
That is, for each of the preset number of matching vehicles determined earlier, the smart device determines the matching degree between that matching vehicle's specific local area and the specific local area of the vehicle to be retrieved, thereby obtaining the preset number of second similarity scores.
For convenience of description, the preset number of second similarity scores determined by the smart device are denoted as qi, where the value range of i is [1, K] and K denotes the preset number.
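The cosine-similarity computation of step 1034 (and, applied to full vectors, of step 1031) can be sketched as follows; the vectorized NumPy form and the function name are illustrative assumptions:

```python
import numpy as np

def cosine_scores(query_segment: np.ndarray,
                  candidate_segments: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query segment and K candidate segments.

    query_segment: shape (d,); candidate_segments: shape (K, d).
    Returns the K scores qi, i in [1, K].
    """
    q = query_segment / np.linalg.norm(query_segment)
    c = candidate_segments / np.linalg.norm(candidate_segments,
                                            axis=1, keepdims=True)
    return c @ q  # dot products of unit vectors = cosine similarities
```

Normalizing both sides first turns the K cosine computations into a single matrix-vector product.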
1035: retrieving data associated with the vehicle from the database based on the predetermined number of first similarity scores and the predetermined number of second similarity scores.
Since the first similarity score characterizes how well a matching vehicle matches the vehicle overall, and the second similarity score characterizes how well a local area of a matching vehicle matches the corresponding area of the vehicle to be retrieved, data associated with the vehicle to be retrieved can be retrieved accurately from the database based on the preset number of first similarity scores and the preset number of second similarity scores.
In one possible implementation, when the database stores correspondences between a plurality of matching image features and vehicle information, retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores may include: performing a weighted summation of each first similarity score and its corresponding second similarity score to obtain a preset number of third similarity scores; determining the maximum third similarity score among the preset number of third similarity scores; determining, from the preset number of matching image features, the matching image feature corresponding to that maximum third similarity score; and obtaining, from the correspondences between matching image features and vehicle information stored in the database, the vehicle data corresponding to the determined matching image feature, that is, the data associated with the vehicle.
Continuing with the above example, the smart device performs a weighted summation of si and qi to obtain the preset number of third similarity scores. The larger a third similarity score, the higher the matching degree between the corresponding matching image feature and the vehicle image feature. The smart device therefore determines the maximum among the preset number of third similarity scores and then determines the matching image feature corresponding to it, that is, the matching image feature of the matching vehicle that is both similar to the vehicle overall and similar in the specific local area. The smart device then queries the database for the vehicle data corresponding to that matching image feature to obtain the data associated with the vehicle to be retrieved.
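The fusion step can be sketched as below; the equal 0.5 weights are an assumption for illustration, since the patent does not fix the weighting:

```python
import numpy as np

def best_match(s: np.ndarray, q: np.ndarray,
               w_global: float = 0.5, w_local: float = 0.5) -> int:
    """Fuse global scores si and local scores qi into third similarity
    scores and return the index of the candidate with the maximum score."""
    third = w_global * s + w_local * q  # weighted summation, per candidate
    return int(np.argmax(third))
```

The returned index selects the matching image feature whose stored vehicle information is then looked up in the database.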
It should be noted that this embodiment only needs to store the matching image features of vehicles in the database; it does not need to store large numbers of vehicle pictures or deep-learned vehicle feature maps, which saves storage space.
It should be noted that the above takes the smart device performing steps 1031 to 1035 as an example. In another embodiment, these steps may instead be performed by a server, which sends the determined result to the smart device, thereby reducing the computational load on the smart device.
In the embodiment of the application, a shot image of a vehicle to be retrieved is acquired, a target network model is called, the shot image is input into the target network model, and the vehicle image feature of the vehicle is output. The vehicle image feature describes the vehicle's global information as a whole, and the specific dimension segment it contains describes a specific local area of the vehicle; that is, features describing both the whole vehicle and a specific local area can be extracted in one pass through the target network model. Data associated with the vehicle can then be retrieved from the database based on the extracted vehicle image feature, which avoids performing feature extraction multiple times and improves retrieval efficiency.
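Steps 1031 to 1035 can be sketched end to end as follows. This is an illustrative reconstruction, not the patent's implementation: the array shapes, the [0, 127] local segment, the top-K selection, and the equal fusion weights are all assumptions.

```python
import numpy as np

def retrieve(query_feature, db_features, db_vehicle_info,
             k=10, local=slice(0, 128), w_global=0.5, w_local=0.5):
    """Return the vehicle info whose feature best matches the query.

    db_features: (N, D) matching image features; db_vehicle_info: N records.
    """
    # Step 1031: first similarity scores si (global cosine similarity).
    qn = query_feature / np.linalg.norm(query_feature)
    dbn = db_features / np.linalg.norm(db_features, axis=1, keepdims=True)
    s_all = dbn @ qn
    # Step 1032: keep the top-K candidates by descending first score.
    top = np.argsort(-s_all)[:k]
    s = s_all[top]
    # Step 1033: slice out the matching local region features by position.
    ql = query_feature[local]
    ql = ql / np.linalg.norm(ql)
    cl = db_features[top][:, local]
    cl = cl / np.linalg.norm(cl, axis=1, keepdims=True)
    # Step 1034: second similarity scores qi on the specific segment.
    q = cl @ ql
    # Step 1035: weighted fusion into third scores; take the maximum.
    best = top[np.argmax(w_global * s + w_local * q)]
    return db_vehicle_info[int(best)]
```

Only the feature vectors and vehicle records need to live in the database, consistent with the storage-saving note above.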
Fig. 4 is a schematic structural diagram illustrating a vehicle retrieval apparatus according to an exemplary embodiment, which may be implemented by software, hardware, or a combination of both. The vehicle retrieval device may include:
an obtaining module 410, configured to obtain a captured image of a vehicle to be retrieved;
a calling module 420, configured to call a target network model, input the captured image into the target network model, and output a vehicle image feature, where the vehicle image feature is used to describe vehicle global information and includes a specific dimension segment used to describe a specific local area of a vehicle, and the target network model is used to determine the vehicle image feature of any vehicle based on the shot image of that vehicle;
a retrieving module 430, configured to retrieve data associated with the vehicle from a database based on the image features of the vehicle, where the database stores a plurality of matching image features, and each matching image feature includes a matching local area feature corresponding to the specific local area.
Optionally, the retrieving module 430 is configured to:
determining cosine similarity between the vehicle image features and each matched image feature in the database to obtain a first similarity score corresponding to each matched image feature;
obtaining, from the database, the matching image features corresponding to the top preset number of first similarity scores, taken in descending order of first similarity score;
determining matching local area features corresponding to the specific local area from each obtained matching image feature to obtain the preset number of matching local area features;
determining cosine similarity between the features in the specific dimension section and each of the preset number of matched local region features to obtain a preset number of second similarity scores;
retrieving data associated with the vehicle from the database based on the predetermined number of first similarity scores and the predetermined number of second similarity scores.
Optionally, the retrieving module 430 is configured to:
determining the position, in the vehicle image feature, of the feature within the specific dimension segment, wherein each matched image feature has the same data structure as the vehicle image feature;
and acquiring the matching features corresponding to the positions from each acquired matching image feature to obtain the matching local region feature corresponding to the specific local region in each matching image feature.
Optionally, the retrieving module 430 is configured to:
respectively carrying out weighted summation on each first similarity value in the preset number of first similarity values and the corresponding second similarity value in the preset number of second similarity values to obtain a preset number of third similarity values;
determining a maximum third similarity value from the preset number of third similarity values;
determining the matched image feature corresponding to the maximum third similarity value from the preset number of matched image features;
and acquiring vehicle data corresponding to the determined matching image features from the corresponding relation between the plurality of matching image features of the database and the vehicle information to obtain data associated with the vehicle.
Optionally, the target network model is obtained by training a network model to be trained based on the plurality of image samples, the vehicle category label of each image sample, and the location information of the specific local area.
In the embodiment of the application, a shot image of a vehicle to be retrieved is acquired, a target network model is called, the shot image is input into the target network model, and the vehicle image feature of the vehicle is output. The vehicle image feature describes the vehicle's global information as a whole, and the specific dimension segment it contains describes a specific local area of the vehicle; that is, features describing both the whole vehicle and a specific local area can be extracted in one pass through the target network model. Data associated with the vehicle can then be retrieved from the database based on the extracted vehicle image feature, which avoids performing feature extraction multiple times and improves retrieval efficiency.
It should be noted that when the vehicle retrieval apparatus provided in the above embodiment performs vehicle retrieval, the division into the above functional modules is merely an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the vehicle retrieval apparatus provided in the above embodiment and the vehicle retrieval method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which is not repeated here.
Fig. 5 shows a block diagram of a terminal 500 according to an exemplary embodiment of the present application. The terminal 500 may be a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 500 includes: a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 501 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the vehicle retrieval method provided by method embodiments herein.
In some embodiments, the terminal 500 may further optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, memory 502 and peripheral interface 503 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 503 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 504, touch screen display 505, camera 506, audio circuitry 507, positioning components 508, and power supply 509.
The peripheral interface 503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 504 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 504 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 504 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, the display screen 505 also has the ability to capture touch signals on or over the surface of the display screen 505. The touch signal may be input to the processor 501 as a control signal for processing. At this point, the display screen 505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 505 may be one, providing the front panel of the terminal 500; in other embodiments, the display screens 505 may be at least two, respectively disposed on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display 505 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 500. Even more, the display screen 505 can be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 505 may be made of LCD (liquid crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 506 is used to capture images or video. Optionally, camera assembly 506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 507 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 501 for processing, or inputting the electric signals to the radio frequency circuit 504 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 507 may also include a headphone jack.
The positioning component 508 is used to determine the current geographic location of the terminal 500 for navigation or LBS (Location Based Service). The positioning component 508 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 509 is used to power the various components in terminal 500. The power source 509 may be alternating current, direct current, disposable or rechargeable. When power supply 509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 500 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: acceleration sensor 511, gyro sensor 512, pressure sensor 513, fingerprint sensor 514, optical sensor 515, and proximity sensor 516.
The acceleration sensor 511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 500. For example, the acceleration sensor 511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 501 may control the touch screen 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511. The acceleration sensor 511 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 512 may detect a body direction and a rotation angle of the terminal 500, and the gyro sensor 512 may cooperate with the acceleration sensor 511 to acquire a 3D motion of the user on the terminal 500. The processor 501 may implement the following functions according to the data collected by the gyro sensor 512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 513 may be disposed on a side bezel of the terminal 500 and/or an underlying layer of the touch display screen 505. When the pressure sensor 513 is disposed on the side frame of the terminal 500, a user's holding signal of the terminal 500 may be detected, and the processor 501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed at the lower layer of the touch display screen 505, the processor 501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 514 is used for collecting a fingerprint of the user, and the processor 501 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 514 may be provided on the front, back, or side of the terminal 500. When a physical button or a vendor Logo is provided on the terminal 500, the fingerprint sensor 514 may be integrated with the physical button or the vendor Logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch display screen 505 based on the ambient light intensity collected by the optical sensor 515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 505 is decreased. In another embodiment, the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 based on the ambient light intensity collected by the optical sensor 515.
A proximity sensor 516, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 500. The proximity sensor 516 is used to collect the distance between the user and the front surface of the terminal 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 gradually decreases, the processor 501 controls the touch display screen 505 to switch from the bright screen state to the dark screen state; when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 gradually increases, the processor 501 controls the touch display screen 505 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 5 is not intended to be limiting of terminal 500 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The embodiment of the application also provides a non-transitory computer readable storage medium, and when instructions in the storage medium are executed by a processor of the mobile terminal, the mobile terminal is enabled to execute the vehicle retrieval method provided by the embodiment.
The embodiment of the application also provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the vehicle retrieval method provided by the above embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. A vehicle retrieval method, characterized in that the method comprises:
acquiring a shot image of a vehicle to be retrieved;
calling a target network model, inputting the shot image into the target network model, and outputting vehicle image features, wherein the vehicle image features are used for describing vehicle global information and comprise specific dimension segments used for describing specific local areas of vehicles, and the target network model is used for determining the vehicle image features of any vehicle based on the shot image of that vehicle;
retrieving data associated with the vehicle from a database storing a plurality of matching image features based on the vehicle image feature, each matching image feature comprising a matching local region feature corresponding to the particular local region.
2. The method of claim 1, wherein retrieving data associated with the vehicle from a database based on the vehicle image feature comprises:
determining cosine similarity between the vehicle image features and each matched image feature in the database to obtain a first similarity score corresponding to each matched image feature;
obtaining, from the database, the matching image features corresponding to the top preset number of first similarity scores, taken in descending order of first similarity score;
determining matching local area features corresponding to the specific local area from each obtained matching image feature to obtain the preset number of matching local area features;
determining cosine similarity between the features in the specific dimension section and each of the preset number of matched local region features to obtain a preset number of second similarity scores;
retrieving data associated with the vehicle from the database based on the predetermined number of first similarity scores and the predetermined number of second similarity scores.
3. The method of claim 2, wherein each matching image feature has the same data structure as the vehicle image feature, and wherein determining the matching local region feature corresponding to the specific local region from each acquired matching image feature comprises:
determining a location of a feature within the particular dimensional segment in the vehicle image feature;
and acquiring the matching features corresponding to the positions from each acquired matching image feature to obtain the matching local region feature corresponding to the specific local region in each matching image feature.
4. The method of claim 2, wherein when the database stores a plurality of correspondences between matching image features and vehicle information, said retrieving data associated with the vehicle from the database based on the predetermined number of first similarity scores and the predetermined number of second similarity scores comprises:
respectively carrying out weighted summation on each first similarity value in the preset number of first similarity values and the corresponding second similarity value in the preset number of second similarity values to obtain a preset number of third similarity values;
determining a maximum third similarity value from the preset number of third similarity values;
determining the matched image feature corresponding to the maximum third similarity value from the preset number of matched image features;
and acquiring vehicle data corresponding to the determined matching image features from the corresponding relation between the plurality of matching image features of the database and the vehicle information to obtain data associated with the vehicle.
5. The method of claim 1, wherein the target network model is obtained by training a network model to be trained based on a plurality of image samples, a vehicle category label of each image sample, and location information of a particular local area.
6. A vehicle retrieval apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a shot image of a vehicle to be retrieved;
the system comprises a calling module, a target network model and a display module, wherein the calling module is used for calling the target network model, inputting the shot image into the target network model and outputting vehicle image characteristics, the vehicle image characteristics are used for describing vehicle global information and comprise specific dimension segments used for describing specific local areas of vehicles, and the target network model is used for determining the vehicle image characteristics of any vehicle based on the shot image of the any vehicle;
a retrieval module configured to retrieve data associated with the vehicle from a database based on the image features of the vehicle, the database storing a plurality of matching image features, each matching image feature including a matching local region feature corresponding to the specific local region.
7. The apparatus of claim 6, wherein the retrieval module is to:
determining cosine similarity between the vehicle image features and each matched image feature in the database to obtain a first similarity score corresponding to each matched image feature;
obtaining, from the database, the matching image features corresponding to the top preset number of first similarity scores, taken in descending order of first similarity score;
determining matching local area features corresponding to the specific local area from each obtained matching image feature to obtain the preset number of matching local area features;
determining cosine similarity between the features in the specific dimension section and each of the preset number of matched local region features to obtain a preset number of second similarity scores;
retrieving data associated with the vehicle from the database based on the predetermined number of first similarity scores and the predetermined number of second similarity scores.
8. The apparatus of claim 7, wherein the retrieval module is to:
determining the position, in the vehicle image feature, of the feature within the specific dimension segment, wherein each matched image feature has the same data structure as the vehicle image feature;
and acquiring the matching features corresponding to the positions from each acquired matching image feature to obtain the matching local region feature corresponding to the specific local region in each matching image feature.
9. The apparatus of claim 7, wherein the retrieval module is to:
respectively carrying out weighted summation on each first similarity value in the preset number of first similarity values and the corresponding second similarity value in the preset number of second similarity values to obtain a preset number of third similarity values;
determining a maximum third similarity value from the preset number of third similarity values;
determining the matched image feature corresponding to the maximum third similarity value from the preset number of matched image features;
and acquiring vehicle data corresponding to the determined matching image features from the corresponding relation between the plurality of matching image features of the database and the vehicle information to obtain data associated with the vehicle.
10. The apparatus of claim 6, wherein the target network model is obtained by training a network model to be trained based on a plurality of image samples, a vehicle category label of each image sample, and location information of a particular local area.
11. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method of any one of claims 1-5.
CN201910134010.7A 2019-02-22 2019-02-22 Vehicle searching method, device and storage medium Active CN111611414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910134010.7A CN111611414B (en) 2019-02-22 2019-02-22 Vehicle searching method, device and storage medium


Publications (2)

Publication Number Publication Date
CN111611414A 2020-09-01
CN111611414B 2023-10-24

Family

ID=72202973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910134010.7A Active CN111611414B (en) 2019-02-22 2019-02-22 Vehicle searching method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111611414B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569911A (en) * 2021-06-28 2021-10-29 北京百度网讯科技有限公司 Vehicle identification method and device, electronic equipment and storage medium
CN115222896A (en) * 2022-09-20 2022-10-21 荣耀终端有限公司 Three-dimensional reconstruction method and device, electronic equipment and computer-readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197538A (en) * 2017-12-21 2018-06-22 浙江银江研究院有限公司 Checkpoint vehicle retrieval system and method based on local features and deep learning
CN108197326A (en) * 2018-02-06 2018-06-22 腾讯科技(深圳)有限公司 Vehicle retrieval method and device, electronic device, and storage medium
CN108229468A (en) * 2017-06-28 2018-06-29 北京市商汤科技开发有限公司 Vehicle appearance feature recognition and vehicle retrieval method and apparatus, storage medium, and electronic device
CN108596277A (en) * 2018-05-10 2018-09-28 腾讯科技(深圳)有限公司 Vehicle identity recognition method, apparatus, and storage medium
CN109063768A (en) * 2018-08-01 2018-12-21 北京旷视科技有限公司 Vehicle re-identification method, apparatus, and system
CN109359696A (en) * 2018-10-29 2019-02-19 重庆中科云丛科技有限公司 Vehicle model recognition method, system, and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229468A (en) * 2017-06-28 2018-06-29 北京市商汤科技开发有限公司 Vehicle appearance feature recognition and vehicle retrieval method and apparatus, storage medium, and electronic device
WO2019001481A1 (en) * 2017-06-28 2019-01-03 北京市商汤科技开发有限公司 Vehicle appearance feature identification and vehicle search method and apparatus, storage medium, and electronic device
CN108197538A (en) * 2017-12-21 2018-06-22 浙江银江研究院有限公司 Checkpoint vehicle retrieval system and method based on local features and deep learning
CN108197326A (en) * 2018-02-06 2018-06-22 腾讯科技(深圳)有限公司 Vehicle retrieval method and device, electronic device, and storage medium
CN108596277A (en) * 2018-05-10 2018-09-28 腾讯科技(深圳)有限公司 Vehicle identity recognition method, apparatus, and storage medium
CN109063768A (en) * 2018-08-01 2018-12-21 北京旷视科技有限公司 Vehicle re-identification method, apparatus, and system
CN109359696A (en) * 2018-10-29 2019-02-19 重庆中科云丛科技有限公司 Vehicle model recognition method, system, and storage medium


Also Published As

Publication number Publication date
CN111611414B (en) 2023-10-24

Similar Documents

Publication Publication Date Title
CN109829456B (en) Image identification method and device and terminal
CN108629747B (en) Image enhancement method and device, electronic equipment and storage medium
CN110490179B (en) License plate recognition method and device and storage medium
CN110650379B (en) Video abstract generation method and device, electronic equipment and storage medium
CN111127509B (en) Target tracking method, apparatus and computer readable storage medium
CN110839128B (en) Photographing behavior detection method and device and storage medium
CN108132790B (en) Method, apparatus and computer storage medium for detecting a garbage code
CN109886208B (en) Object detection method and device, computer equipment and storage medium
CN112084811B (en) Identity information determining method, device and storage medium
CN110933468A (en) Playing method, playing device, electronic equipment and medium
CN111754386B (en) Image area shielding method, device, equipment and storage medium
CN112261491B (en) Video time sequence marking method and device, electronic equipment and storage medium
CN110705614A (en) Model training method and device, electronic equipment and storage medium
CN113918767A (en) Video clip positioning method, device, equipment and storage medium
CN110677713B (en) Video image processing method and device and storage medium
CN109547847B (en) Method and device for adding video information and computer readable storage medium
CN111753606A (en) Intelligent model upgrading method and device
CN107944024B (en) Method and device for determining audio file
CN111611414B (en) Vehicle searching method, device and storage medium
CN111127541A (en) Vehicle size determination method and device and storage medium
CN111860064B (en) Video-based target detection method, device, equipment and storage medium
CN112508959A (en) Video object segmentation method and device, electronic equipment and storage medium
CN114817709A (en) Sorting method, device, equipment and computer readable storage medium
CN111179628B (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN112990424A (en) Method and device for training neural network model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant