CN112329772B - Vehicle part identification method, device, electronic device and storage medium - Google Patents


Info

Publication number
CN112329772B
CN112329772B (application CN202011227250.0A)
Authority
CN
China
Prior art keywords
vehicle component
vehicle
type
recognition result
prediction model
Prior art date
Legal status
Active
Application number
CN202011227250.0A
Other languages
Chinese (zh)
Other versions
CN112329772A (en)
Inventor
Liu Haibo (刘海波)
Current Assignee
Zhejiang Dasou Vehicle Software Technology Co Ltd
Original Assignee
Zhejiang Dasou Vehicle Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dasou Vehicle Software Technology Co Ltd
Priority to CN202011227250.0A
Publication of CN112329772A
Application granted
Publication of CN112329772B


Classifications

    • G06V 10/25 (Image or video recognition or understanding; image preprocessing): Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06F 18/214 (Pattern recognition; design or setup of recognition systems): Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/241 (Pattern recognition; classification techniques): Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06V 2201/08 (Indexing scheme relating to image or video recognition or understanding): Detecting or categorising vehicles


Abstract

The application relates to a vehicle part identification method, a device, an electronic device and a storage medium. The vehicle part identification method comprises the following steps: acquiring a vehicle image of a vehicle to be identified; processing the vehicle image using a vehicle component position prediction model to obtain a first recognition result; processing the vehicle image using a vehicle component type prediction model to obtain a second recognition result, where the second recognition result comprises the type names of the vehicle components recognized by the vehicle component type prediction model; and acquiring the type name of the target vehicle component from the second recognition result, acquiring the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as a third recognition result. The method and device solve the problem of inefficient identification of target vehicle components and improve both the efficiency and the accuracy of identification.

Description

Vehicle part identification method, device, electronic device and storage medium
Technical Field
The present application relates to the field of computer vision, and in particular to a vehicle component identification method, device, electronic device and storage medium.
Background
Vehicle inspection is an important link in the vehicle transaction process; particularly in used-vehicle transactions, it directly influences the willingness to trade and the transaction price. During vehicle inspection, vehicle components need to be identified, and the component recognition results directly affect the inspection outcome, so accurate identification of vehicle components is particularly important.
At present, vehicle component identification mainly relies on a neural network that outputs the types of all vehicle components found in a target vehicle image. Because a target vehicle image generally contains many vehicle components, a single image yields many component recognition results, whereas in vehicle inspection a single image usually corresponds to only a few target component names. An inspector therefore has to search for the target component names among many recognition results; the search efficiency drops as the number of results grows, and when there are too many results, searching can even be slower than labeling the components manually.
No effective solution has yet been proposed in the related art for this inefficiency in identifying target vehicle components.
Disclosure of Invention
The embodiment of the application provides a vehicle component identification method, a device, an electronic device and a storage medium, which are used for at least solving the problem of low efficiency of identifying a target vehicle component in the related art.
In a first aspect, an embodiment of the present application provides a vehicle component identification method, including:
acquiring a vehicle image of a vehicle to be identified;
processing the vehicle image by using the vehicle part position prediction model to obtain a first identification result, wherein the first identification result comprises the type name and the position information of the vehicle part identified by the vehicle part position prediction model;
processing the vehicle image by using the vehicle component type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is based on multi-level type joint training of the vehicle component;
and acquiring the type name of the target vehicle component from the second recognition result, acquiring the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as a third recognition result.
In some of these embodiments, the method further comprises:
Acquiring a training sample set of a vehicle part type prediction model, wherein the training sample set comprises a plurality of training samples, each training sample mainly comprises a training image and label information of the training image, and the label information comprises a type name of a vehicle part contained in the training image and a superior category name to which the type name belongs;
and training the vehicle component type prediction model in a supervised learning manner using the training sample set, wherein the loss function used to train the vehicle component type prediction model is a combination of a first loss function and a second loss function, the first loss function representing the loss of classification based on the type names, and the second loss function representing the loss of classification based on the superior category names.
In some of these embodiments, the number of type names of the target vehicle component acquired from the second recognition result is 1, 2, or 3.
In some of these embodiments, the second recognition result further includes: the confidence of each vehicle component recognized by the vehicle component type prediction model;
the obtaining of the type name of the target vehicle component from the second recognition result comprises: determining, from the second recognition result, the vehicle components whose confidence is greater than a preset threshold, or the TopN vehicle components, as the target vehicle components, and acquiring the type names of the target vehicle components, wherein the TopN vehicle components are the first N vehicle components after the vehicle components in the second recognition result are sorted in descending order of confidence, and N is an integer greater than or equal to 1.
In some of these embodiments, before the type name of the target vehicle component is obtained from the second recognition result, the method further includes:
and deleting the type name of the vehicle component with the confidence coefficient smaller than the preset threshold value from the second recognition result.
In some of these embodiments, after obtaining the type name of the target vehicle component from the second recognition result, obtaining the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as the third recognition result, the method further includes:
deleting the type name of the target vehicle component from the second recognition result, and executing the following steps again: and acquiring the type name of the target vehicle component from the second recognition result, acquiring the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as a third recognition result.
In some of these embodiments, after obtaining the type name of the target vehicle component from the second recognition result, obtaining the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as the final recognition result, the method further includes:
And marking the third recognition result on the vehicle image to obtain a marked image, and outputting the marked image.
In a second aspect, an embodiment of the present application provides a vehicle component recognition apparatus, including:
the acquisition module is used for acquiring a vehicle image of the vehicle to be identified;
a first processing module, configured to process a vehicle image using a vehicle component position prediction model, to obtain a first recognition result, where the first recognition result includes a type name and position information of a vehicle component recognized by the vehicle component position prediction model;
the second processing module is used for processing the vehicle image by using the vehicle part type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle part recognized by the vehicle part type prediction model, and the vehicle part type prediction model is based on multi-level type joint training of the vehicle part;
and the third processing module is used for acquiring the type name of the target vehicle component from the second recognition result, acquiring the position information of the target vehicle component from the first recognition result and taking the type name and the position information of the target vehicle component as a third recognition result.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the vehicle component identification method according to the first aspect as described above when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, implements a vehicle component identification method as in the first aspect described above.
Compared with the related art, the vehicle component recognition method, apparatus, electronic device and storage medium provided by the embodiments of the application acquire a vehicle image of a vehicle to be identified; process the vehicle image using the vehicle component position prediction model to obtain a first recognition result, where the first recognition result includes the type names and position information of the vehicle components recognized by the vehicle component position prediction model; process the vehicle image using the vehicle component type prediction model to obtain a second recognition result, where the second recognition result includes the type names of the vehicle components recognized by the vehicle component type prediction model, the vehicle component type prediction model being jointly trained on multi-level categories of vehicle components; and obtain the type name of the target vehicle component from the second recognition result, obtain the position information of the target vehicle component from the first recognition result, and take the type name and position information of the target vehicle component as a third recognition result. This solves the problem of inefficient identification of target vehicle components, improves both the efficiency and the accuracy of identification, and relieves inspectors of tedious work.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the application will become apparent from the description, the drawings, and the claims.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a hardware configuration block diagram of a terminal of a vehicle component recognition method according to an embodiment of the present application;
FIG. 2 is a flow chart of a vehicle component identification method according to an embodiment of the present application;
FIG. 3 is a flow chart of a vehicle component identification method according to a preferred embodiment of the present application;
FIG. 4 is a flow chart for fusing vehicle component detection results with multi-level category prediction results in accordance with a preferred embodiment of the present application;
fig. 5 is a block diagram of a vehicle component recognition apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments obtained by one of ordinary skill in the art, without creative effort, based on the embodiments provided herein fall within the scope of the present application. Moreover, it should be appreciated that while such a development effort might be complex and time-consuming, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the embodiments described herein can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar terms herein do not denote a limitation of quantity, but rather denote the singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means greater than or equal to two. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
Term interpretation:
residual network: the residual network is a convolutional neural network proposed by 4 scholars from Microsoft Research, and the advantages of image classification and object recognition were obtained in ImageNet large-scale visual recognition competition (ImageNet Large Scale Visual Recognition Challenge, ILSVRC) in 2015. The residual network is characterized by easy optimization and can improve accuracy by increasing considerable depth. The residual blocks inside the deep neural network are connected in a jumping mode, and the gradient disappearance problem caused by depth increase in the deep neural network is relieved.
The method embodiments provided herein may be executed on a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, fig. 1 is a block diagram of the hardware structure of a terminal for the vehicle component recognition method of the embodiment of the present application. As shown in fig. 1, the terminal comprises a processor 11 and a memory 12 in which computer program instructions are stored.
In particular, the processor 11 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC), or may be configured to implement one or more integrated circuits of embodiments of the present application.
Memory 12 may include mass storage for data or instructions. By way of example, and not limitation, memory 12 may comprise a hard disk drive (HDD), a floppy disk drive, a solid state drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, or a universal serial bus (USB) drive, or a combination of two or more of these. The memory 12 may include removable or non-removable (or fixed) media, where appropriate. The memory 12 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 12 is a non-volatile memory. In particular embodiments, memory 12 includes read-only memory (ROM) and random access memory (RAM). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), an electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these. The RAM may be static random access memory (SRAM) or dynamic random access memory (DRAM), where the DRAM may be fast page mode DRAM (FPM DRAM), extended data out DRAM (EDO DRAM), or synchronous DRAM (SDRAM), among others, as appropriate.
Memory 12 may be used to store or cache various data files that need to be processed and/or communicated, as well as possible computer program instructions for execution by processor 11.
The processor 11 implements any of the vehicle component identification methods in the above-described embodiments by reading and executing computer program instructions stored in the memory 12.
In some of these embodiments, the terminal may also include a communication interface 13 and a bus 10. As shown in fig. 1, the processor 11, the memory 12, and the communication interface 13 are connected via the bus 10 and perform communication with each other.
The communication interface 13 is used to implement communication between the modules, apparatuses, units and/or devices in the embodiments of the present application. The communication interface 13 may also enable data communication with other components, such as external devices, image/data acquisition devices, databases, external storage, and image/data processing workstations.
Bus 10 includes hardware, software, or both, coupling the components of the terminal to one another. Bus 10 includes, but is not limited to, at least one of: a data bus, an address bus, a control bus, an expansion bus, a local bus. By way of example, and not limitation, bus 10 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. Bus 10 may include one or more buses, where appropriate. Although embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
The present embodiment provides a vehicle component recognition method, fig. 2 is a flowchart of the vehicle component recognition method according to the embodiment of the present application, and as shown in fig. 2, the flowchart includes the following steps:
step S201, a vehicle image of a vehicle to be identified is acquired.
A vehicle image of a vehicle to be identified is acquired, the vehicle image including a vehicle component to be identified.
Step S202, processing the vehicle image using the vehicle component position prediction model, and obtaining a first recognition result, wherein the first recognition result includes a type name and position information of the vehicle component recognized by the vehicle component position prediction model.
The vehicle image to be identified is input into the vehicle component position prediction model, which identifies the vehicle components in the image and outputs a recognition result comprising the type names and the position information of the identified vehicle components.
In the present embodiment, acquiring the vehicle component position prediction model includes: acquiring a training sample set of a vehicle part position prediction model, wherein the training sample set comprises a plurality of training samples, each training sample mainly comprises a training image and label information of the training image, and the label information comprises type names of vehicle parts contained in the training image;
The vehicle component position prediction model is trained in a supervised learning manner using the training sample set. In the above manner, the vehicle component position prediction model is acquired.
In step S203, the vehicle image is processed using the vehicle component type prediction model, so as to obtain a second recognition result, where the second recognition result includes a type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is based on multi-level type joint training of the vehicle component.
And inputting the vehicle image to be identified into a vehicle component type prediction model, predicting the type name of the vehicle component in the vehicle image to be identified by the vehicle component type prediction model, and outputting a prediction result, wherein the prediction result comprises the type name of the vehicle component to be identified.
In the present embodiment, the vehicle component recognition method further includes: acquiring a training sample set of a vehicle part type prediction model, wherein the training sample set comprises a plurality of training samples, each training sample mainly comprises a training image and label information of the training image, and the label information comprises a type name of a vehicle part contained in the training image and a superior category name to which the type name belongs;
And training the vehicle component type prediction model in a supervised learning manner using the training sample set, wherein the loss function used to train the vehicle component type prediction model is a combination of a first loss function and a second loss function, the first loss function representing the loss of classification based on the type names, and the second loss function representing the loss of classification based on the superior category names. In the above manner, a vehicle component type prediction model that can identify vehicle component type names is obtained.
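A minimal sketch of such a combined loss, assuming plain cross-entropy at both classification levels and a weighting factor `alpha`; the probability inputs, labels and weight are illustrative assumptions rather than the patent's actual formulation.

```python
import math

def cross_entropy(probs, label):
    # Negative log-likelihood of the true class.
    return -math.log(probs[label])

def joint_loss(type_probs, type_label, parent_probs, parent_label, alpha=1.0):
    # First loss: classification over fine-grained type names.
    # Second loss: classification over the superior (parent) categories.
    # The total training loss combines the two.
    return (cross_entropy(type_probs, type_label)
            + alpha * cross_entropy(parent_probs, parent_label))
```

Training against both levels encourages the model to keep fine-grained mistakes within the correct parent category, which is one plausible reading of the multi-level joint training described above.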
Step S204, obtaining the type name of the target vehicle component from the second recognition result, obtaining the position information of the target vehicle component from the first recognition result, and using the type name and the position information of the target vehicle component as the third recognition result.
And selecting the type name of the target vehicle component from the second recognition result, judging whether the type name of the target vehicle component exists in the first recognition result, and if so, outputting the position information corresponding to the type of the target vehicle component from the first recognition result.
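In Python, the matching described above might look like the following sketch; the dictionary layout of the two recognition results (keys `type` and `bbox`) is an assumed format for illustration, not the patent's data structure.

```python
def fuse_results(first_result, target_type_names):
    # For each target type name taken from the second recognition
    # result, look up a detection with the same type name in the
    # first recognition result and keep its position information.
    third_result = []
    for name in target_type_names:
        for det in first_result:
            if det["type"] == name:
                third_result.append({"type": name, "bbox": det["bbox"]})
                break
    return third_result
```

Type names absent from the first recognition result simply produce no entry in the third result.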
In the present embodiment, the number of the type names of the target vehicle components acquired from the second recognition result is 1, 2, or 3.
Through steps S201 to S204, a vehicle image of the vehicle to be identified is acquired; the vehicle image is processed using the vehicle component position prediction model to obtain a first recognition result that includes the type names and position information of the recognized vehicle components; the vehicle image is processed using the vehicle component type prediction model to obtain a second recognition result that includes the type names of the recognized vehicle components, the model having been jointly trained on multi-level categories of vehicle components; and the type name of the target vehicle component is obtained from the second recognition result, its position information is obtained from the first recognition result, and the two together form the third recognition result. This solves the problem of inefficient identification of target vehicle components, improves both the efficiency and the accuracy of identification, and relieves inspectors of tedious work.
In some of these embodiments, the second recognition result further includes: the confidence of each vehicle component recognized by the vehicle component type prediction model;
The obtaining of the type name of the target vehicle component from the second recognition result comprises: determining, from the second recognition result, the vehicle components whose confidence is greater than a preset threshold, or the TopN vehicle components, as the target vehicle components, and acquiring the type names of the target vehicle components, wherein the TopN vehicle components are the first N vehicle components after the vehicle components in the second recognition result are sorted in descending order of confidence, and N is an integer greater than or equal to 1. In this way, the target vehicle component types can be selected by a confidence threshold or by confidence ranking.
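The two selection criteria can be sketched as follows; the result format and the default parameter values are assumptions for illustration.

```python
def select_target_types(second_result, threshold=None, top_n=None):
    # Sort by confidence in descending order, then pick either every
    # component above the threshold or the first N components (TopN).
    ranked = sorted(second_result, key=lambda r: r["conf"], reverse=True)
    if threshold is not None:
        picked = [r for r in ranked if r["conf"] > threshold]
    else:
        picked = ranked[: top_n or 1]
    return [r["type"] for r in picked]
```

Either criterion keeps only the few most plausible component types, which is what lets the inspector avoid scanning the full list of predictions.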
In some of these embodiments, before the type name of the target vehicle component is obtained from the second recognition result, the method further includes: deleting from the second recognition result the type names of the vehicle components whose confidence is smaller than the preset threshold. In this way, component types that do not belong to the target vehicle are removed from the second recognition result, reducing the time needed to select the target vehicle component type.
In some of these embodiments, before the type name of the target vehicle component is obtained from the second recognition result, the method further includes: deleting from the second recognition result the vehicle components ranked after TopN by confidence. In this way, component types that do not belong to the target vehicle are removed from the second recognition result, reducing the time needed to select the target vehicle component type.
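For illustration, the two selection embodiments above (confidence threshold, or TopN by confidence) can be sketched as follows. The data shape — a list of (type name, confidence) pairs — is an assumption for the sketch, not part of the patent:

```python
def select_targets(results, threshold=None, top_n=None):
    """Select target vehicle components from the second recognition result.

    results: list of (type_name, confidence) pairs (an assumed shape).
    Pass `threshold` to keep components whose confidence exceeds it, or
    `top_n` to keep the N highest-confidence components (N >= 1),
    mirroring the two embodiments above.
    """
    # Sort in descending order of confidence, as the TopN definition requires.
    ranked = sorted(results, key=lambda r: r[1], reverse=True)
    if threshold is not None:
        return [r for r in ranked if r[1] > threshold]
    if top_n is not None:
        return ranked[:top_n]
    return ranked
```

Either filter prunes low-confidence candidates before the fusion step, which is what shortens the selection of the target component type.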
In some of these embodiments, after obtaining the type name of the target vehicle component from the second recognition result, obtaining the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as the third recognition result, the method further includes:
deleting the type name of the target vehicle component from the second recognition result, and executing the following steps again: and acquiring the type name of the target vehicle component from the second recognition result, acquiring the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as a third recognition result. By deleting the identified target vehicle component from the second identification result and updating the second identification result in the above manner, all the vehicle component types to be identified in the second identification result can be identified.
In some of these embodiments, after obtaining the type name of the target vehicle component from the second recognition result, obtaining the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as the final recognition result, the method further includes:
The third recognition result is marked on the vehicle image to obtain a marked image, and the marked image is output. In this way, the identified target vehicle component can be marked in the vehicle image to be identified, making it convenient for an inspector to judge whether the target vehicle component has been identified accurately.
The embodiments of the present application are described and illustrated below by means of preferred embodiments.
Fig. 3 is a flowchart of a vehicle component identification method according to a preferred embodiment of the present application. As shown in Fig. 3, the vehicle component identification method of the preferred embodiment includes the following steps:
in step S301, a vehicle image to be recognized is acquired.
Acquire the vehicle image to be identified through a camera. The vehicle image to be identified contains the vehicle components to be identified. The vehicle component types include: front bumper, rear bumper, left headlight, right headlight, front hood, left front door, right front door, left rear door, right rear door, left front fender, right front fender, left rear fender, right rear fender, rear hood, front windshield, rear windshield, left rear view mirror, right rear view mirror, left tail light, right tail light, left fog light, right front window, left front window, right rear window, left rear window, right bottom edge, left bottom edge, right front door handle, left rear door handle, right rear door handle, left front tire, left rear tire, right front tire, right rear tire, right front wheel arch, left front wheel arch, right rear wheel arch, left rear wheel arch, left front steel ring, left rear steel ring, right front steel ring, right rear steel ring, left front door trim, right front door trim, left rear door trim, right rear door trim, front bumper trim, rear bumper trim, front bumper guard, rear bumper guard, front bumper deflector, rear bumper deflector, roof, steering wheel, left front seat, right front seat, left rear seat, right rear seat, center console, engine, water tank, engine upper guard and engine lower guard.
Step S302, detecting a vehicle component of the vehicle image to be identified using the vehicle component detection model.
The vehicle image to be identified is detected using a pre-trained vehicle component detection model, and the detection result of the vehicle components is denoted P. The vehicle component detection model can be deployed on a mobile computing device or a cloud server.
(a) A vehicle component detection training dataset is created.
The vehicle component detection training data set is annotated on vehicle component images by professionals. It comprises training images and the label information annotated on each training image, where the label information includes a vehicle component type and the position information corresponding to that type. The position information, denoted Bbox, is generally represented by the coordinates of the upper-left and lower-right corners of the bounding rectangle of the area occupied by the component in the image, written as (Bbox_x1, Bbox_y1, Bbox_x2, Bbox_y2), where (Bbox_x1, Bbox_y1) is the upper-left corner coordinate and (Bbox_x2, Bbox_y2) is the lower-right corner coordinate. The vehicle component type is denoted C3. The vehicle component types in this embodiment mainly include 66 types: front bumper, rear bumper, left front headlight, right front headlight, front hood, left front door, right front door, left rear door, right rear door, left front fender, right front fender, left rear fender, right rear fender, rear cover, front windshield, rear windshield, left rear view mirror, right rear view mirror, left tail lamp, right tail lamp, left fog lamp, right front window, left front window, right rear window, left rear window, left front door handle, left rear door handle, right rear door handle, left front tire, left rear tire, right front tire, right rear tire, right front wheel arch, left front wheel arch, right rear wheel arch, left rear wheel arch, left front steel ring, left rear steel ring, right front steel ring, right rear steel ring, left front door trim, right front door trim, left rear door trim, right rear door trim, front bumper trim, rear bumper trim, front bumper guard, rear bumper guard, front bumper baffle, rear bumper baffle, roof, steering wheel, left front seat, right front seat, left rear seat, right rear seat, center console, engine, water tank, engine upper shield and engine lower shield. In this way, a vehicle component detection data set is obtained.
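As an illustration, one annotation record of the detection training set might be organized as follows. The field names are hypothetical; the patent only fixes the content — a component type name plus the bounding box corners (Bbox_x1, Bbox_y1, Bbox_x2, Bbox_y2):

```python
# Illustrative annotation record (field names are assumptions, not from the patent).
record = {
    "image": "vehicle_0001.jpg",
    "labels": [
        {"type": "front bumper", "bbox": (120, 430, 980, 610)},
        {"type": "left front headlight", "bbox": (105, 380, 260, 450)},
    ],
}

def bbox_is_valid(bbox):
    """The upper-left corner must lie above and to the left of the lower-right corner."""
    x1, y1, x2, y2 = bbox
    return x1 < x2 and y1 < y2

# Sanity-check the annotations before training.
assert all(bbox_is_valid(lbl["bbox"]) for lbl in record["labels"])
```

A check of this kind is a common precaution when building a detection data set, since a swapped corner pair silently corrupts box regression targets.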
(b) Training and deployment of a vehicle component detection model.
The vehicle component detection model is constructed with a convolutional neural network (Convolutional Neural Network, CNN); the backbone uses a residual network (ResNet-50 may be used), and the detection method uses a single-stage detector (Single Shot MultiBox Detector, SSD). Based on the vehicle component detection training data set created in step S302 (a), the CNN-based vehicle component detection model is trained by supervised learning and deployed on a computer device. In this way, a trained vehicle component detection model is obtained, in preparation for subsequent vehicle component detection.
(c) And inputting the vehicle component image to be identified into a vehicle component detection model to obtain a vehicle component identification result.
The vehicle component image to be identified is input into the vehicle component detection model trained in step S302 (b) to obtain a vehicle component recognition result, which comprises all vehicle component type names in the image to be identified and the position information corresponding to each recognized type. In this way, all vehicle component types and their position information in the image are acquired, in preparation for the accurate position of the target vehicle component type to be obtained later.
Step S303, predicting the type of the vehicle component of the vehicle image to be identified by using the multi-stage category prediction model.
(a) A multi-level category prediction training dataset is created.
The multi-level category prediction training data set is annotated on vehicle component images by professionals. It comprises training images and the multi-level category label information annotated on each training image. The multi-level category labels mainly comprise primary category labels and tertiary category labels; the tertiary category label is the same as the vehicle component type C3 in step S302 (a). The primary category label, denoted C1, mainly comprises 3 classes: vehicle exterior, vehicle interior trim and engine compartment. In this way, a training data set containing multi-level category labels is obtained, in preparation for training the multi-level category prediction model.
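For illustration, a multi-level annotation record could pair each tertiary label C3 with the primary label C1 it belongs to. The field names and the mapping table here are hypothetical examples, not part of the patent:

```python
# Illustrative C3 -> C1 mapping (three entries only; the full taxonomy has 66 C3 types).
PRIMARY_OF = {
    "front bumper": "vehicle exterior",
    "steering wheel": "vehicle interior trim",
    "engine": "engine compartment",
}

def make_sample(image_path, c3):
    """Build one multi-level training sample, deriving C1 from the C3 label."""
    return {"image": image_path, "c3": c3, "c1": PRIMARY_OF[c3]}

sample = make_sample("vehicle_0002.jpg", "steering wheel")
```

Deriving C1 from C3 in this way keeps the two label levels consistent by construction, which matters because the joint loss below penalizes both levels.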
(b) Training and deployment of a multi-level category prediction model.
The multi-level category prediction model is constructed with a CNN; the backbone uses a residual network and the loss function uses cross entropy. The total loss of the model, denoted L_total, is composed of the primary-category and tertiary-category loss functions, specifically:
L_total = α * L_C1 + (1 − α) * L_C3
where L_C1 is the primary-category loss function, L_C3 is the tertiary-category loss function, and α is set to 0.6. Based on the multi-level category prediction training data set created in step S303 (a), the CNN-based multi-level category prediction model is trained by supervised learning and deployed on a computer device. In this way, a trained multi-level category prediction model is obtained, from which the primary-category and tertiary-category predictions for the vehicle can be obtained.
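A minimal numeric sketch of the joint loss above, using a hand-rolled softmax cross entropy so it runs without a deep-learning framework (in practice the framework's cross-entropy op would be used):

```python
import math

ALPHA = 0.6  # weight of the primary-category loss, per the embodiment

def cross_entropy(logits, target):
    """Softmax cross-entropy for one sample; `target` is the true class index."""
    m = max(logits)  # subtract the max for numerical stability
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[target]

def total_loss(logits_c1, target_c1, logits_c3, target_c3, alpha=ALPHA):
    """L_total = alpha * L_C1 + (1 - alpha) * L_C3."""
    return (alpha * cross_entropy(logits_c1, target_c1)
            + (1 - alpha) * cross_entropy(logits_c3, target_c3))
```

With α = 0.6, a misclassified primary category is penalized more heavily than a misclassified tertiary category, which pushes the network to get the coarse grouping right first.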
(c) And inputting the vehicle component image to be identified into a multi-level category prediction model to obtain a predicted target vehicle component type.
Inputting the vehicle part image to be identified into the multi-level category prediction model trained in the step S303 (b), and obtaining a multi-level category prediction result, wherein the multi-level category prediction result comprises a primary category prediction result and a tertiary category prediction result. In this way, the target vehicle component type can be predicted, which is a precondition for the final acquisition of the target vehicle component type.
Step S304, fusing the detection result of the vehicle component and the prediction result of the multi-level category.
The multi-level category prediction result and the vehicle component detection result are fused, and the final target vehicle component recognition result, denoted P_final, is output. FIG. 4 is a flow chart for fusing vehicle component detection results with multi-level category prediction results in accordance with a preferred embodiment of the present application. The fusion comprises the following steps.
Step S401, selecting a three-level category prediction result of the multi-level category prediction model as a target vehicle component type prediction result.
The tertiary category prediction result of the multi-level category prediction model is taken as the target vehicle component type prediction result, which comprises the target vehicle component type names in the image to be recognized and the probability score of each predicted target vehicle component type.
Step S402, selecting the category with the highest probability score in the target vehicle part type prediction result as the candidate part type.
The predicted target vehicle component types are arranged in descending order of probability score to generate a list C. The type with the highest score in list C is selected as the candidate target vehicle component type, denoted C_candidate.
Step S403, traversing the component detection result.
The vehicle component detection result P is traversed.
Step S404, determining whether the component detection result is the same as the current candidate component type.
Determine whether the component detection result P is the same as the current candidate component type C_candidate. If so, take C_candidate as the final target vehicle component type and proceed to step S405; otherwise, proceed to step S406.
Step S405 outputs a final target vehicle component recognition result, i.e., the current candidate component type.
Step S406, deleting the current candidate component type from the target vehicle component type prediction result.
Delete C_candidate from C, update C, and continue to execute steps S402 to S404 until the target vehicle component type prediction result C is empty.
In the above manner, the final target vehicle component types are obtained by sorting the probability scores of the predicted target vehicle component types, and by continually updating the predicted component type list, all target vehicle components in the prediction result C are eventually output.
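The fusion loop of steps S402 to S406 can be sketched as follows. The data shapes (a dict of detections keyed by type name, a list of scored predictions) are assumptions made for the sketch:

```python
def fuse(detections, predictions):
    """Fuse the detection result P with the class prediction list C (steps S402-S406).

    detections:  dict mapping detected type name -> bbox (the result P, assumed shape)
    predictions: list of (type_name, score) from the multi-level classifier (the list C)
    Returns the fused (type_name, bbox) pairs, i.e. P_final.
    """
    # S402: arrange candidates in descending order of probability score.
    candidates = sorted(predictions, key=lambda p: p[1], reverse=True)
    fused = []
    while candidates:                     # repeat until the prediction list C is empty
        name, _score = candidates[0]      # current candidate C_candidate
        if name in detections:            # S404: candidate matches a detected component
            fused.append((name, detections[name]))  # S405: output it with its position
        candidates = candidates[1:]       # S406: delete C_candidate and update C
    return fused
```

Because every iteration removes the current candidate, the loop terminates after at most len(predictions) passes and outputs every predicted type that the detector also found.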
Step S305, a final target vehicle component recognition result is output.
According to the target vehicle component type and its position information in the image to be recognized, the final target vehicle component recognition result is marked on the vehicle image to be recognized. In this way, the identified target vehicle components are annotated directly in the image, so an inspector can read the identified component names simply by viewing the image, which improves the inspector's working efficiency.
The present embodiment also provides a vehicle component recognition device, which is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the terms "module," "unit," "sub-unit," and the like may refer to a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or a combination of software and hardware, are also possible and contemplated.
Fig. 5 is a block diagram of a vehicle component recognition apparatus according to an embodiment of the present application, as shown in fig. 5, including:
An acquisition module 51 for acquiring a vehicle image of a vehicle to be identified;
a first processing module 52, connected to the obtaining module 51, for processing the vehicle image using the vehicle component position prediction model to obtain a first recognition result, wherein the first recognition result includes a type name and position information of the vehicle component recognized by the vehicle component position prediction model;
a second processing module 53, connected to the obtaining module 51, for processing the vehicle image using the vehicle component type prediction model to obtain a second recognition result, where the second recognition result includes a type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is based on multi-level type joint training of the vehicle component;
the third processing module 54 is connected to the first processing module 52 and the second processing module 53, and is configured to obtain the type name of the target vehicle component from the second recognition result, obtain the position information of the target vehicle component from the first recognition result, and use the type name and the position information of the target vehicle component as the third recognition result.
In one embodiment, the vehicle component identification device further comprises a training module of the vehicle component type prediction model, which is connected to the second processing module 53, the training module of the vehicle component type prediction model comprising:
an acquisition unit, configured to acquire a training sample set of the vehicle component type prediction model, wherein the training sample set comprises a plurality of training samples, each training sample mainly consists of a training image and label information of the training image, and the label information includes the type name of a vehicle component contained in the training image and the name of the superior category to which that type name belongs;
the model training unit is connected to the acquisition unit and is used for training the vehicle part type prediction model in a supervised learning mode by using the training sample set, wherein a loss function used for training the vehicle part type prediction model is formed by combining a first loss function and a second loss function, the first loss function is used for representing the loss classified based on the type name, and the second loss function is used for representing the loss classified based on the name of the superior category.
In one embodiment, the number of the type names of the target vehicle component acquired from the second recognition result is 1, 2, or 3.
In one embodiment, the second recognition result further includes: the confidence of each vehicle component recognized by the vehicle component type prediction model. The second processing module 53 is configured to determine, from the second recognition result, the vehicle components whose confidence is greater than a preset threshold, or the TopN vehicle components, as target vehicle components, and to acquire their type names, where the TopN vehicle components are the first N vehicle components after sorting the vehicle components in the second recognition result in descending order of confidence, and N is an integer greater than or equal to 1.
In one embodiment, the vehicle component identification device further includes a deletion module, connected to the second processing module 53, for deleting the type name of the vehicle component with the confidence level smaller than the preset threshold value from the second identification result.
In one embodiment, the vehicle component identification apparatus further includes an update module, coupled to the third processing module 54, for deleting the type name of the target vehicle component from the second identification result; the third processing module 54 is further configured to, after the updating module deletes the type name of the target vehicle component from the second recognition result, obtain the type name of the next target vehicle component from the second recognition result obtained after deleting the type name of the target vehicle component, obtain the location information of the next target vehicle component from the first recognition result, and use the type name and the location information of the next target vehicle component as the next third recognition result.
In one embodiment, the vehicle component recognition device further includes a marking module connected to the third processing module 54 for marking the third recognition result on the vehicle image to obtain a marked image, and outputting the marked image.
The present embodiment also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, acquiring a vehicle image of a vehicle to be identified;
s2, processing the vehicle image by using the vehicle part position prediction model to obtain a first recognition result, wherein the first recognition result comprises the type name and the position information of the vehicle part recognized by the vehicle part position prediction model;
s3, processing the vehicle image by using a vehicle part type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle part recognized by the vehicle part type prediction model, and the vehicle part type prediction model is based on multi-level type joint training of the vehicle part;
S4, obtaining the type name of the target vehicle component from the second recognition result, obtaining the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as a third recognition result.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
In addition, in combination with the vehicle component recognition method in the above embodiment, the embodiment of the application may be implemented by providing a storage medium. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the vehicle component identification methods of the above embodiments.
It should be understood by those skilled in the art that the technical features of the above embodiments may be combined in any manner, and for brevity, all of the possible combinations of the technical features of the above embodiments are not described, however, they should be considered as being within the scope of the description provided herein, as long as there is no contradiction between the combinations of the technical features.
The foregoing examples represent only a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (9)

1. A vehicle component identification method, characterized by comprising:
acquiring a vehicle image of a vehicle to be identified;
processing the vehicle image by using a vehicle part position prediction model to obtain a first identification result, wherein the first identification result comprises a type name and position information of the vehicle part identified by the vehicle part position prediction model;
processing the vehicle image by using a vehicle component type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is based on multi-level type joint training of the vehicle component; the training process of the vehicle component type prediction model comprises the following steps:
acquiring a training sample set of the vehicle part type prediction model, wherein the training sample set comprises a plurality of training samples, each training sample consists of a training image and label information of the training image, and the label information comprises a type name of a vehicle part contained in the training image and a superior category name to which the type name belongs;
training the vehicle part type prediction model in a supervised learning manner by using the training sample set, wherein a loss function used for training the vehicle part type prediction model is formed by combining a first loss function and a second loss function, the first loss function is used for representing the loss classified based on the type name, and the second loss function is used for representing the loss classified based on the name of the superior category;
And acquiring the type name of the target vehicle component from the second identification result, acquiring the position information of the target vehicle component from the first identification result, and taking the type name and the position information of the target vehicle component as a third identification result.
2. The vehicle component recognition method according to claim 1, characterized in that the number of the type names of the target vehicle component acquired from the second recognition result is 1, 2, or 3.
3. The vehicle component identification method according to claim 1, characterized in that the second identification result further includes: identifying a confidence level of the identified vehicle component by the vehicle component type prediction model;
the obtaining the type name of the target vehicle component from the second recognition result comprises: and determining that the confidence coefficient is larger than a preset threshold value or TopN vehicle parts from the second recognition result as the target vehicle parts, and acquiring the type names of the target vehicle parts, wherein TopN vehicle parts are the first N vehicle parts after the vehicle parts in the second recognition result are ordered in descending order according to the confidence coefficient, and N is an integer larger than or equal to 1.
4. The vehicle component identification method according to claim 3, characterized in that before the type name of the target vehicle component is acquired from the second identification result, the method further comprises:
and deleting the type name of the vehicle component with the confidence coefficient smaller than the preset threshold value from the second recognition result.
5. The vehicle component recognition method according to claim 1, characterized in that, after obtaining a type name of a target vehicle component from the second recognition result, obtaining position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as a third recognition result, the method further comprises:
deleting the type name of the target vehicle component from the second recognition result, and executing the following steps again: and acquiring the type name of the target vehicle component from the second identification result, acquiring the position information of the target vehicle component from the first identification result, and taking the type name and the position information of the target vehicle component as a third identification result.
6. The vehicle component recognition method according to claim 1, characterized in that, after obtaining a type name of a target vehicle component from the second recognition result, obtaining position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as final recognition results, the method further comprises:
And marking the third recognition result on the vehicle image to obtain a marked image, and outputting the marked image.
7. A vehicle component recognition apparatus, characterized by comprising:
the acquisition module is used for acquiring a vehicle image of the vehicle to be identified;
a first processing module, configured to process the vehicle image using a vehicle component position prediction model, to obtain a first recognition result, where the first recognition result includes a type name and position information of a vehicle component recognized by the vehicle component position prediction model;
the second processing module is used for processing the vehicle image by using a vehicle part type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle part recognized by the vehicle part type prediction model, and the vehicle part type prediction model is based on multi-level type joint training of the vehicle part; the training process of the vehicle component type prediction model comprises the following steps: acquiring a training sample set of the vehicle part type prediction model, wherein the training sample set comprises a plurality of training samples, each training sample consists of a training image and label information of the training image, and the label information comprises a type name of a vehicle part contained in the training image and a superior category name to which the type name belongs; training the vehicle part type prediction model in a supervised learning manner by using the training sample set, wherein a loss function used for training the vehicle part type prediction model is formed by combining a first loss function and a second loss function, the first loss function is used for representing the loss classified based on the type name, and the second loss function is used for representing the loss classified based on the name of the superior category;
And the third processing module is used for acquiring the type name of the target vehicle component from the second recognition result, acquiring the position information of the target vehicle component from the first recognition result and taking the type name and the position information of the target vehicle component as a third recognition result.
8. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the vehicle component identification method of any one of claims 1 to 6.
9. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the vehicle component identification method of any one of claims 1 to 6 when run.
CN202011227250.0A 2020-11-06 2020-11-06 Vehicle part identification method, device, electronic device and storage medium Active CN112329772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011227250.0A CN112329772B (en) 2020-11-06 2020-11-06 Vehicle part identification method, device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011227250.0A CN112329772B (en) 2020-11-06 2020-11-06 Vehicle part identification method, device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112329772A CN112329772A (en) 2021-02-05
CN112329772B true CN112329772B (en) 2024-03-05

Family

ID=74316249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011227250.0A Active CN112329772B (en) 2020-11-06 2020-11-06 Vehicle part identification method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112329772B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113570001B (en) * 2021-09-22 2022-02-15 深圳市信润富联数字科技有限公司 Classification identification positioning method, device, equipment and computer readable storage medium
CN114155417B (en) * 2021-12-13 2022-07-19 中国科学院空间应用工程与技术中心 Image target identification method and device, electronic equipment and computer storage medium
CN114627443B (en) * 2022-03-14 2023-06-09 小米汽车科技有限公司 Target detection method, target detection device, storage medium, electronic equipment and vehicle

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018157862A1 (en) * 2017-03-02 2018-09-07 Tencent Technology (Shenzhen) Co., Ltd. Vehicle type recognition method and device, storage medium and electronic device
CN110147707A (en) * 2018-10-25 2019-08-20 初速度(苏州)科技有限公司 High-precision vehicle recognition method and system
CN110991506A (en) * 2019-11-22 2020-04-10 Gosuncn Technology Group Co., Ltd. Vehicle brand identification method, device, equipment and storage medium
JP2020517015A (en) * 2017-04-11 2020-06-11 Alibaba Group Holding Limited Picture-based vehicle damage assessment method and apparatus, and electronic device
CN111382808A (en) * 2020-05-29 2020-07-07 浙江大华技术股份有限公司 Vehicle detection processing method and device
CN111666898A (en) * 2020-06-09 2020-09-15 北京字节跳动网络技术有限公司 Method and device for identifying class to which vehicle belongs
CN111680556A (en) * 2020-04-29 2020-09-18 平安国际智慧城市科技股份有限公司 Method, device and equipment for identifying vehicle type at traffic gate and storage medium
CN111881741A (en) * 2020-06-22 2020-11-03 浙江大华技术股份有限公司 License plate recognition method and device, computer equipment and computer-readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651881B (en) * 2016-12-28 2023-04-28 Nuctech Company Limited Vehicle inspection system, vehicle part recognition method and system
CN110570389B (en) * 2018-09-18 2020-07-17 Alibaba Group Holding Limited Vehicle damage identification method and device

Also Published As

Publication number Publication date
CN112329772A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
CN112329772B (en) Vehicle part identification method, device, electronic device and storage medium
CN109657716B (en) Vehicle appearance damage identification method based on deep learning
CN106845412B (en) Obstacle identification method and device, computer equipment and readable medium
CN113033604B (en) Vehicle detection method, system and storage medium based on SF-YOLOv4 network model
US20200111061A1 (en) Apparatus and Method for Combined Visual Intelligence
CN106845416B (en) Obstacle identification method and device, computer equipment and readable medium
CN112906823B (en) Target object recognition model training method, recognition method and recognition device
CN108960074B (en) Small-size pedestrian target detection method based on deep learning
CN110097108B (en) Method, device, equipment and storage medium for identifying non-motor vehicle
CN109657599B (en) Picture identification method of distance-adaptive vehicle appearance part
CN109034086A (en) Vehicle re-identification method, apparatus and system
CN112738470A (en) Method for detecting parking in expressway tunnel
CN111340026A (en) Training method of vehicle model-year recognition model and vehicle model-year recognition method
JP5293321B2 (en) Object identification device and program
CN117540153B (en) Tunnel monitoring data prediction method and system
CN114419584A (en) Traffic sign recognition and localization method based on YOLOv4 improved with non-maximum suppression
CN111126271B (en) Bayonet snap image vehicle detection method, computer storage medium and electronic equipment
CN117809458A (en) Real-time assessment method and system for traffic accident risk
CN110532904B (en) Vehicle identification method
CN113701642A (en) Method and system for calculating appearance size of vehicle body
CN110727762B (en) Method, device, storage medium and electronic equipment for determining similar texts
CN115984786A (en) Vehicle damage detection method and device, terminal and storage medium
CN115311630A (en) Method and device for generating distinguishing threshold, training target recognition model and recognizing target
CN108960199A (en) Target pedestrian detection method, device and electronic equipment
CN111368784B (en) Target identification method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant