CN112329772A - Vehicle component identification method, device, electronic device and storage medium - Google Patents


Info

Publication number
CN112329772A
CN112329772A (application CN202011227250.0A)
Authority
CN
China
Prior art keywords
vehicle component
type
target vehicle
prediction model
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011227250.0A
Other languages
Chinese (zh)
Other versions
CN112329772B (en)
Inventor
刘海波 (Liu Haibo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dasou Vehicle Software Technology Co Ltd
Original Assignee
Zhejiang Dasou Vehicle Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dasou Vehicle Software Technology Co Ltd filed Critical Zhejiang Dasou Vehicle Software Technology Co Ltd
Priority to CN202011227250.0A priority Critical patent/CN112329772B/en
Publication of CN112329772A publication Critical patent/CN112329772A/en
Application granted granted Critical
Publication of CN112329772B publication Critical patent/CN112329772B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Abstract

The present application relates to a vehicle component identification method, a device, an electronic device, and a storage medium. The vehicle component identification method includes: acquiring a vehicle image of a vehicle to be identified; processing the vehicle image with a vehicle component position prediction model to obtain a first recognition result; processing the vehicle image with a vehicle component type prediction model to obtain a second recognition result, where the second recognition result comprises the type names of the vehicle components recognized by the vehicle component type prediction model; and acquiring the type name of the target vehicle component from the second recognition result, acquiring the position information of the target vehicle component from the first recognition result, and taking the type name and position information of the target vehicle component as a third recognition result. The method solves the problem of low efficiency in identifying target vehicle components and improves both the efficiency and the accuracy of that identification.

Description

Vehicle component identification method, device, electronic device and storage medium
Technical Field
The present application relates to the field of computer vision, and more particularly to a vehicle component identification method, an apparatus, an electronic device, and a storage medium.
Background
Vehicle inspection is an important step in the vehicle transaction process; particularly in second-hand vehicle transactions, it directly influences both the willingness to trade and the transaction price. During vehicle inspection, the vehicle components must be identified, and the component recognition results directly affect the overall inspection result, so accurate identification of vehicle components is particularly important.
Currently, vehicle component recognition mainly relies on a neural network that outputs results for all vehicle component types in a target vehicle image. Because such an image usually contains many vehicle components, a single image yields many recognition results, whereas in the field of vehicle component inspection an image usually corresponds to only a few target component names. An inspector therefore has to search for the target component names among many recognition results; this search becomes less and less efficient as the number of results grows, and when the number of results is too large, searching can even be slower than labeling the components manually.
At present, the related art offers no effective solution to the low efficiency of identifying target vehicle components.
Disclosure of Invention
The embodiment of the application provides a vehicle component identification method, a vehicle component identification device, an electronic device and a storage medium, and aims to at least solve the problem that in the related art, the efficiency of identifying a target vehicle component is low.
In a first aspect, an embodiment of the present application provides a vehicle component identification method, including:
acquiring a vehicle image of a vehicle to be identified;
processing the vehicle image by using the vehicle component position prediction model to obtain a first identification result, wherein the first identification result comprises the type name and the position information of the vehicle component identified by the vehicle component position prediction model;
processing the vehicle image by using a vehicle component type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is based on multi-level type combination training of the vehicle component;
the type name of the target vehicle component is acquired from the second recognition result, the position information of the target vehicle component is acquired from the first recognition result, and the type name and the position information of the target vehicle component are taken as a third recognition result.
In some of these embodiments, the method further comprises:
acquiring a training sample set of a vehicle component type prediction model, wherein the training sample set comprises a plurality of training samples, each training sample mainly comprises a training image and label information of the training image, and the label information comprises a type name of a vehicle component contained in the training image and a superior category name to which the type name belongs;
and training the vehicle component type prediction model in a supervised learning manner using the training sample set, wherein the loss function used for training the vehicle component type prediction model is formed by combining a first loss function and a second loss function, the first loss function representing the classification loss based on the type name and the second loss function representing the classification loss based on the superior category name.
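The combined loss described above can be sketched as follows. This is a minimal illustration in plain NumPy; the function names, the additive combination, and the `alpha` weighting factor are assumptions for illustration and are not specified by the application:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy loss of a single example; `logits` is a 1-D score vector."""
    shifted = logits - np.max(logits)  # subtract max for numerical stability
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    return -log_probs[label]

def joint_loss(type_logits, type_label, super_logits, super_label, alpha=1.0):
    """Combine the fine-grained type-name loss with the superior-category loss.

    The application only states that the two losses are combined; the
    weighted sum used here is one plausible realization.
    """
    first_loss = softmax_cross_entropy(type_logits, type_label)     # type-name classification
    second_loss = softmax_cross_entropy(super_logits, super_label)  # superior-category classification
    return first_loss + alpha * second_loss
```

In practice both heads would share a backbone (e.g. a residual network, per the term glossary below) and the combined loss would be minimized jointly over the training sample set.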
In some of these embodiments, the number of type names of target vehicle components acquired from the second recognition result is 1, 2, or 3.
In some embodiments, the second recognition result further comprises: the confidence level of each vehicle component identified by the vehicle component type prediction model;
the obtaining of the type name of the target vehicle component from the second recognition result includes: determining, from the second recognition result, the vehicle components whose confidence is greater than a preset threshold, or the TopN vehicle components, as the target vehicle components, and acquiring their type names, where the TopN vehicle components are the first N vehicle components when the second recognition result is sorted by confidence in descending order, and N is an integer greater than or equal to 1.
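The two selection rules above (confidence threshold, or TopN by confidence) can be sketched as follows. Representing the second recognition result as a list of `(type_name, confidence)` pairs is an assumption made for illustration:

```python
def select_target_components(second_result, threshold=None, top_n=None):
    """Pick target vehicle component type names from the type-prediction output.

    `second_result` is assumed to be a list of (type_name, confidence) pairs.
    Exactly one of `threshold` / `top_n` is used, mirroring the two selection
    rules described in the embodiment above.
    """
    if threshold is not None:
        # keep every component whose confidence exceeds the preset threshold
        chosen = [(name, conf) for name, conf in second_result if conf > threshold]
    else:
        # sort by confidence in descending order and keep the first N
        chosen = sorted(second_result, key=lambda item: item[1], reverse=True)[:top_n]
    return [name for name, _ in chosen]
```

For example, with predictions `[("front bumper", 0.92), ("left headlamp", 0.40), ("hood", 0.75)]`, both `threshold=0.5` and `top_n=2` select the front bumper and the hood.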
In some of these embodiments, prior to obtaining the type name of the target vehicle component from the second recognition result, the method further comprises:
and deleting, from the second recognition result, the type names of the vehicle components whose confidence is smaller than the preset threshold.
In some of these embodiments, after obtaining the type name of the target vehicle component from the second recognition result, obtaining the location information of the target vehicle component from the first recognition result, and using the type name and the location information of the target vehicle component as the third recognition result, the method further comprises:
deleting the type name of the target vehicle component from the second recognition result, and executing the following steps again: the type name of the target vehicle component is acquired from the second recognition result, the position information of the target vehicle component is acquired from the first recognition result, and the type name and the position information of the target vehicle component are taken as a third recognition result.
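The iterative procedure above — take a target type name, look up its position, delete it from the second recognition result, and repeat — can be sketched as follows. The data shapes (a dict mapping type names to position info for the first result, a confidence-ordered list of type names for the second) are assumptions for illustration:

```python
def fuse_results(first_result, second_result):
    """Iteratively fuse the two recognition results into a third one.

    `first_result` is assumed to map type names to position info (e.g.
    bounding boxes) and `second_result` is assumed to be a list of target
    type names ordered by confidence. Each processed type name is removed
    from the remaining list, as in the repeated step described above,
    until no type names are left.
    """
    third_result = []
    remaining = list(second_result)            # work on a copy
    while remaining:
        type_name = remaining.pop(0)           # take and delete the next target type name
        if type_name in first_result:          # position found in the first result
            third_result.append((type_name, first_result[type_name]))
    return third_result
```

Type names with no matching position entry are simply skipped, which is one reasonable way to handle a mismatch between the two models' outputs.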
In some of these embodiments, after obtaining the type name of the target vehicle component from the second recognition result, obtaining the location information of the target vehicle component from the first recognition result, and taking the type name and the location information of the target vehicle component as the final recognition result, the method further comprises:
and marking the third recognition result on the vehicle image to obtain a marked image, and outputting the marked image.
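The marking step can be sketched as drawing a rectangular outline for each fused recognition onto the image. A production system would more likely use OpenCV's `cv2.rectangle`/`cv2.putText`; this is a dependency-free sketch in NumPy, and the `(type_name, (x1, y1, x2, y2))` entry format is an assumption:

```python
import numpy as np

def mark_image(image, third_result, value=255):
    """Draw a rectangular outline for each entry of the third recognition result.

    `image` is a 2-D numpy array; each entry of `third_result` is assumed to
    be (type_name, (x1, y1, x2, y2)) with inclusive pixel coordinates. The
    original image is left untouched and a marked copy is returned.
    """
    marked = image.copy()
    for _, (x1, y1, x2, y2) in third_result:
        marked[y1, x1:x2 + 1] = value          # top edge
        marked[y2, x1:x2 + 1] = value          # bottom edge
        marked[y1:y2 + 1, x1] = value          # left edge
        marked[y1:y2 + 1, x2] = value          # right edge
    return marked
```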
In a second aspect, an embodiment of the present application provides a vehicle component identification apparatus, including:
the acquisition module is used for acquiring a vehicle image of the vehicle to be identified;
the first processing module is used for processing the vehicle image by using the vehicle component position prediction model to obtain a first identification result, wherein the first identification result comprises the type name and the position information of the vehicle component identified by the vehicle component position prediction model;
the second processing module is used for processing the vehicle image by using the vehicle component type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is based on multi-level type combination training of the vehicle component;
and the third processing module is used for acquiring the type name of the target vehicle component from the second identification result, acquiring the position information of the target vehicle component from the first identification result, and taking the type name and the position information of the target vehicle component as a third identification result.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the vehicle component identification method according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the vehicle component identification method according to the first aspect.
Compared with the related art, the vehicle component identification method, device, electronic device, and storage medium provided by the embodiments of the present application acquire a vehicle image of a vehicle to be identified; process the vehicle image with the vehicle component position prediction model to obtain a first recognition result, where the first recognition result comprises the type names and position information of the vehicle components recognized by the vehicle component position prediction model; process the vehicle image with a vehicle component type prediction model to obtain a second recognition result, where the second recognition result comprises the type names of the vehicle components recognized by the vehicle component type prediction model, and the vehicle component type prediction model is jointly trained on the multi-level types of vehicle components; and acquire the type name of the target vehicle component from the second recognition result, acquire its position information from the first recognition result, and take the type name and position information of the target vehicle component as a third recognition result. This solves the problem of low efficiency in recognizing target vehicle components, improves both the efficiency and the accuracy of the recognition, and frees inspectors from tedious manual searching.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a terminal of a vehicle component identification method according to an embodiment of the present application;
FIG. 2 is a flow chart of a vehicle component identification method according to an embodiment of the present application;
FIG. 3 is a flow chart of a vehicle component identification method according to a preferred embodiment of the present application;
FIG. 4 is a flow chart fusing vehicle component detection results and multi-level category prediction results in accordance with a preferred embodiment of the present application;
fig. 5 is a block diagram of the structure of a vehicle component recognition apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words throughout this application are not limiting in number and may refer to the singular or the plural. As used in this application, the terms "including," "comprising," "having," and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "And/or" describes an association relationship between associated objects and means that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The terms "first," "second," "third," and the like herein merely distinguish similar objects and do not denote a particular ordering.
Interpretation of terms:
residual error network: the residual network is a convolutional neural network proposed by 4 scholars from Microsoft Research, and wins image classification and object Recognition in the 2015 ImageNet Large Scale Visual Recognition Competition (ILSVRC). The residual network is characterized by easy optimization and can improve accuracy by adding considerable depth. The inner residual block uses jump connection, and the problem of gradient disappearance caused by depth increase in a deep neural network is relieved.
The method provided by this embodiment can be executed on a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, fig. 1 is a block diagram of the hardware configuration of a terminal running the vehicle component identification method according to an embodiment of the present application. As shown in fig. 1, the terminal includes a processor 11 and a memory 12 storing computer program instructions.
Specifically, the processor 11 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 12 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, memory 12 may include a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 12 may include removable or non-removable (or fixed) media, where appropriate. The memory 12 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 12 is non-volatile memory. In particular embodiments, memory 12 includes Read-Only Memory (ROM) and Random Access Memory (RAM). The ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate. The RAM may be Static Random-Access Memory (SRAM) or Dynamic Random-Access Memory (DRAM), where the DRAM may be Fast Page Mode DRAM (FPM DRAM), Extended Data Out DRAM (EDO DRAM), Synchronous DRAM (SDRAM), and the like.
The memory 12 may be used to store or cache various data files that need to be processed and/or used for communication, as well as possible computer program instructions executed by the processor 11.
The processor 11 implements any of the vehicle component identification methods in the above embodiments by reading and executing computer program instructions stored in the memory 12.
In some of these embodiments, the terminal may also include a communication interface 13 and a bus 10. As shown in fig. 1, the processor 11, the memory 12, and the communication interface 13 are connected via a bus 10 to complete communication therebetween.
The communication interface 13 is used for implementing communication between modules, devices, units and/or equipment in the embodiment of the present application. The communication interface 13 may also be implemented with other components such as: the data communication is carried out among external equipment, image/data acquisition equipment, a database, external storage, an image/data processing workstation and the like.
The bus 10 comprises hardware, software, or both, coupling the components of the terminal to each other. Bus 10 includes, but is not limited to, at least one of the following: a data bus, an address bus, a control bus, an expansion bus, or a local bus. By way of example, and not limitation, bus 10 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 10 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The embodiment provides a vehicle component identification method, and fig. 2 is a flowchart of the vehicle component identification method according to the embodiment of the application, and as shown in fig. 2, the flowchart includes the following steps:
step S201, a vehicle image of a vehicle to be identified is acquired.
And acquiring a vehicle image of the vehicle to be identified, wherein the vehicle image comprises the vehicle part to be identified.
Step S202, processing the vehicle image by using the vehicle component position prediction model to obtain a first recognition result, wherein the first recognition result comprises the type name and the position information of the vehicle component recognized by the vehicle component position prediction model.
The vehicle image to be recognized is input into the vehicle component position prediction model, which recognizes the vehicle components in the image and outputs a recognition result comprising the type names and position information of the recognized vehicle components.
In the present embodiment, obtaining the vehicle component position prediction model includes: obtaining a training sample set of a vehicle component position prediction model, wherein the training sample set comprises a plurality of training samples, each training sample mainly comprises a training image and label information of the training image, and the label information comprises the type name of a vehicle component contained in the training image;
the vehicle component position prediction model is trained in a supervised learning manner using a training sample set. In the above manner, a vehicle component position prediction model is obtained.
Step S203, processing the vehicle image by using the vehicle component type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is jointly trained on the multi-level types of the vehicle component.
Inputting the vehicle image to be recognized into a vehicle component type prediction model, predicting the type name of the vehicle component in the vehicle image to be recognized by the vehicle component type prediction model, and outputting the prediction result, wherein the prediction result comprises the type name of the vehicle component to be recognized.
In the present embodiment, the vehicle component identification method further includes: acquiring a training sample set of a vehicle component type prediction model, wherein the training sample set comprises a plurality of training samples, each training sample mainly comprises a training image and label information of the training image, and the label information comprises a type name of a vehicle component contained in the training image and a superior category name to which the type name belongs;
and training the vehicle component type prediction model in a supervised learning manner using the training sample set, wherein the loss function used for training is formed by combining a first loss function and a second loss function, the first loss function representing the classification loss based on the type name and the second loss function representing the classification loss based on the superior category name. In this way, a vehicle component type prediction model that can identify vehicle component type names is obtained.
Step S204, the type name of the target vehicle component is acquired from the second identification result, the position information of the target vehicle component is acquired from the first identification result, and the type name and the position information of the target vehicle component are used as a third identification result.
The type name of the target vehicle component is selected from the second recognition result; the first recognition result is then checked for that type name, and if it is present, the corresponding position information is output from the first recognition result.
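Steps S201–S204 can be sketched end-to-end with pluggable model callables. The interfaces assumed here — `position_model(image)` returning a dict of `{type_name: position_info}` and `type_model(image)` returning `(type_name, confidence)` pairs — are illustrative, not the patent's actual model APIs:

```python
def identify_vehicle_components(image, position_model, type_model, top_n=3):
    """End-to-end sketch of steps S201-S204.

    The position model and type model are passed in as callables; TopN
    selection by confidence is used here, matching the embodiment where the
    number of target type names is small (e.g. 1 to 3).
    """
    first_result = position_model(image)                            # S202: type names + positions
    second_result = type_model(image)                               # S203: type names + confidences
    # S204: keep the top-N type names by confidence, then look up positions
    targets = sorted(second_result, key=lambda t: t[1], reverse=True)[:top_n]
    third_result = []
    for type_name, _ in targets:
        if type_name in first_result:
            third_result.append((type_name, first_result[type_name]))
    return third_result
```

Because only the few selected target type names are looked up, the inspector no longer has to search through every detection the position model produced, which is the efficiency gain the application claims.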
In the present embodiment, the number of the type names of the target vehicle components acquired from the second recognition result is 1, 2, or 3.
Through the steps S201 to S204, the vehicle image of the vehicle to be identified is obtained; processing the vehicle image by using the vehicle component position prediction model to obtain a first identification result, wherein the first identification result comprises the type name and the position information of the vehicle component identified by the vehicle component position prediction model; processing the vehicle image by using a vehicle component type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is based on multi-level type combination training of the vehicle component; the type name of the target vehicle component is obtained from the second recognition result, the position information of the target vehicle component is obtained from the first recognition result, and the type name and the position information of the target vehicle component are used as a third recognition result, so that the problem of low efficiency of recognizing the target vehicle component is solved, the efficiency and the accuracy of recognizing the target vehicle component are improved, and a detector is liberated from complicated work.
In some embodiments, the second recognition result further comprises: the confidence level of each vehicle component identified by the vehicle component type prediction model;
the obtaining of the type name of the target vehicle component from the second recognition result includes: determining, from the second recognition result, the vehicle components whose confidence is greater than a preset threshold, or the TopN vehicle components, as the target vehicle components, and acquiring their type names, where the TopN vehicle components are the first N vehicle components when the second recognition result is sorted by confidence in descending order, and N is an integer greater than or equal to 1. In this manner, target vehicle components can be selected by a confidence threshold or by confidence ranking.
In some of these embodiments, prior to obtaining the type name of the target vehicle component from the second recognition result, the method further comprises: deleting, from the second recognition result, the type names of the vehicle components whose confidence is smaller than the preset threshold. In this way, component types that do not belong to the target vehicle are removed from the second recognition result, shortening the time needed to select target vehicle component types.
In some of these embodiments, prior to obtaining the type name of the target vehicle component from the second recognition result, the method further comprises: deleting the vehicle components ranked after the TopN by confidence in the second recognition result. In this way, component types that do not belong to the target vehicle are removed from the second recognition result, shortening the time needed to select target vehicle component types.
In some of these embodiments, after obtaining the type name of the target vehicle component from the second recognition result, obtaining the location information of the target vehicle component from the first recognition result, and using the type name and the location information of the target vehicle component as the third recognition result, the method further comprises:
deleting the type name of the target vehicle component from the second recognition result, and executing the following steps again: the type name of the target vehicle component is acquired from the second recognition result, the position information of the target vehicle component is acquired from the first recognition result, and the type name and the position information of the target vehicle component are taken as a third recognition result. In this way, all the types of vehicle components to be identified in the second recognition result can be identified by deleting the identified target vehicle component from the second recognition result and updating the second recognition result.
In some of these embodiments, after obtaining the type name of the target vehicle component from the second recognition result, obtaining the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as the final recognition result, the method further comprises:
marking the third recognition result on the vehicle image to obtain a marked image, and outputting the marked image. In this way, the recognized target vehicle component is marked in the vehicle image to be recognized, making it convenient for an inspector to judge whether the target vehicle component has been recognized accurately.
The embodiments of the present application are described and illustrated below by means of preferred embodiments.
Fig. 3 is a flowchart of a vehicle component identification method according to a preferred embodiment of the present application. As shown in Fig. 3, the vehicle component identification method of the preferred embodiment includes the following steps:
step S301, a vehicle image to be recognized is acquired.
A vehicle image to be identified is acquired through a camera. The vehicle image contains the vehicle components to be identified; the vehicle component types are the 66 types enumerated in step S302(a) below, covering the vehicle exterior (bumpers, headlamps, covers, doors, fenders, windshields, rearview mirrors, tail lights, fog lights, windows, rocker panels, door handles, tires, wheel arches, wheel rims, trim strips, guard plates, guide plates, and roof), the vehicle interior (steering wheel, seats, and center console), and the engine compartment (engine, water tank, and engine shields).
Step S302, vehicle component detection is carried out on the vehicle image to be recognized by using the vehicle component detection model.
The vehicle component image to be recognized is detected using a pre-trained vehicle component detection model, and the detection result is denoted P. The vehicle component detection model may be deployed on a mobile computing device or on a cloud server.
(a) A vehicle component detection training dataset is created.
The vehicle component detection training data set is labeled by professionals on vehicle component images. It comprises training images and label information annotated on them, where the label information includes the vehicle component type and the position information corresponding to that type. The position information, denoted Bbox, is usually represented by the coordinates of the upper-left and lower-right corner points of the rectangle enclosing the area occupied by the vehicle component in the image, written as (Bbox_x1, Bbox_y1, Bbox_x2, Bbox_y2), where (Bbox_x1, Bbox_y1) are the coordinates of the upper-left corner point and (Bbox_x2, Bbox_y2) are the coordinates of the lower-right corner point. The vehicle component type label is denoted C3. The vehicle component types of this embodiment mainly include 66 types: front bumper, rear bumper, left headlamp, right headlamp, front cover, left front door, right front door, left rear door, right rear door, left front fender, right front fender, left rear fender, right rear fender, rear cover, front windshield, rear windshield, left rearview mirror, right rearview mirror, left tail light, right tail light, left fog light, right fog light, left front window, right front window, left rear window, right rear window, left rocker panel, right rocker panel, left front door handle, right front door handle, left rear door handle, right rear door handle, left front tire, right front tire, left rear tire, right rear tire, left front wheel arch, right front wheel arch, left rear wheel arch, right rear wheel arch, left front wheel rim, right front wheel rim, left rear wheel rim, right rear wheel rim, left front door trim strip, right front door trim strip, left rear door trim strip, right rear door trim strip, left front door trim panel, front bumper trim strip, rear bumper trim strip, front bumper guard plate, rear bumper guard plate, front bumper guide plate, rear bumper guide plate, roof, steering wheel, left front seat, right front seat, left rear seat, right rear seat, center console, engine, water tank, engine upper shield, and engine lower shield. In the above manner, a vehicle component detection data set is obtained.
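One annotation record of the detection training set described above can be written out as follows; the field names are hypothetical stand-ins for whatever storage format is actually used:

```python
# One label from the vehicle component detection training set: a C3
# component type plus its bounding box Bbox = (Bbox_x1, Bbox_y1,
# Bbox_x2, Bbox_y2), where (Bbox_x1, Bbox_y1) is the upper-left corner
# point and (Bbox_x2, Bbox_y2) the lower-right corner point of the
# enclosing rectangle. Field names are assumptions for illustration.
label = {
    "type_name": "front bumper",   # one of the 66 C3 component types
    "bbox": (120, 340, 860, 520),  # pixel coordinates in the training image
}

def bbox_is_valid(bbox):
    """The upper-left corner must lie above and to the left of the lower-right corner."""
    x1, y1, x2, y2 = bbox
    return x1 < x2 and y1 < y2

print(bbox_is_valid(label["bbox"]))
```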
(b) And training and deploying a vehicle component detection model.
A vehicle component detection model is constructed using a Convolutional Neural Network (CNN). The backbone network is a residual network, for which ResNet-50 can be used, and the detection method is the Single Shot multibox Detector (SSD). Based on the vehicle component detection training data set created in step S302(a), the CNN-based vehicle component detection model is trained by supervised learning and deployed on computer equipment. In this way, a trained vehicle component detection model is obtained, in preparation for subsequent vehicle component detection.
(c) And inputting the vehicle component image to be identified into the vehicle component detection model to obtain a vehicle component identification result.
The image of the vehicle component to be recognized is input into the vehicle component detection model trained in step S302(b) to obtain a vehicle component recognition result, which includes the names of all vehicle component types in the image and the position information corresponding to each recognized type. In this way, all vehicle component types and their position information in the image to be recognized are obtained, in preparation for subsequently obtaining the accurate positions of the target vehicle component types.
Step S303, using the multi-level category prediction model to predict the vehicle component type of the vehicle image to be recognized.
(a) A multi-level category prediction training data set is created.
The multi-level category prediction training data set is labeled by professionals on vehicle component images. It comprises training images and multi-level category label information annotated on them. The multi-level category labels mainly comprise first-level category labels and third-level category labels; the third-level category labels are the same as the vehicle component type C3 in step S302(a). The first-level category label, denoted C1, mainly comprises 3 types: vehicle appearance, vehicle interior, and engine compartment. In this way, a training data set containing multi-level category labels is obtained, in preparation for training the multi-level category prediction model.
(b) And (5) training and deploying a multi-level category prediction model.
A multi-level category prediction model is constructed using a CNN, with a residual network as the backbone and the cross-entropy function as the loss function. The total loss function of the model, denoted L_total, consists of the first-level and third-level category loss functions, as follows:

L_total = α * L_C1 + (1 - α) * L_C3

where L_C1 is the first-level category loss function, L_C3 is the third-level category prediction loss function, and α is set to 0.6. Based on the multi-level category prediction training data set created in step S303(a), the multi-level category prediction model constructed with the CNN is trained by supervised learning and deployed on a computer device. In this way, a trained multi-level category prediction model is obtained, and first-level and third-level vehicle component type prediction results can be produced by the model.
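The weighted total loss above can be checked numerically; the per-head cross-entropy values below are made-up stand-ins for a single training sample, not results from the application:

```python
import math

ALPHA = 0.6  # weight of the first-level category loss, per this embodiment

def cross_entropy(probs, true_idx):
    """Cross-entropy for one sample: -log of the probability given to the true class."""
    return -math.log(probs[true_idx])

def total_loss(l_c1, l_c3, alpha=ALPHA):
    """L_total = alpha * L_C1 + (1 - alpha) * L_C3."""
    return alpha * l_c1 + (1 - alpha) * l_c3

# First-level head: 3 classes (vehicle appearance, vehicle interior, engine compartment).
l_c1 = cross_entropy([0.80, 0.15, 0.05], 0)  # true class: vehicle appearance
# Third-level head: softmax over the 66 component types (only 3 shown here).
l_c3 = cross_entropy([0.10, 0.70, 0.20], 1)  # true class: the second type
print(round(total_loss(l_c1, l_c3), 4))
```

Because α = 0.6 > 0.5, errors on the coarse first-level category are penalized slightly more than errors on the fine-grained third-level category.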
(c) And inputting the vehicle component image to be identified into the multi-level category prediction model to obtain the predicted target vehicle component type.
The vehicle component image to be recognized is input into the multi-level category prediction model trained in step S303(b) to obtain a multi-level category prediction result, which includes a first-level category prediction result and a third-level category prediction result. In this way, the target vehicle component type can be predicted, which is a precondition for finally obtaining the target vehicle component type.
Step S304, the vehicle component detection result and the multi-level category prediction result are fused.
The multi-level category prediction result and the vehicle component detection result are fused, and the final target vehicle component identification result, denoted P_final, is output. FIG. 4 is a flow chart for fusing vehicle component detection results and multi-level category prediction results according to a preferred embodiment of the present application. The fusion comprises the following steps.
Step S401, selecting a three-level category prediction result of the multi-level category prediction model as a target vehicle component type prediction result.
The three-level category prediction result of the multi-level category prediction model is taken as the target vehicle component type prediction result, which comprises the target vehicle component type names in the image to be recognized and the probability score of each predicted target vehicle component type.
Step S402, selecting the category with the highest probability score in the target vehicle component type prediction result as the candidate component type.
The predicted target vehicle component types are arranged in descending order of their probability scores to generate a list C. The highest-scoring predicted target vehicle component type in list C is selected as the candidate target vehicle component type, denoted C_candidate.
In step S403, the component detection result is traversed.
The vehicle component detection result P is traversed.
In step S404, it is determined whether the component detection result is the same as the current candidate component type.
It is judged whether the component detection result P contains a component of the same type as the current candidate component type C_candidate. If it does, C_candidate is taken as the final target vehicle component type and the process proceeds to step S405; otherwise, the process proceeds to step S406.
In step S405, the final target vehicle component identification result, i.e., the current candidate component type, is output.
In step S406, the current candidate component type is deleted from the target vehicle component type prediction result.
C_candidate is deleted from C and C is updated, and steps S402 to S404 continue to be executed until the target vehicle component type prediction result C is empty.
In the above manner, the final target vehicle component types are obtained by sorting the probability scores of the predicted target vehicle component types, and by updating the predicted vehicle component type list, all target vehicle components in the prediction result C are finally output.
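Steps S401 to S406 can be sketched as the loop below; representing P as a set of detected type names and C as a score-ordered list is a simplifying assumption made for illustration:

```python
def fuse(detected_types, predictions):
    """Fuse the detection result P with the three-level category prediction result.

    detected_types: set of type names found by the detection model (P).
    predictions: list of (type_name, probability_score) pairs from the
    three-level category head. Both layouts are illustrative assumptions.
    Returns the type names confirmed by both models, in descending score order.
    """
    # Step S402: sort by descending probability score to build list C.
    c = sorted(predictions, key=lambda entry: entry[1], reverse=True)
    final = []
    while c:                            # repeat until C is empty
        candidate, _score = c[0]        # highest-scoring entry: C_candidate
        if candidate in detected_types: # steps S403-S404: compare with P
            final.append(candidate)     # step S405: output the confirmed type
        c = c[1:]                       # step S406: delete C_candidate, update C
    return final

p = {"front bumper", "left headlamp", "front cover"}
c = [("front bumper", 0.95), ("roof", 0.40), ("left headlamp", 0.88)]
print(fuse(p, c))
```

In this example "roof" is predicted by the category model but never detected, so it is dropped; only types on which both models agree survive into the final result.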
In step S305, the final target vehicle component recognition result is output.
According to the target vehicle component type and the position information of that type in the image to be recognized, the final target vehicle component recognition result is marked in the vehicle component image to be recognized. In this way, the recognized target vehicle components are marked in the image to be recognized, and an inspector can read off the identified names of the target vehicle components simply by viewing the image, improving the inspector's working efficiency.
The present embodiment further provides a vehicle component recognition apparatus, which is used to implement the foregoing embodiments and preferred embodiments; what has already been described will not be repeated. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware for a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a configuration of a vehicle component recognition apparatus according to an embodiment of the present application, which includes, as shown in fig. 5:
the acquiring module 51 is used for acquiring a vehicle image of a vehicle to be identified;
a first processing module 52, connected to the obtaining module 51, for processing the vehicle image by using the vehicle component position prediction model to obtain a first recognition result, wherein the first recognition result includes the type name and the position information of the vehicle component recognized by the vehicle component position prediction model;
the second processing module 53 is connected to the obtaining module 51, and is configured to process the vehicle image by using the vehicle component type prediction model to obtain a second recognition result, where the second recognition result includes the type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is based on multi-level type joint training of the vehicle component;
and a third processing module 54, connected to the first processing module 52 and the second processing module 53, for obtaining the type name of the target vehicle component from the second recognition result, obtaining the position information of the target vehicle component from the first recognition result, and regarding the type name and the position information of the target vehicle component as a third recognition result.
In one embodiment, the vehicle component recognition apparatus further includes a training module of a vehicle component type prediction model, which is connected to the second processing module 53, the training module of the vehicle component type prediction model includes:
the system comprises an acquisition unit, a prediction unit and a prediction unit, wherein the acquisition unit is used for acquiring a training sample set of a vehicle component type prediction model, the training sample set comprises a plurality of training samples, each training sample mainly comprises a training image and label information of the training image, and the label information comprises a type name of a vehicle component contained in the training image and a superior category name to which the type name belongs;
and the model training unit is connected to the obtaining unit and used for training the vehicle component type prediction model in a supervised learning mode by using the training sample set, wherein a loss function used for training the vehicle component type prediction model is formed by combining a first loss function and a second loss function, the first loss function is used for representing the loss classified based on the type name, and the second loss function is used for representing the loss classified based on the superior category name.
In one embodiment, the number of the type names of the target vehicle components acquired from the second recognition result is 1, 2, or 3.
In one embodiment, the second recognition result further includes: the confidence of each vehicle component recognized by the vehicle component type prediction model. The second processing module 53 is configured to determine, from the second recognition result, the vehicle components whose confidence is greater than a preset threshold, or the TopN vehicle components by confidence, as target vehicle components, and to acquire the type names of the target vehicle components, where the TopN vehicle components refer to the first N vehicle components when the second recognition result is sorted in descending order of confidence, N being an integer greater than or equal to 1.
In one embodiment, the vehicle component recognition apparatus further includes a deletion module, connected to the second processing module 53, for deleting the type name of the vehicle component with the confidence level smaller than the preset threshold value from the second recognition result.
In one embodiment, the vehicle component recognition apparatus further comprises an updating module, connected to the third processing module 54, for deleting the type name of the target vehicle component from the second recognition result; the third processing module 54 is further configured to obtain the type name of the next target vehicle component from the second recognition result obtained after the type name of the target vehicle component is deleted by the updating module, obtain the location information of the next target vehicle component from the first recognition result, and take the type name and the location information of the next target vehicle component as the next third recognition result.
In one embodiment, the vehicle component recognition device further comprises a marking module, which is connected to the third processing module 54, and is configured to mark the third recognition result on the vehicle image to obtain a marked image, and output the marked image.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring a vehicle image of the vehicle to be identified;
S2, processing the vehicle image by using the vehicle component position prediction model to obtain a first recognition result, wherein the first recognition result comprises the type name and the position information of the vehicle component recognized by the vehicle component position prediction model;
S3, processing the vehicle image by using the vehicle component type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is jointly trained on the multi-level types of the vehicle component;
S4, acquiring the type name of the target vehicle component from the second recognition result, acquiring the position information of the target vehicle component from the first recognition result, and taking the type name and the position information of the target vehicle component as a third recognition result.
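Steps S1 to S4 can be strung together as follows; both model calls are stubbed with fixed outputs, and every name and return format here is a hypothetical stand-in rather than the application's actual interface:

```python
def position_model(image):
    """Stub for the vehicle component position prediction model (step S2)."""
    return [{"type_name": "front bumper", "bbox": (120, 340, 860, 520)},
            {"type_name": "left headlamp", "bbox": (90, 210, 240, 330)}]

def type_model(image):
    """Stub for the jointly trained vehicle component type prediction model (step S3)."""
    return [{"type_name": "front bumper", "confidence": 0.95}]

def recognize(image):
    first = position_model(image)   # S2: type names plus position information
    second = type_model(image)      # S3: target type names
    third = []                      # S4: pair each target type with its position
    for target in second:
        for detection in first:
            if detection["type_name"] == target["type_name"]:
                third.append({"type_name": detection["type_name"],
                              "position": detection["bbox"]})
    return third

print(recognize("vehicle.jpg"))
```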
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the vehicle component identification method in the above embodiments, the embodiments of the present application may be implemented by providing a storage medium. The storage medium having stored thereon a computer program; the computer program, when executed by a processor, implements any of the vehicle component identification methods of the above embodiments.
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A vehicle component identification method, characterized by comprising:
acquiring a vehicle image of a vehicle to be identified;
processing the vehicle image by using a vehicle component position prediction model to obtain a first identification result, wherein the first identification result comprises the type name and the position information of the vehicle component identified by the vehicle component position prediction model;
processing the vehicle image by using a vehicle component type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is based on multi-level type joint training of the vehicle component;
and acquiring the type name of the target vehicle component from the second identification result, acquiring the position information of the target vehicle component from the first identification result, and taking the type name and the position information of the target vehicle component as a third identification result.
2. The vehicle component identification method according to claim 1, characterized by further comprising:
acquiring a training sample set of the vehicle component type prediction model, wherein the training sample set comprises a plurality of training samples, each training sample mainly comprises a training image and label information of the training image, and the label information comprises a type name of a vehicle component contained in the training image and a superior category name to which the type name belongs;
and training the vehicle component type prediction model in a supervised learning mode by using the training sample set, wherein a loss function used for training the vehicle component type prediction model is formed by combining a first loss function and a second loss function, the first loss function is used for representing the loss classified based on the type name, and the second loss function is used for representing the loss classified based on the superior category name.
3. The vehicle component identification method according to claim 1, characterized in that the number of type names of target vehicle components acquired from the second identification result is 1, 2, or 3.
4. The vehicle component identification method according to claim 1, characterized in that the second identification result further includes: the confidence of the vehicle components recognized by the vehicle component type prediction model;
the obtaining of the type name of the target vehicle component from the second recognition result comprises: determining, from the second recognition result, vehicle components whose confidence is greater than a preset threshold, or the TopN vehicle components by confidence, as the target vehicle components, and acquiring the type names of the target vehicle components, wherein the TopN vehicle components refer to the first N vehicle components sorted in descending order of confidence in the second recognition result, and N is an integer greater than or equal to 1.
5. The vehicle component identification method according to claim 4, characterized in that, before obtaining the type name of the target vehicle component from the second identification result, the method further comprises:
and deleting the type name of the vehicle component with the confidence coefficient smaller than the preset threshold value from the second recognition result.
6. The vehicle component identification method according to claim 1, characterized in that after acquiring a type name of a target vehicle component from the second identification result, acquiring location information of the target vehicle component from the first identification result, and regarding the type name and the location information of the target vehicle component as a third identification result, the method further comprises:
deleting the type name of the target vehicle component from the second recognition result, and executing the following steps again: and acquiring the type name of the target vehicle component from the second identification result, acquiring the position information of the target vehicle component from the first identification result, and taking the type name and the position information of the target vehicle component as a third identification result.
7. The vehicle component identification method according to claim 1, characterized in that after acquiring a type name of a target vehicle component from the second identification result, acquiring location information of the target vehicle component from the first identification result, and taking the type name and the location information of the target vehicle component as a final identification result, the method further comprises:
and marking the third recognition result on the vehicle image to obtain a marked image, and outputting the marked image.
8. A vehicle component recognition apparatus characterized by comprising:
the acquisition module is used for acquiring a vehicle image of the vehicle to be identified;
the first processing module is used for processing the vehicle image by using a vehicle component position prediction model to obtain a first identification result, wherein the first identification result comprises the type name and the position information of the vehicle component identified by the vehicle component position prediction model;
the second processing module is used for processing the vehicle image by using a vehicle component type prediction model to obtain a second recognition result, wherein the second recognition result comprises the type name of the vehicle component recognized by the vehicle component type prediction model, and the vehicle component type prediction model is jointly trained on the multi-level types of the vehicle component;
and the third processing module is used for acquiring the type name of the target vehicle component from the second identification result, acquiring the position information of the target vehicle component from the first identification result, and taking the type name and the position information of the target vehicle component as a third identification result.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to run the computer program to perform the vehicle component identification method of any one of claims 1 to 7.
10. A storage medium, in which a computer program is stored, wherein the computer program is arranged to carry out the vehicle component identification method according to any one of claims 1 to 7 when executed.
CN202011227250.0A 2020-11-06 2020-11-06 Vehicle part identification method, device, electronic device and storage medium Active CN112329772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011227250.0A CN112329772B (en) 2020-11-06 2020-11-06 Vehicle part identification method, device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011227250.0A CN112329772B (en) 2020-11-06 2020-11-06 Vehicle part identification method, device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112329772A true CN112329772A (en) 2021-02-05
CN112329772B CN112329772B (en) 2024-03-05

Family

ID=74316249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011227250.0A Active CN112329772B (en) 2020-11-06 2020-11-06 Vehicle part identification method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112329772B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113570001A (en) * 2021-09-22 2021-10-29 深圳市信润富联数字科技有限公司 Classification identification positioning method, device, equipment and computer readable storage medium
CN114155417A (en) * 2021-12-13 2022-03-08 中国科学院空间应用工程与技术中心 Image target identification method and device, electronic equipment and computer storage medium
CN114627443A (en) * 2022-03-14 2022-06-14 小米汽车科技有限公司 Target detection method and device, storage medium, electronic equipment and vehicle

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180182126A1 (en) * 2016-12-28 2018-06-28 Nuctech Company Limited Vehicle inspection system, and method and system for identifying part of vehicle
WO2018157862A1 (en) * 2017-03-02 2018-09-07 腾讯科技(深圳)有限公司 Vehicle type recognition method and device, storage medium and electronic device
CN110147707A (en) * 2018-10-25 2019-08-20 初速度(苏州)科技有限公司 A kind of high-precision vehicle identification method and system
US20200089990A1 (en) * 2018-09-18 2020-03-19 Alibaba Group Holding Limited Method and apparatus for vehicle damage identification
CN110991506A (en) * 2019-11-22 2020-04-10 高新兴科技集团股份有限公司 Vehicle brand identification method, device, equipment and storage medium
JP2020517015A (en) * 2017-04-11 2020-06-11 アリババ・グループ・ホールディング・リミテッドAlibaba Group Holding Limited Picture-based vehicle damage assessment method and apparatus, and electronic device
CN111382808A (en) * 2020-05-29 2020-07-07 浙江大华技术股份有限公司 Vehicle detection processing method and device
CN111666898A (en) * 2020-06-09 2020-09-15 北京字节跳动网络技术有限公司 Method and device for identifying class to which vehicle belongs
CN111680556A (en) * 2020-04-29 2020-09-18 平安国际智慧城市科技股份有限公司 Method, device and equipment for identifying vehicle type at traffic gate and storage medium
CN111881741A (en) * 2020-06-22 2020-11-03 浙江大华技术股份有限公司 License plate recognition method and device, computer equipment and computer-readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113570001A (en) * 2021-09-22 2021-10-29 Shenzhen Xinrun Fulian Digital Technology Co., Ltd. Classification identification positioning method, device, equipment and computer readable storage medium
CN114155417A (en) * 2021-12-13 2022-03-08 Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences Image target identification method and device, electronic equipment and computer storage medium
CN114627443A (en) * 2022-03-14 2022-06-14 Xiaomi Automobile Technology Co., Ltd. Target detection method and device, storage medium, electronic equipment and vehicle

Also Published As

Publication number Publication date
CN112329772B (en) 2024-03-05

Similar Documents

Publication Publication Date Title
CN109657716B (en) Vehicle appearance damage identification method based on deep learning
CN112329772B (en) Vehicle part identification method, device, electronic device and storage medium
US11393191B2 (en) Method and system for obtaining vehicle target views from a video stream
CN106845412B (en) Obstacle identification method and device, computer equipment and readable medium
CN106845416B (en) Obstacle identification method and device, computer equipment and readable medium
CN106709475B (en) Obstacle recognition method and device, computer equipment and readable storage medium
CN109034086B (en) Vehicle weight identification method, device and system
JP2019185347A (en) Object recognition device and object recognition method
CN113033604A (en) Vehicle detection method, system and storage medium based on SF-YOLOv4 network model
CN112906823B (en) Target object recognition model training method, recognition method and recognition device
CN108960074B (en) Small-size pedestrian target detection method based on deep learning
CN110889421A (en) Target detection method and device
CN112738470A (en) Method for detecting parking in expressway tunnel
CN111881741A (en) License plate recognition method and device, computer equipment and computer-readable storage medium
CN112861567B (en) Vehicle type classification method and device
CN110097108B (en) Method, device, equipment and storage medium for identifying non-motor vehicle
JP5293321B2 (en) Object identification device and program
CN114140025A (en) Multi-modal data-oriented vehicle insurance fraud behavior prediction system, method and device
CN110532904B (en) Vehicle identification method
CN110727762B (en) Method, device, storage medium and electronic equipment for determining similar texts
CN113379169B (en) Information processing method, device, equipment and medium
CN115984786A (en) Vehicle damage detection method and device, terminal and storage medium
CN111368784B (en) Target identification method, device, computer equipment and storage medium
CN114419584A Improved traffic sign recognition and positioning method using YOLOv4 with non-maximum suppression
CN113361413A (en) Mileage display area detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant