CN111209957B - Vehicle part identification method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111209957B
CN111209957B (application CN202010005186.5A)
Authority
CN
China
Prior art keywords
vehicle
component
image
name
vehicle component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010005186.5A
Other languages
Chinese (zh)
Other versions
CN111209957A (en)
Inventor
丁晶晶 (Ding Jingjing)
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010005186.5A priority Critical patent/CN111209957B/en
Priority to PCT/CN2020/093350 priority patent/WO2021135065A1/en
Publication of CN111209957A publication Critical patent/CN111209957A/en
Application granted granted Critical
Publication of CN111209957B publication Critical patent/CN111209957B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a vehicle part identification method, apparatus, computer device and storage medium that improve the accuracy of vehicle part identification. The method comprises the following steps: acquiring a to-be-processed vehicle image; inputting the to-be-processed vehicle image into a pre-trained neural network to obtain a target vehicle image, wherein the target vehicle image comprises the vehicle part names of the recognized vehicle parts; matching the vehicle part names on the target vehicle image with the vehicle part names in a preset vehicle part mapping table to determine the vehicle parts that the neural network cannot identify; determining the coordinate area of each unidentifiable vehicle part on the target vehicle image; and marking the vehicle part name corresponding to each unidentifiable vehicle part on the target vehicle image according to the coordinate area.

Description

Vehicle part identification method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image recognition, and in particular, to a vehicle component recognition method, apparatus, computer device, and storage medium.
Background
With the development of image recognition technology, its application in the field of vehicle recognition has also advanced rapidly. Currently, vehicle identification relies mainly on a neural network: a vehicle picture is input into a pre-trained neural network, which identifies the vehicle parts (such as the fenders, doors, bumpers, hood, etc.). However, identifying vehicle parts through a neural network alone has a serious drawback: if the network cannot recognize the part at some location on the vehicle body, that part is treated as missing, so the accuracy of vehicle part identification is not high.
Disclosure of Invention
The embodiments of the invention provide a vehicle part identification method, apparatus, computer device and storage medium, to solve the problem of low accuracy in vehicle part identification.
A vehicle component identification method, comprising:
acquiring a to-be-processed vehicle image;
inputting the to-be-processed vehicle image into a pre-trained neural network to obtain a target vehicle image, wherein the target vehicle image comprises the vehicle part names of the recognized vehicle parts;
matching the vehicle part names on the target vehicle image with the vehicle part names in a preset vehicle part mapping table to determine the vehicle parts that the neural network cannot identify;
determining a coordinate area of each unidentifiable vehicle part on the target vehicle image;
and marking the vehicle part name corresponding to each unidentifiable vehicle part on the target vehicle image according to the coordinate area.
A vehicle component identification apparatus comprising:
a first acquisition module for acquiring an image of a vehicle to be processed;
the second acquisition module is used for inputting the to-be-processed vehicle image into a pre-trained neural network to obtain a target vehicle image, wherein the target vehicle image comprises the vehicle part names of the recognized vehicle parts;
the first determining module is used for matching the vehicle part names on the target vehicle image with the vehicle part names in a preset vehicle part mapping table so as to determine the vehicle parts which cannot be identified by the neural network;
a second determining module for determining a coordinate area of the unidentifiable vehicle component on the target vehicle image;
and the marking module is used for marking the vehicle part name corresponding to the unidentifiable vehicle part on the target vehicle image according to the coordinate area.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the vehicle component identification method described above when the computer program is executed.
A computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the vehicle component identification method described above.
In one scheme realized by the vehicle part identification method, apparatus, computer device and storage medium, the problem addressed is that when the neural network cannot identify a part at some location on the vehicle body, that part is treated as missing. The invention combines the output of the neural network with a preset vehicle part mapping table to determine the vehicle parts that the neural network cannot identify, and calculates the coordinate area of each unidentifiable vehicle part on the target vehicle image. The unidentifiable vehicle part is then marked on the target vehicle image according to that coordinate area. This avoids treating a vehicle part as missing merely because the neural network failed to recognize it, enables parts that the neural network cannot recognize to be identified, and improves the accuracy of vehicle part identification.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for identifying a vehicle component in an embodiment of the invention;
FIG. 2 is a flowchart of step S40 according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of a right-hand vehicle component of the vehicle in an embodiment of the invention;
FIG. 4 is a flowchart of another embodiment of step S40 in the present invention;
FIG. 5 is a schematic block diagram of a vehicle component identification apparatus in accordance with an embodiment of the invention;
FIG. 6 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In one embodiment, as shown in fig. 1, a vehicle part identification method is provided, including the following steps:
S10: A to-be-processed vehicle image is acquired.
S20: The to-be-processed vehicle image is input into a preset, pre-trained neural network to obtain a target vehicle image, the target vehicle image comprising the vehicle part name of each recognized vehicle part.
In this embodiment, a to-be-processed vehicle image is first acquired and then input into a preset neural network (trained in advance) to recognize the vehicle parts, yielding a target vehicle image that includes the vehicle part names corresponding to the recognized vehicle parts, such as the front doors, rear doors and fenders.
S30: and matching the vehicle part names on the target vehicle image with the vehicle part names in a preset vehicle part mapping table to determine the vehicle parts which cannot be identified in the neural network.
In the step, the vehicle part names on the target vehicle image and the vehicle part names in a preset vehicle part mapping table are matched, and if each vehicle part name on the target vehicle image has a corresponding vehicle part name in the vehicle part mapping table and is matched with the corresponding vehicle part name, the neural network is considered to be complete in recognition, namely each vehicle part on the processed vehicle image can be recognized; if the vehicle part name on the target vehicle image is not matched with the vehicle part name in the vehicle part mapping table, the neural network is considered to be incapable of identifying a part, and the unrecognizable vehicle part is a part which is not matched with the vehicle part name in the vehicle part mapping table, for example, the vehicle part name on the target vehicle image comprises a lappet, a front door and a rear door, the vehicle part name in the vehicle part mapping table comprises a front lappet, a front door and a rear door, a front bumper, the vehicle part name on the target vehicle image is matched with the vehicle part name in the vehicle part mapping table, and after the fact that the front bumper is not matched with the vehicle part name on the target vehicle image is obtained, the unrecognizable vehicle part is determined to be the front bumper.
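The matching in step S30 can be sketched in code. The following Python fragment is illustrative only; the part names and table contents are made up and not taken from the patent:

```python
# Hypothetical sketch of step S30: compare the part names recognized by
# the neural network against a preset mapping table; any table entry
# with no match on the target vehicle image is an unidentifiable part.

def find_unrecognized_parts(recognized_names, mapping_table):
    """Return the mapping-table part names absent from the network output."""
    recognized = set(recognized_names)
    return [name for name in mapping_table if name not in recognized]

# Illustrative names, mirroring the example in the text above
recognized = ["fender", "front door", "rear door"]
mapping_table = ["fender", "front door", "rear door", "front bumper"]
missing = find_unrecognized_parts(recognized, mapping_table)
# missing == ["front bumper"]
```

Set membership is used here purely for efficiency; a plain list scan would implement the same comparison.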
S40: a coordinate area of the unidentifiable vehicle component on the target vehicle image is determined.
S50: and marking the vehicle part names corresponding to the unrecognizable vehicle parts on the target vehicle image according to the coordinate areas.
After determining that the unidentifiable vehicle component is a front bumper, a coordinate area of the front bumper on the target vehicle image is calculated. And finally, marking the vehicle part name corresponding to the unrecognizable vehicle part on the target vehicle image according to the coordinate area, namely marking the front bumper on the coordinate area of the target image.
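The marking in steps S40 and S50 can likewise be sketched. The annotation format below is an assumption for illustration, since the embodiment does not prescribe a data structure:

```python
# Hypothetical marking step: record the unidentifiable part's name
# against its computed coordinate area on the target vehicle image.

def mark_part(annotations, part_name, coord_area):
    """coord_area: (x1, y1, x2, y2) bounding box on the target image."""
    annotations.append({"name": part_name, "area": coord_area})
    return annotations

# Illustrative coordinates for the front-bumper example above
annotations = mark_part([], "front bumper", (40, 200, 360, 260))
# annotations == [{"name": "front bumper", "area": (40, 200, 360, 260)}]
```

In practice the recorded name would be rendered onto the image at the stored box, e.g. with an image-drawing library.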
In this embodiment, for the problem that the neural network treats a part at some location on the vehicle body as missing when it cannot identify it, the scheme combines the output of the neural network with a preset vehicle part mapping table to determine the vehicle parts that the neural network cannot identify, and calculates the coordinate area of each unidentifiable vehicle part on the target vehicle image. The unidentifiable vehicle part is then marked on the target vehicle image according to that coordinate area, so a vehicle part is no longer treated as missing merely because the neural network failed to recognize it; parts that the neural network cannot recognize can still be identified, and the accuracy of vehicle part identification is improved.
In an embodiment, the arrangement order of the vehicle part names in the vehicle part mapping table corresponds to the arrangement order of the parts on the vehicle, and the unidentifiable vehicle part may be a non-end part of the mapping table, that is, any vehicle part other than those corresponding to the first and last vehicle part names in the vehicle part mapping table.
As shown in fig. 2, in step S40, determining the coordinate area of the unidentifiable vehicle part on the target vehicle image specifically includes the following steps:
S41: If the unidentifiable vehicle part is a non-end part of the mapping table, the first reference part name and the second reference part name closest to the unidentifiable vehicle part in the vehicle part mapping table are determined, where the non-end parts of the mapping table are all vehicle parts other than those corresponding to the first and last vehicle part names in the mapping table.
S42: The vertex coordinates and bottom-point coordinates of the vehicle part corresponding to the first reference part name, and those of the vehicle part corresponding to the second reference part name, are acquired from the target vehicle image; in each case these are the vertex and bottom-point coordinates on the side closest to the unidentifiable vehicle part.
S43: The vertex and bottom-point coordinates of the first reference part name, together with those of the second reference part name, are taken as the coordinate area of the unidentifiable vehicle part on the target vehicle image.
For ease of understanding, the vehicle part mapping table in this embodiment is explained first. It includes two parts, a lateral direction mapping table and a longitudinal direction mapping table, and the mapping table in each direction is divided into several mapping sub-tables with different viewing angles, as follows:
Lateral direction mapping table:
Left viewing angle: a left front fender, a left front door, and a left rear fender;
Right viewing angle: a right front fender, a right front door, a right rear door, and a right rear fender.
Longitudinal direction mapping table:
Left viewing angle: a left-front-door left skirt and a left-rear-door left skirt;
Right viewing angle: a right-front-door right skirt and a right-rear-door right skirt.
Front viewing angle: a front hood and a front bumper;
Rear viewing angle: a rear hood and a rear bumper.
If, after matching, the left front door or the right rear door is found to be missing from the target vehicle image, the unidentifiable vehicle part is considered a non-end part of the mapping table.
If the unidentifiable vehicle part is a non-end part of the mapping table, the first reference part name and the second reference part name closest to it in the vehicle part mapping table are determined, and the vertex and bottom-point coordinates of the corresponding parts in the target vehicle image are acquired. For example, as shown in fig. 3, if the unidentifiable vehicle part is the right front door, the closest first and second reference part names, namely the right front fender and the right rear door, are determined first. Then the vertex and bottom-point coordinates of the right front fender and of the right rear door, in each case the coordinates on the side closest to the unidentifiable vehicle part, are acquired from the target vehicle image and taken as the coordinate area of the right front door on the target vehicle image.
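The construction in steps S41 to S43 can be sketched as follows; all coordinate values are invented for illustration:

```python
# Hypothetical sketch of steps S41-S43: a missing part that lies between
# two recognized neighbours is bounded by the neighbours' vertex and
# bottom-point coordinates on the sides facing the missing part.

def interior_part_area(first_ref_edge, second_ref_edge):
    """Each argument is (vertex_xy, bottom_xy) of a reference part's
    edge nearest the unidentifiable part; returns the four corners."""
    (v1, b1), (v2, b2) = first_ref_edge, second_ref_edge
    return {"vertex_1": v1, "bottom_1": b1, "vertex_2": v2, "bottom_2": b2}

# e.g. the right front fender's rear edge and the right rear door's
# front edge, bounding the unrecognized right front door
area = interior_part_area(((120, 40), (120, 180)), ((260, 40), (260, 180)))
# the inferred right front door spans x = 120..260, y = 40..180
```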
In an embodiment, the vehicle part mapping table further includes the aspect ratio of the vehicle part corresponding to each vehicle part name. As shown in fig. 4, in step S40, determining the coordinate area of the unidentifiable vehicle part on the target vehicle image further includes the following steps:
S44: If the unidentifiable vehicle part is an end part of the mapping table, the third reference part name closest to the unidentifiable vehicle part in the vehicle part mapping table is determined, where the end parts of the mapping table are the vehicle parts corresponding to the first and last vehicle part names in the mapping table.
S45: The vertex coordinates and bottom-point coordinates of the vehicle part corresponding to the third reference part name on the target vehicle image are acquired.
S46: The vertex and bottom-point coordinates corresponding to the third reference part name are taken as the first vertex coordinates and first bottom-point coordinates of the unidentifiable vehicle part.
S47: The difference between the ordinates of the vertex and bottom-point coordinates corresponding to the third reference part name is taken as the height of the unidentifiable vehicle part, and the width of the unidentifiable vehicle part is calculated from this height and the aspect ratio obtained from the vehicle part mapping table.
S48: The second vertex coordinates and second bottom-point coordinates of the unidentifiable vehicle part are calculated from its height and width.
S49: The first vertex coordinates, first bottom-point coordinates, second vertex coordinates and second bottom-point coordinates are taken as the coordinate area of the unidentifiable vehicle part on the target vehicle image.
As shown in the vehicle part mapping table above, if the left front fender or the left rear fender is missing from the target vehicle image, the unidentifiable vehicle part is considered an end part of the mapping table.
For example, if the unidentifiable vehicle part is an end part of the mapping table, say the left rear fender, the vertex and bottom-point coordinates of the third reference part closest to it (the left front door) are acquired and taken as the first vertex coordinates and first bottom-point coordinates of the left rear fender. The difference between the ordinates of the third reference part's vertex and bottom-point coordinates gives the height of the left rear fender, and combining this height with the left rear fender's aspect ratio gives its width. Once the width and height are known, the second vertex coordinates and second bottom-point coordinates of the left rear fender, that is, the vertex and bottom-point coordinates on its other side, can be calculated, and the area enclosed by the first vertex, first bottom-point, second vertex and second bottom-point coordinates is taken as the coordinate area of the left rear fender on the target vehicle image.
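Steps S44 to S49 can be sketched similarly; the aspect ratio value and the extension direction below are assumptions for illustration, not values from the patent:

```python
# Hypothetical sketch of steps S44-S49: an end-of-table part has only
# one recognized neighbour, so its height is copied from that neighbour
# and its width derived from the stored width-to-height ratio.

def end_part_area(ref_vertex, ref_bottom, width_to_height, extend_right=True):
    """ref_vertex/ref_bottom: the third reference part's coordinates on
    the side facing the missing part. Returns the four corner points."""
    height = ref_bottom[1] - ref_vertex[1]
    width = height * width_to_height
    dx = width if extend_right else -width
    second_vertex = (ref_vertex[0] + dx, ref_vertex[1])
    second_bottom = (ref_bottom[0] + dx, ref_bottom[1])
    return ref_vertex, ref_bottom, second_vertex, second_bottom

# left front door's rear edge at x = 300; assumed ratio 0.5 (width:height);
# the fender is assumed to extend leftwards in image coordinates
corners = end_part_area((300, 50), (300, 190), 0.5, extend_right=False)
# height 140, width 70 -> the inferred fender extends to x = 230
```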
In an embodiment, the vehicle part mapping table further includes a front label or a rear label corresponding to each vehicle part, the front label and the rear label marking that the vehicle part is located at the front or the rear of the vehicle, respectively. After step S20, that is, after the target vehicle image is obtained, the vehicle part identification method further includes the following steps:
S61: The center-point coordinates of each vehicle part on the target vehicle image are determined.
S62: The average value m of the abscissas of the center-point coordinates corresponding to the front labels and the average value n of the abscissas of the center-point coordinates corresponding to the rear labels are calculated.
S63: The magnitudes of m and n are compared; if m is greater than n, step S64 is executed; if m is smaller than n, step S65 is executed.
S64: It is determined that the vehicle part is located to the right of the vehicle's driving position;
S65: It is determined that the vehicle part is located to the left of the vehicle's driving position.
With respect to steps S61 to S65, the front label or rear label marks that a vehicle part is located at the front or the rear of the vehicle, respectively, where front and rear are demarcated by the end of the front door. It should be noted that, generally speaking, a photographer can capture only the left side or the right side of a vehicle in one shot, not both sides at once. Therefore, with the direction of the coordinate axes fixed, comparing the magnitudes of m and n determines on which side the vehicle parts are located, which resolves the left or right question for the unidentifiable vehicle parts.
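The comparison in steps S61 to S65 can be sketched as follows, with made-up center-point coordinates:

```python
# Hypothetical sketch of steps S61-S65: average the x-coordinates of the
# centre points of "front"-labelled and "rear"-labelled parts, then
# compare the averages to decide which side of the car was photographed.

def photographed_side(front_centers, rear_centers):
    """Return which side of the driving position the parts lie on."""
    m = sum(x for x, _ in front_centers) / len(front_centers)
    n = sum(x for x, _ in rear_centers) / len(rear_centers)
    return "right" if m > n else "left"

front = [(520, 100), (480, 140)]  # e.g. front fender, front door
rear = [(200, 100), (160, 140)]   # e.g. rear door, rear fender
side = photographed_side(front, rear)
# m = 500 > n = 180, so side == "right"
```

Which numeric comparison maps to "left" versus "right" depends on the fixed axis orientation assumed here.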
In another embodiment, whether the vehicle part is located to the left or the right of the vehicle's driving position may also be determined as follows:
S61': The center-point coordinates of each vehicle part on the target vehicle image are determined.
S62': The sum M of the abscissas of the center-point coordinates corresponding to the front labels and the sum N of the abscissas of the center-point coordinates corresponding to the rear labels are calculated.
S63': The magnitudes of M and N are compared; if M is larger than N, step S64' is executed; if M is smaller than N, step S65' is executed.
S64': It is determined that the vehicle part is located to the right of the vehicle's driving position.
S65': It is determined that the vehicle part is located to the left of the vehicle's driving position.
For steps S61' to S65', it can be understood that, besides the averages of the abscissas of the center-point coordinates corresponding to the front and rear labels, the sums of those abscissas can equally reflect whether a vehicle part is on the left or the right. Steps S61' to S65' therefore use the sums of the abscissas of the center-point coordinates corresponding to the front and rear labels to determine on which side of the vehicle a part is located, which improves the feasibility of the scheme.
In an embodiment, the vehicle part mapping table includes a plurality of view-angle mapping sub-tables. After step S20, that is, after the target vehicle image is obtained, the vehicle part identification method further includes the following steps:
S71: If the neural network recognizes the same vehicle part as a plurality of different target vehicle part names, the view-angle mapping sub-table containing each target vehicle part name is determined;
S72: The matched view-angle mapping sub-table is determined according to how the vehicle part names output by the neural network match the vehicle part names in each view-angle mapping sub-table;
S73: The part name of the same vehicle part is determined to be the target vehicle part name in the matched view-angle mapping sub-table.
For example, if a certain vehicle part is recognized as both a front bumper and a rear bumper, it is determined whether a front hood exists on the target vehicle image; if a front hood exists, the vehicle part is shown to be a front bumper.
Specifically, as described above, the vehicle part mapping table includes two parts, a lateral direction mapping table and a longitudinal direction mapping table, and the mapping table in each direction is divided into several view-angle mapping sub-tables according to the viewing direction, for example:
Lateral direction mapping table:
Left viewing angle: a left front fender, a left front door, and a left rear fender;
Right viewing angle: a right front fender, a right front door, a right rear door, and a right rear fender.
longitudinal mapping table:
left viewing angle: left front door left skirt and left rear door left skirt;
right viewing angle: a right front door right skirt and a right rear door right skirt;
front viewing angle: front hood, front bumper;
rear view angle: rear hood, rear bumper.
For example, if the same vehicle part is recognized by the neural network as both a front bumper and a rear bumper (that is, both the front bumper and the rear bumper are target vehicle part names), the front-view sub-table of the longitudinal direction mapping table containing the front bumper and the rear-view sub-table containing the rear bumper are determined. The vehicle part names output by the neural network are then matched against the vehicle part names in each view-angle sub-table, that is, it is determined whether a front hood or a rear hood exists among the names output by the network. If a front hood exists, the front-view sub-table of the longitudinal direction mapping table is the matched sub-table, and the front bumper in that sub-table is the part name of the vehicle part in question; if a rear hood exists, the rear-view sub-table of the longitudinal direction mapping table is the matched sub-table, and the rear bumper in that sub-table is the part name of the vehicle part in question.
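The disambiguation in steps S71 to S73 can be sketched as follows. The sub-tables mirror the longitudinal front-view and rear-view sub-tables quoted above, and the overlap count is an illustrative reading of the matching step:

```python
# Hypothetical sketch of steps S71-S73: for a part given two conflicting
# names, choose the candidate whose view-angle sub-table shares the most
# names with the rest of the network's output.

VIEW_SUB_TABLES = {
    "front": ["front hood", "front bumper"],
    "rear": ["rear hood", "rear bumper"],
}

def disambiguate(candidates, recognized_names):
    """Pick the candidate name whose sub-table best matches the output."""
    others = set(recognized_names) - set(candidates)
    best_name, best_overlap = candidates[0], -1
    for name in candidates:
        for table in VIEW_SUB_TABLES.values():
            if name in table:
                overlap = len(others & set(table))
                if overlap > best_overlap:
                    best_name, best_overlap = name, overlap
    return best_name

recognized = ["front hood", "front bumper", "rear bumper"]
choice = disambiguate(["front bumper", "rear bumper"], recognized)
# the front hood is also present, so the front-view table wins:
# choice == "front bumper"
```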
It can be seen that this embodiment solves the problem that the neural network often misrecognizes similarly shaped parts, for example a front bumper being recognized as both a front bumper and a rear bumper.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In one embodiment, a vehicle component recognition apparatus is provided, corresponding one-to-one to the vehicle component recognition method in the above embodiments. As shown in fig. 5, the vehicle component recognition apparatus 10 includes a first acquisition module 101, a second acquisition module 102, a first determination module 103, a second determination module 104, and a marking module 105. The functional modules are described in detail as follows:
a first acquisition module 101 for acquiring an image of a vehicle to be processed;
a second obtaining module 102, configured to input the to-be-processed vehicle image into a pre-trained neural network to obtain a target vehicle image, where the target vehicle image includes the vehicle part names of the recognized vehicle parts;
a first determining module 103, configured to match a vehicle part name on the target vehicle image with a vehicle part name in a preset vehicle part mapping table, so as to determine a vehicle part that cannot be identified by the neural network;
a second determining module 104 for determining a coordinate area of the unidentifiable vehicle component on the target vehicle image;
and a marking module 105, configured to mark a vehicle part name corresponding to the unrecognizable vehicle part on the target vehicle image according to the coordinate area.
In an embodiment, the arrangement order of the names of the respective vehicle components in the vehicle component mapping table corresponds to the arrangement order of the respective components on the vehicle, and the second determining module 104 is specifically configured to:
if the unrecognizable vehicle part is not a part at either end of the mapping table, determining the first reference part name and the second reference part name that are closest to the unrecognizable vehicle part in the vehicle part mapping table, where the parts not at the ends of the mapping table are the vehicle parts other than those corresponding to the first and the last vehicle part name in the vehicle part mapping table;
acquiring, in the target vehicle image, the vertex coordinates and bottom point coordinates of the vehicle part corresponding to the first reference part name and of the vehicle part corresponding to the second reference part name, where these are the vertex and bottom point coordinates on the side closest to the unidentifiable vehicle part;
and taking the vertex coordinates and bottom point coordinates of the first reference part name and those of the second reference part name as the coordinate area of the unidentifiable vehicle part on the target vehicle image.
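Assuming each detected part carries an axis-aligned bounding box, the rule above can be sketched as follows. The (x_left, y_top, x_right, y_bottom) box format and the left-to-right layout are illustrative choices, not specified by the patent:

```python
def area_between_references(first_ref_box, second_ref_box):
    """Bound the unidentifiable part by the nearest-side vertex (top) and
    bottom point coordinates of its two reference neighbours.

    first_ref_box lies to the left of the gap, second_ref_box to the right;
    each box is (x_left, y_top, x_right, y_bottom)."""
    _, top1, right1, bottom1 = first_ref_box   # nearest side: its right edge
    left2, top2, _, bottom2 = second_ref_box   # nearest side: its left edge
    return {
        "left_vertex": (right1, top1),
        "left_bottom": (right1, bottom1),
        "right_vertex": (left2, top2),
        "right_bottom": (left2, bottom2),
    }
```

The four returned points span the gap between the two reference parts, which is where the unidentified part must lie given the mapping table's ordering.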
For the specific definition of the vehicle component recognition device, reference may be made to the definition of the vehicle component recognition method above, which is not repeated here. The modules in the above vehicle component recognition apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded, in hardware form, in a processor of the computer device or be independent of it, or may be stored, in software form, in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external server via a network connection. The computer program, when executed by the processor, implements a vehicle component identification method.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a vehicle image to be processed;
inputting the vehicle image to be processed into a pre-trained neural network to obtain a target vehicle image, wherein the target vehicle image comprises the vehicle part names of the recognized vehicle parts;
matching the vehicle part name on the target vehicle image with the vehicle part name in a preset vehicle part mapping table to determine the vehicle part which cannot be identified by the neural network;
determining a coordinate area of the unidentifiable vehicle component on the target vehicle image;
and marking the vehicle part names corresponding to the unrecognizable vehicle parts on the target vehicle image according to the coordinate areas.
In particular, for the steps implemented when the processor executes the computer program, reference may be made to the description of the foregoing method embodiments.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, implements the following steps:
acquiring a vehicle image to be processed;
inputting the vehicle image to be processed into a pre-trained neural network to obtain a target vehicle image, wherein the target vehicle image comprises the vehicle part names of the recognized vehicle parts;
matching the vehicle part name on the target vehicle image with the vehicle part name in a preset vehicle part mapping table to determine the vehicle part which cannot be identified by the neural network;
determining a coordinate area of the unidentifiable vehicle component on the target vehicle image;
and marking the vehicle part names corresponding to the unrecognizable vehicle parts on the target vehicle image according to the coordinate areas.
For the steps carried out when the computer program is executed by a processor, reference is likewise made to the description of the foregoing method embodiments.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-volatile computer readable storage medium which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division into functional units and modules is illustrated; in practical applications, the above functions may be distributed across different functional units and modules as needed, i.e. the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included in the scope of the present invention.

Claims (7)

1. A vehicle component identification method, characterized by comprising:
acquiring a vehicle image to be processed;
inputting the vehicle image to be processed into a pre-trained neural network to obtain a target vehicle image, wherein the target vehicle image comprises the vehicle part names of the recognized vehicle parts;
matching the vehicle part name on the target vehicle image with the vehicle part name in a preset vehicle part mapping table to determine the vehicle part which cannot be identified by the neural network;
determining a coordinate area of the unidentifiable vehicle component on the target vehicle image;
marking the vehicle part names corresponding to the unrecognizable vehicle parts on the target vehicle image according to the coordinate areas;
the vehicle component mapping table comprises a plurality of view angle mapping sub-tables which are divided according to different view angle directions, and after the target vehicle image is obtained, the vehicle component identification method further comprises the following steps:
if the neural network identifies the same vehicle component as a plurality of different target vehicle component names, determining the view angle mapping sub-table in which each target vehicle component name is located;
determining a matched view angle mapping sub-table according to how well the vehicle part names output by the neural network match the vehicle part names in each view angle mapping sub-table;
determining that the component name of the same vehicle component is the target vehicle component name in the matched view angle mapping sub-table;
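Outside the claim language, the sub-table disambiguation above can be sketched as follows; the view names and part lists are hypothetical stand-ins for the patent's view angle mapping sub-tables:

```python
# Hypothetical view angle mapping sub-tables, keyed by view direction.
VIEW_SUBTABLES = {
    "left side": ["left front fender", "left front door", "left rear door"],
    "front": ["front bumper", "hood", "left front fender"],
}

def resolve_ambiguous_names(candidate_names, all_output_names,
                            subtables=VIEW_SUBTABLES):
    """The same component was given several candidate names; keep the one from
    the sub-table that best matches the network's full set of output names."""
    output = set(all_output_names)
    # Consider only views whose sub-table contains at least one candidate name.
    views = [v for v, names in subtables.items()
             if any(c in names for c in candidate_names)]
    # The matched sub-table is the one with the most names in the full output.
    best_view = max(views, key=lambda v: len(output & set(subtables[v])))
    return next(c for c in candidate_names if c in subtables[best_view])
```

Here the "matching condition" of the claim is taken to be the overlap count between the network's output and each sub-table, which is one plausible reading of the text.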
the arrangement order of the names of the vehicle parts in the vehicle part mapping table corresponds to the arrangement order of the parts on the vehicle, and the determining the coordinate area of the unidentifiable vehicle part on the target vehicle image includes:
if the unrecognizable vehicle part is not a part at either end of the mapping table, determining the first reference part name and the second reference part name that are closest to the unrecognizable vehicle part in the vehicle part mapping table, where the parts not at the ends of the mapping table are the vehicle parts other than those corresponding to the first and the last vehicle part name in the vehicle part mapping table;
acquiring, in the target vehicle image, the vertex coordinates and bottom point coordinates of the vehicle part corresponding to the first reference part name and of the vehicle part corresponding to the second reference part name, where these are the vertex and bottom point coordinates on the side closest to the unidentifiable vehicle part;
and taking the vertex coordinates and bottom point coordinates of the first reference part name and those of the second reference part name as the coordinate area of the unidentifiable vehicle part on the target vehicle image.
2. The vehicle component identification method of claim 1, wherein the vehicle component mapping table further includes aspect ratios of the vehicle components corresponding to the respective vehicle component names, and the determining the coordinate area of the unidentifiable vehicle component on the target vehicle image further includes the steps of:
if the unrecognizable vehicle component is a component at two ends of the mapping table, determining a third reference component name closest to the unrecognizable vehicle component in the vehicle component mapping table, wherein the components at two ends of the mapping table are the vehicle component corresponding to the first vehicle component name and the vehicle component corresponding to the last vehicle component name in the vehicle component mapping table;
acquiring the vertex coordinates and the bottom coordinates of the vehicle component corresponding to the third reference component name on the target vehicle image;
taking the vertex coordinates and the bottom point coordinates corresponding to the third reference part name as the first vertex coordinates and the first bottom point coordinates of the unidentifiable vehicle part;
taking the difference between the vertical coordinates of the vertex coordinates and the bottom point coordinates corresponding to the third reference part name as the height of the unrecognizable vehicle part;
calculating the width of the unrecognizable vehicle part according to the height of the unrecognizable vehicle part and the aspect ratio obtained from the vehicle part mapping table;
calculating a second vertex coordinate and a second bottom point coordinate of the unrecognizable vehicle part according to the height and the width of the unrecognizable vehicle part;
and taking the first vertex coordinates, the first bottom point coordinates, the second vertex coordinates, and the second bottom point coordinates as the coordinate area of the unidentifiable vehicle component on the target vehicle image.
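For a part at an end of the mapping table there is only one neighbour, so claim 2 extrapolates from it using the stored aspect ratio. A minimal sketch, assuming the unknown part lies to the left of its reference part and that the table stores width/height ratios (both are illustrative assumptions):

```python
def end_part_area(ref_vertex, ref_bottom, aspect_ratio):
    """ref_vertex / ref_bottom: nearest-side vertex and bottom point of the
    third reference part, as (x, y) pairs; aspect_ratio: width / height of
    the unidentifiable part, read from the mapping table (assumed format)."""
    first_vertex, first_bottom = ref_vertex, ref_bottom      # shared edge
    height = abs(ref_bottom[1] - ref_vertex[1])              # claim 2, step 4
    width = height * aspect_ratio                            # claim 2, step 5
    second_vertex = (ref_vertex[0] - width, ref_vertex[1])   # extrapolate left
    second_bottom = (ref_bottom[0] - width, ref_bottom[1])
    return first_vertex, first_bottom, second_vertex, second_bottom
```

A symmetric version (adding the width instead of subtracting it) would cover an end part on the right of its reference.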
3. The vehicle component identification method of any one of claims 1-2, wherein the vehicle component map further includes a front label or a rear label corresponding to each of the vehicle components, the front label and the rear label being used to mark that the vehicle component is located at a front portion and a rear portion of a vehicle, respectively; after the target vehicle image is obtained, the vehicle component recognition method further includes:
determining center point coordinates of each of the vehicle components on the target vehicle image;
calculating an average value m of the abscissa of the center point coordinate corresponding to the front label and an average value n of the abscissa of the center point coordinate corresponding to the rear label;
comparing the magnitudes of m and n;
if m is greater than n, determining that the vehicle component is positioned on the right side of a vehicle driving position;
and if the m is smaller than the n, determining that the vehicle part is positioned at the left side of the vehicle driving position.
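The left/right decision of claim 3 reduces to comparing the mean abscissas of the front-labelled and rear-labelled part centers. A minimal sketch; the label strings and the (label, center_x) input format are assumptions for illustration:

```python
def side_of_vehicle(labelled_centers):
    """labelled_centers: iterable of (label, center_x), label 'front'/'rear'.
    Returns which side of the driving position the photographed parts face."""
    front_xs = [x for label, x in labelled_centers if label == "front"]
    rear_xs = [x for label, x in labelled_centers if label == "rear"]
    m = sum(front_xs) / len(front_xs)  # mean abscissa of front-labelled parts
    n = sum(rear_xs) / len(rear_xs)    # mean abscissa of rear-labelled parts
    return "right" if m > n else "left"
```

Claim 4's variant replaces the two means with plain sums, which gives the same ordering only when both labels cover the same number of parts, so the two claims are genuinely different tests.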
4. The vehicle component identification method of any one of claims 1-2, wherein the vehicle component map further includes a front label or a rear label corresponding to each of the vehicle components, the front label and the rear label being used to mark that the vehicle component is located at a front portion and a rear portion of a vehicle, respectively; after the target vehicle image is obtained, the vehicle component recognition method further includes:
determining center point coordinates of each of the vehicle components on the target vehicle image;
calculating the sum M of the abscissas of the center point coordinates corresponding to the front label and the sum N of the abscissas of the center point coordinates corresponding to the rear label;
comparing the magnitudes of M and N;
if M is larger than N, determining that the vehicle part is positioned on the right side of a vehicle driving position;
and if the M is smaller than the N, determining that the vehicle part is positioned at the left side of the vehicle driving position.
5. A vehicle component recognition apparatus, characterized by comprising:
a first acquisition module for acquiring an image of a vehicle to be processed;
the second acquisition module is used for inputting the vehicle image to be processed into a pre-trained neural network to obtain a target vehicle image, wherein the target vehicle image comprises the vehicle part names of the recognized vehicle parts;
the first determining module is used for matching the vehicle part names on the target vehicle image with the vehicle part names in a preset vehicle part mapping table so as to determine the vehicle parts which cannot be identified by the neural network;
a second determining module for determining a coordinate area of the unidentifiable vehicle component on the target vehicle image;
a marking module, configured to mark a vehicle component name corresponding to the unrecognizable vehicle component on the target vehicle image according to the coordinate area;
the vehicle component recognition device is further configured to:
after the vehicle component mapping table comprises a plurality of view angle mapping sub-tables divided according to different view angle directions and the target vehicle image is obtained, if the neural network identifies the same vehicle component as a plurality of different target vehicle component names, determining the view angle mapping sub-table in which each target vehicle component name is located;
determining a matched view map sub-table according to the matching condition of the vehicle part names output by the neural network and the vehicle part names in each view map sub-table;
determining that the component names of the same vehicle component are the target vehicle component names in the matched view map sub-table;
the arrangement order of the names of the vehicle parts in the vehicle part mapping table corresponds to the arrangement order of the parts on the vehicle, and the second determining module is specifically configured to:
if the unrecognizable vehicle part is not a part at either end of the mapping table, determining the first reference part name and the second reference part name that are closest to the unrecognizable vehicle part in the vehicle part mapping table, where the parts not at the ends of the mapping table are the vehicle parts other than those corresponding to the first and the last vehicle part name in the vehicle part mapping table;
acquiring, in the target vehicle image, the vertex coordinates and bottom point coordinates of the vehicle part corresponding to the first reference part name and of the vehicle part corresponding to the second reference part name, where these are the vertex and bottom point coordinates on the side closest to the unidentifiable vehicle part;
and taking the vertex coordinates and bottom point coordinates of the first reference part name and those of the second reference part name as the coordinate area of the unidentifiable vehicle part on the target vehicle image.
6. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the vehicle component identification method according to any one of claims 1 to 4 when the computer program is executed.
7. A computer-readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the vehicle component identification method according to any one of claims 1 to 4.
CN202010005186.5A 2020-01-03 2020-01-03 Vehicle part identification method, device, computer equipment and storage medium Active CN111209957B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010005186.5A CN111209957B (en) 2020-01-03 2020-01-03 Vehicle part identification method, device, computer equipment and storage medium
PCT/CN2020/093350 WO2021135065A1 (en) 2020-01-03 2020-05-29 Vehicle component identification method and apparatus, and computer device and storage medium


Publications (2)

Publication Number Publication Date
CN111209957A CN111209957A (en) 2020-05-29
CN111209957B true CN111209957B (en) 2023-07-18

Family

ID=70789524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010005186.5A Active CN111209957B (en) 2020-01-03 2020-01-03 Vehicle part identification method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111209957B (en)
WO (1) WO2021135065A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392218A (en) * 2017-04-11 2017-11-24 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device and electronic equipment
CN108090838A (en) * 2017-11-21 2018-05-29 阿里巴巴集团控股有限公司 Identify method, apparatus, server, client and the system of damaged vehicle component
CN109523556A (en) * 2018-09-30 2019-03-26 百度在线网络技术(北京)有限公司 Vehicle part dividing method and device
CN110458301A (en) * 2019-07-11 2019-11-15 深圳壹账通智能科技有限公司 A kind of damage identification method of vehicle part, device, computer equipment and storage medium
CN110570388A (en) * 2018-08-31 2019-12-13 阿里巴巴集团控股有限公司 Method, device and equipment for detecting components of vehicle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358596B (en) * 2017-04-11 2020-09-18 阿里巴巴集团控股有限公司 Vehicle loss assessment method and device based on image, electronic equipment and system
CN110070536B (en) * 2019-04-24 2022-08-30 南京邮电大学 Deep learning-based PCB component detection method
CN110532897B (en) * 2019-08-07 2022-01-04 北京科技大学 Method and device for recognizing image of part


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Analysis of Human Body Orientation in Multi-Camera and Monocular Environments" ("多目及单目环境下的人体朝向分析"); Lu Jianguo et al.; Microcomputer & Its Applications (《微型机与应用》); 30 Jun. 2010; Vol. 29, No. 12, pp. 45-48 *

Also Published As

Publication number Publication date
WO2021135065A1 (en) 2021-07-08
CN111209957A (en) 2020-05-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant