CN111553268A - Vehicle part identification method and device, computer equipment and storage medium - Google Patents
Vehicle part identification method and device, computer equipment and storage medium
- Publication number: CN111553268A (application CN202010344787.9A)
- Authority
- CN
- China
- Prior art keywords
- target vehicle
- key
- vehicle
- picture
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/00—Scenes; Scene-specific elements
- G06N3/045—Combinations of networks (neural networks; architecture, e.g. interconnection topology)
- G06T7/13—Edge detection (image analysis; segmentation)
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI] (image preprocessing)
- H04N23/61—Control of cameras or camera modules based on recognised objects
- G06T2207/20081—Training; Learning (indexing scheme for image analysis or image enhancement)
- G06T2207/20084—Artificial neural networks [ANN] (indexing scheme for image analysis or image enhancement)
- G06V2201/08—Detecting or categorising vehicles (indexing scheme relating to image or video recognition or understanding)
Abstract
The application relates to the field of artificial intelligence, is applied to intelligent transportation, and provides a vehicle component identification method and apparatus based on image detection, a computer device, and a storage medium. The method comprises the following steps: identifying a target vehicle picture to be detected to obtain a recognition result; determining the display area and the vehicle model of the target vehicle according to the recognition result; and determining the shooting angle of the target vehicle picture according to the recognition result and the vehicle model. The standard relative position relation and the standard contour line of each key component are acquired according to the vehicle model, and the display area is corrected according to the standard relative position relation and the shooting angle. An edge approximation comparison between the contour line of the target vehicle in the corrected display area and the standard contour line then determines the actual position of each key component of the target vehicle. With this method, the actual positions of the key components are determined through edge approximation comparison, improving the accuracy of vehicle key-component identification.
Description
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a vehicle component identification method, apparatus, computer device, and storage medium.
Background
With continuing social and economic development, vehicles are widely used in daily life, and damage caused by natural disasters or accidents during a vehicle's use must be assessed so that subsequent insurance claims can proceed.
In a vehicle damage assessment scenario, vehicle exterior components need to be identified. In the traditional approach, a damage picture taken at the scene is received, each exterior component of the vehicle in the picture is identified, and the damage to each component is determined. However, because accident scenes are complex and shooting conditions are limited, it is difficult to accurately determine the positions of the damaged vehicle's components and the extent of damage to them.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a vehicle component identification method, device, computer device, and storage medium capable of improving the accuracy of vehicle exterior component identification in a vehicle damage assessment scenario.
A vehicle component identification method, the method comprising:
acquiring a target vehicle picture to be detected, and identifying the target vehicle picture to obtain identification results of key components of the target vehicle picture;
determining a display area of a target vehicle on the target vehicle picture and a vehicle model of the target vehicle according to the identification result;
determining the shooting angle of the target vehicle picture to be detected according to the identification result and the vehicle model;
acquiring a standard relative position relation and a standard contour line of each key component of the corresponding vehicle according to the vehicle model;
correcting the display area according to the standard relative position relation and the shooting angle of the target vehicle picture;
extracting the corrected contour line of the target vehicle in the display area;
and performing edge approximation comparison on the contour line of the target vehicle and the standard contour line to determine the actual positions of the key components of the target vehicle.
In one embodiment, acquiring a target vehicle picture to be detected, identifying the target vehicle picture, and obtaining an identification result of each key component of the target vehicle picture includes:
acquiring a target vehicle picture to be detected;
acquiring a convolutional neural network model trained on a sample set of vehicle pictures;
inputting the target vehicle picture into the trained convolutional neural network model, and identifying the target vehicle picture;
acquiring the recognition result of each key component of the target vehicle picture;
further comprising:
and uploading the identification result to a blockchain.
In one embodiment, the determining, according to the recognition result, a display area of a target vehicle on the target vehicle picture and a vehicle model of the target vehicle includes:
determining the relative positions of key components of the target vehicle based on the identification result;
determining a display area of the target vehicle on the target vehicle picture according to the relative position of each key component of the target vehicle;
and extracting the target vehicle in the display area, and determining the vehicle model of the target vehicle.
In one embodiment, the determining the shooting angle of the target vehicle picture to be detected according to the identification result and the vehicle model includes:
obtaining key point angle vectors of all key components of a target vehicle according to the recognition results of all key components of the target vehicle, and determining a feature vector matrix corresponding to all key components of the target vehicle;
acquiring a baseline characteristic vector of each key component of the corresponding vehicle according to the vehicle model of the target vehicle;
and comparing the baseline characteristic vector with the characteristic vector matrix, rotating the baseline characteristic vector, and determining the shooting angle of the target vehicle picture to be detected.
In one embodiment, the edge approximation comparing the contour line of the target vehicle with the standard contour line to determine the actual positions of the key components of the target vehicle includes:
carrying out edge approximate comparison on the contour line of the target vehicle and the standard contour line to obtain an edge approximate comparison result;
establishing an association relation between the key components of the target vehicle and the key components of vehicles of the same vehicle model based on the edge approximation comparison result;
and determining the actual positions of all key components of the target vehicle according to the association relationship and the standard relative position relationship.
In one embodiment, the obtaining, according to the recognition result of each key component of the target vehicle, a key point angle vector of each key component of the target vehicle, and determining a feature vector matrix corresponding to each key component of the target vehicle includes:
determining the key point position of each key component according to the identification result of each key component of the target vehicle;
extracting key points on the positions of the key points, and calculating key point angle vectors between any two key points based on a preset arrangement sequence;
determining a relevant angle vector corresponding to the key point according to the key point angle vector;
and obtaining a characteristic vector matrix corresponding to each key component of the target vehicle according to the key point angle vector and the corresponding related angle vector.
In one embodiment, the comparing the baseline feature vector with the feature vector matrix, rotating the baseline feature vector, and determining the shooting angle of the target vehicle picture to be detected includes:
extracting a horizontal feature vector and a vertical feature vector of the feature vector matrix;
comparing the horizontal feature vector and the vertical feature vector with the baseline feature vector to obtain a comparison result;
determining a rotation angle of the baseline feature vector based on the comparison result;
rotating the baseline feature vector according to the rotation angle, and calculating a geometric mean value of the angle vectors of the common key points during the rotation of the baseline feature vector; the common key points are key points shared by the feature vector matrix and the baseline feature vector;
and when the geometric mean value of the angle vectors of the common key points reaches a preset threshold value, obtaining the shooting angle of the target vehicle picture to be detected.
A vehicle component identification apparatus, the apparatus comprising:
the identification result generation module is used for acquiring a target vehicle picture to be detected, identifying the target vehicle picture and obtaining identification results of all key components of the target vehicle picture;
the display area determining module is used for determining a display area of a target vehicle on the target vehicle picture and a vehicle model of the target vehicle according to the identification result;
the shooting angle determining module is used for determining the shooting angle of the target vehicle picture to be detected according to the identification result and the vehicle model;
the first acquisition module is used for acquiring the standard relative position relation and the standard contour line of each key component of the corresponding vehicle according to the vehicle model;
the display area correction module is used for correcting the display area according to the standard relative position relation and the shooting angle of the target vehicle picture;
the target vehicle contour line extraction module is used for extracting the corrected contour line of the target vehicle in the display area;
and the actual position determining module of the key component is used for performing edge approximate comparison on the contour line of the target vehicle and the standard contour line to determine the actual position of each key component of the target vehicle.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a target vehicle picture to be detected, and identifying the target vehicle picture to obtain identification results of key components of the target vehicle picture;
determining a display area of a target vehicle on the target vehicle picture and a vehicle model of the target vehicle according to the identification result;
determining the shooting angle of the target vehicle picture to be detected according to the identification result and the vehicle model;
acquiring a standard relative position relation and a standard contour line of each key component of the corresponding vehicle according to the vehicle model;
correcting the display area according to the standard relative position relation and the shooting angle of the target vehicle picture;
extracting the corrected contour line of the target vehicle in the display area;
and performing edge approximation comparison on the contour line of the target vehicle and the standard contour line to determine the actual positions of the key components of the target vehicle.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a target vehicle picture to be detected, and identifying the target vehicle picture to obtain identification results of key components of the target vehicle picture;
determining a display area of a target vehicle on the target vehicle picture and a vehicle model of the target vehicle according to the identification result;
determining the shooting angle of the target vehicle picture to be detected according to the identification result and the vehicle model;
acquiring a standard relative position relation and a standard contour line of each key component of the corresponding vehicle according to the vehicle model;
correcting the display area according to the standard relative position relation and the shooting angle of the target vehicle picture;
extracting the corrected contour line of the target vehicle in the display area;
and performing edge approximation comparison on the contour line of the target vehicle and the standard contour line to determine the actual positions of the key components of the target vehicle.
According to the vehicle component identification method, apparatus, computer device, and storage medium, the acquired target vehicle picture to be detected is identified to obtain the recognition result of each key component, the recognition result is uploaded to a blockchain, and the display area of the target vehicle on the picture and the vehicle model of the target vehicle are determined according to the recognition result. The shooting angle of the picture is determined according to the recognition result and the vehicle model, and the standard relative position relation and the standard contour line of each key component of the corresponding vehicle are acquired according to the vehicle model. The display area is corrected according to the standard relative position relation and the shooting angle, so as to obtain a display area covering as many key components of the vehicle as possible. By extracting the contour line of the target vehicle in the corrected display area and performing the edge approximation comparison between that contour line and the standard contour line, the association relation between components can be established. Based on the contour lines and that association relation, the actual position of each key component of the target vehicle is further determined, improving the accuracy of vehicle key-component identification.
Drawings
FIG. 1 is a diagram of an application environment of a vehicle component identification method in one embodiment;
FIG. 2 is a schematic flow chart diagram of a vehicle component identification method in one embodiment;
FIG. 3 is a schematic flow chart illustrating the process of obtaining recognition results of key components of a picture of a target vehicle according to one embodiment;
FIG. 4 is a labeled schematic diagram of key components of a first portion of a standard vehicle in one embodiment;
FIG. 5 is a labeled schematic diagram of key components of a second portion of a standard vehicle in accordance with one embodiment;
FIG. 6 is a schematic diagram of a process for determining a display area on a picture of a target vehicle and a vehicle model of the target vehicle in one embodiment;
FIG. 7 is a schematic diagram illustrating a process for determining a camera angle of a picture of a target vehicle according to one embodiment;
FIG. 8 is a schematic illustration of a keypoint location of a key component of a target vehicle in one embodiment;
FIG. 9 is a schematic illustration of keypoint angle vectors of key components of a target vehicle in one embodiment;
FIG. 10 is a schematic illustration of a baseline feature vector of a sample vehicle in one embodiment;
FIG. 11 is a schematic diagram of a vector comparison of a vehicle component identification method according to an embodiment;
FIG. 12 is a block diagram showing the construction of a vehicle component recognition apparatus according to an embodiment;
FIG. 13 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The vehicle component identification method provided by the present application can be applied to the application environment shown in FIG. 1, in which the terminal 102 and the server 104 communicate via a network. The server 104 receives a target vehicle picture to be detected sent by the terminal 102, identifies the target vehicle picture to obtain the recognition result of each key component, determines the display area of the target vehicle on the picture and the vehicle model of the target vehicle according to the recognition result, determines the shooting angle of the picture according to the recognition result and the vehicle model, acquires the standard relative position relation and the standard contour line of each key component of the corresponding vehicle according to the vehicle model, and corrects the display area according to the standard relative position relation and the shooting angle. The actual positions of the key components of the target vehicle are then determined by extracting the contour line of the target vehicle in the corrected display area and performing the edge approximation comparison between that contour line and the standard contour line. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices; the server 104 may be implemented by an independent server or by a server cluster formed of a plurality of servers.
In one embodiment, as shown in fig. 2, a vehicle component identification method is provided, which is described by taking the method as an example applied to the server in fig. 1, and comprises the following steps:
step S202, a target vehicle picture to be detected is obtained, the target vehicle picture is identified, and identification results of all key components of the target vehicle picture are obtained.
Specifically, a target vehicle picture to be detected is obtained, and a convolutional neural network model trained by the sample set vehicle picture is obtained. And then inputting the target vehicle picture into the trained convolutional neural network model, identifying the target vehicle picture, and acquiring the identification result of each key component of the target vehicle picture output by the trained convolutional neural network model.
Further, the convolutional neural network model is trained on sample-set vehicle pictures in which each key component has been labeled, yielding the trained convolutional neural network model. The trained model can then be used to identify the target vehicle picture and obtain the recognition result of each key component of the target vehicle.
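As an illustrative sketch (not the patent's actual model), the per-component recognition results emitted by such a trained detector might be organized as follows; the `Detection` structure, the component label names, and the confidence filtering are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    component: str     # e.g. "left_front_lamp" (illustrative label)
    box: tuple         # bounding box (x1, y1, x2, y2) in pixels
    confidence: float  # model score in [0, 1]

def filter_recognitions(detections, min_conf=0.5):
    """Keep the single highest-confidence detection per key component."""
    best = {}
    for d in detections:
        if d.confidence >= min_conf:
            cur = best.get(d.component)
            if cur is None or d.confidence > cur.confidence:
                best[d.component] = d
    return best

# Two candidate boxes for the same lamp plus one hood detection.
dets = [Detection("left_front_lamp", (40, 120, 110, 160), 0.91),
        Detection("left_front_lamp", (42, 118, 108, 158), 0.62),
        Detection("hood", (60, 80, 300, 150), 0.88)]
result = filter_recognitions(dets)  # one entry per component
```

The deduplication step stands in for whatever non-maximum suppression the real detector would perform.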
And step S204, determining the display area of the target vehicle on the target vehicle picture and the vehicle model of the target vehicle according to the recognition result.
Specifically, the relative position of each key component of the target vehicle is determined based on the identification result, the display area of the target vehicle on the target vehicle picture is determined according to the relative position of each key component of the target vehicle, the target vehicle in the display area is extracted, and the vehicle model of the target vehicle can be determined.
Furthermore, the relative position of each key component of the target vehicle can be determined by analyzing the recognition result, and the display area of the target vehicle on the target vehicle picture can then be preliminarily determined from those relative positions. The target vehicle extracted from the display area is matched against the existing sample vehicles in the database; when the match succeeds, the vehicle model of the target vehicle is determined.
And step S206, determining the shooting angle of the target vehicle picture to be detected according to the identification result and the vehicle model.
Specifically, according to the recognition result of each key component of the target vehicle, the key point angle vector of each key component of the target vehicle is obtained, and the feature vector matrix corresponding to each key component of the target vehicle is determined. And then obtaining a baseline characteristic vector of each key component of the corresponding vehicle according to the vehicle model of the target vehicle, comparing the baseline characteristic vector with the characteristic vector matrix, rotating the baseline characteristic vector, and determining the shooting angle of the target vehicle picture to be detected.
Specifically, the key point positions of each key component of the target vehicle are determined, key points are extracted at those positions, and the key point angle vector between any two key points is calculated based on a preset key point arrangement order. From the vehicle model of the target vehicle, a sample vehicle of the same model can further be obtained; the feature vectors of its key components are extracted and taken as the baseline feature vectors.
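The key point angle vectors can be sketched in a small example. The pairwise-angle construction below is one plausible reading of the patent's feature vector matrix, not its definitive formula:

```python
import numpy as np

def keypoint_angle_vectors(points):
    """Angle (radians) of the vector from key point i to key point j,
    for every ordered pair, following a fixed arrangement order.
    One plausible construction of the feature vector matrix."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    angles = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                dx, dy = pts[j] - pts[i]
                angles[i, j] = np.arctan2(dy, dx)
    return angles

# Three key points: the angle from point 0 to point 2 is 45 degrees.
M = keypoint_angle_vectors([(0, 0), (1, 0), (1, 1)])
```

Because the angles depend only on relative key point geometry, a matrix like this changes predictably as the camera viewpoint rotates, which is what makes the comparison against a rotated baseline feature vector possible.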
And step S208, acquiring the standard relative position relation and the standard contour line of each key component of the corresponding vehicle according to the vehicle model.
Specifically, according to the determined vehicle model of the target vehicle, vehicles of the same vehicle model are determined from the sample vehicles, and the standard relative position relation and the standard contour line of each key component of the corresponding vehicle are obtained.
The key components of the sample vehicle include the left front lamp, right front lamp, left rear lamp, right rear lamp, left front door, right front door, left rear door, right rear door, left front wheel, right front wheel, left rear wheel, right rear wheel, front windshield, rear windshield, hood, trunk lid, left front window, right front window, left rear window, right rear window, left rear view mirror, right rear view mirror, front fender, and rear fender, for a total of 24 key components.
And step S210, correcting the display area according to the standard relative position relation and the shooting angle of the target vehicle picture.
Specifically, according to the key components of the sample vehicle, the labeled relative position relationship among the key components and the determined shooting angle of the target vehicle picture, the display area of the target vehicle on the target vehicle picture is corrected.
Further, correcting the display area of the target vehicle on the target vehicle picture essentially performs a distortion correction of the target vehicle: according to the shooting angle of the target vehicle picture, the picture currently being processed is transformed into an orientation suitable for analysis, so as to obtain a display area covering as many of the vehicle's components as possible.
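A minimal sketch of such a correction, under the simplifying assumption of a pure in-plane rotation by the estimated shooting angle (a real system would more likely apply a full perspective/homography warp):

```python
import numpy as np

def correct_display_area(corners, shooting_angle_deg):
    """Rotate the display-area corners back by the estimated shooting
    angle, about their centroid, so the region aligns with the
    analysis orientation. Simplified 2-D sketch only."""
    theta = np.deg2rad(-shooting_angle_deg)  # undo the rotation
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    pts = np.asarray(corners, dtype=float)
    centre = pts.mean(axis=0)
    return (pts - centre) @ rot.T + centre

# A 4x2 display area photographed at a 90-degree angle.
box = [(0, 0), (4, 0), (4, 2), (0, 2)]
upright = correct_display_area(box, 90.0)
```

Rotating about the centroid keeps the region in place while reorienting it; edge lengths are preserved, which matters for the later contour comparison.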
In step S212, the contour line of the target vehicle in the corrected display area is extracted.
Specifically, contour lines of the target vehicle are obtained by extracting the contour lines of the target vehicle in the corrected display area by adopting an edge recognition algorithm.
Further, the edge detection of the target vehicle on the target vehicle picture specifically includes: 1) filtering: designing a filter to reduce noise; 2) enhancement: using an enhancement algorithm to highlight points whose gray level changes markedly within the neighborhood, accomplished by calculating the gradient amplitude; 3) detection: thresholding the gradient amplitude to detect edge points; 4) positioning: accurately determining the position of the edge.
In this embodiment, edge detection and extraction may be performed on the target vehicle by using an edge extraction operator such as the Roberts operator or the Sobel operator, so as to obtain the contour line of the target vehicle.
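A minimal Sobel-based sketch of the enhance/detect steps above (pure NumPy, without the positioning refinement; the kernel is the standard Sobel choice and the threshold is an illustrative value, not one from the patent):

```python
import numpy as np

def sobel_edges(img, threshold=1.0):
    """Gradient-amplitude edge map using the Sobel operator:
    convolve with the horizontal/vertical kernels, take the
    magnitude, and threshold it to detect edge points."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):          # skip the 1-pixel border
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = (patch * kx).sum()
            gy[y, x] = (patch * ky).sum()
    mag = np.hypot(gx, gy)             # gradient amplitude
    return mag > threshold             # boolean edge map

# A vertical step edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img)
```

The boolean map marks pixels on either side of the brightness step; tracing connected True pixels would yield the contour line used in the next step.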
And step S214, performing edge approximate comparison on the contour line of the target vehicle and the standard contour line, and determining the actual positions of all key parts of the target vehicle.
Specifically, the edge approximation comparison result is obtained by performing the edge approximation comparison between the contour line of the target vehicle and the standard contour line. An association relation between the key components of the target vehicle and the key components of vehicles of the same vehicle model is established based on the edge approximation comparison result, and the actual positions of the key components of the target vehicle are then determined according to the association relation and the standard relative position relation.
By establishing, through edge approximation comparison and contour recognition, the association relation between each key component of the target vehicle and the key components of vehicles of the same model, and combining it with the standard relative position relation of those components, the positions of the key components of the target vehicle can be accurately recognized, yielding the actual position of each associated key component.
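The patent does not fix a specific metric for the edge approximation comparison. One simple stand-in is a symmetric mean nearest-point distance between point-sampled contours, with an association accepted when the distance is small enough; the function names and threshold below are illustrative:

```python
import numpy as np

def edge_approx_distance(contour_a, contour_b):
    """Symmetric mean nearest-point distance between two point-sampled
    contours (a Chamfer-style distance). Illustrative, not the
    patent's definitive comparison."""
    a = np.asarray(contour_a, dtype=float)
    b = np.asarray(contour_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def associate_components(target_contour, standard_contours, max_dist=2.0):
    """Associate the target contour with each same-model standard
    component whose contour lies within max_dist of it."""
    links = {}
    for name, std in standard_contours.items():
        dist = edge_approx_distance(target_contour, std)
        if dist <= max_dist:
            links[name] = dist
    return links

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
shifted = [(0.5, 0), (2.5, 0), (2.5, 2), (0.5, 2)]
links = associate_components(square, {"left_front_door": shifted})
```

Once a component is associated this way, the standard relative position relation of the same-model vehicle pins down its actual position on the target vehicle.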
According to the vehicle component identification method, the acquired target vehicle picture to be detected is identified to obtain the recognition result of each key component, and the display area of the target vehicle on the picture and the vehicle model of the target vehicle are determined according to the recognition result. The shooting angle of the picture is determined according to the recognition result and the vehicle model, and the standard relative position relation and the standard contour line of each key component of the corresponding vehicle are acquired according to the vehicle model. The display area is corrected according to the standard relative position relation and the shooting angle, so as to obtain a display area covering as many key components of the vehicle as possible. By extracting the contour line of the target vehicle in the corrected display area and performing the edge approximation comparison between that contour line and the standard contour line, the association relation between components can be established. Based on the contour lines and the association relation between components, the actual position of each key component of the target vehicle is further determined, improving the accuracy of vehicle key-component identification. The scheme can also be applied to scenarios such as intelligent-transportation vehicle violation detection, promoting the construction of smart cities.
In an embodiment, as shown in fig. 3, the step of identifying the target picture to be detected, that is, the step of acquiring the target vehicle picture to be detected, identifying the target vehicle picture, and obtaining the identification result of each key component of the target vehicle picture specifically includes the following steps S302 to S308:
step S302, a target vehicle picture to be detected is obtained.
Before the target vehicle picture to be detected is obtained, the picture received from the terminal device is checked to determine whether a target vehicle is present on it. When target detection confirms that a target vehicle exists on the picture to be detected, the picture is determined to be the target vehicle picture to be detected, and the determined picture is acquired.
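The pre-check described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the patent does not name a detector, so `detections` stands in for the output of any object detector that yields (label, confidence, box) tuples, and the label set and threshold are assumptions.

```python
# Assumed vehicle class labels; a real detector's label set may differ.
VEHICLE_LABELS = {"car", "truck", "bus"}

def contains_target_vehicle(detections, min_confidence=0.5):
    """Return True when at least one detection is a vehicle above the
    confidence threshold, i.e. the picture qualifies as a target vehicle
    picture to be detected."""
    return any(label in VEHICLE_LABELS and conf >= min_confidence
               for label, conf, _box in detections)

# Example: a picture containing a person and a car passes the check.
sample = [("person", 0.91, (10, 10, 50, 120)),
          ("car", 0.88, (60, 40, 300, 180))]
```

Pictures with no qualifying detection would simply be skipped rather than passed on to the component recognition step.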
And step S304, acquiring the convolutional neural network model trained by the sample set vehicle picture.
The generating of the trained convolutional neural network model specifically includes: and acquiring a plurality of standard vehicle pictures which are determined to include the vehicle from the database, obtaining a sample set according to the plurality of standard vehicle pictures, and labeling each key component of the vehicle pictures of the sample set. And then training the convolutional neural network model according to the labeled sample set vehicle picture to generate the trained convolutional neural network model.
Further, referring to fig. 4 and 5, each key component of the vehicle pictures in the training sample set is labeled, where fig. 4 provides a labeling schematic diagram of the key components of a first part of a standard vehicle, and fig. 5 provides a labeling schematic diagram of the key components of a second part of the standard vehicle. As shown in fig. 4 and 5, the key components of the standard vehicle include a left front lamp, a right front lamp, a left rear lamp, a right rear lamp, a left front door, a right front door, a left rear door, a right rear door, a left front wheel, a right front wheel, a left rear wheel, a right rear wheel, a front windshield, a rear windshield, a hood, a trunk lid, a left front window, a right front window, a left rear window, a right rear window, a left rear view mirror, a right rear view mirror, a front fender, and a rear fender.
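The 24 key components listed above can be collected into a label map of the kind used when annotating the sample set and configuring a detection model's class list. The snake_case identifiers are assumed encodings of the component names in fig. 4 and 5, and the ordering is illustrative:

```python
# Label set for the 24 key components named in the description.
KEY_COMPONENTS = [
    "left_front_lamp", "right_front_lamp", "left_rear_lamp", "right_rear_lamp",
    "left_front_door", "right_front_door", "left_rear_door", "right_rear_door",
    "left_front_wheel", "right_front_wheel", "left_rear_wheel", "right_rear_wheel",
    "front_windshield", "rear_windshield", "hood", "trunk_lid",
    "left_front_window", "right_front_window", "left_rear_window", "right_rear_window",
    "left_rear_view_mirror", "right_rear_view_mirror", "front_fender", "rear_fender",
]

# Map each component name to a class index for annotation files.
LABEL_TO_ID = {name: i for i, name in enumerate(KEY_COMPONENTS)}
```

The same ordering would later fix the dimension order of the 24-dimensional component feature vector discussed below the 3D model vectorization step.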
And S306, inputting the target vehicle picture into the trained convolutional neural network model, and identifying the target vehicle picture.
Specifically, the obtained target vehicle picture is input into a trained convolutional neural network model, the trained convolutional neural network model is utilized to identify the target vehicle picture, and identification results of all key components of the corresponding target vehicle picture are generated.
And step S308, acquiring the identification result of each key component of the target vehicle picture.
Specifically, the recognition result of each key component of the target vehicle picture is obtained from the output of the trained convolutional neural network model for the target vehicle picture.
Step S309 further comprises: uploading the recognition result to the blockchain. Once the recognition result is uploaded to the blockchain, both the user equipment and the merchant equipment can obtain the image recognition result from it.
In the above steps, the target vehicle picture to be detected and the convolutional neural network model trained on the sample-set vehicle pictures are acquired, the target vehicle picture is input into the trained model, and the picture is identified to obtain the recognition result of each key component. The user therefore does not need to screen the target vehicle picture manually, which reduces the screening error rate and further improves the identification accuracy of each key component of the target vehicle picture.
In one embodiment, as shown in fig. 6, the step of determining the display area on the target vehicle picture and the vehicle model of the target vehicle, that is, the step of determining the display area of the target vehicle on the target vehicle picture and the vehicle model of the target vehicle according to the recognition result, specifically includes the following steps S602 to S606:
and step S602, determining the relative positions of the key parts of the target vehicle based on the identification result.
Specifically, the recognition result of each key component of the target vehicle picture comprises the relative position relationship of each key component of the target vehicle, and the relative position of each key component of the target vehicle can be determined by analyzing the recognition result of each key component of the target vehicle picture.
And step S604, determining a display area of the target vehicle on the target vehicle picture according to the relative position of each key component of the target vehicle.
Specifically, the display position of the target vehicle on the target vehicle picture is preliminarily determined according to the relative position of each key component of the target vehicle, so that the display area of the target vehicle on the target vehicle picture is obtained.
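One plausible reading of "preliminarily determined according to the relative position of each key component" is the union bounding box of the recognized component boxes. The box format (x1, y1, x2, y2) and the function name are assumptions for illustration:

```python
def display_area(component_boxes):
    """Union bounding box over the recognized key-component boxes,
    taken as the preliminary display area of the target vehicle."""
    xs1, ys1, xs2, ys2 = zip(*component_boxes)
    return (min(xs1), min(ys1), max(xs2), max(ys2))

# Two component boxes: the display area spans both.
boxes = [(10, 20, 50, 60), (40, 10, 120, 80)]
```

This preliminary area is what the later correction step enlarges or shifts using the standard relative position relationship and the shooting angle.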
And step S606, extracting the target vehicle in the display area and determining the vehicle model of the target vehicle.
Specifically, the target vehicle in the display area is extracted, the target vehicle is matched with different sample vehicles existing in the database, when the matching is successful, the vehicle model of the sample vehicle is obtained, and the vehicle model of the sample vehicle is determined as the vehicle model of the target vehicle.
In the above steps, the relative positions of the key components of the target vehicle are determined from the recognition result of each key component of the target vehicle picture, and from these the display area of the target vehicle on the picture and the vehicle model of the target vehicle are determined. The user does not need to manually mark the key components of the target vehicle picture to be detected, which reduces manual operation, lowers the workload and errors in the component marking process, and improves the accuracy of marking the key components of the vehicle.
In one embodiment, as shown in fig. 7, the step of determining the shooting angle of the target vehicle picture to be detected, that is, the step of determining the shooting angle of the target vehicle picture to be detected according to the recognition result and the vehicle model, specifically includes the following steps S702 to S706:
step S702, obtaining the key point angle vector of each key component of the target vehicle according to the identification result of each key component of the target vehicle, and determining the feature vector matrix corresponding to each key component of the target vehicle.
Specifically, fig. 8 provides a schematic illustration of the key point positions of the key components of the target vehicle. As shown in fig. 8, the recognition results of the key components of the target vehicle are analyzed to determine the key point position of each key component, and the key points at those positions are extracted. The key point angle vector between any two key points is calculated based on a preset key point arrangement order, and the related angle vector corresponding to each key point is determined from the key point angle vectors. The feature vector matrix corresponding to each key component of the target vehicle is then obtained from the key point angle vectors and the corresponding related angle vectors.
The coordinates of the left front lamp of the target vehicle shown in fig. 8 are recorded as (0, 0); with the position of the left front lamp as the origin, a coordinate system is established, and the coordinate positions of the other key points in that system are acquired and recorded in turn. The preset key point arrangement order may be clockwise. Fig. 9 provides a schematic diagram of the key point angle vectors of the key components of the target vehicle; referring to fig. 9, the key point angle vector between any two key points is calculated based on the preset arrangement order, giving the key point angle vector of each key component of the target vehicle.
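The coordinate-system and angle-vector construction above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's exact formula: the clockwise traversal between consecutive key points, the toy coordinates, and the helper names are all assumptions.

```python
import cmath
import math

def keypoint_angle_vectors(points):
    """Angle (radians) of the vector from each key point to the next, with
    the points given in the preset clockwise order; index 0 plays the role
    of the left front lamp at the (0, 0) origin of the coordinate system."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def as_complex_exponentials(angles):
    """Express each direction relationship as a complex exponential e^(i*theta)."""
    return [cmath.exp(1j * a) for a in angles]

# Toy key points traversed clockwise, starting at the origin.
pts = [(0.0, 0.0), (4.0, 0.0), (4.0, -2.0), (0.0, -2.0)]
angs = keypoint_angle_vectors(pts)
dirs = as_complex_exponentials(angs)
```

The complex-exponential form matches the description's remark that the direction relationship between key points can be expressed by a complex exponential.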
Further, the angle vector between any key point and the current key point is recorded as a_n, so that the direction relationship between the two can be expressed using a complex exponential of the form e^(i*theta_n); the related angle vector of the corresponding key point is then determined from the key point angle vectors.
According to the key point angle vectors and the corresponding related angle vectors, the feature vector matrix corresponding to each key component of the target vehicle is obtained.
step S704, obtaining the baseline characteristic vector of each key component of the corresponding vehicle according to the vehicle model of the target vehicle.
Specifically, referring to fig. 10, fig. 10 provides a baseline feature vector schematic of a sample vehicle. According to the vehicle model of the target vehicle, sample vehicles of the same vehicle model can be obtained, and baseline feature vectors of all key parts of the sample vehicles are extracted.
Further, the vehicle model corresponding to each sample vehicle is vectorized as a 3D model, and the left front lamp, the right front lamp, the left tail lamp, the right tail lamp, the left front door, the right front door, the left rear door, the right rear door, the left front wheel, the right front wheel, the left rear wheel, the right rear wheel, the front windshield, the rear windshield, the engine hood, the trunk lid, the left front window, the right front window, the left rear window, the right rear window, the left rear view mirror, the right rear view mirror, the front fender, and the rear fender are extracted, with the center point of each component taken as its spatial position. Taking the left front lamp as spatial coordinate (0, 0, 0), the positions of the different components are recorded separately.
Any one component is recorded as: P_obj = (x, y, z).
The vehicle, composed of the left front lamp, the right front lamp, the left tail lamp, the right tail lamp, the left front door, the right front door, the left rear door, the right rear door, the left front wheel, the right front wheel, the left rear wheel, the right rear wheel, the front windshield, the rear windshield, the engine hood, the trunk lid, the left front window, the right front window, the left rear window, the right rear window, the left rear view mirror, the right rear view mirror, the front fender, and the rear fender, is thus represented as a 24-dimensional feature vector of these component positions.
in one embodiment, since the vehicle is actually 3D, but the picture of the vehicle taken of the vehicle is planar, the calculation may be performed based on the planar projection of the feature vectors of the key components of the vehicle at any angle, resulting in the projection of the vectors of each key component included in the vehicle on the XY plane. Different projection angles are different, and a basis can be provided for judging the shooting angle through the projected position subsequently.
And step S706, comparing the baseline feature vector with the feature vector matrix, rotating the baseline feature vector, and determining the shooting angle of the target vehicle picture to be detected.
Specifically, the horizontal feature vector and the vertical feature vector of the feature vector matrix are extracted and compared with the baseline feature vector to obtain a comparison result, from which the rotation angle of the baseline feature vector is determined. The baseline feature vector is rotated by that angle, and the geometric mean of the angle vectors of the common key points is calculated during the rotation, where the common key points are the key points shared by the feature vector matrix and the baseline feature vector. When the geometric mean of the angle vectors of the common key points reaches a preset threshold, the shooting angle of the target vehicle picture to be detected is obtained.
Further, as shown in fig. 11, fig. 11 provides a vector comparison schematic diagram of a vehicle component identification method. Because the shooting angles are different, the projection positions of the key points of the target vehicle on the target vehicle picture are not consistent with the projection positions of the key points of the sample vehicle, and the vector comparison schematic diagram of the vehicle component identification method shown in fig. 11 can be obtained by comparing the baseline feature vector of the sample vehicle with the horizontal feature vector and the vertical feature vector of the key points of the target vehicle.
Since the baseline feature vector does not coincide with the horizontal and vertical feature vectors, it can be rotated by the rotation angle determined through vector comparison. As the baseline feature vector is rotated, the geometric mean of the angle vectors of the common key points is calculated throughout the rotation. As the rotation brings the baseline feature vector gradually closer to the horizontal and vertical feature vectors, this geometric mean gradually decreases. When the geometric mean of the angle vectors of the common key points reaches its minimum value, the shooting angle of the target vehicle picture is obtained.
The geometric mean of the angle vectors of the common key points is calculated as follows: with theta_i denoting the angle vector of the i-th common key point of the target vehicle and theta'_i denoting the angle vector of the corresponding common key point of the sample vehicle, the geometric mean is the n-th root of the product of the gaps |theta_i - theta'_i| over all n common key points. During the rotation, when this geometric mean reaches its minimum value, the shooting angle of the target vehicle picture is obtained.
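The rotate-and-minimise search of step S706 can be sketched as below. The wrapped angle difference, the epsilon guard against log(0), and the brute-force sweep over candidate rotations are assumptions made to keep the sketch self-contained; the patent does not specify the search procedure.

```python
import math

def geometric_mean_gap(target_angles, baseline_angles, rotation):
    """Geometric mean of the wrapped absolute gaps between the target
    vehicle's common-key-point angle vectors and the baseline angle
    vectors rotated by `rotation` radians."""
    gaps = []
    for t, b in zip(target_angles, baseline_angles):
        # Wrap the difference into (-pi, pi] so opposite angles compare sanely.
        d = (t - b - rotation + math.pi) % (2 * math.pi) - math.pi
        gaps.append(abs(d) + 1e-9)  # epsilon avoids log(0)
    return math.exp(sum(math.log(g) for g in gaps) / len(gaps))

def estimate_shooting_angle(target_angles, baseline_angles, steps=720):
    """Sweep candidate rotations of the baseline feature vector; the
    rotation minimising the geometric mean approximates the shooting angle."""
    candidates = [2 * math.pi * k / steps for k in range(steps)]
    return min(candidates,
               key=lambda r: geometric_mean_gap(target_angles, baseline_angles, r))
```

Because every gap shrinks as the rotated baseline approaches the target's angle vectors, the minimiser of the geometric mean is the rotation described in the text.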
In the above steps, the key point angle vectors of the key components of the target vehicle are obtained from the recognition results of those components, and the corresponding feature vector matrix is determined. The baseline feature vectors of the key components of the corresponding vehicle are acquired according to the vehicle model of the target vehicle, compared with the feature vector matrix, and rotated, so that the shooting angle of the target vehicle picture can be determined quickly. This allows the image shooting quality to be judged, prepares for subsequent vehicle picture quality inspection, and improves the working efficiency of vehicle picture detection.
In one embodiment, after the edge approximation comparison result is obtained, the method further includes: determining the deformed key components of the target vehicle based on the edge approximation comparison result, and calculating the deformation rate of each deformed key component.
Specifically, the deformed key components of the target vehicle, that is, the damaged key components, are determined based on the edge approximation comparison result. The deformation rate of each deformed key component is then calculated by comparing it with the corresponding key component of the sample vehicle, and the damage to the key component is quantified based on the deformation rate.
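The patent does not give the deformation-rate formula, so the sketch below quantifies damage as the relative deviation of the deformed component's contour area from the matching sample component's standard contour area; this is one plausible interpretation for illustration, not the claimed method.

```python
def deformation_rate(target_area, standard_area):
    """Relative deviation of the deformed component's contour area from
    the standard contour area of the matching sample-vehicle component."""
    if standard_area <= 0:
        raise ValueError("standard contour area must be positive")
    return abs(target_area - standard_area) / standard_area
```

A rate of 0.0 means the component matches the sample contour exactly; larger values quantify progressively heavier damage, which is the quantity the damage-assessment step consumes.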
In the above steps, the deformed key components of the target vehicle are determined based on the edge approximation comparison result, and their deformation rates are calculated. Beyond identifying whether a key component is damaged, calculating the deformation rate of the damaged component quantifies the damage, enabling a better judgment of the target vehicle's damage, timely and accurate vehicle damage assessment, and improved working efficiency.
It should be understood that, although the steps in the flowcharts of fig. 2, 3, 6, and 7 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2, 3, 6, and 7 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 12, there is provided a vehicle component recognition apparatus including: a recognition result generation module 1202, a display area determination module 1204, a shooting angle determination module 1206, a first acquisition module 1208, a display area correction module 1210, a target vehicle contour line extraction module 1212, and a key component actual position determination module 1214, wherein:
the identification result generation module 1202 is configured to acquire a target vehicle picture to be detected, identify the target vehicle picture to obtain an identification result of each key component of the target vehicle picture, upload the identification result to a block chain, where the user equipment may obtain an image identification result from the block chain, and the merchant equipment may also obtain the image identification result.
And a display area determining module 1204, configured to determine, according to the recognition result, a display area of the target vehicle on the target vehicle picture and a vehicle model of the target vehicle.
And the shooting angle determining module 1206 is used for determining the shooting angle of the target vehicle picture to be detected according to the identification result and the vehicle model.
The first obtaining module 1208 is configured to obtain a standard relative position relationship and a standard contour line of each key component of the corresponding vehicle according to the vehicle model.
And the display area correcting module 1210 is used for correcting the display area according to the standard relative position relationship and the shooting angle of the target vehicle picture.
And the target vehicle contour line extracting module 1212 is configured to extract a contour line of the target vehicle in the corrected display area.
And the key component actual position determining module 1214 is configured to perform edge approximation comparison between the contour line of the target vehicle and the standard contour line, and determine the actual position of each key component of the target vehicle.
In the vehicle component recognition device, the acquired target vehicle picture to be detected is identified to obtain the recognition result of each key component of the target vehicle picture, and the display area of the target vehicle on the picture and the vehicle model of the target vehicle are determined from that result. The shooting angle of the target vehicle picture is then determined according to the recognition result and the vehicle model, and the standard relative position relationship and standard contour line of each key component of the corresponding vehicle are acquired according to the vehicle model. The display area is corrected according to the standard relative position relationship and the shooting angle, so that it covers each key component of the vehicle as completely as possible. By extracting the contour line of the target vehicle in the corrected display area and performing edge approximation comparison between that contour line and the standard contour line, the association relationship among the components can be established. The actual positions of the key components of the target vehicle are then determined based on the contour lines and the association relationship among the components, improving the accuracy of vehicle key component identification.
In one embodiment, the photographing angle determining module is further configured to:
obtaining the key point angle vectors of the key components of the target vehicle according to the recognition results of those components, and determining the feature vector matrix corresponding to the key components of the target vehicle; acquiring the baseline feature vector of each key component of the corresponding vehicle according to the vehicle model of the target vehicle; and comparing the baseline feature vector with the feature vector matrix, rotating the baseline feature vector, and determining the shooting angle of the target vehicle picture to be detected.
In the shooting angle determining module, the key point angle vectors of the key components of the target vehicle are obtained from the recognition results of those components, and the corresponding feature vector matrix is determined. The baseline feature vectors of the key components of the corresponding vehicle are acquired according to the vehicle model of the target vehicle, compared with the feature vector matrix, and rotated, so that the shooting angle of the target vehicle picture can be determined quickly. This allows the image shooting quality to be judged, prepares for subsequent vehicle picture quality inspection, and improves the working efficiency of vehicle picture detection.
In one embodiment, the recognition result generating module is further configured to:
acquiring a target vehicle picture to be detected; acquiring a convolutional neural network model after training of a sample set vehicle picture; inputting the target vehicle picture into the trained convolutional neural network model, and identifying the target vehicle picture; and acquiring the identification result of each key component of the target vehicle picture.
In the recognition result generation module, the target vehicle picture to be detected and the convolutional neural network model trained on the sample-set vehicle pictures are acquired, the picture is input into the trained model, and the picture is identified to obtain the recognition result of each key component. The user therefore does not need to screen the target vehicle picture manually, which reduces the screening error rate and further improves the identification accuracy of each key component of the target vehicle picture.
In one embodiment, the display area determination module is further configured to:
determining the relative positions of key components of the target vehicle based on the identification result; determining a display area of the target vehicle on the target vehicle picture according to the relative position of each key component of the target vehicle; and extracting the target vehicle in the display area, and determining the vehicle model of the target vehicle.
In the display area determining module, the relative positions of the key components of the target vehicle are determined from the recognition result of each key component of the target vehicle picture, and from these the display area of the target vehicle on the picture and the vehicle model of the target vehicle are determined. The user does not need to manually mark the key components of the target vehicle picture to be detected, which reduces manual operation, lowers the workload and errors in the component marking process, and improves the accuracy of marking the key components of the vehicle.
In one embodiment, there is provided a vehicle component identification apparatus, further comprising a deformation rate calculation module for:
and determining deformation key components of the target vehicle based on the edge approximate comparison result, and calculating deformation rates of the deformation key components.
With this vehicle component identification device, the deformed key components of the target vehicle are determined based on the edge approximation comparison result, and their deformation rates are calculated. Beyond identifying whether a key component is damaged, calculating the deformation rate of the damaged component quantifies the damage, enabling a better judgment of the target vehicle's damage, timely and accurate vehicle damage assessment, and improved working efficiency.
For specific limitations of the vehicle component recognition device, reference may be made to the above limitations of the vehicle component recognition method, which are not repeated here. Each module in the vehicle component recognition device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or be independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 13. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer equipment is used for storing the relative positions of the key parts of the target vehicle, the contour line of the target vehicle, the labeled relative position relation of all the key parts of the sample vehicle with the same vehicle model and the standard contour line. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a vehicle component identification method.
Those skilled in the art will appreciate that the architecture shown in fig. 13 is merely a block diagram of part of the structure related to the disclosed solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the above-described method embodiments when the processor executes the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the respective method embodiment as described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated by cryptographic methods, each data block containing the information of a batch of network transactions, which is used to verify the validity (anti-counterfeiting) of the information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A vehicle component identification method, the method comprising:
acquiring a target vehicle picture to be detected, and identifying the target vehicle picture to obtain identification results of key components of the target vehicle picture;
determining a display area of a target vehicle on the target vehicle picture and a vehicle model of the target vehicle according to the identification result;
determining the shooting angle of the target vehicle picture to be detected according to the identification result and the vehicle model;
acquiring a standard relative position relation and a standard contour line of each key component of the corresponding vehicle according to the vehicle model;
correcting the display area according to the standard relative position relation and the shooting angle of the target vehicle picture;
extracting the corrected contour line of the target vehicle in the display area;
and performing edge approximate comparison on the contour line of the target vehicle and the standard contour line to determine the actual positions of all key parts of the target vehicle.
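Taken together, the seven claimed steps can be sketched as the pipeline below. Everything in it is a hedged stand-in: the hard-coded recognition results replace the trained convolutional neural network of claim 2, the `STANDARDS` dictionary replaces the standards library, the margin padding stands in for the angle-aware correction, and the corrected area's corners stand in for a real edge-extracted contour.

```python
def nearest(contour, point):
    """Contour point closest to a standard key-component position."""
    return min(contour, key=lambda p: (p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2)

def recognize_vehicle_parts(picture, standards):
    # Step 1: recognition result per key component (stub standing in for
    # the trained convolutional neural network of claim 2).
    results = {"headlamp": (12.0, 40.0), "grille": (25.0, 45.0), "bumper": (30.0, 62.0)}
    # Step 2: display area as the bounding box of the detected components,
    # plus the vehicle model (stubbed lookup).
    xs = [x for x, _ in results.values()]
    ys = [y for _, y in results.values()]
    area = [min(xs), min(ys), max(xs), max(ys)]
    model = "sedan-A"
    # Step 3: shooting angle of the picture (stub; claims 4, 6 and 7
    # describe the feature-vector method).
    angle = 0.0
    # Steps 4-5: correct the display area with the standard relative
    # position relation for this model (here, a simple margin).
    std = standards[model]
    m = std["margin"]
    area = [area[0] - m, area[1] - m, area[2] + m, area[3] + m]
    # Step 6: extract the contour inside the corrected area (stub: the
    # area's corners stand in for a real edge-detected contour).
    contour = [(area[0], area[1]), (area[2], area[1]),
               (area[2], area[3]), (area[0], area[3])]
    # Step 7: edge-approximate comparison against the standard contour
    # fixes each key component's actual position.
    return {name: nearest(contour, std["parts"][name]) for name in results}

# Hypothetical standards database for one vehicle model.
STANDARDS = {"sedan-A": {
    "margin": 2.0,
    "parts": {"headlamp": (0.0, 0.0), "grille": (20.0, 40.0), "bumper": (30.0, 70.0)},
}}
positions = recognize_vehicle_parts(None, STANDARDS)
```

The point of the sketch is the ordering of the steps, not any individual computation; each stubbed step is refined by the dependent claims below.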
2. The method according to claim 1, wherein the acquiring a target vehicle picture to be detected and identifying the target vehicle picture to obtain identification results of key components of the target vehicle picture comprises:
acquiring a target vehicle picture to be detected;
acquiring a convolutional neural network model after training of a sample set vehicle picture;
inputting the target vehicle picture into the trained convolutional neural network model, and identifying the target vehicle picture;
acquiring the recognition result of each key component of the target vehicle picture;
the method further comprising:
uploading the identification result to a blockchain.
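The core operation of the convolutional neural network named in this claim is 2-D convolution. The pure-Python sketch below (a toy, not the claimed trained model) slides a Sobel-style vertical-edge kernel over a tiny image; such low-level edge responses are what early CNN layers compute before deeper layers assemble them into part detectors.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as in most
    CNN libraries) of a single-channel image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
    return out

# A vertical-edge kernel responds strongly where intensity jumps from
# left to right, as it does halfway across this toy image.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
response = conv2d(image, sobel_x)
```

A real detector would stack many learned kernels, nonlinearities, and pooling; this sketch only shows the sliding-window arithmetic itself.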
3. The method of claim 2, wherein the determining a display area of a target vehicle on the target vehicle picture and a vehicle model of the target vehicle according to the recognition result comprises:
determining the relative positions of key components of the target vehicle based on the identification result;
determining a display area of the target vehicle on the target vehicle picture according to the relative position of each key component of the target vehicle;
and extracting the target vehicle in the display area, and determining the vehicle model of the target vehicle.
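The two geometric steps of claim 3, relative positions and display area, might look like the following; the component names and margin value are illustrative assumptions.

```python
def relative_positions(keypoints):
    """Offsets of each key component from the components' centroid, one
    way to express their relative position relation."""
    cx = sum(x for x, _ in keypoints.values()) / len(keypoints)
    cy = sum(y for _, y in keypoints.values()) / len(keypoints)
    return {name: (x - cx, y - cy) for name, (x, y) in keypoints.items()}

def display_area(keypoints, margin=0):
    """Axis-aligned bounding box of the detected components, padded so
    the vehicle body fits inside."""
    xs = [x for x, _ in keypoints.values()]
    ys = [y for _, y in keypoints.values()]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

parts = {"headlamp": (120, 80), "wheel": (100, 200), "bumper": (180, 150)}
rel = relative_positions(parts)
box = display_area(parts, margin=10)
```

The claim's final step would then crop `box` from the picture and classify the crop to determine the vehicle model; that part needs the trained model and is not sketched here.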
4. The method according to claim 2, wherein the determining the shooting angle of the target vehicle picture to be detected according to the identification result and the vehicle model comprises:
obtaining key point angle vectors of all key components of a target vehicle according to the recognition results of all key components of the target vehicle, and determining a feature vector matrix corresponding to all key components of the target vehicle;
acquiring a baseline feature vector of each key component of the corresponding vehicle according to the vehicle model of the target vehicle;
and comparing the baseline feature vector with the feature vector matrix, rotating the baseline feature vector, and determining the shooting angle of the target vehicle picture to be detected.
5. The method according to any one of claims 1 to 4, wherein the edge-approximate comparison of the contour line of the target vehicle with the standard contour line to determine the actual positions of the key components of the target vehicle comprises:
carrying out edge approximate comparison on the contour line of the target vehicle and the standard contour line to obtain an edge approximate comparison result;
establishing an association relation between the key components of the target vehicle and the key components of vehicles of the same vehicle model based on the edge approximate comparison result;
and determining the actual positions of all key components of the target vehicle according to the association relationship and the standard relative position relationship.
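One plausible reading of the edge-approximate comparison and association steps of claim 5 is a nearest-point match between the two contours, sketched below; the tolerance value and the index-based `standard_parts` mapping are assumptions, not taken from the claim.

```python
import math

def edge_match(target_contour, standard_contour, tol=3.0):
    """Edge-approximate comparison: associate each standard contour point
    (by index) with its nearest target contour point within tolerance."""
    assoc = {}
    for i, (sx, sy) in enumerate(standard_contour):
        j, d = min(
            ((k, math.hypot(tx - sx, ty - sy))
             for k, (tx, ty) in enumerate(target_contour)),
            key=lambda kd: kd[1],
        )
        if d <= tol:
            assoc[i] = j
    return assoc

def actual_positions(assoc, target_contour, standard_parts):
    """Map each key component's standard contour index through the
    association relation to its actual position on the target vehicle."""
    return {
        name: target_contour[assoc[idx]]
        for name, idx in standard_parts.items()
        if idx in assoc
    }

target = [(1.0, 1.0), (11.0, 0.0), (9.0, 9.0)]       # extracted contour
standard = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]   # standard contour
assoc = edge_match(target, standard)
pos = actual_positions(assoc, target, {"bumper": 0, "headlamp": 2})
```

A production system would match dense contours (e.g. many hundreds of points) and enforce one-to-one assignments; the greedy nearest-point rule here is the simplest association that respects the claim's structure.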
6. The method according to claim 4, wherein the obtaining the key point angle vector of each key component of the target vehicle according to the identification result of each key component of the target vehicle and determining the feature vector matrix corresponding to each key component of the target vehicle comprises:
determining the key point position of each key component according to the identification result of each key component of the target vehicle;
extracting key points on the positions of the key points, and calculating key point angle vectors between any two key points based on a preset arrangement sequence;
determining a relevant angle vector corresponding to the key point according to the key point angle vector;
and obtaining a characteristic vector matrix corresponding to each key component of the target vehicle according to the key point angle vector and the corresponding related angle vector.
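A minimal interpretation of the key point angle vectors of claim 6: with key points taken in a preset order, row i of the matrix holds the direction angle from point i to every other point. The degrees-in-[0, 360) convention is an assumption.

```python
import math

def angle_between(p, q):
    """Direction angle, in degrees in [0, 360), of the vector from
    key point p to key point q."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 360.0

def feature_vector_matrix(keypoints):
    """Row i holds the angle vector from key point i to every other key
    point, taken in the preset (list) order; the diagonal is zero."""
    n = len(keypoints)
    return [
        [0.0 if i == j else angle_between(keypoints[i], keypoints[j])
         for j in range(n)]
        for i in range(n)
    ]

kps = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]  # key points in preset order
M = feature_vector_matrix(kps)
```

Because each entry is an angle rather than a raw coordinate, the matrix changes in a predictable way when the picture is rotated, which is what the rotation comparison of claim 7 exploits.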
7. The method according to claim 4, wherein the comparing the baseline feature vector with the feature vector matrix, rotating the baseline feature vector, and determining the shooting angle of the target vehicle picture to be detected comprises:
extracting a horizontal feature vector and a vertical feature vector of the feature vector matrix;
comparing the horizontal feature vector and the vertical feature vector with the baseline feature vector to obtain a comparison result;
determining a rotation angle of the baseline feature vector based on the comparison result;
rotating the baseline feature vector according to the rotation angle, and calculating a geometric mean value of the angle vectors of the common key points during the rotation of the baseline vector, wherein the common key points are key points shared by the feature vector matrix and the baseline feature vector;
and when the geometric mean value of the angle vectors of the common key points reaches a preset threshold value, obtaining the shooting angle of the target vehicle picture to be detected.
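The rotation search of claim 7 can be sketched as below, with one stated simplification: instead of comparing horizontal and vertical vectors of a feature-vector matrix, this toy rotates the baseline key points directly in one-degree steps and stops when the geometric mean of a per-key-point, cosine-based similarity over the common key points reaches a threshold. The similarity mapping into (0, 1] and the threshold value are assumptions.

```python
import math

def rotate(points, deg):
    """Rotate 2-D key points about the origin by deg degrees."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return {k: (c * x - s * y, s * x + c * y) for k, (x, y) in points.items()}

def similarity(a, b):
    """Cosine similarity of two key-point vectors, mapped into (0, 1]
    so that a geometric mean over key points is well defined."""
    dot = a[0] * b[0] + a[1] * b[1]
    return max((dot / (math.hypot(*a) * math.hypot(*b)) + 1.0) / 2.0, 1e-9)

def shooting_angle(baseline, observed, threshold=0.99999, step=1.0):
    """Rotate the baseline key points in step-degree increments; return
    the first angle whose geometric mean similarity over the common key
    points reaches the preset threshold."""
    common = sorted(set(baseline) & set(observed))
    if not common:
        raise ValueError("no common key points")
    best_deg, best_gmean = 0.0, -1.0
    deg = 0.0
    while deg < 360.0:
        rot = rotate(baseline, deg)
        logs = sum(math.log(similarity(rot[k], observed[k])) for k in common)
        gmean = math.exp(logs / len(common))
        if gmean >= threshold:
            return deg
        if gmean > best_gmean:
            best_deg, best_gmean = deg, gmean
        deg += step
    return best_deg  # fall back to the best angle scanned

baseline = {"headlamp": (1.0, 0.0), "bumper": (0.0, 1.0), "wheel": (-1.0, 0.0)}
observed = rotate(baseline, 30.0)  # a picture taken 30 degrees off baseline
angle = shooting_angle(baseline, observed)
```

The geometric mean (computed in log space to avoid underflow) penalizes any single badly misaligned key point more than an arithmetic mean would, which is one motivation for the claim's choice of aggregate.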
8. A vehicle component identification device, characterized in that the device comprises:
the identification result generation module is used for acquiring a target vehicle picture to be detected, identifying the target vehicle picture and obtaining identification results of all key components of the target vehicle picture;
the display area determining module is used for determining a display area of a target vehicle on the target vehicle picture and a vehicle model of the target vehicle according to the identification result;
the shooting angle determining module is used for determining the shooting angle of the target vehicle picture to be detected according to the identification result and the vehicle model;
the first acquisition module is used for acquiring the standard relative position relation and the standard contour line of each key component of the corresponding vehicle according to the vehicle model;
the display area correction module is used for correcting the display area according to the standard relative position relation and the shooting angle of the target vehicle picture;
the target vehicle contour line extraction module is used for extracting the corrected contour line of the target vehicle in the display area;
and the actual position determining module of the key component is used for performing edge approximate comparison on the contour line of the target vehicle and the standard contour line to determine the actual position of each key component of the target vehicle.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010344787.9A CN111553268A (en) | 2020-04-27 | 2020-04-27 | Vehicle part identification method and device, computer equipment and storage medium |
PCT/CN2020/106040 WO2021217940A1 (en) | 2020-04-27 | 2020-07-31 | Vehicle component recognition method and apparatus, computer device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010344787.9A CN111553268A (en) | 2020-04-27 | 2020-04-27 | Vehicle part identification method and device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111553268A true CN111553268A (en) | 2020-08-18 |
Family
ID=72005889
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010344787.9A Pending CN111553268A (en) | 2020-04-27 | 2020-04-27 | Vehicle part identification method and device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111553268A (en) |
WO (1) | WO2021217940A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114604175B (en) * | 2021-11-26 | 2023-10-13 | 中科云谷科技有限公司 | Method, processor, device and system for determining engineering vehicle |
CN116541790B (en) * | 2023-04-12 | 2024-03-12 | 元始智能科技(南通)有限公司 | New energy vehicle health assessment method and device based on multi-feature fusion |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09329430A (en) * | 1996-06-12 | 1997-12-22 | Minolta Co Ltd | Method and device for measuring degree of damage of object and repair cost calculating device |
KR20140093407A (en) * | 2013-01-18 | 2014-07-28 | 광주과학기술원 | Recognition device, vehicle model recognition apparatus and method |
CN107180413A (en) * | 2017-05-05 | 2017-09-19 | 平安科技(深圳)有限公司 | Car damages picture angle correcting method, electronic installation and readable storage medium storing program for executing |
US9886771B1 (en) * | 2016-05-20 | 2018-02-06 | Ccc Information Services Inc. | Heat map of vehicle damage |
KR101861236B1 (en) * | 2017-09-07 | 2018-05-25 | 김영광 | Method for managing traffic accident |
CN108446618A (en) * | 2018-03-09 | 2018-08-24 | 平安科技(深圳)有限公司 | Car damage identification method, device, electronic equipment and storage medium |
CN108632530A (en) * | 2018-05-08 | 2018-10-09 | 阿里巴巴集团控股有限公司 | A kind of data processing method of car damage identification, device, processing equipment and client |
CN108665373A (en) * | 2018-05-08 | 2018-10-16 | 阿里巴巴集团控股有限公司 | A kind of interaction processing method of car damage identification, device, processing equipment and client |
CN109325531A (en) * | 2018-09-17 | 2019-02-12 | 平安科技(深圳)有限公司 | Car damage identification method, device, equipment and storage medium based on image |
CN109409267A (en) * | 2018-10-15 | 2019-03-01 | 哈尔滨市科佳通用机电股份有限公司 | Rolling stock failure automatic identifying method |
WO2019205376A1 (en) * | 2018-04-26 | 2019-10-31 | 平安科技(深圳)有限公司 | Vehicle damage determination method, server, and storage medium |
CN110826561A (en) * | 2019-11-11 | 2020-02-21 | 上海眼控科技股份有限公司 | Vehicle text recognition method and device and computer equipment |
CN110853060A (en) * | 2019-11-14 | 2020-02-28 | 上海眼控科技股份有限公司 | Vehicle appearance detection method and device, computer equipment and storage medium |
WO2020042800A1 (en) * | 2018-08-31 | 2020-03-05 | 阿里巴巴集团控股有限公司 | Auxiliary method for capturing damage assessment image of vehicle, device, and apparatus |
WO2020071559A1 (en) * | 2018-10-05 | 2020-04-09 | Arithmer株式会社 | Vehicle state evaluation device, evaluation program therefor, and evaluation method therefor |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012035827A (en) * | 2010-08-05 | 2012-02-23 | Shigeo Hirose | Bumper of automobile or the like |
CN109410270B (en) * | 2018-09-28 | 2020-10-27 | 百度在线网络技术(北京)有限公司 | Loss assessment method, device and storage medium |
CN109544623B (en) * | 2018-10-11 | 2021-07-27 | 百度在线网络技术(北京)有限公司 | Method and device for measuring damaged area of vehicle |
- 2020
- 2020-04-27 CN CN202010344787.9A patent/CN111553268A/en active Pending
- 2020-07-31 WO PCT/CN2020/106040 patent/WO2021217940A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
YANG Xinyu; LI Cheng; ZHANG Honglie: "Research on a machine-vision-based recognition method for accident vehicles", Computer Simulation, no. 07, 15 July 2016 (2016-07-15) * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113484031A (en) * | 2021-06-30 | 2021-10-08 | 重庆长安汽车股份有限公司 | Method for setting noise transfer function target of suspension attachment point |
CN113484031B (en) * | 2021-06-30 | 2022-08-09 | 重庆长安汽车股份有限公司 | Method for setting noise transfer function target of suspension attachment point |
CN114323583A (en) * | 2021-12-21 | 2022-04-12 | 广汽本田汽车有限公司 | Vehicle light detection method, device, equipment and system |
CN114323583B (en) * | 2021-12-21 | 2024-06-04 | 广汽本田汽车有限公司 | Vehicle light detection method, device, equipment and system |
CN116434047A (en) * | 2023-03-29 | 2023-07-14 | 邦邦汽车销售服务(北京)有限公司 | Vehicle damage range determining method and system based on data processing |
CN116434047B (en) * | 2023-03-29 | 2024-01-09 | 邦邦汽车销售服务(北京)有限公司 | Vehicle damage range determining method and system based on data processing |
Also Published As
Publication number | Publication date |
---|---|
WO2021217940A1 (en) | 2021-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111553268A (en) | Vehicle part identification method and device, computer equipment and storage medium | |
CN109886077B (en) | Image recognition method and device, computer equipment and storage medium | |
CN110414507B (en) | License plate recognition method and device, computer equipment and storage medium | |
US9690977B2 (en) | Object identification using 3-D curve matching | |
CN110796082B (en) | Nameplate text detection method and device, computer equipment and storage medium | |
CN110634153A (en) | Target tracking template updating method and device, computer equipment and storage medium | |
CN111178245A (en) | Lane line detection method, lane line detection device, computer device, and storage medium | |
CN111461170A (en) | Vehicle image detection method and device, computer equipment and storage medium | |
CN110807491A (en) | License plate image definition model training method, definition detection method and device | |
CN110826484A (en) | Vehicle weight recognition method and device, computer equipment and model training method | |
CN109271908B (en) | Vehicle loss detection method, device and equipment | |
CN110046577B (en) | Pedestrian attribute prediction method, device, computer equipment and storage medium | |
CN110287936B (en) | Image detection method, device, equipment and storage medium | |
CN112784712B (en) | Missing child early warning implementation method and device based on real-time monitoring | |
CN112241646A (en) | Lane line recognition method and device, computer equipment and storage medium | |
CN112241705A (en) | Target detection model training method and target detection method based on classification regression | |
CN116385745A (en) | Image recognition method, device, electronic equipment and storage medium | |
CN112085721A (en) | Damage assessment method, device and equipment for flooded vehicle based on artificial intelligence and storage medium | |
CN115880695A (en) | Card identification method, card identification model training method and electronic equipment | |
CN111340025A (en) | Character recognition method, character recognition device, computer equipment and computer-readable storage medium | |
CN111046755A (en) | Character recognition method, character recognition device, computer equipment and computer-readable storage medium | |
CN111178200A (en) | Identification method of instrument panel indicator lamp and computing equipment | |
CN109934858B (en) | Image registration method and device | |
CN116958604A (en) | Power transmission line image matching method, device, medium and equipment | |
CN115731179A (en) | Track component detection method, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||