CN116468931A - Vehicle part detection method, device, terminal and storage medium


Info

Publication number
CN116468931A
Authority
CN
China
Prior art keywords
component
vehicle
target
confidence score
category
Prior art date
Legal status
Pending
Application number
CN202310284512.4A
Other languages
Chinese (zh)
Inventor
刘金龙
徐焕军
陈年昊
羊铁军
Current Assignee
Bangbang Automobile Sales Service Beijing Co ltd
Original Assignee
Bangbang Automobile Sales Service Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Bangbang Automobile Sales Service Beijing Co., Ltd.
Priority claimed from CN202310284512.4A
Publication of CN116468931A
Legal status: Pending

Classifications

    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Neural network learning methods
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/776: Validation; performance evaluation
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06T 2207/20081: Training; learning (image analysis indexing scheme)
    • G06T 2207/20084: Artificial neural networks [ANN] (image analysis indexing scheme)

Abstract

The invention provides a vehicle component detection method, device, terminal and storage medium. The method comprises the following steps: acquiring a vehicle component image to be detected of a vehicle; performing scene detection on the image based on a preset azimuth-scene matching network, to obtain a vehicle azimuth scene corresponding to the image; performing component detection on the image based on a preset vehicle component detection model, to obtain an initial component category detection result corresponding to the image; and correcting the initial component category detection result according to the vehicle azimuth scene, to obtain a final component category detection result corresponding to the image. The invention alleviates the missed and false detections of left/right and front/rear categories with similar features that occur when detecting multiple vehicle components, thereby improving the accuracy and effect of vehicle component detection and helping to raise the degree of intelligence and market adoption of automobile businesses based on vehicle component detection.

Description

Vehicle part detection method, device, terminal and storage medium
Technical Field
The present invention relates to the field of intelligent traffic technologies, and in particular, to a vehicle component detection method, device, terminal, and storage medium.
Background
With the ever-growing number of vehicles in today's society, vehicle component detection has become an important link in many automobile businesses. For example, in automobile insurance claim settlement, after a vehicle collision the claim normally proceeds through a series of steps such as reporting the accident, determining liability, and on-site investigation and photographing by insurance company personnel. Whether a loss assessor of the insurance company photographs the vehicle in person under a risk-control environment, or, to relieve urban traffic pressure, the user uploads vehicle pictures through an intelligent loss assessment system, the vehicle components must be identified from the acquired vehicle pictures, which provides an important basis for judging the damage condition of the vehicle and settling the insurance claim.
Vehicle component detection is the foundation of many automobile businesses, and how quickly the components of a vehicle can be identified and segmented in an image determines the degree of intelligence and market adoption rate of the related applications. At present, vehicle component detection applications commonly suffer from inaccurate data annotation, poor applicability, low accuracy, and weak robustness. A relatively complete vehicle component detection and segmentation method should handle complicated scenes with high recognition difficulty, such as low visibility when shooting at night, visual deformation of vehicle components caused by changes in viewing position, and blurred pictures caused by camera shake during shooting. Since they were first proposed, deep learning methods have attracted wide attention from researchers at home and abroad; applied to component localization and detection on a dataset, they offer results that are less sensitive to hyper-parameters, together with stronger feature extraction and anti-interference capabilities.
However, existing deep-learning-based vehicle component detection methods are straightforward applications of object detection or instance segmentation models. They achieve good detection results when the numbers of damage and component categories are small, but when the categories increase and the features of different categories are extremely similar, missed and false detections occur easily, the detection accuracy becomes insufficient, and the vehicle component detection task cannot be completed well.
Disclosure of Invention
Embodiments of the present invention provide a vehicle component detection method, device, terminal, and storage medium, to solve the problems that existing vehicle component detection methods are prone to missed and false detections and that their accuracy is insufficient to complete the vehicle component detection task well.
In a first aspect, an embodiment of the present invention provides a vehicle component detection method, including:
acquiring a vehicle component image to be detected of a vehicle;
performing scene detection on the vehicle component image based on a preset azimuth-scene matching network, to obtain a vehicle azimuth scene corresponding to the image;
performing component detection on the vehicle component image based on a preset vehicle component detection model, to obtain an initial component category detection result corresponding to the image;
and correcting the initial component category detection result according to the vehicle azimuth scene, to obtain a final component category detection result corresponding to the image.
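For orientation only, the following is a minimal sketch of this four-step flow in Python (the language named later in this description). All helper names are hypothetical placeholders rather than disclosed interfaces; their concrete counterparts are described in the detailed embodiments below.

```python
# Hypothetical end-to-end sketch of the four-step method; the three helper
# functions are placeholders for the networks and module described below.

def detect_vehicle_components(image):
    # Step 1 is the acquisition of the vehicle component image (the input).
    # Step 2: scene detection with the preset azimuth-scene matching network.
    scene = match_azimuth_scene(image)            # e.g. one of 8 scenes or background
    # Step 3: component detection with the preset vehicle component detection model.
    labels, scores, boxes, masks = detect_components(image)
    # Step 4: scene-driven correction of the initial detection result.
    return correct_by_scene(scene, labels, scores, boxes, masks)
```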
In a possible implementation, the initial component category detection result includes an initial component category and an initial component confidence score, and the final component category detection result includes a final component category and a final component confidence score;
correcting the initial component category detection result according to the vehicle azimuth scene to obtain the final component category detection result corresponding to the vehicle component image comprises:
converting each target component category among the initial component categories that does not match the standard component categories corresponding to the vehicle azimuth scene, to obtain a converted corrected component category;
and recording the initial component confidence score corresponding to the target component category as a target initial component confidence score, hierarchically screening the target initial component confidence score according to a low-dimensional confidence threshold and a high-dimensional confidence threshold, and obtaining the final component category and final component confidence score of the target component category based on the screening result and the corrected component category.
In a possible implementation, hierarchically screening the target initial component confidence score according to the low-dimensional confidence threshold and the high-dimensional confidence threshold, and obtaining the final component category and final component confidence score of the target component category based on the screening result and the corrected component category, comprises:
judging whether the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold;
if the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold, adjusting the target initial component confidence score according to a preset confidence adjustment parameter to obtain a corrected component confidence score corresponding to the target component category;
judging whether the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold;
and if the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold, obtaining the final component confidence score and final component category of the target component category according to the corrected component confidence score and its corresponding corrected component category.
In a possible implementation, after judging whether the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold, the method further comprises:
if the target initial component confidence score is smaller than the low-dimensional confidence threshold, discarding the target component category, the target initial component confidence score, and the corrected component category corresponding to the target component category;
alternatively, after judging whether the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold, the method further comprises:
and if the corrected component confidence score is smaller than the high-dimensional confidence threshold, discarding the target component category, the target initial component confidence score, the corrected component category corresponding to the target component category, and the corrected component confidence score.
In a possible implementation, adjusting the target initial component confidence score according to the preset confidence adjustment parameter to obtain the corrected component confidence score corresponding to the target component category comprises:
obtaining the corrected component confidence score corresponding to the target component category according to Scores_scene = [(Scores + conf) | Labels_scene], Scores ≥ α;
wherein Scores_scene is the corrected component confidence score corresponding to the target component category; Scores is the target initial component confidence score; conf is the preset confidence adjustment parameter, with value range [0.1, 0.5]; Labels_scene is the corrected component category after conversion of the target component category; α is the low-dimensional confidence threshold, with value range [0, 0.1]; and [A|B] denotes that event A occurs on the premise of event B, the result of event A being the target output.
In a possible implementation, obtaining the final component confidence score and final component category of the target component category according to the corrected component confidence score and its corresponding corrected component category comprises:
obtaining the final component confidence score and final component category of the target component category according to Scores_final = [Scores_scene | Scores_scene ≥ score_thr] and Labels_final = [Labels_scene | Scores_scene ≥ score_thr];
wherein Scores_final is the final component confidence score of the target component category; Scores_scene is the corrected component confidence score corresponding to the target component category; score_thr is the high-dimensional confidence threshold; Labels_final is the final component category of the target component category; Labels_scene is the corrected component category after conversion of the target component category corresponding to the corrected component confidence score; and [A|B] denotes that event A occurs on the premise of event B, the result of event A being the target output.
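As an illustrative worked example (the specific values are assumptions drawn from the stated ranges, not values fixed by this disclosure): let α = 0.05, conf = 0.3, and score_thr = 0.5, and let a detection labeled outer_mirror(left) with Scores = 0.32 be converted to Labels_scene = outer_mirror(right) in a right-side azimuth scene. Then:

Scores_scene = [(0.32 + 0.3) | outer_mirror(right)] = 0.62, since 0.32 ≥ α;
Scores_final = 0.62 and Labels_final = outer_mirror(right), since 0.62 ≥ score_thr.

Had Scores fallen below α, or Scores_scene below score_thr, the detection would instead have been discarded at the corresponding screening level.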
In a possible implementation, the training process of the preset azimuth-scene matching network comprises:
acquiring a training set composed of vehicle azimuth scene graphs, wherein a vehicle azimuth scene graph is a vehicle picture shot from a certain azimuth of a vehicle;
and training an initial azimuth-scene matching network according to each vehicle azimuth scene graph in the training set and its corresponding vehicle azimuth scene label, to obtain the preset azimuth-scene matching network.
In a second aspect, an embodiment of the present invention provides a vehicle component detection device, comprising:
an input module, configured to acquire a vehicle component image to be detected of a vehicle;
a first processing module, configured to perform scene detection on the vehicle component image based on a preset azimuth-scene matching network, to obtain a vehicle azimuth scene corresponding to the image;
a second processing module, configured to perform component detection on the vehicle component image based on a preset vehicle component detection model, to obtain an initial component category detection result corresponding to the image;
and a third processing module, configured to correct the initial component category detection result according to the vehicle azimuth scene, to obtain a final component category detection result corresponding to the image.
In a third aspect, an embodiment of the present invention provides a terminal, including a memory for storing a computer program and a processor for calling and running the computer program stored in the memory, to perform the steps of the method as described above in the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method as described above in the first aspect or any one of the possible implementations of the first aspect.
Embodiments of the present invention provide a vehicle component detection method, device, terminal, and storage medium. A vehicle component image to be detected is acquired; scene detection is first performed on the image based on a preset azimuth-scene matching network to obtain the vehicle azimuth scene corresponding to the image; component detection is performed on the image based on a preset vehicle component detection model to obtain an initial component category detection result; finally, the initial component category detection result is corrected according to the vehicle azimuth scene to obtain the final component category detection result. Because the concept of the vehicle azimuth scene is introduced in these embodiments, an accurate azimuth scene of the vehicle component image can first be obtained with the preset azimuth-scene matching network; correcting the initial component category detection result according to that azimuth scene then alleviates the missed and false detections of left/right and front/rear categories with similar features that occur during multi-component detection. The vehicle component detection task is thereby completed better, the accuracy and effect of vehicle component detection are improved, and the degree of intelligence and market adoption of automobile businesses based on vehicle component detection, such as intelligent loss assessment and risk control, is raised.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of an implementation of a method for detecting a vehicle component according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of the 8+1 azimuth scenes of vehicle components provided by an embodiment of the present invention;
FIG. 3 is an overall network architecture diagram of a vehicle component detection method provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of converting a target component category according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an initial component category detection result provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a final component category detection result provided by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a vehicle component detection device provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of a terminal provided by an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the following description will be made by way of specific embodiments with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of an implementation of a method for detecting a vehicle component according to an embodiment of the present invention is shown, and details are as follows:
In step 101, a vehicle component image to be detected of a vehicle is acquired.
The vehicle component image to be detected may be a vehicle picture acquired by a loss assessor in a risk-control environment, a vehicle picture uploaded by a user during intelligent loss assessment, or a vehicle picture acquired in another automobile business process, such as when picking up or returning a rental car. This embodiment does not limit how the vehicle component image to be detected is acquired, and the vehicle component detection method of this embodiment can be applied to any scenario in which vehicle components need to be detected.
In step 102, scene detection is performed on the vehicle component image based on a preset azimuth-scene matching network, and the vehicle azimuth scene corresponding to the image is obtained.
In this embodiment, the preset azimuth-scene matching network is a network that can distinguish from which azimuth scene, relative to the vehicle, the vehicle component image to be detected was acquired. Vehicle pictures taken from different azimuth scenes contain different vehicle components: for example, a picture taken from the right or right-front of a vehicle is more likely to contain the right rear-view mirror and the right front door, whereas a picture taken from the left or left-front is more likely to contain the left rear-view mirror and the left front door. Azimuth-scene rules for vehicle components are therefore defined, so that scene detection can be performed on the image based on the preset azimuth-scene matching network to obtain the corresponding vehicle azimuth scene, which helps to accurately distinguish left/right and front/rear categories with similar features during vehicle component detection.
By way of example, as shown in FIG. 2, an "8+1" set of azimuth scenes can be defined for vehicle components, where scenes 1 to 8 (Scene1 to Scene8) are the azimuth scenes directly in front of, right-front of, directly to the right of, right-rear of, directly behind, left-rear of, directly to the left of, and left-front of the vehicle, and the remaining class is the background. Defining the 8+1 azimuth scenes of vehicle components and determining the standard vehicle component set corresponding to each azimuth scene helps improve the accuracy and effect of subsequent vehicle component detection.
With the 8+1 azimuth scenes defined, the vehicle azimuth scene corresponding to the vehicle component image can be obtained quickly and accurately. On this basis, fewer or more azimuth scenes can be defined to favor scene detection speed or accuracy, respectively.
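For illustration, a hypothetical Python sketch of the mapping from azimuth scenes to standard component sets follows; the scene numbering and the membership of each set are assumptions for the example, since the disclosure only states that a standard vehicle component set is determined per azimuth scene.

```python
# Hypothetical standard component sets per azimuth scene (extract).
# Category names follow the English labels used elsewhere in this embodiment.
SCENE_STANDARD_PARTS = {
    "scene2": {   # assumed to be the right-front azimuth scene
        "outer_mirror(right)", "front_door_glass(right)", "car_right_door",
        "front_fender(right)", "head_lamp(right)", "front_window_glass",
    },
    "scene6": {   # assumed to be the left-rear azimuth scene
        "outer_mirror(left)", "rear_door_glass(left)", "car_left_door",
        "tail_lamp(left)",
    },
    # ... remaining scenes and the background class omitted for brevity
}
```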
Optionally, the training process of the preset azimuth-scene matching network may include:
acquiring a training set composed of vehicle azimuth scene graphs, wherein a vehicle azimuth scene graph is a vehicle picture shot from a certain azimuth of a vehicle;
and training an initial azimuth-scene matching network according to each vehicle azimuth scene graph in the training set and its corresponding vehicle azimuth scene label, to obtain the preset azimuth-scene matching network.
For example, 59 categories of vehicle components can be selected as experimental objects, and a small vehicle component dataset Part_data1, needed for training the preset azimuth-scene matching network, can be constructed following the construction method of the COCO dataset. The training set and test set of Part_data1 can contain 39,058 and 4,335 sample images respectively, a ratio of 9:1, covering eight vehicle azimuth scene classes plus one background class, nine classes in total. Vehicle azimuth scene graphs are similar to vehicle component images, but the relationship is not one of inclusion: the two sets partially overlap but are mostly independent of each other. Vehicle pictures shot from long-range and medium-range viewing angles can be selected as vehicle azimuth scene graphs, and vehicle pictures shot from a close-range viewing angle can also be selected.
A network similar to the deep residual network ResNet101 can be selected as the initial azimuth-scene matching network; the scene detection accuracy of the preset azimuth-scene matching network finally obtained by training is 84.4%. The training parameters can be set as follows: a learning rate lr of 0.0001, with both the batch size and the number of epochs set to 10. The preset azimuth-scene matching network trained in this embodiment can identify the azimuth scene corresponding to the vehicle well, providing a basis for detecting vehicle components based on the vehicle azimuth scene.
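A minimal PyTorch sketch of such a nine-way scene classifier is given below, assuming a recent torchvision and an ImageFolder-style layout for Part_data1; the dataset path, transform, and optimizer choice are illustrative assumptions, with only the learning rate, batch size, and epoch count taken from this embodiment.

```python
import torch
import torch.nn as nn
from torchvision import models, datasets, transforms

# Nine classes: eight azimuth scenes plus one background class.
NUM_SCENES = 9

# ResNet-101 backbone with its final fully connected layer replaced,
# matching the "network similar to ResNet101 + fully connected layer" design.
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_SCENES)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# "Part_data1/train" is a hypothetical path to the scene training set.
train_set = datasets.ImageFolder("Part_data1/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=10, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.0001)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):            # epoch count of 10, as in the embodiment
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```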
In step 103, component detection is performed on the vehicle component image based on the preset vehicle component detection model, and an initial component category detection result corresponding to the image is obtained.
The training process of the preset vehicle component detection model is similar to that of the preset azimuth-scene matching network. For example, 59 categories of vehicle components can be selected as experimental objects, and a large vehicle component dataset Part_data2, needed for training the preset vehicle component detection model, can be constructed following the construction method of the COCO dataset. The training set and test set of Part_data2 can contain 45,503 and 11,376 sample images respectively, a ratio of 3:1, containing 287,331 and 71,712 category instances respectively across the 59 categories. The vehicle component images in Part_data2 are generally photographed by loss assessors in risk-control scenes and cover long-range, medium-range, and close-range views, so the Part_data2 dataset is very large.
For example, the Mask R-CNN model can be used as the initial vehicle component detection model for training. During model training, an NVIDIA 1080Ti professional accelerator card can be used for training and testing, under the Ubuntu 16.04.6 LTS operating system, with CUDA 11.1 to accelerate training. The computer language used can be Python 3.8 and the network development framework can be PyTorch. In the training phase, the batch size can be set to 1, a stochastic gradient descent (SGD) optimizer can be used with an initial learning rate of 0.001, the learning rate can be reduced to 10% of its previous value every 9 epochs, and at most 30 epochs are trained. By using GPUs, the method can easily be extended to distributed systems.
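A minimal sketch of setting up such a Mask R-CNN for the 59 component categories with torchvision is shown below; the original discloses only the model family and training hyper-parameters, so the ResNet-50-FPN backbone and the predictor-replacement details here are assumptions.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 59 + 1  # 59 vehicle component categories + background

# torchvision ships a ResNet-50-FPN Mask R-CNN; the backbone used in the
# original embodiment is not specified, so this choice is an assumption.
model = maskrcnn_resnet50_fpn(weights="DEFAULT")
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, NUM_CLASSES)
in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, NUM_CLASSES)

# SGD with initial lr 0.001, decayed to 10% every 9 epochs, at most 30 epochs.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=9, gamma=0.1)
```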
In step 104, the initial component category detection result is corrected according to the vehicle azimuth scene, and the final component category detection result corresponding to the vehicle component image is obtained.
In this embodiment, a certain detection effect can already be obtained using the preset vehicle component detection model of step 103. However, with as many as 59 component categories, including categories distinguished only as front/rear or left/right, such as 'rear-view mirror (right)' (outer_mirror(right)) versus 'rear-view mirror (left)' (outer_mirror(left)), and 'rear door glass (right)' (rear_door_glass(right)) versus 'front door glass (right)' (front_door_glass(right)), misclassification occurs easily. It should be noted that the English names of the categories are used during training and detection.
A deep learning framework alone can hardly resolve the misclassification caused by feature similarity between categories, so the vehicle component detection method of this embodiment uses a multifunctional module: the scene understanding module. As shown in FIG. 3, the vehicle component image to be detected is first input into the preset azimuth-scene matching network, similar to ResNet101, for feature extraction, and scene detection is performed after a fully connected layer. In parallel, features of the vehicle component image are extracted through a Backbone and an FPN, a base feature map (Base Feature) is obtained using a Region Proposal Network (RPN) and a Region of Interest Pooling unit (RoI Pooling), and the base feature map is fed through a series of fully connected layers to generate component categories (Labels), component confidence scores (Scores), target box locations, and Mask masks as the initial component category detection result corresponding to the image. Finally, the scene understanding module fuses the vehicle azimuth scene obtained by scene detection with the component categories, component confidence scores, target box locations, and Mask masks obtained by component detection, so as to correct the initial component category detection result according to the vehicle azimuth scene and obtain a final component category detection result with an improved vehicle component detection effect.
Optionally, the initial component category detection result includes an initial component category and an initial component confidence score, and the final component category detection result includes a final component category and a final component confidence score.
Correcting the initial component category detection result according to the vehicle azimuth scene to obtain the final component category detection result corresponding to the vehicle component image may include:
converting each target component category among the initial component categories that does not match the standard component categories corresponding to the vehicle azimuth scene, to obtain a converted corrected component category;
and recording the initial component confidence score corresponding to the target component category as a target initial component confidence score, hierarchically screening the target initial component confidence score according to the low-dimensional confidence threshold and the high-dimensional confidence threshold, and obtaining the final component category and final component confidence score of the target component category based on the screening result and the corrected component category.
In this embodiment, the scene understanding module performs scene-driven component replacement on the Labels detected by the preset vehicle component detection model that do not match the standard component categories corresponding to the vehicle azimuth scene (that is, it converts the target component categories among the initial component categories that do not match the standard component categories), as shown in the following formula:
Labels_scene = scene[Labels];
where scene[ ] denotes the introduction of the vehicle azimuth scene. As an example, component replacement under scene 2 (Scene2) is shown in FIG. 4. To ensure the accuracy of the converted corrected component categories, the initial component confidence scores corresponding to the replaced target component categories are hierarchically screened according to the low-dimensional and high-dimensional confidence thresholds, so as to determine the final component category and final component confidence score of each target component category.
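A hypothetical Python sketch of this scene-driven replacement follows; the substitution pairs are taken from the corrections illustrated in FIGS. 5 and 6, while the per-scene table structure is an assumption.

```python
# Hypothetical scene-driven label substitution (Labels_scene = scene[Labels]).
# The pairs below mirror the example corrections shown in FIGS. 5 and 6.
SCENE_SWAPS = {
    "scene2": {   # right-side azimuth scene: left-side labels contradict it
        "outer_mirror(left)": "outer_mirror(right)",
        "car_left_door": "car_right_door",
        "front_fender(left)": "front_fender(right)",
        "head_lamp(left)": "head_lamp(right)",
    },
}

def scene_replace(scene, labels):
    """Replace detected labels that do not match the scene's standard set."""
    swaps = SCENE_SWAPS.get(scene, {})
    return [swaps.get(label, label) for label in labels]
```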
Optionally, hierarchically screening the target initial component confidence score according to the low-dimensional confidence threshold and the high-dimensional confidence threshold, and obtaining the final component category and final component confidence score of the target component category based on the screening result and the corrected component category, may include:
judging whether the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold;
if the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold, adjusting the target initial component confidence score according to the preset confidence adjustment parameter to obtain the corrected component confidence score corresponding to the target component category;
judging whether the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold;
and if the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold, obtaining the final component confidence score and final component category of the target component category according to the corrected component confidence score and its corresponding corrected component category.
Optionally, adjusting the target initial component confidence score according to the preset confidence adjustment parameter to obtain the corrected component confidence score corresponding to the target component category may include:
obtaining the corrected component confidence score corresponding to the target component category according to Scores_scene = [(Scores + conf) | Labels_scene], Scores ≥ α;
wherein Scores_scene is the corrected component confidence score corresponding to the target component category; Scores is the target initial component confidence score; conf is the preset confidence adjustment parameter, with value range [0.1, 0.5]; Labels_scene is the corrected component category after conversion of the target component category; α is the low-dimensional confidence threshold, with value range [0, 0.1]; and [A|B] denotes that event A occurs on the premise of event B, the result of event A being the target output.
Optionally, obtaining the final component confidence score and final component category of the target component category according to the corrected component confidence score and its corresponding corrected component category includes:
obtaining the final component confidence score and final component category of the target component category according to Scores_final = [Scores_scene | Scores_scene ≥ score_thr] and Labels_final = [Labels_scene | Scores_scene ≥ score_thr];
wherein Scores_final is the final component confidence score of the target component category; Scores_scene is the corrected component confidence score corresponding to the target component category; score_thr is the high-dimensional confidence threshold; Labels_final is the final component category of the target component category; Labels_scene is the corrected component category after conversion of the target component category corresponding to the corrected component confidence score; and [A|B] denotes that event A occurs on the premise of event B, the result of event A being the target output.
In this embodiment, the scene understanding module regulates the confidence scores (Scores) corresponding to the Labels, detected by the preset vehicle component detection model, that do not match the standard component categories of the vehicle azimuth scene. First, based on the low-dimensional confidence threshold α (the first-level low-dimensional parameter of the hierarchical screening target mechanism), first-level low-dimensional screening is performed according to Scores_scene = [(Scores + conf) | Labels_scene], Scores ≥ α: the target initial component confidence scores Scores greater than or equal to α are retained and adjusted with the preset confidence adjustment parameter conf. The parameter conf can be adjusted adaptively to the target initial component confidence score being regulated; when that score is low, conf can be increased appropriately so as to balance the confidence scores.
On this basis, second-level high-dimensional screening is needed after the first-level low-dimensional screening. Specifically, according to Scores_final = Scores_scene, Scores_scene ≥ score_thr, second-level high-dimensional screening in the hierarchical screening target mechanism retains the corrected component confidence scores Scores_scene greater than or equal to the high-dimensional confidence threshold score_thr. If a corrected component confidence score Scores_scene is greater than or equal to score_thr, that corrected component confidence score Scores_scene is taken as the final component confidence score, and the corrected component category Labels_scene obtained by converting the corresponding target component category is taken as the final component category.
On this basis, the scene understanding module can also correct the target box location and Mask of the corresponding target box in the initial component category detection result, based on the corrections to the initial component category and the initial component confidence score.
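A compact sketch of the two-level screening follows, assuming a fixed conf rather than the adaptive adjustment described above; the α, conf, and score_thr values are illustrative picks (score_thr in particular is not fixed by the disclosure).

```python
def hierarchical_screen(labels_scene, scores, alpha=0.05, conf=0.3, score_thr=0.5):
    """Two-level confidence screening of scene-converted detections.

    labels_scene: corrected categories after scene-driven replacement
    scores:       target initial component confidence scores
    """
    final_labels, final_scores = [], []
    for label, score in zip(labels_scene, scores):
        if score < alpha:                 # first-level low-dimensional screening
            continue                      # discard category, score, corrected label
        score_scene = score + conf        # Scores_scene = (Scores + conf) | Labels_scene
        if score_scene < score_thr:       # second-level high-dimensional screening
            continue                      # discard the corrected detection
        final_scores.append(score_scene)  # Scores_final = Scores_scene
        final_labels.append(label)        # Labels_final = Labels_scene
    return final_labels, final_scores
```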
Optionally, after judging whether the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold, the method further includes:
if the target initial component confidence score is smaller than the low-dimensional confidence threshold, discarding the target component category, the target initial component confidence score, and the corrected component category corresponding to the target component category.
Alternatively, after judging whether the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold, the method may further include:
if the corrected component confidence score is smaller than the high-dimensional confidence threshold, discarding the target component category, the target initial component confidence score, the corrected component category corresponding to the target component category, and the corrected component confidence score.
In this embodiment, a detection whose target initial component confidence score is smaller than the low-dimensional confidence threshold, or whose corrected component confidence score is smaller than the high-dimensional confidence threshold, is removed, so as to improve the accuracy of component detection.
As shown in FIG. 5 and FIG. 6, the scene understanding module of this embodiment fuses the vehicle azimuth scene with the initial component category detection result, which alleviates the misclassification of left/right and front/rear categories caused by feature similarity between categories. At the same time, the confidence regulation and hierarchical screening target mechanism in the scene understanding module can appropriately raise the confidence of component categories whose original confidence is low, so that they exceed the confidence threshold and are processed as detections, which correspondingly alleviates missed detections. For example, the front window glass, bottom edge (right), and fog lamp (right) that were missed in FIG. 5 are all detected in FIG. 6. As to false detections, outer_mirror(left) is corrected to outer_mirror(right), car_left_door to car_right_door, front_fender(left) to front_fender(right), and head_lamp(left) to head_lamp(right). The vehicle component detection method of this embodiment therefore achieves a good detection effect on multi-component detection of vehicles, mitigating both missed and false detections to a certain extent.
In this embodiment of the invention, a vehicle component image to be detected is acquired; scene detection is first performed on the image based on the preset azimuth-scene matching network to obtain the corresponding vehicle azimuth scene; component detection is performed on the image based on the preset vehicle component detection model to obtain the initial component category detection result; finally, the initial component category detection result is corrected according to the vehicle azimuth scene to obtain the final component category detection result. By introducing the concept of the vehicle azimuth scene, an accurate azimuth scene of the vehicle component image is first obtained from the preset azimuth-scene matching network; on that basis, the correction of the initial component category detection result is completed through component category conversion, confidence regulation, and the hierarchical screening target mechanism under the azimuth scene. This alleviates the missed and false detections of left/right and front/rear categories with similar features during multi-component detection, so the vehicle component detection task is completed better, the accuracy and effect of vehicle component detection are improved, and the degree of intelligence and market adoption of automobile businesses such as intelligent loss assessment and risk control based on vehicle component detection is raised.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of the present invention.
The following are device embodiments of the invention; for details not described in them, reference may be made to the corresponding method embodiments above.
Fig. 7 shows a schematic structural diagram of a vehicle component detection device according to an embodiment of the present invention. For convenience of explanation, only the parts related to the embodiment of the present invention are shown, detailed as follows:
As shown in fig. 7, the vehicle component detection device includes: an input module 71, a first processing module 72, a second processing module 73, and a third processing module 74.
The input module 71 is configured to acquire a vehicle component image to be detected of a vehicle;
the first processing module 72 is configured to perform scene detection on the vehicle component image based on a preset azimuth-scene matching network, to obtain a vehicle azimuth scene corresponding to the image;
the second processing module 73 is configured to perform component detection on the vehicle component image based on a preset vehicle component detection model, to obtain an initial component category detection result corresponding to the image;
and the third processing module 74 is configured to correct the initial component category detection result according to the vehicle azimuth scene, to obtain a final component category detection result corresponding to the image.
This device embodiment likewise first obtains the accurate azimuth scene of the vehicle component image with the preset azimuth-scene matching network and, on that basis, completes the correction of the initial component category detection result through component category conversion, confidence regulation, and the hierarchical screening target mechanism under the azimuth scene. It thereby alleviates the missed and false detections of left/right and front/rear categories with similar features during multi-component detection, completes the vehicle component detection task better, improves the accuracy and effect of vehicle component detection, and helps raise the degree of intelligence and market adoption of automobile businesses such as intelligent loss assessment and risk control based on vehicle component detection.
In a possible implementation, the initial component category detection result includes an initial component category and an initial component confidence score, and the final component category detection result includes a final component category and a final component confidence score. The third processing module 74 is configured to convert each target component category among the initial component categories that does not match the standard component categories corresponding to the vehicle azimuth scene, to obtain a converted corrected component category;
and to record the initial component confidence score corresponding to the target component category as a target initial component confidence score, hierarchically screen the target initial component confidence score according to the low-dimensional confidence threshold and the high-dimensional confidence threshold, and obtain the final component category and final component confidence score of the target component category based on the screening result and the corrected component category.
In a possible implementation, the third processing module 74 may be configured to judge whether the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold;
if the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold, adjust the target initial component confidence score according to the preset confidence adjustment parameter to obtain the corrected component confidence score corresponding to the target component category;
judge whether the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold;
and if the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold, obtain the final component confidence score and final component category of the target component category according to the corrected component confidence score and its corresponding corrected component category.
In a possible implementation, the third processing module 74 may further be configured to discard the target component category, the target initial component confidence score, and the corrected component category corresponding to the target component category if the target initial component confidence score is smaller than the low-dimensional confidence threshold;
or, if the corrected component confidence score is smaller than the high-dimensional confidence threshold, discard the target component category, the target initial component confidence score, the corrected component category corresponding to the target component category, and the corrected component confidence score.
In a possible implementation, the third processing module 74 may be configured to obtain the corrected component confidence score corresponding to the target component category according to Scores_scene = [(Scores + conf) | Labels_scene], Scores ≥ α;
wherein Scores_scene is the corrected component confidence score corresponding to the target component category; Scores is the target initial component confidence score; conf is the preset confidence adjustment parameter, with value range [0.1, 0.5]; Labels_scene is the corrected component category after conversion of the target component category; α is the low-dimensional confidence threshold, with value range [0, 0.1]; and [A|B] denotes that event A occurs on the premise of event B, the result of event A being the target output.
In a possible implementation, the third processing module 74 may be configured to obtain the final component confidence score and final component category of the target component category according to Scores_final = [Scores_scene | Scores_scene ≥ score_thr] and Labels_final = [Labels_scene | Scores_scene ≥ score_thr];
wherein Scores_final is the final component confidence score of the target component category; Scores_scene is the corrected component confidence score corresponding to the target component category; score_thr is the high-dimensional confidence threshold; Labels_final is the final component category of the target component category; Labels_scene is the corrected component category after conversion of the target component category corresponding to the corrected component confidence score; and [A|B] denotes that event A occurs on the premise of event B, the result of event A being the target output.
In a possible implementation, the training process of the preset azimuth-scene matching network includes:
acquiring a training set composed of vehicle azimuth scene graphs, wherein a vehicle azimuth scene graph is a vehicle picture shot from a certain azimuth of a vehicle;
and training an initial azimuth-scene matching network according to each vehicle azimuth scene graph in the training set and its corresponding vehicle azimuth scene label, to obtain the preset azimuth-scene matching network.
Fig. 8 is a schematic diagram of a terminal according to an embodiment of the present invention. As shown in fig. 8, the terminal 8 of this embodiment includes: a processor 80, a memory 81 and a computer program 82 stored in the memory 81 and executable on the processor 80. The steps of the various vehicle component detection method embodiments described above, such as steps 101 through 104 shown in fig. 1, are implemented when the processor 80 executes the computer program 82. Alternatively, the processor 80, when executing the computer program 82, performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules/units 71 to 74 shown in fig. 7.
By way of example, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to complete the present invention. One or more of the modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 82 in the terminal 8. For example, the computer program 82 may be split into modules/units 71 to 74 shown in fig. 7.
The terminal 8 may be a computing device such as a desktop computer, a notebook computer, a palm computer, and a cloud server. The terminal 8 may include, but is not limited to, a processor 80, a memory 81. It will be appreciated by those skilled in the art that fig. 8 is merely an example of the terminal 8 and is not intended to limit the terminal 8, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the terminal may further include an input-output device, a network access device, a bus, etc.
The processor 80 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the terminal 8, such as a hard disk or a memory of the terminal 8. The memory 81 may also be an external storage device of the terminal 8, such as a plug-in hard disk provided on the terminal 8, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like. Further, the memory 81 may also include both an internal storage unit of the terminal 8 and an external storage device. The memory 81 is used to store computer programs and other programs and data required by the terminal. The memory 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal and method may be implemented in other manners. For example, the apparatus/terminal embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by means of a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium, and the computer program, when executed by a processor, may implement the steps of the vehicle component detection method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. A vehicle component detection method, characterized by comprising:
acquiring a to-be-detected vehicle component diagram of a vehicle;
performing scene detection on the to-be-detected vehicle component diagram based on a preset azimuth scene matching network to obtain a vehicle azimuth scene corresponding to the to-be-detected vehicle component diagram;
performing component detection on the to-be-detected vehicle component diagram based on a preset vehicle component detection model to obtain an initial component category detection result corresponding to the to-be-detected vehicle component diagram;
and correcting the initial component category detection result according to the vehicle azimuth scene to obtain a final component category detection result corresponding to the to-be-detected vehicle component diagram.
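(Illustrative only, not part of the claims.) Read as a pipeline, claim 1 amounts to the following Python sketch; every name here is a hypothetical stand-in for the claimed networks and models:

```python
from typing import Callable, List, Tuple

Detection = Tuple[str, float]  # (component category, confidence score)

def detect_vehicle_components(image,
                              scene_net: Callable,  # preset azimuth scene matching network
                              detector: Callable,   # preset vehicle component detection model
                              corrector: Callable,  # scene-based correction (claims 2 to 6)
                              ) -> List[Detection]:
    scene = scene_net(image)          # vehicle azimuth scene, e.g. "front_left"
    initial = detector(image)         # initial component category detection result
    return corrector(initial, scene)  # final component category detection result
```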
2. The vehicle component detection method of claim 1, wherein the initial component category detection result includes an initial component category and an initial component confidence score, and the final component category detection result includes a final component category and a final component confidence score;
correcting the initial component category detection result according to the vehicle azimuth scene to obtain a final component category detection result corresponding to the to-be-detected vehicle component diagram, wherein the method comprises the following steps:
converting, according to standard component categories corresponding to the vehicle azimuth scene, target component categories among the initial component categories that do not match the standard component categories, to obtain converted corrected component categories;
and recording the initial component confidence score corresponding to the target component category as a target initial component confidence score, carrying out hierarchical screening on the target initial component confidence score according to a low-dimensional confidence threshold and a high-dimensional confidence threshold, and obtaining a final component category and a final component confidence score of the target component category based on a screening result and the corrected component category.
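A minimal sketch of the category conversion step, assuming hypothetical per-scene standard category sets and a conversion table (neither is published in the patent); e.g. a "left front door" detected in a right-side scene is converted to "right front door":

```python
# Hypothetical tables; the actual standard categories and conversion
# rules per vehicle azimuth scene are not disclosed in the patent.
STANDARD_CATEGORIES = {
    "right_side": {"right_front_door", "right_rear_door", "right_fender"},
}
CONVERSION = {
    "right_side": {"left_front_door": "right_front_door",
                   "left_rear_door": "right_rear_door",
                   "left_fender": "right_fender"},
}

def convert_categories(initial_detections, scene):
    """For each (category, score), convert categories that do not match
    the scene's standard categories into corrected categories."""
    corrected = []
    for category, score in initial_detections:
        if category not in STANDARD_CATEGORIES.get(scene, set()):
            category = CONVERSION.get(scene, {}).get(category, category)
        corrected.append((category, score))
    return corrected
```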
3. The vehicle component detection method according to claim 2, wherein the hierarchically screening the target initial component confidence score according to a low-dimensional confidence threshold and a high-dimensional confidence threshold, and obtaining a final component category and a final component confidence score of the target component category based on a result of the screening and the corrected component category, comprises:
judging whether the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold;
if the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold, adjusting the target initial component confidence score according to a preset confidence adjustment parameter to obtain a corrected component confidence score corresponding to the target component category;
judging whether the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold;
and if the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold, obtaining a final component confidence score and a final component category of the target component category according to the corrected component confidence score and the corrected component category corresponding to the corrected component confidence score.
4. The vehicle component detection method according to claim 3, wherein,
after determining whether the target initial component confidence score is greater than or equal to the low-dimensional confidence threshold, further comprising:
if the target initial component confidence score is smaller than the low-dimensional confidence threshold, eliminating the target component category, the target initial component confidence score, and the corrected component category corresponding to the target component category;
Alternatively, after determining whether the corrected component confidence score is greater than or equal to the high-dimensional confidence threshold, further comprising:
and if the corrected component confidence score is smaller than the high-dimensional confidence threshold, eliminating the target component category, the target initial component confidence score, the corrected component category corresponding to the target component category and the corrected component confidence score.
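Claims 3 and 4 together form a two-stage gate around the adjustment of claim 5. A hedged sketch (the patent fixes only the ranges conf ∈ [0.1, 0.5] and α ∈ [0, 0.1]; the concrete values below are assumptions):

```python
def screen_detection(score, corrected_category,
                     alpha=0.05,       # low-dimensional threshold, assumed
                     conf=0.3,         # preset confidence adjustment parameter, assumed
                     scores_thr=0.5):  # high-dimensional threshold, assumed
    """Hierarchical screening of one target initial component confidence score.
    Returns (Scores_final, Labels_final), or None if the detection is eliminated."""
    if score < alpha:             # claim 4: eliminate below the low-dimensional threshold
        return None
    score_scene = score + conf    # claim 5: corrected component confidence score
    if score_scene < scores_thr:  # claim 4: eliminate below the high-dimensional threshold
        return None
    return score_scene, corrected_category  # claim 6: final score and category
```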
5. The vehicle component detection method according to claim 3, wherein the adjusting the target initial component confidence score according to a preset confidence adjustment parameter to obtain a corrected component confidence score corresponding to the target component category includes:
obtaining a corrected component confidence score corresponding to the target component category according to Scores_scene = [(Scores + conf) | Labels_scene], wherein Scores ≥ α;
wherein Scores_scene is the corrected component confidence score corresponding to the target component category, Scores is the target initial component confidence score, conf is the preset confidence adjustment parameter with a value range of [0.1, 0.5], Labels_scene is the corrected component category converted from the target component category, α is the low-dimensional confidence threshold with a value range of [0, 0.1], and [A|B] indicates that event A occurs on the premise of event B, the result of event A being the target output.
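As a worked example with assumed values inside the claimed ranges (Scores = 0.45, conf = 0.3, α = 0.05): since 0.45 ≥ 0.05, the adjustment applies and Scores_scene = 0.45 + 0.3 = 0.75, attached to the converted category Labels_scene.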
6. The vehicle component detection method according to claim 3, wherein the obtaining the final component confidence score and the final component category of the target component category from the corrected component confidence score and the corrected component category to which the corrected component confidence score corresponds includes:
obtaining a final component confidence score and a final component category of the target component category according to Scores_final = [Scores_scene | Labels_scene], Scores_scene ≥ Scores_thr, and Labels_final = Labels_scene;
wherein Scores_final is the final component confidence score of the target component category, Scores_scene is the corrected component confidence score corresponding to the target component category, Scores_thr is the high-dimensional confidence threshold, Labels_final is the final component category of the target component category, Labels_scene is the corrected component category converted from the target component category corresponding to the corrected component confidence score, and [A|B] indicates that event A occurs on the premise of event B, the result of event A being the target output.
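Continuing the assumed example with Scores_thr = 0.5: since Scores_scene = 0.75 ≥ 0.5, the detection is retained, with Scores_final = 0.75 and Labels_final = Labels_scene.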
7. The vehicle component detection method according to any one of claims 1 to 6, wherein the training process of the preset azimuth scene matching network includes:
acquiring a training set formed of vehicle azimuth scene graphs, wherein a vehicle azimuth scene graph is an image of a vehicle shot from a certain azimuth of the vehicle;
and training an initial azimuth scene matching network according to each vehicle azimuth scene graph in the training set and the vehicle azimuth scene label corresponding to the vehicle azimuth scene graph, to obtain the preset azimuth scene matching network.
8. A vehicle component detection apparatus, characterized by comprising:
the input module is used for acquiring a to-be-detected vehicle component diagram of the vehicle;
the first processing module is used for performing scene detection on the to-be-detected vehicle component diagram based on a preset azimuth scene matching network to obtain a vehicle azimuth scene corresponding to the to-be-detected vehicle component diagram;
the second processing module is used for performing component detection on the to-be-detected vehicle component diagram based on a preset vehicle component detection model to obtain an initial component category detection result corresponding to the to-be-detected vehicle component diagram;
and the third processing module is used for correcting the initial component category detection result according to the vehicle azimuth scene to obtain a final component category detection result corresponding to the to-be-detected vehicle component diagram.
9. A terminal comprising a memory for storing a computer program and a processor for invoking and running the computer program stored in the memory to perform the method of any of claims 1 to 7.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any of the preceding claims 1 to 7.
CN202310284512.4A 2023-03-22 2023-03-22 Vehicle part detection method, device, terminal and storage medium Pending CN116468931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310284512.4A CN116468931A (en) 2023-03-22 2023-03-22 Vehicle part detection method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310284512.4A CN116468931A (en) 2023-03-22 2023-03-22 Vehicle part detection method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN116468931A true CN116468931A (en) 2023-07-21

Family

ID=87176269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310284512.4A Pending CN116468931A (en) 2023-03-22 2023-03-22 Vehicle part detection method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN116468931A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117719440A (en) * 2024-02-08 2024-03-19 零束科技有限公司 Automobile signal detection method, system and readable storage medium
CN117719440B (en) * 2024-02-08 2024-05-03 零束科技有限公司 Automobile signal detection method, system and readable storage medium

Similar Documents

Publication Publication Date Title
US20200160040A1 (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
CN107230218B (en) Method and apparatus for generating confidence measures for estimates derived from images captured by vehicle-mounted cameras
US10373024B2 (en) Image processing device, object detection device, image processing method
US20190392202A1 (en) Expression recognition method, apparatus, electronic device, and storage medium
TWI497422B (en) A system and method for recognizing license plate image
CN112686812B (en) Bank card inclination correction detection method and device, readable storage medium and terminal
Abdi et al. Deep learning traffic sign detection, recognition and augmentation
CN111507327B (en) Target detection method and device
CN109886086B (en) Pedestrian detection method based on HOG (histogram of oriented gradient) features and linear SVM (support vector machine) cascade classifier
CN112613387A (en) Traffic sign detection method based on YOLOv3
CN116468931A (en) Vehicle part detection method, device, terminal and storage medium
CN112766273A (en) License plate recognition method
CN115345905A (en) Target object tracking method, device, terminal and storage medium
CN112541394A (en) Black eye and rhinitis identification method, system and computer medium
CN114419583A (en) Yolov4-tiny target detection algorithm with large-scale features
CN111192329B (en) Sensor calibration result verification method and device and storage medium
CN116433903A (en) Instance segmentation model construction method, system, electronic equipment and storage medium
CN111832463A (en) Deep learning-based traffic sign detection method
CN116824333A (en) Nasopharyngeal carcinoma detecting system based on deep learning model
CN113903074B (en) Eye attribute classification method, device and storage medium
CN114927236A (en) Detection method and system for multiple target images
CN116263504A (en) Vehicle identification method, device, electronic equipment and computer readable storage medium
CN111126271B (en) Bayonet snap image vehicle detection method, computer storage medium and electronic equipment
CN112686129A (en) Face recognition system and method
JP4719605B2 (en) Object detection data generation device, method and program, and object detection device, method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination