CN116310579A - Vehicle damage detection method, device, equipment and storage medium

Info

Publication number
CN116310579A
CN116310579A
Authority
CN
China
Prior art keywords
feature
features
convolution
vehicle
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310319382.3A
Other languages
Chinese (zh)
Inventor
时勇杰
刘莉红
陈远旭
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202310319382.3A priority Critical patent/CN116310579A/en
Publication of CN116310579A publication Critical patent/CN116310579A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08: Insurance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention relates to artificial intelligence and provides a vehicle damage detection method, device, equipment and storage medium. According to the method, vehicle damage feature maps are extracted from a vehicle damage image based on a plurality of feature network layers; dilated (hole) convolution processing is performed on each vehicle damage feature map to obtain convolution feature maps; convolution attention processing is performed on the convolution feature maps and the vehicle damage feature maps to obtain attention features and global features; the attention features are fused to obtain local features; the local features and the global features are fused to obtain target features; and vehicle damage detection is performed on the target features to obtain detection results and result confidences, from which a vehicle damage result can be accurately generated. Furthermore, the invention also relates to blockchain technology, and the vehicle damage result may be stored in a blockchain.

Description

Vehicle damage detection method, device, equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting vehicle damage.
Background
With the development of artificial intelligence, automatic vehicle damage claim settlement systems have emerged. In such a system, when a user uploads a damage image, the automatic claim settlement system evaluates the damage condition of the vehicle from the image and, according to certain rules, gives the vehicle owner an appropriate claim amount. However, the system is constrained by the shooting scene, shooting angle and illumination conditions of the actual images, as well as by the completeness and occlusion of the damaged parts, so the automatic claim settlement system cannot accurately evaluate the loss suffered by a damaged vehicle.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a vehicle damage detection method, apparatus, device, and storage medium that can solve the technical problem of how to improve the accuracy of vehicle damage detection.
In one aspect, the present invention provides a vehicle damage detection method, including:
extracting a vehicle loss feature map of each feature network layer from the acquired vehicle loss image based on a plurality of feature network layers in the feature extraction model which is trained in advance;
carrying out dilated (hole) convolution processing on each vehicle loss feature map to obtain a plurality of convolution feature maps of each vehicle loss feature map;
performing convolution attention processing on each convolution feature map to obtain attention features, and performing convolution attention processing on each vehicle loss feature map to obtain global features;
fusing a plurality of attention features corresponding to each vehicle loss feature map to obtain local features of each vehicle loss feature map;
performing feature projection fusion processing on the local features and the global features to obtain target features of each vehicle loss feature map;
performing vehicle damage detection on the target features based on a pre-trained vehicle damage detection model to obtain a detection result of each vehicle damage feature map and a result confidence coefficient corresponding to the detection result;
and generating a vehicle loss result of the vehicle loss image according to the plurality of detection results and the result confidences.
According to a preferred embodiment of the present invention, the performing convolution attention processing on each convolution feature map to obtain an attention feature includes:
performing maximum pooling treatment on the convolution feature map to obtain a first pooling feature, and performing average pooling treatment on the convolution feature map to obtain a second pooling feature;
carrying out shared full-connection processing on the first pooling feature and the second pooling feature to obtain a first full-connection feature;
performing channel interaction processing on the convolution feature map to obtain interaction features;
performing full connection processing on the interaction features to obtain second full connection features;
fusing the first full-connection feature and the second full-connection feature to obtain a fused feature;
and performing spatial attention processing on the first full-connection feature, the second full-connection feature and the fusion feature to obtain the attention feature.
According to a preferred embodiment of the present invention, the performing channel interaction processing on the convolution feature map to obtain interaction features includes:
performing grouping convolution processing on the convolution feature map to obtain a plurality of initial features;
splicing the plurality of initial features to obtain spliced features;
performing transposition processing on the spliced features to obtain the transposed feature;
sequentially extracting a plurality of output features from the transposed feature based on feature dimensions of any of the plurality of initial features;
and splicing the plurality of output features to obtain the interaction feature.
According to a preferred embodiment of the present invention, the spatial attention processing on the first full connection feature, the second full connection feature and the fusion feature, to obtain the attention feature includes:
calculating the product of the first full-connection feature, the second full-connection feature and the fusion feature to obtain an input feature;
carrying out average pooling treatment on the input features to obtain third pooling features, and carrying out maximum pooling treatment on the input features to obtain fourth pooling features;
and carrying out convolution processing and activation processing on the third pooling feature and the fourth pooling feature to obtain the attention feature.
According to a preferred embodiment of the present invention, the performing feature projection fusion processing on the local feature and the global feature to obtain target features of each vehicle loss feature map includes:
Calculating a product feature of the local feature and the global feature;
dividing the product feature by the product of the module lengths (norms) of the local feature and the global feature to obtain an included angle feature;
based on the dimension of the global feature, performing scale transformation on the local feature and the included angle feature to obtain a first transformation feature corresponding to the local feature and a second transformation feature corresponding to the included angle feature;
and fusing the global feature, the first transformation feature and the second transformation feature to obtain the target feature.
According to a preferred embodiment of the present invention, the vehicle damage detection model includes a feature pyramid network and a prediction network, and the vehicle damage detection is performed on the target feature based on the pre-trained vehicle damage detection model, and obtaining a detection result of each vehicle damage feature map and a result confidence coefficient corresponding to the detection result includes:
performing up-sampling processing on the target features based on the feature pyramid network to obtain up-sampling features;
detecting the up-sampling feature based on a candidate frame detection layer in the prediction network to obtain a candidate frame feature;
performing activation normalization processing on the candidate frame features based on a mapping matrix in the prediction network to obtain probability vectors;
and determining the category corresponding to the element with the value larger than the preset probability threshold value in the probability vector as the detection result, and determining the element as the result confidence.
According to a preferred embodiment of the present invention, the plurality of feature network layers includes a first network layer, a second network layer, and a third network layer, the first network layer, the second network layer, and the third network layer are sequentially connected in the feature extraction model, the first network layer includes convolution layers of a plurality of branches, and the extracting a vehicle loss feature map of each feature network layer from the acquired vehicle loss image based on the plurality of feature network layers in the feature extraction model that is trained in advance includes:
carrying out convolution processing on the vehicle loss image based on the convolution layer of each branch to obtain the convolution characteristic of each branch;
performing weighted fusion processing on the plurality of convolution characteristics to obtain a vehicle loss characteristic diagram output by the first network layer;
convolving the vehicle loss feature map output by the first network layer based on the second network layer to obtain the vehicle loss feature map output by the second network layer;
and carrying out convolution processing on the vehicle loss feature map output by the second network layer based on the third network layer to obtain the vehicle loss feature map output by the third network layer.
In another aspect, the present invention also provides a vehicle damage detection device, including:
the extraction unit is used for extracting a vehicle loss feature map of each feature network layer from the acquired vehicle loss image based on a plurality of feature network layers in the feature extraction model which is trained in advance;
the convolution unit is used for carrying out dilated (hole) convolution processing on each vehicle loss feature map to obtain a plurality of convolution feature maps of each vehicle loss feature map;
the attention unit is used for carrying out convolution attention processing on each convolution feature map to obtain attention features, and carrying out convolution attention processing on each vehicle loss feature map to obtain global features;
the fusion unit is used for fusing a plurality of attention features corresponding to each vehicle loss feature map to obtain local features of each vehicle loss feature map;
the fusion unit is further used for carrying out feature projection fusion processing on the local features and the global features to obtain target features of each vehicle damage feature map;
The detection unit is used for detecting the vehicle damage to the target feature based on a pre-trained vehicle damage detection model to obtain a detection result of each vehicle damage feature map and a result confidence coefficient corresponding to the detection result;
and the generating unit is used for generating the vehicle damage result of the vehicle damage image according to the detection results and the result confidence level.
In another aspect, the present invention also proposes an electronic device, including:
a memory storing computer readable instructions; and
a processor executing the computer readable instructions stored in the memory to implement the vehicle damage detection method.
In another aspect, the present invention also proposes a computer readable storage medium having stored therein computer readable instructions that are executed by a processor in an electronic device to implement the vehicle damage detection method.
According to the above technical scheme, convolution with different dilation rates is applied to the vehicle damage feature maps, so convolution feature maps with different receptive fields can be obtained. Convolution attention processing and fusion of each convolution feature map then strengthen the characterization capability of the local features, and the target features obtained by fusing the local features and the global features serve as the final features for vehicle damage detection, which effectively improves the feature extraction capability for vehicle damage images and thus the generation accuracy of the detection results and result confidences. Further, by combining the detection results and result confidences of the vehicle damage feature maps output by the feature network layers, the vehicle damage result is detected from different feature dimensions, which further improves the detection accuracy of the vehicle damage result.
Drawings
FIG. 1 is a flow chart of a vehicle damage detection method according to a preferred embodiment of the present invention.
Fig. 2 is a schematic diagram of a network framework for attention processing in the vehicle damage detection method of the present invention.
Fig. 3 is a schematic diagram of a network frame generated by a detection result in the vehicle damage detection method of the present invention.
FIG. 4 is a functional block diagram of a vehicle damage detection device according to a preferred embodiment of the present invention.
Fig. 5 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention for implementing a vehicle damage detection method.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flow chart of a vehicle damage detection method according to a preferred embodiment of the present invention. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs.
The vehicle damage detection method can acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The vehicle damage detection method is applied to one or more electronic devices, wherein the electronic devices are devices capable of automatically performing numerical calculation and/or information processing according to preset or stored computer readable instructions, and the hardware comprises, but is not limited to, microprocessors, application specific integrated circuits (Application Specific Integrated Circuit, ASICs), programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices and the like.
The electronic device may be any electronic product that can interact with a user in a human-computer manner, such as a personal computer, tablet computer, smart phone, personal digital assistant (Personal Digital Assistant, PDA), game console, interactive internet protocol television (Internet Protocol Television, IPTV), smart wearable device, etc.
The electronic device may comprise a network device and/or a user device. The network device includes, but is not limited to, a single network electronic device, a group of electronic devices made up of multiple network electronic devices, or a cloud computing (Cloud Computing) based cluster made up of a large number of hosts or network electronic devices.
The network in which the electronic device is located includes, but is not limited to: the internet, wide area networks, metropolitan area networks, local area networks, virtual private networks (Virtual Private Network, VPN), etc.
101, extracting a vehicle loss feature map of each feature network layer from the acquired vehicle loss image based on a plurality of feature network layers in the feature extraction model which is trained in advance.
In at least one embodiment of the present invention, the plurality of feature network layers includes a first network layer, a second network layer, and a third network layer, the first network layer, the second network layer, and the third network layer are sequentially connected in the feature extraction model, and the first network layer includes a convolution layer of a plurality of branches.
The second network layer may include a convolutional network and the third network layer may include a convolutional network.
The vehicle damage image can be an image uploaded by a claim settlement user on a vehicle insurance claim settlement system, and the electronic equipment acquires the image from the vehicle insurance claim settlement system as the vehicle damage image when detecting that the vehicle insurance claim settlement system has the image uploaded.
The feature dimensions of the plurality of vehicle loss feature maps are different from one another.
In at least one embodiment of the present invention, the electronic device extracting a loss feature map of each feature network layer from the obtained loss image based on a plurality of feature network layers in the feature extraction model that is trained in advance includes:
carrying out convolution processing on the vehicle loss image based on the convolution layer of each branch to obtain the convolution characteristic of each branch;
performing weighted fusion processing on the plurality of convolution characteristics to obtain a vehicle loss characteristic diagram output by the first network layer;
convolving the vehicle loss feature map output by the first network layer based on the second network layer to obtain the vehicle loss feature map output by the second network layer;
and carrying out convolution processing on the vehicle loss feature map output by the second network layer based on the third network layer to obtain the vehicle loss feature map output by the third network layer.
For example, denote the convolution feature of branch A by F_A, the convolution feature of branch B by F_B, and the convolution feature of branch C by F_C. If the weight ratio of branch A, branch B and branch C is 3:5:2, the convolution features are weighted and fused to obtain the vehicle loss feature map output by the first network layer:

F_out = 0.3·F_A + 0.5·F_B + 0.2·F_C
The convolution layers of the branches perform convolution processing on the vehicle loss image simultaneously, which improves the generation efficiency of the convolution features and hence of the vehicle loss feature map; in addition, weighting and fusing the convolution features improves the generation accuracy of the vehicle loss feature map.
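Purely as an illustration, a minimal PyTorch sketch of such a multi-branch first network layer is given below. The branch count, kernel sizes and channel numbers are assumptions for the example; the 0.3/0.5/0.2 weights follow the 3:5:2 ratio above.

```python
import torch
import torch.nn as nn

class WeightedBranchFusion(nn.Module):
    def __init__(self, in_ch=3, out_ch=64):
        super().__init__()
        # Three parallel branches with different kernel sizes (assumed).
        self.branch_a = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.branch_b = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.branch_c = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        # Fusion weights normalized from the 3:5:2 ratio in the example.
        self.register_buffer("w", torch.tensor([0.3, 0.5, 0.2]))

    def forward(self, x):
        fa, fb, fc = self.branch_a(x), self.branch_b(x), self.branch_c(x)
        # Weighted fusion: F_out = 0.3*F_A + 0.5*F_B + 0.2*F_C
        return self.w[0] * fa + self.w[1] * fb + self.w[2] * fc

feat1 = WeightedBranchFusion()(torch.randn(1, 3, 224, 224))  # -> (1, 64, 224, 224)
```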
102, carrying out dilated (hole) convolution processing on each vehicle loss feature map to obtain a plurality of convolution feature maps of each vehicle loss feature map.
In at least one embodiment of the present invention, the plurality of convolution feature maps of each vehicle loss feature map are obtained by the electronic device convolving the vehicle loss feature map with a plurality of convolution layers having different dilation rates.

In at least one embodiment of the present invention, the electronic device performs dilated convolution processing on each vehicle loss feature map to obtain a plurality of convolution feature maps of each vehicle loss feature map, including:

performing dilated convolution processing on each vehicle loss feature map based on a plurality of different dilation rates to obtain a plurality of convolution feature maps of each vehicle loss feature map at the plurality of different dilation rates.

The plurality of different dilation rates may include, but is not limited to: 2, 4, 8, etc.

Convolution feature maps with different receptive fields are obtained through the different dilation rates.
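A minimal sketch of this step, assuming 3x3 kernels, the example rates 2, 4 and 8, and an illustrative channel count, might look as follows:

```python
import torch
import torch.nn as nn

class MultiRateDilation(nn.Module):
    def __init__(self, channels=64, rates=(2, 4, 8)):
        super().__init__()
        # padding == dilation keeps the spatial size for 3x3 kernels.
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates
        )

    def forward(self, feat):
        # One convolution feature map per dilation rate / receptive field.
        return [conv(feat) for conv in self.convs]

maps = MultiRateDilation()(torch.randn(1, 64, 56, 56))  # three 56x56 maps
```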
103, performing convolution attention processing on each convolution feature map to obtain attention features, and performing convolution attention processing on each vehicle loss feature map to obtain global features.
In at least one embodiment of the present invention, the attention feature refers to a feature generated after the convolution feature map passes through the attention-processing network frame as shown in fig. 2, and the global feature refers to a feature generated after the vehicle loss feature map passes through the attention-processing network frame as shown in fig. 2.
The network framework of the attention processing shown in fig. 2 includes a maximum pooling network, an average pooling network, a channel interaction network, fully connected networks 1 and 2, a fusion network and a spatial attention network.
In at least one embodiment of the present invention, the electronic device performs convolution attention processing on each convolution feature map, and obtaining the attention feature includes:
performing maximum pooling treatment on the convolution feature map to obtain a first pooling feature, and performing average pooling treatment on the convolution feature map to obtain a second pooling feature;
carrying out shared full-connection processing on the first pooling feature and the second pooling feature to obtain a first full-connection feature;
performing channel interaction processing on the convolution feature map to obtain interaction features;
performing full connection processing on the interaction features to obtain second full connection features;
fusing the first full-connection feature and the second full-connection feature to obtain a fused feature;
and performing spatial attention processing on the first full-connection feature, the second full-connection feature and the fusion feature to obtain the attention feature.
In fig. 2, when the Feature map in fig. 2 represents the convolution Feature map, the Feature output by the maximum pooling network is the first pooling Feature, the Feature output by the average pooling network is the second pooling Feature, the Feature output by the fully-connected network 1 is the first fully-connected Feature, the Feature output by the channel interaction network is the interaction Feature, the Feature output by the fully-connected network 2 is the second fully-connected Feature, the Feature output by the fusion network is the fusion Feature, and the Refined Feature output by the spatial attention network represents the attention Feature.
According to this embodiment, both average pooling and maximum pooling are taken into account when compressing the spatial dimension, so each pixel of the convolution feature map is considered together with the pixel that responds most strongly during gradient back propagation, which improves the characterization effect of the first full-connection feature; meanwhile, performing channel interaction processing on the convolution feature map achieves channel information interaction and improves the characterization capability of the interaction feature, so the characterization capability of the attention feature is doubly improved.
Specifically, the electronic device performing shared full-connection processing on the first pooling feature and the second pooling feature to obtain the first full-connection feature includes:
and carrying out full connection processing on the first pooling feature and the second pooling feature respectively, and then carrying out averaging to obtain the first full connection feature.
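One possible reading of this shared full-connection step is sketched below: the max-pooled and average-pooled descriptors pass through the same MLP and the two outputs are averaged, as described above. The reduction ratio of the shared MLP is an assumption.

```python
import torch
import torch.nn as nn

class SharedFC(nn.Module):
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, feat):                        # feat: (B, C, H, W)
        p_max = feat.amax(dim=(2, 3))               # first pooling feature
        p_avg = feat.mean(dim=(2, 3))               # second pooling feature
        # Shared MLP applied to both, then averaged (first FC feature).
        return (self.mlp(p_max) + self.mlp(p_avg)) / 2
```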
Specifically, the electronic device performs channel interaction processing on the convolution feature map, and the obtaining interaction features includes:
performing grouping convolution processing on the convolution feature map to obtain a plurality of initial features;
splicing the plurality of initial features to obtain spliced features;
performing transposition processing on the spliced features to obtain the transposed feature;
sequentially extracting a plurality of output features from the transposed feature based on feature dimensions of any of the plurality of initial features;
and splicing the plurality of output features to obtain the interaction feature.
For example, the transposed feature is:

[1 7 8 2]
[5 10 3 9]
[11 4 6 12]

If the feature dimension of any initial feature is 4, the plurality of output features extracted sequentially are: [1 7 8 2], [5 10 3 9], [11 4 6 12].
By performing the group convolution processing on the convolution feature map, the accuracy of the interaction feature can be ensured, and meanwhile, the generation efficiency of the interaction feature can be improved.
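The splice-transpose-extract-splice procedure corresponds to the "channel shuffle" operation known from ShuffleNet, which lets information cross the group boundaries that grouped convolution introduces. A sketch under that reading is given below; the group count is an assumption, and the grouped convolution producing the initial features is omitted.

```python
import torch

def channel_interaction(feat: torch.Tensor, groups: int = 4) -> torch.Tensor:
    # feat plays the role of the spliced grouped-convolution output.
    b, c, h, w = feat.shape
    assert c % groups == 0
    # Transpose across the group dimension, then read the result back
    # sequentially in chunks of the original per-group dimension.
    out = feat.view(b, groups, c // groups, h, w)
    out = out.transpose(1, 2).reshape(b, c, h, w)
    return out  # interaction feature
```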
Specifically, the electronic device performs spatial attention processing on the first fully-connected feature, the second fully-connected feature and the fusion feature, and the obtaining the attention feature includes:
calculating the product of the first full-connection feature, the second full-connection feature and the fusion feature to obtain an input feature;
carrying out average pooling treatment on the input features to obtain third pooling features, and carrying out maximum pooling treatment on the input features to obtain fourth pooling features;
and carrying out convolution processing and activation processing on the third pooling feature and the fourth pooling feature to obtain the attention feature.
By performing convolution processing and activation processing on the third pooling feature and the fourth pooling feature, a nonlinear factor can be introduced, and the adaptability of the attention feature can be improved.
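A sketch of one way to realize this spatial attention step follows. The text does not spell out how the three channel descriptors recombine with the spatial map, so this version broadcasts their product onto the convolution feature map before channel-wise pooling; the 7x7 kernel and the sigmoid activation are likewise assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, fc1, fc2, fused, feat):
        # fc1/fc2/fused: (B, C) channel descriptors; feat: (B, C, H, W).
        # Product of the three descriptors, broadcast onto the map (assumed).
        gate = (fc1 * fc2 * fused)[..., None, None] * feat   # input feature
        p_avg = gate.mean(dim=1, keepdim=True)               # third pooling feature
        p_max = gate.amax(dim=1, keepdim=True)               # fourth pooling feature
        # Convolution + activation over the pooled maps.
        attn = torch.sigmoid(self.conv(torch.cat([p_avg, p_max], dim=1)))
        return gate * attn                                   # attention feature
```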
In other embodiments, the global feature is generated in a similar manner to the attention feature, which is not described in detail in this application.
104, fusing a plurality of attention features corresponding to each vehicle loss feature map to obtain local features of each vehicle loss feature map.
In at least one embodiment of the present invention, the local feature refers to feature information corresponding to a defective area in the vehicle damage image.
In at least one embodiment of the present invention, the electronic device performs a scale unification process on the plurality of attention features, and further, the electronic device performs a weighting and fusion process on the scale unification processed features to obtain the local features.
Unifying the scales of the plurality of attention features facilitates feature fusion and improves the effectiveness of the generated local features.
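A minimal sketch of the scale unification and weighted fusion, assuming bilinear interpolation to a common size and equal weights, could be:

```python
import torch
import torch.nn.functional as F

def fuse_attention_features(attn_feats, weights=None):
    # Unify all attention features to the first map's spatial scale.
    target = attn_feats[0].shape[-2:]
    unified = [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
               for f in attn_feats]
    # Weighted sum yields the local feature; equal weights assumed.
    weights = weights or [1.0 / len(unified)] * len(unified)
    return sum(w * f for w, f in zip(weights, unified))
```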
105, performing feature projection fusion processing on the local features and the global features to obtain target features of each vehicle damage feature map.
In at least one embodiment of the invention, the target feature refers to a final feature for vehicle loss detection.
In at least one embodiment of the present invention, the performing, by the electronic device, feature projection fusion processing on the local feature and the global feature, to obtain target features of each vehicle loss feature map includes:
calculating a product feature of the local feature and the global feature;
dividing the product feature by the product of the module lengths (norms) of the local feature and the global feature to obtain an included angle feature;
based on the dimension of the global feature, performing scale transformation on the local feature and the included angle feature to obtain a first transformation feature corresponding to the local feature and a second transformation feature corresponding to the included angle feature;
and fusing the global feature, the first transformation feature and the second transformation feature to obtain the target feature.
By the implementation mode, the representation capability of the target feature on the flaw area in the vehicle damage image can be further improved.
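The description leaves the tensor shapes and the exact "scale transformation" open. The sketch below assumes equal-shaped local and global features, computes the included angle feature as the element-wise product divided by the product of the two norms (a cosine-style quantity), realizes the scale transformations as learned 1x1 convolutions, and fuses by summation; all of these concrete choices are assumptions.

```python
import torch
import torch.nn as nn

class ProjectionFusion(nn.Module):
    def __init__(self, channels=64):  # channel count is an assumption
        super().__init__()
        self.t_local = nn.Conv2d(channels, channels, 1)  # first transformation
        self.t_angle = nn.Conv2d(channels, channels, 1)  # second transformation

    def forward(self, local_feat, global_feat):          # both (B, C, H, W) assumed
        prod = local_feat * global_feat                  # product feature
        norm = (local_feat.flatten(1).norm(dim=1) *
                global_feat.flatten(1).norm(dim=1)).clamp_min(1e-8)
        angle = prod / norm.view(-1, 1, 1, 1)            # included angle feature
        # Scale transformation read as 1x1 convolutions, additive fusion.
        return global_feat + self.t_local(local_feat) + self.t_angle(angle)
```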
106, performing vehicle damage detection on the target features based on the pre-trained vehicle damage detection model to obtain a detection result of each vehicle damage feature map and a result confidence corresponding to the detection result.
In at least one embodiment of the invention, the vehicle damage detection model includes a feature pyramid network and a prediction network. The feature pyramid network is used for performing dimension reduction processing on the features. The prediction network is used for predicting a damage result corresponding to the vehicle damage image.
The detection result refers to the result obtained after vehicle damage detection is performed on each vehicle damage feature map, and may include the damage type of the damaged part, for example: the fender is scratched. The result confidence refers to the prediction probability given by the vehicle damage detection network for the detection result.
As shown in fig. 3, fig. 3 is a schematic diagram of a network frame generated by a detection result in the vehicle damage detection method of the present invention. In fig. 3, each prediction network outputs a corresponding detection result and result confidence level for each vehicle loss feature map.
In at least one embodiment of the present invention, the electronic device performing vehicle damage detection on the target feature based on a pre-trained vehicle damage detection model, and obtaining a detection result of each vehicle damage feature map and a result confidence corresponding to the detection result includes:
performing up-sampling processing on the target features based on the feature pyramid network to obtain up-sampling features;
detecting the up-sampling feature based on a candidate frame detection layer in the prediction network to obtain a candidate frame feature;
performing activation normalization processing on the candidate frame features based on a mapping matrix in the prediction network to obtain probability vectors;
and determining the category corresponding to the element with the value larger than the preset probability threshold value in the probability vector as the detection result, and determining the element as the result confidence.
The mapping matrix and the preset probability threshold belong to super parameters in the vehicle damage detection model.
The feature pyramid network further reduces the dimension of the features when generating the up-sampling feature, which facilitates detection of the up-sampling feature by the candidate frame detection layer and improves the generation efficiency of the candidate frame feature; screening the detection results with the preset probability threshold further improves the screening efficiency of the detection results.
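Purely as an illustration of the prediction flow just described, the sketch below upsamples the target feature, derives a pooled candidate frame feature, applies a learned mapping matrix with softmax as the activation normalization, and keeps classes above the threshold. The layer shapes, the spatial pooling and the 0.4 threshold are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DamageHead(nn.Module):
    def __init__(self, channels=64, num_classes=20, threshold=0.4):
        super().__init__()
        self.box_layer = nn.Conv2d(channels, channels, 3, padding=1)  # candidate frames
        self.mapping = nn.Linear(channels, num_classes)               # mapping matrix
        self.threshold = threshold

    def forward(self, target_feat):                      # (B, C, H, W)
        up = F.interpolate(target_feat, scale_factor=2, mode="nearest")
        box_feat = self.box_layer(up).mean(dim=(2, 3))   # pooled candidate frame feature
        probs = torch.softmax(self.mapping(box_feat), dim=1)  # activation normalization
        conf, cls = probs.max(dim=1)                     # result confidence + class
        keep = conf > self.threshold
        return cls[keep], conf[keep]                     # detection result, confidence
```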
107, generating a vehicle loss result of the vehicle loss image according to a plurality of detection results and the result confidence level.
It should be emphasized that, to further ensure the privacy and security of the vehicle damage result, the vehicle damage result may also be stored in a node of a blockchain.

In at least one embodiment of the present invention, the vehicle damage result refers to the detection result with the highest result confidence.
In at least one embodiment of the present invention, the generating, by the electronic device, the vehicle damage result of the vehicle damage image according to the plurality of detection results and the result confidences includes:
if the plurality of detection results are all the same, determining that detection result as the vehicle damage result; or

if the plurality of detection results are not all the same, selecting the vehicle damage result from the plurality of detection results based on the result confidences.
For example, the plurality of detection results includes: detection result A is a fender scratch, detection result B is a wheel arch scratch, and detection result C is a fender scratch, with result confidences of 0.45, 0.51 and 0.50 respectively. The total confidence of "fender scratch" is calculated as 0.45 + 0.50 = 0.95, while the confidence of "wheel arch scratch" is 0.51, so the vehicle damage result is: "fender scratch".

According to this embodiment, when the plurality of detection results are not all identical, the vehicle damage result can be selected accurately and rapidly through the result confidences.
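The selection rule, including the confidence summation of the example above, can be sketched as follows:

```python
from collections import defaultdict

def vehicle_damage_result(detections):
    # detections: list of (category, confidence) pairs, one per feature map.
    cats = {c for c, _ in detections}
    if len(cats) == 1:                 # all detection results identical
        return cats.pop()
    totals = defaultdict(float)        # otherwise sum confidences per category
    for cat, conf in detections:
        totals[cat] += conf
    return max(totals, key=totals.get)

# Example from the text: fender scratch totals 0.45 + 0.50 = 0.95,
# wheel arch scratch totals 0.51, so "fender scratch" is returned.
print(vehicle_damage_result([("fender scratch", 0.45),
                             ("wheel arch scratch", 0.51),
                             ("fender scratch", 0.50)]))
```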
According to the above technical scheme, convolution with different dilation rates is applied to the vehicle damage feature maps, so convolution feature maps with different receptive fields can be obtained. Convolution attention processing and fusion of each convolution feature map then strengthen the characterization capability of the local features, and the target features obtained by fusing the local features and the global features serve as the final features for vehicle damage detection, which effectively improves the feature extraction capability for vehicle damage images and thus the generation accuracy of the detection results and result confidences. Further, by combining the detection results and result confidences of the vehicle damage feature maps output by the feature network layers, the vehicle damage result is detected from different feature dimensions, which further improves the detection accuracy of the vehicle damage result.
Fig. 4 is a functional block diagram of a vehicle damage detection device according to a preferred embodiment of the present invention. The vehicle damage detection device 11 includes an extraction unit 110, a convolution unit 111, an attention unit 112, a fusion unit 113, a detection unit 114, and a generation unit 115. The module/unit referred to herein is a series of computer readable instructions capable of being retrieved by the processor 13 and performing a fixed function and stored in the memory 12. In the present embodiment, the functions of the respective modules/units will be described in detail in the following embodiments.
An extracting unit 110, configured to extract a vehicle loss feature map of each feature network layer from the obtained vehicle loss image based on a plurality of feature network layers in the feature extraction model that is trained in advance;
a convolution unit 111, configured to perform dilated (hole) convolution processing on each vehicle loss feature map, so as to obtain a plurality of convolution feature maps of each vehicle loss feature map;
an attention unit 112, configured to perform convolution attention processing on each convolution feature map to obtain an attention feature, and perform convolution attention processing on each vehicle loss feature map to obtain a global feature;
a fusion unit 113, configured to fuse a plurality of attention features corresponding to each vehicle loss feature map, so as to obtain a local feature of each vehicle loss feature map;
the fusion unit 113 is further configured to perform feature projection fusion processing on the local feature and the global feature to obtain a target feature of each vehicle loss feature map;
the detecting unit 114 is configured to perform vehicle damage detection on the target feature based on a pre-trained vehicle damage detection model, so as to obtain a detection result of each vehicle damage feature map and a result confidence coefficient corresponding to the detection result;
and the generating unit 115 is configured to generate a vehicle loss result of the vehicle loss image according to a plurality of detection results and the result confidence degrees.
In at least one embodiment of the present invention, the attention unit 112 is further configured to perform a maximum pooling process on the convolution feature map to obtain a first pooled feature, and perform an average pooling process on the convolution feature map to obtain a second pooled feature;
carrying out shared full-connection processing on the first pooling feature and the second pooling feature to obtain a first full-connection feature;
performing channel interaction processing on the convolution feature map to obtain interaction features;
performing full connection processing on the interaction features to obtain second full connection features;
fusing the first full-connection feature and the second full-connection feature to obtain a fused feature;
and performing spatial attention processing on the first full-connection feature, the second full-connection feature and the fusion feature to obtain the attention feature.
In at least one embodiment of the present invention, the attention unit 112 is further configured to perform a group convolution process on the convolution feature map to obtain a plurality of initial features;
splicing the plurality of initial features to obtain spliced features;
performing transposition processing on the spliced features to obtain the transposed feature;
sequentially extracting a plurality of output features from the transposed feature based on feature dimensions of any of the plurality of initial features;
and splicing the plurality of output features to obtain the interaction feature.
In at least one embodiment of the present invention, the attention unit 112 is further configured to calculate a product of the first fully-connected feature, the second fully-connected feature, and the fusion feature to obtain an input feature;
carrying out average pooling treatment on the input features to obtain third pooling features, and carrying out maximum pooling treatment on the input features to obtain fourth pooling features;
and carrying out convolution processing and activation processing on the third pooling feature and the fourth pooling feature to obtain the attention feature.
In at least one embodiment of the present invention, the fusing unit 113 is further configured to calculate a product feature of the local feature and the global feature;
dividing the product feature by the product of the module lengths (norms) of the local feature and the global feature to obtain an included angle feature;
based on the dimension of the global feature, performing scale transformation on the local feature and the included angle feature to obtain a first transformation feature corresponding to the local feature and a second transformation feature corresponding to the included angle feature;
and fusing the global feature, the first transformation feature and the second transformation feature to obtain the target feature.
In at least one embodiment of the present invention, the vehicle damage detection model includes a feature pyramid network and a prediction network, and the detection unit 114 is further configured to perform upsampling processing on the target feature based on the feature pyramid network to obtain an upsampled feature;
detecting the up-sampling feature based on a candidate frame detection layer in the prediction network to obtain a candidate frame feature;
performing activation normalization processing on the candidate frame features based on a mapping matrix in the prediction network to obtain probability vectors;
and determining the category corresponding to the element with the value larger than the preset probability threshold value in the probability vector as the detection result, and determining the element as the result confidence.
In at least one embodiment of the present invention, the plurality of feature network layers include a first network layer, a second network layer, and a third network layer, where the first network layer, the second network layer, and the third network layer are sequentially connected in the feature extraction model, the first network layer includes convolution layers of a plurality of branches, and the extraction unit 110 is further configured to perform convolution processing on the vehicle loss image based on the convolution layer of each branch, to obtain a convolution feature of each branch;
performing weighted fusion processing on the plurality of convolution characteristics to obtain a vehicle loss characteristic diagram output by the first network layer;
convolving the vehicle loss feature map output by the first network layer based on the second network layer to obtain the vehicle loss feature map output by the second network layer;
and carrying out convolution processing on the vehicle loss feature map output by the second network layer based on the third network layer to obtain the vehicle loss feature map output by the third network layer.
According to the above technical scheme, convolution with different dilation rates is applied to the vehicle damage feature maps, so convolution feature maps with different receptive fields can be obtained. Convolution attention processing and fusion of each convolution feature map then strengthen the characterization capability of the local features, and the target features obtained by fusing the local features and the global features serve as the final features for vehicle damage detection, which effectively improves the feature extraction capability for vehicle damage images and thus the generation accuracy of the detection results and result confidences. Further, by combining the detection results and result confidences of the vehicle damage feature maps output by the feature network layers, the vehicle damage result is detected from different feature dimensions, which further improves the detection accuracy of the vehicle damage result.
Fig. 5 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention for implementing a vehicle damage detection method.
In one embodiment of the invention, the electronic device 1 includes, but is not limited to, a memory 12, a processor 13, and computer readable instructions, such as a vehicle damage detection program, stored in the memory 12 and executable on the processor 13.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the electronic device 1 and does not constitute a limitation of the electronic device 1; it may include more or fewer components than illustrated, combine certain components, or use different components. For example, the electronic device 1 may further include input/output devices, network access devices, buses, etc.
The processor 13 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. The general purpose processor may be a microprocessor or the processor may be any conventional processor, etc., and the processor 13 is an operation core and a control center of the electronic device 1, connects various parts of the entire electronic device 1 using various interfaces and lines, and executes an operating system of the electronic device 1 and various installed applications, program codes, etc.
Illustratively, the computer readable instructions may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to complete the present invention. The one or more modules/units may be a series of computer readable instructions capable of performing a specific function, the computer readable instructions describing a process of executing the computer readable instructions in the electronic device 1. For example, the computer readable instructions may be partitioned into an extraction unit 110, a convolution unit 111, an attention unit 112, a fusion unit 113, a detection unit 114, and a generation unit 115.
The memory 12 may be used to store the computer readable instructions and/or modules, and the processor 13 may implement various functions of the electronic device 1 by executing or executing the computer readable instructions and/or modules stored in the memory 12 and invoking data stored in the memory 12. The memory 12 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data created according to the use of the electronic device, etc. Memory 12 may include non-volatile and volatile memory, such as: a hard disk, memory, plug-in hard disk, smart Media Card (SMC), secure Digital (SD) Card, flash Card (Flash Card), at least one disk storage device, flash memory device, or other storage device.
The memory 12 may be an external memory and/or an internal memory of the electronic device 1. Further, the memory 12 may be a physical memory, such as a memory bank, a TF Card (Trans-flash Card), or the like.
The integrated modules/units of the electronic device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the present invention may also be implemented by implementing all or part of the processes in the methods of the embodiments described above, by instructing the associated hardware by means of computer readable instructions, which may be stored in a computer readable storage medium, the computer readable instructions, when executed by a processor, implementing the steps of the respective method embodiments described above.
The computer readable instructions comprise computer readable instruction code which may be in the form of source code, object code, an executable file, or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer readable instruction code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), and a random access memory (RAM).
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. The blockchain (Blockchain) is essentially a decentralized database: a string of data blocks generated in association by cryptographic means, each data block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
In connection with fig. 1, the memory 12 in the electronic device 1 stores computer readable instructions for implementing a vehicle damage detection method, and the processor 13 executes the computer readable instructions to implement:
extracting a vehicle loss feature map of each feature network layer from the acquired vehicle loss image based on a plurality of feature network layers in the feature extraction model which is trained in advance;
carrying out dilated (hole) convolution processing on each vehicle loss feature map to obtain a plurality of convolution feature maps of each vehicle loss feature map;
performing convolution attention processing on each convolution feature map to obtain attention features, and performing convolution attention processing on each vehicle loss feature map to obtain global features;
fusing a plurality of attention features corresponding to each vehicle loss feature map to obtain local features of each vehicle loss feature map;
performing feature projection fusion processing on the local features and the global features to obtain target features of each vehicle loss feature map;
performing vehicle damage detection on the target features based on a pre-trained vehicle damage detection model to obtain a detection result of each vehicle damage feature map and a result confidence coefficient corresponding to the detection result;
and generating a vehicle loss result of the vehicle loss image according to the detection results and the result confidence.
In particular, the specific implementation method of the processor 13 on the computer readable instructions may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The computer readable storage medium has stored thereon computer readable instructions, wherein the computer readable instructions when executed by the processor 13 are configured to implement the steps of:
extracting a vehicle loss feature map of each feature network layer from the acquired vehicle loss image based on a plurality of feature network layers in the feature extraction model which is trained in advance;
carrying out dilated (hole) convolution processing on each vehicle loss feature map to obtain a plurality of convolution feature maps of each vehicle loss feature map;
performing convolution attention processing on each convolution feature map to obtain attention features, and performing convolution attention processing on each vehicle loss feature map to obtain global features;
fusing a plurality of attention features corresponding to each vehicle loss feature map to obtain local features of each vehicle loss feature map;
performing feature projection fusion processing on the local features and the global features to obtain target features of each vehicle loss feature map;
performing vehicle damage detection on the target features based on a pre-trained vehicle damage detection model to obtain a detection result of each vehicle damage feature map and a result confidence coefficient corresponding to the detection result;
and generating a vehicle loss result of the vehicle loss image according to the detection results and the result confidence.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claims concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the claims may also be implemented by a single unit or means in software or hardware. The terms "first", "second", and the like are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solution of the present invention without departing from its spirit and scope.

Claims (10)

1. A vehicle damage detection method, characterized in that the vehicle damage detection method comprises:
extracting a vehicle damage feature map of each feature network layer from the acquired vehicle damage image based on a plurality of feature network layers in a pre-trained feature extraction model;
performing dilated convolution processing on each vehicle damage feature map to obtain a plurality of convolution feature maps of each vehicle damage feature map;
performing convolution attention processing on each convolution feature map to obtain attention features, and performing convolution attention processing on each vehicle damage feature map to obtain global features;
fusing the plurality of attention features corresponding to each vehicle damage feature map to obtain local features of each vehicle damage feature map;
performing feature projection fusion processing on the local features and the global features to obtain target features of each vehicle damage feature map;
performing vehicle damage detection on the target features based on a pre-trained vehicle damage detection model to obtain a detection result of each vehicle damage feature map and a result confidence corresponding to the detection result;
and generating a vehicle damage result for the vehicle damage image according to the detection results and the result confidences.
2. The vehicle damage detection method according to claim 1, wherein the performing convolution attention processing on each convolution feature map to obtain attention features comprises:
performing max pooling processing on the convolution feature map to obtain a first pooling feature, and performing average pooling processing on the convolution feature map to obtain a second pooling feature;
performing shared fully connected processing on the first pooling feature and the second pooling feature to obtain a first fully connected feature;
performing channel interaction processing on the convolution feature map to obtain an interaction feature;
performing fully connected processing on the interaction feature to obtain a second fully connected feature;
fusing the first fully connected feature and the second fully connected feature to obtain a fused feature;
and performing spatial attention processing on the first fully connected feature, the second fully connected feature, and the fused feature to obtain the attention feature.
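A minimal sketch of the channel half of this claim, assuming a PyTorch implementation: the reduction ratio of 16 and the global average pooling applied before the interaction FC are our assumptions, and the interaction feature itself is built as sketched under claim 3 below.

```python
import torch.nn as nn
import torch.nn.functional as F

class ChannelBranch(nn.Module):
    """Pooling + shared-FC branch of the convolution attention (a sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.shared_fc = nn.Sequential(   # one FC stack shared by both poolings
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.inter_fc = nn.Linear(channels, channels)

    def forward(self, x, interaction):
        p1 = self.shared_fc(F.adaptive_max_pool2d(x, 1).flatten(1))  # first pooling feature
        p2 = self.shared_fc(F.adaptive_avg_pool2d(x, 1).flatten(1))  # second pooling feature
        fc1 = p1 + p2                                                # first fully connected feature
        fc2 = self.inter_fc(F.adaptive_avg_pool2d(interaction, 1).flatten(1))  # second FC feature
        fused = fc1 + fc2                                            # fused feature
        return fc1, fc2, fused   # handed to the spatial attention of claim 4
```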
3. The vehicle damage detection method according to claim 2, wherein the performing channel interaction processing on the convolution feature map to obtain an interaction feature comprises:
performing grouped convolution processing on the convolution feature map to obtain a plurality of initial features;
concatenating the plurality of initial features to obtain a spliced feature;
transposing the spliced feature to obtain a transposed feature;
sequentially extracting a plurality of output features from the transposed feature based on the feature dimension of any one of the plurality of initial features;
and concatenating the plurality of output features to obtain the interaction feature.
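Read this way, the claim describes a channel-shuffle-style operation of the kind popularized by ShuffleNet. A sketch under that reading; the group count of 4 and the 3x3 kernel are assumptions:

```python
import torch
import torch.nn as nn

class ChannelInteraction(nn.Module):
    """Grouped convolution followed by a channel shuffle (a sketch of claim 3)."""
    def __init__(self, channels, groups=4):
        super().__init__()
        self.groups = groups
        self.grouped_conv = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)

    def forward(self, x):
        b, c, h, w = x.shape
        g = self.groups
        initial = self.grouped_conv(x).chunk(g, dim=1)  # the plural "initial features"
        spliced = torch.cat(initial, dim=1)             # spliced feature
        # Transposing the group axis and the per-group channel axis shuffles the channels.
        transposed = spliced.view(b, g, c // g, h, w).transpose(1, 2)
        # Reading the result back out in chunks of one initial feature's dimension
        # and re-concatenating them yields the interaction feature.
        return transposed.reshape(b, c, h, w)
```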
4. The vehicle damage detection method according to claim 2, wherein the performing spatial attention processing on the first fully connected feature, the second fully connected feature, and the fused feature to obtain the attention feature comprises:
calculating the product of the first fully connected feature, the second fully connected feature, and the fused feature to obtain an input feature;
performing average pooling processing on the input feature to obtain a third pooling feature, and performing max pooling processing on the input feature to obtain a fourth pooling feature;
and performing convolution processing and activation processing on the third pooling feature and the fourth pooling feature to obtain the attention feature.
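A sketch of this spatial attention step. Broadcasting the product of the three channel vectors back over the convolution feature map, and the 7x7 kernel, are our assumptions; the claim itself does not fix them:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Channel-pooled spatial attention (a sketch of claim 4)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feat, fc1, fc2, fused):
        b, c = fused.shape
        product = (fc1 * fc2 * fused).view(b, c, 1, 1)   # product of the three features
        inp = feat * torch.sigmoid(product)              # "input feature" (broadcast assumed)
        p3 = inp.mean(dim=1, keepdim=True)               # third pooling feature
        p4 = inp.amax(dim=1, keepdim=True)               # fourth pooling feature
        mask = torch.sigmoid(self.conv(torch.cat([p3, p4], dim=1)))  # conv + activation
        return inp * mask                                # attention feature
```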
5. The vehicle damage detection method according to claim 1, wherein the performing feature projection fusion processing on the local features and the global features to obtain the target features of each vehicle damage feature map comprises:
calculating a product feature of the local feature and the global feature;
calculating the ratio of the product feature to the product of the norms of the local feature and the global feature to obtain an angle feature;
performing scale transformation on the local feature and the angle feature based on the dimension of the global feature to obtain a first transformation feature corresponding to the local feature and a second transformation feature corresponding to the angle feature;
and fusing the global feature, the first transformation feature, and the second transformation feature to obtain the target feature.
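One way to read this claim is a cosine-similarity-based fusion. The sketch below assumes the local and global features share a dimension and that the "scale transformation" is a learned linear projection; both are assumptions of ours:

```python
import torch
import torch.nn as nn

def projection_fusion(local, glob, proj_local: nn.Linear, proj_angle: nn.Linear):
    """Feature projection fusion (a sketch of claim 5)."""
    l, g = local.flatten(1), glob.flatten(1)
    prod = (l * g).sum(dim=1, keepdim=True)      # product feature (dot product, assumed)
    angle = prod / (l.norm(dim=1, keepdim=True)
                    * g.norm(dim=1, keepdim=True) + 1e-8)  # angle (cosine) feature
    t1 = proj_local(l)      # first transformation feature, in the global feature's dimension
    t2 = proj_angle(angle)  # second transformation feature
    return g + t1 + t2      # fused target feature
```

With a global feature of dimension D, proj_local would be nn.Linear(D, D) and proj_angle nn.Linear(1, D) under this reading.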
6. The vehicle damage detection method according to claim 1, wherein the vehicle damage detection model comprises a feature pyramid network and a prediction network, and the performing vehicle damage detection on the target features based on the pre-trained vehicle damage detection model to obtain a detection result of each vehicle damage feature map and a result confidence corresponding to the detection result comprises:
performing up-sampling processing on the target features based on the feature pyramid network to obtain up-sampling features;
detecting the up-sampling features based on a candidate-box detection layer in the prediction network to obtain candidate-box features;
performing activation and normalization processing on the candidate-box features based on a mapping matrix in the prediction network to obtain a probability vector;
and determining the category corresponding to the element whose value is greater than a preset probability threshold in the probability vector as the detection result, and determining that element as the result confidence.
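The final normalization and thresholding step might look like the following; using softmax as the activation and 0.5 as the preset threshold are assumptions:

```python
import torch

def decode_predictions(box_feats: torch.Tensor, mapping: torch.Tensor, threshold: float = 0.5):
    """box_feats: (N, D) candidate-box features; mapping: (D, num_classes)."""
    probs = torch.softmax(box_feats @ mapping, dim=-1)  # mapping matrix + activation/normalization
    conf, cls = probs.max(dim=-1)                       # strongest class per candidate box
    keep = conf > threshold                             # preset probability threshold
    return cls[keep], conf[keep]                        # detection results, result confidences
```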
7. The vehicle damage detection method according to claim 1, wherein the plurality of feature network layers comprise a first network layer, a second network layer, and a third network layer connected in sequence in the feature extraction model, the first network layer comprises convolution layers of a plurality of branches, and the extracting a vehicle damage feature map of each feature network layer from the acquired vehicle damage image based on the plurality of feature network layers in the pre-trained feature extraction model comprises:
performing convolution processing on the vehicle damage image based on the convolution layer of each branch to obtain a convolution characteristic of each branch;
performing weighted fusion processing on the plurality of convolution characteristics to obtain the vehicle damage feature map output by the first network layer;
performing convolution processing on the vehicle damage feature map output by the first network layer based on the second network layer to obtain the vehicle damage feature map output by the second network layer;
and performing convolution processing on the vehicle damage feature map output by the second network layer based on the third network layer to obtain the vehicle damage feature map output by the third network layer.
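The multi-branch first layer could be sketched as parallel convolutions fused with learnable weights; the kernel sizes (1, 3, 5) and the softmax-normalized fusion weights are assumptions:

```python
import torch
import torch.nn as nn

class MultiBranchLayer(nn.Module):
    """First network layer with several convolution branches (a sketch of claim 7)."""
    def __init__(self, in_ch, out_ch, kernels=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernels
        )
        self.weights = nn.Parameter(torch.ones(len(kernels)))

    def forward(self, image):
        feats = [branch(image) for branch in self.branches]  # one convolution feature per branch
        w = torch.softmax(self.weights, dim=0)               # weighted fusion coefficients
        return sum(wi * fi for wi, fi in zip(w, feats))      # first-layer feature map
```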
8. A vehicle damage detection device, characterized in that the vehicle damage detection device comprises:
an extraction unit, configured to extract a vehicle damage feature map of each feature network layer from the acquired vehicle damage image based on a plurality of feature network layers in a pre-trained feature extraction model;
a convolution unit, configured to perform dilated convolution processing on each vehicle damage feature map to obtain a plurality of convolution feature maps of each vehicle damage feature map;
an attention unit, configured to perform convolution attention processing on each convolution feature map to obtain attention features, and to perform convolution attention processing on each vehicle damage feature map to obtain global features;
a fusion unit, configured to fuse the plurality of attention features corresponding to each vehicle damage feature map to obtain local features of each vehicle damage feature map;
the fusion unit being further configured to perform feature projection fusion processing on the local features and the global features to obtain target features of each vehicle damage feature map;
a detection unit, configured to perform vehicle damage detection on the target features based on a pre-trained vehicle damage detection model to obtain a detection result of each vehicle damage feature map and a result confidence corresponding to the detection result;
and a generating unit, configured to generate a vehicle damage result for the vehicle damage image according to the detection results and the result confidences.
9. An electronic device, characterized in that the electronic device comprises:
a memory storing computer readable instructions; and
a processor executing the computer readable instructions stored in the memory to implement the vehicle damage detection method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that computer-readable instructions are stored in the computer-readable storage medium, and the computer-readable instructions are executed by a processor in an electronic device to implement the vehicle damage detection method of any one of claims 1 to 7.
CN202310319382.3A 2023-03-22 2023-03-22 Vehicle damage detection method, device, equipment and storage medium Pending CN116310579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310319382.3A CN116310579A (en) 2023-03-22 2023-03-22 Vehicle damage detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116310579A 2023-06-23

Family

ID=86814870

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination