WO2023035538A1 - Method for detecting vehicle damage, device, apparatus and storage medium - Google Patents

Method for detecting vehicle damage, device, apparatus and storage medium

Info

Publication number
WO2023035538A1
Authority
WO
WIPO (PCT)
Prior art keywords
damage
region candidate
target
vehicle
frame
Prior art date
Application number
PCT/CN2022/072367
Other languages
English (en)
Chinese (zh)
Inventor
方起明
刘莉红
刘玉宇
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2023035538A1 publication Critical patent/WO2023035538A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular to a method, device, equipment and storage medium for detecting vehicle damage.
  • the main purpose of this application is to provide a vehicle damage detection method, device, equipment and storage medium, aiming to solve the prior-art problem that, for vehicle models whose damage has not been pre-trained, vehicle damage cannot be accurately identified, resulting in inaccurate vehicle damage detection.
  • the present application proposes a method for detecting vehicle damage, the method comprising: acquiring a standard data set, wherein the standard data set includes vehicle data of several different vehicle types with different standard damage annotation information; acquiring a target image, performing pre-recognition on the target image, and marking a region candidate frame for each part pre-recognized as a damaged region, wherein the target image includes several vehicle images that have not been annotated with damage; identifying, according to the positional relationships between the different region candidate frames, the target damage site corresponding to each region candidate frame in the target image; performing aggregation calculation on the region candidate frames respectively to obtain the aggregation embedding value and the aggregation confidence when each region candidate frame is aggregated to the target damage site; merging, according to the aggregation embedding values and the aggregation confidences, the region candidate frames corresponding to the same target damage site to obtain the prototype representation information corresponding to that target damage site; and, through the detection model, performing inter-domain alignment between the prototype representation information and the standard damage annotation information of each vehicle type, and taking the standard damage annotation information with the smallest alignment distance as the vehicle damage information corresponding to the prototype representation information.
  • the present application also proposes a vehicle damage detection device, including: a data set acquisition module, used to acquire a standard data set, wherein the standard data set includes vehicle data of several different vehicle types with different standard damage annotation information; an image acquisition module, configured to acquire a target image, perform pre-recognition on the target image, and mark a region candidate frame for each part pre-recognized as a damaged region, wherein the target image includes several vehicle images that have not been annotated with damage; a target damage site recognition module, used to identify, according to the positional relationships between the different region candidate frames, the target damage site corresponding to each region candidate frame in the target image; an aggregation calculation module, used to perform aggregation calculation on the region candidate frames respectively, to obtain the aggregation embedding value and the aggregation confidence when the region candidate frames are aggregated to the target damage site; a merge calculation module, used to merge, according to the aggregation embedding values and the aggregation confidences, the region candidate frames corresponding to the same target damage site, to obtain the prototype representation information corresponding to the target damage site; and a domain alignment module, used to perform inter-domain alignment between the prototype representation information and the standard damage annotation information of each vehicle type through the detection model, and to take the standard damage annotation information with the smallest alignment distance as the vehicle damage information corresponding to the prototype representation information.
  • the present application also proposes a computer device, including a memory and a processor, wherein the memory stores a computer program and, when the processor executes the computer program, the steps of the above-mentioned vehicle damage detection method are realized, including: obtaining a standard data set, wherein the standard data set includes vehicle data of several different vehicle types with different standard damage annotation information; acquiring a target image, performing pre-recognition on the target image, and marking a region candidate frame for each part pre-recognized as a damaged region, wherein the target image includes several vehicle images that have not been annotated with damage; identifying, according to the positional relationships between the different region candidate frames, the target damage site corresponding to each region candidate frame in the target image; performing aggregation calculation on the region candidate frames respectively to obtain the aggregation embedding value and the aggregation confidence when each region candidate frame is aggregated to the target damage site; and merging, according to the aggregation embedding value and the aggregation confidence, the region candidate frames corresponding to the same target damage site to obtain the corresponding prototype representation information.
  • the present application also proposes a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above-mentioned vehicle damage detection method are implemented, including: acquiring a standard data set, wherein the standard data set includes vehicle data of several different vehicle types with different standard damage annotation information; acquiring a target image, performing pre-recognition on the target image, and marking a region candidate frame for each part pre-recognized as a damaged region, wherein the target image includes several vehicle images that have not been annotated with damage; identifying, according to the positional relationships between the different region candidate frames, the target damage site corresponding to each region candidate frame in the target image; performing aggregation calculation on the region candidate frames respectively, to obtain the aggregation embedding value and the aggregation confidence when each region candidate frame is aggregated to the target damage site; and merging, according to the aggregation embedding value and the aggregation confidence, the region candidate frames corresponding to the same target damage site to obtain the corresponding prototype representation information.
  • the vehicle damage detection method, device, equipment and storage medium of the present application obtain vehicle damage images that have not been annotated with damage as target images and generate several region candidate frames for each target image, thereby realizing automatic detection of possible vehicle damage regions;
  • by identifying the positional relationships of the region candidate frames, it is determined whether different region candidate frames correspond to the same target damage site, which improves the completeness of target damage site identification; by performing aggregation calculation on the region candidate frames and obtaining the prototype representation information corresponding to the different target damage sites, the robustness of region recognition is enhanced and incorrect recognition of damaged regions caused by annotation errors of individual region candidate frames is avoided;
  • finally, the vehicle damage information corresponding to the prototype representation information is output, which improves the accuracy of vehicle damage detection.
  • FIG. 1 is a schematic flow chart of a vehicle damage detection method according to an embodiment of the present application.
  • FIG. 2 is a schematic flow chart of a vehicle damage detection method according to a specific embodiment of the present application.
  • FIG. 3 is a schematic structural block diagram of a vehicle damage detection device according to an embodiment of the present application.
  • FIG. 4 is a schematic block diagram of a computer device according to an embodiment of the present application.
  • to achieve the above-mentioned purpose of the invention, an embodiment of the present application provides a method for detecting vehicle damage, the method comprising:
  • S1 Obtain a standard data set, wherein the standard data set includes vehicle data of several different vehicle types with different standard damage annotation information;
  • S2 Acquire a target image, perform pre-recognition on the target image, and mark a region candidate frame for each part pre-recognized as a damaged region, wherein the target image includes several vehicle images that have not been annotated with damage;
  • S3 Identify the target damage site corresponding to each of the region candidate frames in the target image according to the positional relationships between the different region candidate frames;
  • S4 Perform aggregation calculation on the region candidate frames respectively, to obtain the aggregation embedding value and the aggregation confidence when the region candidate frames are aggregated to the target damage site;
  • S5 Merge the region candidate frames corresponding to the same target damage site according to the aggregation embedding value and the aggregation confidence, to obtain the prototype representation information corresponding to the target damage site;
  • S6 Align the prototype representation information with the standard damage annotation information of each vehicle type through the detection model, and use the standard damage annotation information with the smallest alignment distance as the vehicle damage information corresponding to the prototype representation information.
  • regarding step S1: this embodiment is typically applied in the field of vehicle damage detection and recording.
  • the embodiment of this application can acquire and process vehicle images based on artificial intelligence technology.
  • artificial intelligence is a theory, method, technology and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • the above-mentioned vehicle data with standard damage annotation information is usually a business-scenario data set with a large number of annotations, such as vehicle data of different vehicle types with damage annotations, so that the vehicle data of different vehicle types with damage annotations can serve as the source domain, facilitating domain alignment with the target domain formed from the subsequent target images.
  • the above-mentioned vehicle damage images without damage annotation may be an unannotated vehicle data set of vehicle types that are difficult to recognize.
  • the unannotated vehicle data set of hard-to-distinguish vehicle types can be used as the target domain and aligned with the source domain formed by the damage-annotated vehicle data of known vehicle types; that is, the damage annotations of known vehicle types are used to perform damage prediction on the unannotated vehicle data of unknown vehicle types.
  • a pre-trained image recognition model can be used to pre-recognize damage in target images of unknown vehicle types.
  • specifically, image recognition can be performed on the target image, and the parts of the vehicle damage image that may be damaged regions are identified according to parameters such as color, shape and texture; each part identified as a damaged region is then framed, yielding several region candidate frames.
  • to capture the positional relationships between the region candidate frames, this embodiment constructs a graph data structure: the graph contains a collection of vertices connected by a series of edges, where each vertex is a region candidate frame and the length of the edge connecting two vertices is the distance between the two region candidate frames;
  • this embodiment uses distance screening to distinguish whether different region candidate frames correspond to the same damage site: specifically, every edge whose length is greater than a preset edge-length threshold is deleted, i.e., the connection between the region candidate frames at both ends of that edge is cancelled, so that only the edges between region candidate frames with close positional relationships are retained;
  • the region candidate frames that remain connected to each other can then be divided into regions, yielding several mutually independent groups of region candidate frames.
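The distance-screening procedure above (build a graph over the candidate boxes, delete edges longer than a threshold, keep the connected groups) can be sketched in plain Python; the `(x1, y1, x2, y2)` box format, the centre-distance metric and the BFS grouping are illustrative assumptions, since the patent does not fix these details:

```python
from itertools import combinations

def center(box):
    # box = (x1, y1, x2, y2); returns the center point of the box
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def group_boxes_by_distance(boxes, max_edge):
    """Build a graph whose vertices are candidate boxes and whose edges
    connect boxes closer than `max_edge`, then return the connected
    components (one component per putative damage site)."""
    n = len(boxes)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        (xi, yi), (xj, yj) = center(boxes[i]), center(boxes[j])
        dist = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
        if dist <= max_edge:          # keep only short edges
            adj[i].add(j)
            adj[j].add(i)
    seen, groups = set(), []
    for start in range(n):            # traverse remaining edges
        if start in seen:
            continue
        comp, queue = [], [start]
        seen.add(start)
        while queue:
            v = queue.pop()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        groups.append(sorted(comp))
    return groups
```

With `max_edge = 20`, two overlapping boxes near the origin form one group (one putative damage site) while a distant box stays in its own group.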
  • regarding step S4: although associating the region candidate frames in this way improves the completeness of the identified damage sites, recognition deviation means that some region candidate frames are often scattered around a damage site, so a single candidate frame characterizes the object inaccurately.
  • therefore, the candidate frames belonging to the same target damage site should be aggregated into one complete selection frame, so that the images within the complete selection frame form a relatively complete image of the damage site.
  • the aggregation embedding value represents the degree of influence of a region candidate frame on a certain target damage site;
  • the aggregation confidence represents the possibility that a region candidate frame belongs to that target damage site.
  • regarding step S5: using the aggregation confidence as the weight of the aggregation embedding value, a weighted calculation is performed over the several region candidate frames corresponding to one target damage site based on these weights; the result of the weighted calculation is the merged result of the region candidate frames, thus completing the instance-level aggregation of the feature representations of the region candidate frames.
  • the image of the target damage site corresponding to the aggregation result often includes visual modality information.
  • the modality information reflected by the different target damage sites should be integrated into prototype representation information, so that it can stand in for each target damage site in the subsequent inter-domain alignment.
  • prototype representation information is a data-based parameterization of human visual features, used to represent the feature information of the image part corresponding to a damage site.
  • regarding step S6: the source domain and the target domain are aligned through the preset vehicle damage detection model, wherein the source domain is the vehicle data of known vehicle types with standard damage annotation information, and the target domain is represented by the prototype representation information obtained in step S5 above.
  • after several source domains and several target domains are aligned, several interrelated source domain-target domain pairs are obtained; within the same source domain-target domain pair, the damage cause of the target domain is the same as that of the source domain, and the smaller the alignment distance between the source domain and the target domain, the closer the two domains are, i.e., the closer the vehicle types are, and vice versa.
  • this embodiment takes the source domain-target domain pair with the smallest alignment distance as the closest detection result; since the source domain is vehicle data with standard damage annotation information, the standard damage annotation can be used as the damage cause of the target damage site corresponding to the target domain closest to that source domain, thereby realizing damage detection for foreign damaged vehicles for which training samples are difficult to obtain.
  • the identification (S3) of the target damage site corresponding to each of the region candidate frames in the target image according to the positional relationships between the different region candidate frames includes:
  • S31 In the same target image, select two different region candidate frames as the first recognition frame and the second recognition frame;
  • S32 Calculate an intersection-over-union ratio between the first recognition frame and the second recognition frame according to the positional relationship between them;
  • S33 If the intersection-over-union ratio is greater than a preset ratio threshold, determine that the target damage site corresponding to the first recognition frame and the second recognition frame is the same;
  • S34 Select two different region candidate frames as the first recognition frame and the second recognition frame again, and repeat the intersection-over-union calculation and the ratio-threshold determination until each region candidate frame in the target image has completed the intersection-over-union calculation and the ratio-threshold determination with all the other region candidate frames.
  • regarding step S31: in actual vehicle damage there may be multiple smaller, independent damages; if only the distance between two region candidate frames is used to determine whether they belong to the same template unit, multiple small independent damage sites will often be identified as one single damage site. Therefore, this embodiment further determines whether different region candidate frames correspond to the same damage site by means of the intersection-over-union ratio.
  • regarding step S32: after the two recognition frames are selected, the intersection area and the union area of the two are calculated, and the ratio of the intersection area to the union area is taken as the above-mentioned intersection-over-union ratio.
  • regarding step S33: it can be understood that the closer the intersection-over-union ratio is to 1, the greater the overlap between the two frames. Therefore, when the intersection-over-union ratio is greater than the preset ratio threshold, the two recognition frames are identified as having a large overlap area, and it can be determined that the target damage site corresponding to the first recognition frame and the second recognition frame is the same.
  • regarding step S34: after the calculation for the current two recognition frames is completed, another pair of region candidate frames is selected to perform the above intersection-over-union calculation and determination, until the determination has been completed between every two region candidate frames.
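The intersection-over-union computation of steps S32 and S33 follows directly from the definition (ratio of intersection area to union area); the `(x1, y1, x2, y2)` box format and the 0.5 default threshold below are illustrative assumptions, not values fixed by the patent:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def same_damage_site(a, b, ratio_threshold=0.5):
    # S33: recognition frames whose IoU exceeds the preset ratio
    # threshold are taken to cover the same target damage site
    return iou(a, b) > ratio_threshold
```

Identical boxes give an IoU of 1.0, disjoint boxes give 0.0, and a half-overlapping pair falls in between, which is why a threshold close to 1 demands near-coincident frames.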
  • the calculation (S4) of the aggregation embedding value includes:
  • an adjacency matrix is used to calculate the aggregation embedding values between the region candidate frames, so that more accurate damage instance information can be expressed.
  • the above-mentioned adjacency matrix can be obtained from the above-mentioned graph data structure.
  • the adjacency matrix usually comprises a two-dimensional array: a one-dimensional array within it stores the data of all vertices in the graph data structure, and the two-dimensional array stores the data of the relationships (edges) between the vertices, so as to obtain quantified distances between the region candidate frames and thereby determine the degree of aggregation between them.
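A minimal illustration of such an adjacency matrix as a two-dimensional array: vertices are indexed 0..n-1, and each stored edge weight quantifies the relationship between a pair of region candidate frames. The `(i, j, weight)` edge-list input is an assumption for illustration, not the patent's exact data layout:

```python
def adjacency_matrix(n_vertices, edges):
    """Two-dimensional array A in which A[i][j] holds the weight of
    the edge between candidate boxes i and j (0.0 when unconnected).
    `edges` is a list of (i, j, weight) tuples."""
    A = [[0.0] * n_vertices for _ in range(n_vertices)]
    for i, j, w in edges:
        A[i][j] = w
        A[j][i] = w          # undirected graph: the matrix is symmetric
    return A
```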
  • the feature embedding value of a region candidate frame can be calculated by an embedded feature selection algorithm, which obtains the feature combinations of the region candidate frame, finds the optimal feature combination among them, and returns the feature embedding value.
  • the vector features used to describe the different region candidate frames, i.e., the above-mentioned feature embedding values, reduce the data dimension to a fixed-size vector feature representation that is easy to process and calculate; the feature embedding value of each image is extracted to facilitate the subsequent aggregation calculation.
  • the calculation (S4) of the aggregation confidence includes:
  • an adjacency matrix is used to calculate the aggregation confidence between the region candidate frames, so that more accurate damage instance information can be further expressed.
  • a classification confidence is generated for the image framed by each region candidate frame; this classification confidence represents the possibility that the region candidate frame belongs to a predetermined target damage site.
  • specifically, an image classification model can be preset and used to determine whether the image in a region candidate frame corresponds to a certain target damage site, and to calculate the possibility that the region candidate frame corresponds to that target damage site, i.e., the above-mentioned classification confidence.
  • the above-mentioned aggregation confidence is then: the possibility that the region candidate frame belongs to the target damage site under the spatial correlation provided by the adjacency matrix.
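The patent does not give the exact formula combining the classification confidence with the adjacency matrix, so the following is only one plausible sketch of the idea: each box's classification confidence is smoothed over its spatial neighbours, so a box strongly connected to confident neighbours gains aggregation confidence, while an isolated outlier keeps only its own score:

```python
def aggregation_confidence(cls_conf, A):
    """Hypothetical formulation: the aggregation confidence of box i is
    a weighted average of classification confidences over its spatial
    neighbours in the adjacency matrix A, with the box itself given
    weight 1.0."""
    n = len(cls_conf)
    out = []
    for i in range(n):
        weights = [1.0] + [A[i][j] for j in range(n) if j != i]
        values = [cls_conf[i]] + [cls_conf[j] for j in range(n) if j != i]
        total = sum(weights)
        out.append(sum(w * v for w, v in zip(weights, values)) / total)
    return out
```

Under this sketch, two fully connected boxes with confidences 1.0 and 0.0 both end up at 0.5, reflecting the mutual spatial support described above.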
  • the merging (S5) of the region candidate frames corresponding to the same target damage site according to the aggregation embedding value and the aggregation confidence, to obtain the prototype representation information corresponding to the target damage site, includes:
  • S51 Use the aggregation confidence as the merging weight of the region candidate frames;
  • S52 Perform a weighted average calculation on the aggregation embedding values according to the merging weight to obtain the prototype representation information.
  • the weighted calculation according to the merging weight yields weighted prototype representation information, in which the region candidate frames with higher confidence are more prominent.
  • regarding step S52: in order to highlight the modality information of the region candidate frames that matter more to a specific category, this application uses the aggregation confidence of each region candidate frame as its merging weight, so that the different region candidate frames are merged according to their aggregation confidences to obtain the prototype representation information of the above-mentioned target damage site.
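Step S5's confidence-weighted merge can be sketched as a weighted average of the aggregation embedding values, with the aggregation confidences as merge weights; representing the embeddings as plain Python lists is an illustrative simplification:

```python
def prototype(embeddings, agg_conf):
    """S5 sketch: merge the aggregation embedding values of the
    candidate boxes belonging to one target damage site into a single
    prototype by a weighted average, using each box's aggregation
    confidence as its merging weight."""
    dim = len(embeddings[0])
    total = sum(agg_conf)
    return [
        sum(c * e[d] for c, e in zip(agg_conf, embeddings)) / total
        for d in range(dim)
    ]
```

A box with three times the confidence of another pulls the prototype three times as hard toward its embedding, which is exactly the "more prominent for higher confidence" behaviour described above.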
  • the inter-domain alignment (S6) of the prototype representation information with the standard damage annotation information of each vehicle type through the detection model includes:
  • feature-distribution alignment is performed under inter-class loss constraints, so as to obtain a domain alignment result that takes class imbalance into account.
  • the core idea is to minimize an intra-class loss (denoted L_intra) so as to reduce the distance between two pieces of prototype representation information of the same class, while the distance between prototype representation information of different classes is constrained by a separate inter-class loss (denoted L_inter).
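The two loss terms can be illustrated with Euclidean distances between prototypes; the hinge form of `L_inter` with a `margin` parameter is a common formulation chosen here for illustration, since the patent does not state the exact expressions:

```python
def euclidean(p, q):
    # Euclidean distance between two prototype vectors
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def intra_class_loss(source_proto, target_proto):
    """L_intra sketch: pulls the source- and target-domain prototypes
    of the same damage class together (smaller distance, smaller loss)."""
    return euclidean(source_proto, target_proto)

def inter_class_loss(proto_a, proto_b, margin=1.0):
    """L_inter sketch: a hinge-style term that pushes prototypes of
    different classes at least `margin` apart; the loss vanishes once
    they are separated by more than the margin."""
    return max(0.0, margin - euclidean(proto_a, proto_b))
```

Minimizing `L_intra` shrinks same-class distances while minimizing `L_inter` keeps different-class prototypes apart, which is the class-imbalance-aware alignment sketched above.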
  • the marking (S2) of the region candidate frames includes:
  • S21 Based on the Faster R-CNN object detection framework, perform foreground and background feature extraction with the region proposal network on the target image to generate the region candidate frames.
  • by performing foreground and background feature extraction on the target image with the region proposal network, accurate region candidate frames are generated.
  • additionally, an initial vehicle damage detection model based on convolutional neural networks (CNN) can be established, and a Graph-induced Prototype Alignment framework can be used to perform unsupervised adaptive learning on the initial vehicle damage detection model, so as to improve its accuracy on the target domain data.
  • the present application also proposes a vehicle damage detection device, including:
  • a data set acquisition module 100, configured to acquire a standard data set, wherein the standard data set includes vehicle data of several different vehicle types with different standard damage annotation information;
  • an image acquisition module 200, configured to acquire a target image, perform pre-recognition on the target image, and mark a region candidate frame for each part pre-recognized as a damaged region, wherein the target image includes several vehicle images that have not been annotated with damage;
  • a target damage site identification module 300, configured to identify the target damage site corresponding to each of the region candidate frames in the target image according to the positional relationships between the different region candidate frames;
  • an aggregation calculation module 400, configured to perform aggregation calculation on the region candidate frames respectively, to obtain the aggregation embedding value and the aggregation confidence when the region candidate frames are aggregated to the target damage site;
  • a merge calculation module 500, configured to merge the region candidate frames corresponding to the same target damage site according to the aggregation embedding value and the aggregation confidence, to obtain the prototype representation information corresponding to the target damage site;
  • a domain alignment module 600, configured to perform inter-domain alignment between the prototype representation information and the standard damage annotation information of each vehicle type through the detection model, and to use the standard damage annotation information with the smallest alignment distance as the vehicle damage information corresponding to the prototype representation information.
  • the target damage site identification module 300 includes:
  • a candidate frame distinguishing unit, configured to select, in the same target image, two different region candidate frames as the first recognition frame and the second recognition frame;
  • an intersection-over-union calculation unit, configured to calculate the intersection-over-union ratio between the first recognition frame and the second recognition frame according to the positional relationship between them;
  • a damage determination unit, configured to determine, if the intersection-over-union ratio is greater than a preset ratio threshold, that the target damage site corresponding to the first recognition frame and the second recognition frame is the same;
  • a threshold determination unit, configured to again select two different region candidate frames as the first recognition frame and the second recognition frame and perform the intersection-over-union calculation and the ratio-threshold determination, until each region candidate frame in the target image has completed the intersection-over-union calculation and the ratio-threshold determination with all the other region candidate frames.
  • the aggregation calculation module 400 includes:
  • a matrix construction unit, configured to construct an adjacency matrix between the region candidate frames through the intersection-over-union ratio;
  • an embedding value calculation unit, configured to obtain the feature embedding value of each region candidate frame and calculate the aggregation embedding value corresponding to the feature embedding value by the following formula:
  • the aggregation calculation module 400 further includes:
  • an aggregation confidence calculation unit, configured to obtain the classification confidence of each region candidate frame and calculate the aggregation confidence corresponding to the classification confidence by the following formula:
  • the merge calculation module 500 includes:
  • a merging weight calculation unit, configured to use the aggregation confidence as the merging weight of the region candidate frames;
  • a representation information calculation unit, configured to perform a weighted average calculation on the aggregation embedding values according to the merging weight, to obtain the prototype representation information.
  • the domain alignment module 600 includes:
  • a feature alignment unit, configured to perform feature-distribution alignment between the prototype representation information and the standard damage annotation information through a built-in detection model with inter-class loss constraints.
  • the image acquisition module 200 also includes:
  • a feature extraction unit, used to perform foreground and background feature extraction with the region proposal network on the target image based on the Faster R-CNN object detection framework, to generate the region candidate frames.
  • an embodiment of the present application also provides a computer device, which may be a server, and whose internal structure may be as shown in FIG. 4.
  • the computer device includes a processor, a memory, a network interface and a database connected by a system bus, wherein the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer programs and a database.
  • the internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium.
  • the database of the computer device is used to store data such as that of the vehicle damage detection method.
  • the network interface of the computer device is used to communicate with external terminals through a network connection.
  • the vehicle damage detection method includes: obtaining a standard data set, wherein the standard data set includes several pieces of vehicle data with standard damage label information; obtaining a target image and generating several region candidate frames for the target image, wherein the target image includes several vehicle damage images that have not been labeled with damage; identifying, according to the positional relationship between the different region candidate frames, the target damage part corresponding to each region candidate frame in the target image; performing aggregation calculations on the region candidate frames respectively to obtain the aggregation embedding value and the aggregation confidence corresponding to each region candidate frame; merging, according to the aggregation embedding values and the aggregation confidences, the region candidate frames that correspond to the same target damage part, so as to obtain the prototype representation information of the cluster corresponding to that target damage part; and performing, through the detection model, inter-domain alignment between the prototype representation information and the standard damage label information, and outputting the vehicle damage information corresponding to the prototype representation information.
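The merging step above — combining the region candidate frames of one target damage part into a cluster prototype using their aggregation embedding values and aggregation confidences — can be sketched as follows. This is an illustrative reading only: the overlap-based grouping rule, the confidence-weighted average, and all boxes, embeddings and confidence values are assumptions, since this passage does not fix a concrete aggregation formula.

```python
def overlaps(a, b):
    """True if boxes (x1, y1, x2, y2) intersect at all."""
    return max(a[0], b[0]) < min(a[2], b[2]) and max(a[1], b[1]) < min(a[3], b[3])

def group_frames(frames):
    """Greedily group candidate frames that overlap an existing group member,
    treating each resulting group as one target damage part."""
    groups = []
    for f in frames:
        for g in groups:
            if any(overlaps(f["box"], m["box"]) for m in g):
                g.append(f)
                break
        else:
            groups.append([f])
    return groups

def prototype(group):
    """Confidence-weighted mean of a group's embedding vectors:
    the cluster's prototype representation."""
    total = sum(m["confidence"] for m in group)
    dim = len(group[0]["embedding"])
    return [sum(m["confidence"] * m["embedding"][i] for m in group) / total
            for i in range(dim)]

frames = [
    {"box": (10, 10, 50, 50),     "embedding": [1.0, 0.0], "confidence": 0.9},
    {"box": (30, 30, 70, 70),     "embedding": [0.8, 0.2], "confidence": 0.1},
    {"box": (200, 200, 240, 240), "embedding": [0.0, 1.0], "confidence": 0.5},
]
groups = group_frames(frames)
print(len(groups))           # → 2 (two distinct damage parts)
print(prototype(groups[0]))  # merged prototype of the overlapping pair
```

Weighting by confidence means a single mislabeled low-confidence frame barely shifts the prototype, which matches the robustness claim made for the merging step.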
  • An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, a vehicle damage detection method is implemented, including the steps of: obtaining a standard data set, wherein the standard data set includes vehicle data of several different vehicle types, each with its own standard damage label information; obtaining a target image, pre-recognizing the target image, and marking each part pre-recognized as a damaged area with a region candidate frame, wherein the target image includes several vehicle images that have not been labeled with damage; identifying, according to the positional relationship between the different region candidate frames, the target damage part corresponding to each region candidate frame in the target image; performing aggregation calculations on the region candidate frames respectively to obtain the aggregation embedding value and the aggregation confidence when each region candidate frame is aggregated to a target damage part; merging, according to the aggregation embedding values and the aggregation confidences, the region candidate frames that correspond to the same target damage part, so as to obtain the prototype representation information corresponding to that target damage part; and performing, through the detection model, inter-domain alignment between the prototype representation information and the standard damage label information of each vehicle type, and taking the standard damage label information with the smallest alignment distance as the vehicle damage information corresponding to the prototype representation information.
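The final inter-domain alignment step — comparing a cluster prototype against each vehicle type's standard damage label information and outputting the nearest label — can be sketched as follows. Euclidean distance here merely stands in for whatever alignment metric the detection model learns, and the vehicle types, label names and embeddings are invented for illustration.

```python
import math

def alignment_distance(a, b):
    """Euclidean distance between a prototype and a standard label embedding
    (a placeholder for the model's learned alignment metric)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_standard_label(proto, standard_labels):
    """Output the standard damage label with the smallest alignment distance."""
    return min(standard_labels,
               key=lambda name: alignment_distance(proto, standard_labels[name]))

standard_labels = {  # hypothetical per-vehicle-type label embeddings
    "sedan/door-scratch": [1.0, 0.0, 0.0],
    "sedan/bumper-dent":  [0.0, 1.0, 0.0],
    "suv/glass-crack":    [0.0, 0.0, 1.0],
}
proto = [0.9, 0.2, 0.1]  # prototype representation of one merged cluster
print(nearest_standard_label(proto, standard_labels))  # → sedan/door-scratch
```

Because the decision is a minimum over alignment distances, every prototype is always mapped to some standard label; a rejection threshold would be needed to report "no known damage type", which this passage does not address.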
  • a vehicle damage image without damage labeling is obtained as the target image, and several region candidate frames are generated for it, thereby realizing automatic identification of possible vehicle damage areas;
  • the positional relationship between the region candidate frames is identified to determine whether different region candidate frames correspond to the same target damage part, which improves the completeness of target damage part identification;
  • by merging the region candidate frames, the prototype representation information corresponding to the different target damage parts is obtained, which enhances the robustness of region recognition and avoids incorrect recognition of damaged regions caused by labeling errors in individual region candidate frames; through inter-domain alignment of the prototype representation information with the standard damage label information, the vehicle damage information corresponding to the prototype representation information is output, which improves the accuracy of vehicle damage detection.
  • Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in various forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (SSRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), etc.
  • SRAM Static RAM
  • DRAM Dynamic RAM
  • SDRAM Synchronous DRAM
  • SSRSDRAM Double Data Rate SDRAM
  • ESDRAM Enhanced SDRAM
  • SLDRAM Synchronous Link (Synchlink) DRAM
  • Rambus direct RAM
  • DRDRAM direct memory bus dynamic RAM
  • RDRAM memory bus dynamic RAM

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Disclosed are a vehicle damage detection method, device, apparatus and storage medium, the method comprising the steps of: obtaining a standard data set, the standard data set comprising vehicle data for a plurality of different vehicle types with various standard damage label information; acquiring a target image, performing pre-recognition on the target image, and marking each part pre-recognized as a damage region with a region candidate frame, the target image comprising a plurality of vehicle images that have not yet been labeled with damage; identifying, according to the positional relationships between the various region candidate frames, the target damage part corresponding to each region candidate frame in the target image; performing aggregation calculations on the region candidate frames respectively to obtain an aggregation embedding value and an aggregation confidence for when a region candidate frame is aggregated to a target damage part; merging, according to the aggregation embedding value and the aggregation confidence, each of the region candidate frames corresponding to the same target damage part, thereby obtaining prototype representation information corresponding to the target damage part; and using a detection model to respectively perform inter-domain alignment on the prototype representation information and on the standard damage label information of each vehicle type, taking the standard damage label information with the smallest alignment distance as the vehicle damage information corresponding to the prototype representation information.
PCT/CN2022/072367 2021-09-08 2022-01-17 Vehicle damage detection method, device, apparatus and storage medium WO2023035538A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111058959.7A CN113743407B (zh) 2021-09-08 2021-09-08 Vehicle damage detection method, apparatus, device and storage medium
CN202111058959.7 2021-09-08

Publications (1)

Publication Number Publication Date
WO2023035538A1 true WO2023035538A1 (fr) 2023-03-16

Family

ID=78737799

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072367 WO2023035538A1 (fr) 2021-09-08 2022-01-17 Vehicle damage detection method, device, apparatus and storage medium

Country Status (2)

Country Link
CN (1) CN113743407B (fr)
WO (1) WO2023035538A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743407B (zh) * 2021-09-08 2024-05-10 Ping An Technology (Shenzhen) Co., Ltd. Vehicle damage detection method, apparatus, device and storage medium
CN114898155B (zh) * 2022-05-18 2024-05-28 Ping An Technology (Shenzhen) Co., Ltd. Vehicle damage assessment method, apparatus, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110569696A (zh) * 2018-08-31 2019-12-13 Alibaba Group Holding Limited Neural network system, method and apparatus for vehicle component recognition
CN111967595A (zh) * 2020-08-17 2020-11-20 Chengdu Shuzhilian Technology Co., Ltd. Candidate frame labeling method and system, model training method, and target detection method
CN112966730A (zh) * 2021-03-01 2021-06-15 AInnovation (Shanghai) Technology Co., Ltd. Vehicle damage recognition method, apparatus, device and storage medium
CN113743407A (zh) * 2021-09-08 2021-12-03 Ping An Technology (Shenzhen) Co., Ltd. Vehicle damage detection method, apparatus, device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712498A (zh) * 2020-12-25 2021-04-27 Beijing Baidu Netcom Science and Technology Co., Ltd. Vehicle damage assessment method executed by a mobile terminal, apparatus, mobile terminal, and medium


Also Published As

Publication number Publication date
CN113743407A (zh) 2021-12-03
CN113743407B (zh) 2024-05-10

Similar Documents

Publication Publication Date Title
TWI742382B Computer-implemented neural network system for vehicle part recognition, method for performing vehicle part recognition through the neural network system, apparatus for performing vehicle part recognition, and computing device
CN111860670B Domain-adaptive model training and image detection method, apparatus, device and medium
CN109492643B OCR-based certificate recognition method, apparatus, computer device and storage medium
CN111723860B Target detection method and apparatus
TWI729405B Method and apparatus for optimizing damage detection results
CN108229509B Method and apparatus for recognizing object categories, and electronic device
WO2019218410A1 Image classification method, computer device, and storage medium
WO2023035538A1 Vehicle damage detection method, device, apparatus and storage medium
CN111914642B Pedestrian re-identification method, apparatus, device and medium
CN111368766B Deep-learning-based cattle face detection and recognition method
WO2020238256A1 Damage detection method and device based on insufficient segmentation
CN111914634B Automatic manhole cover category detection method and system resistant to complex scene interference
WO2021000832A1 Face matching method and apparatus, computer device, and storage medium
WO2022121189A1 Temperature measurement method and apparatus, and computer device
WO2020038138A1 Sample labeling method and device, and damage category identification method and device
CN112668462B Vehicle damage detection model training and vehicle damage detection method, apparatus, device and medium
Liu et al. Deep domain adaptation for pavement crack detection
CN116740758A Bird image recognition method and system for preventing misjudgment
CN114549909A Pseudo-label remote sensing image scene classification method based on an adaptive threshold
CN114494373A High-precision track alignment method and system based on target detection and image registration
CN111401286B Pedestrian retrieval method based on a part-weight generation network
WO2020155484A1 Support-vector-machine-based character recognition method and apparatus, and computer device
Huang et al. Joint distribution adaptive-alignment for cross-domain segmentation of high-resolution remote sensing images
CN116188973B Crack detection method based on a cognitive generation mechanism
Ke Realization of Halcon Image Segmentation Algorithm in Machine Vision for Complex Scenarios

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE