CN115131564A - Vehicle component damage detection method based on artificial intelligence and related equipment - Google Patents

Vehicle component damage detection method based on artificial intelligence and related equipment

Info

Publication number
CN115131564A
CN115131564A (application CN202210825740.3A)
Authority
CN
China
Prior art keywords
vehicle component
image
vehicle
images
screening
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210825740.3A
Other languages
Chinese (zh)
Inventor
余宪
刘莉红
刘玉宇
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210825740.3A
Publication of CN115131564A
Legal status: Pending

Classifications

    • G06V 10/26 Image preprocessing: segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/761 Pattern recognition or machine learning: image or video pattern matching; proximity, similarity or dissimilarity measures
    • G06V 10/764 Pattern recognition or machine learning: classification, e.g. of video objects
    • G06V 10/82 Pattern recognition or machine learning: neural networks
    • G06N 3/08 Computing arrangements based on biological models: neural networks; learning methods
    • G06V 2201/08 Indexing scheme for image or video recognition or understanding: detecting or categorising vehicles

Abstract

The application provides an artificial intelligence-based vehicle component damage detection method and apparatus, an electronic device and a storage medium. The artificial intelligence-based vehicle component damage detection method comprises the following steps: acquiring images of vehicle components according to a preset mode to obtain an initial vehicle image set; detecting the initial vehicle image set according to a preset semantic segmentation network to obtain vehicle component image sets of multiple categories; screening the vehicle component image set to obtain a vehicle component screening set; generating a vehicle component significant image based on all images in the vehicle component screening set and a preset vehicle component standard image; and detecting the vehicle component significant image according to a preset vehicle component detection model to obtain a vehicle component damage result. By generating vehicle component significant images that accurately reflect the damage condition of the vehicle components, the method enables the vehicle component detection model to rapidly detect each part of the vehicle and effectively improves the efficiency of vehicle component damage detection.

Description

Vehicle component damage detection method based on artificial intelligence and related equipment
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method and an apparatus for detecting damage to a vehicle component based on artificial intelligence, an electronic device, and a storage medium.
Background
Drivers inevitably encounter minor accidents such as small scratches and scrapes while driving and parking, so the vehicles they drive are easily damaged at different locations and to different degrees.
Generally, the assessment of a vehicle collision relies on manual survey by damage assessment experts, who comprehensively analyze the collision according to the construction principles of the vehicle, the accident scene and the like, by means of scientific and systematic inspection and testing, to obtain a specific damage detection result for the vehicle. However, professional resources such as damage assessment experts are scarce, and an ordinary vehicle owner must spend considerable time and money to obtain their assistance. How to provide vehicle owners with a quick and simple vehicle damage detection scheme, so as to improve the efficiency of vehicle damage detection, has therefore become an urgent market need.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a method, an apparatus, an electronic device and a storage medium for detecting damage to a vehicle component based on artificial intelligence, so as to solve the technical problem of how to improve the detection efficiency of vehicle damage. The related equipment comprises an artificial intelligence-based vehicle component damage detection device, electronic equipment and a storage medium.
The application provides a vehicle part damage detection method based on artificial intelligence, which comprises the following steps:
acquiring images of vehicle components according to a preset mode to obtain an initial vehicle image set;
detecting the initial vehicle image set according to a preset semantic segmentation network to obtain vehicle component image sets of multiple categories;
screening the vehicle component image set to obtain a vehicle component screening set;
generating a vehicle component significant image based on all images in the vehicle component screening set and a preset vehicle component standard image, wherein the vehicle component standard image and the vehicle component significant image are in one-to-one correspondence with the vehicle component screening set;
and detecting the vehicle part significant image according to a preset vehicle part detection model to obtain a vehicle part damage result.
In some embodiments, the acquiring the image of the vehicle component according to the preset manner to obtain the initial vehicle image set includes:
carrying out all-around video shooting of the vehicle according to a preset mode to obtain a vehicle component video stream;
and sequentially extracting single-frame images from the vehicle component video stream, and taking all the obtained single-frame images as an initial vehicle image set.
In some embodiments, the detecting the initial vehicle image set according to the preset semantic segmentation network to obtain a plurality of categories of vehicle component image sets includes:
setting different labels for all images in the initial vehicle image set according to different vehicle component types to obtain an initial vehicle label set;
training a preset semantic segmentation network according to the initial vehicle image set and the initial vehicle label set to obtain a vehicle component segmentation model;
detecting all images in the initial vehicle image set based on the vehicle component segmentation model to obtain vehicle component image sets of multiple categories.
In some embodiments, the screening the set of vehicle component images to obtain a vehicle component screening set comprises:
calculating the pixel number of each vehicle component image in the vehicle component image set according to a connected domain analysis method;
and screening the vehicle component image set based on the pixel number to obtain a vehicle component screening set.
In some embodiments, said screening said set of vehicle component images based on said number of pixels to obtain a screened set of vehicle components comprises:
classifying the vehicle component image set based on the number of pixels to obtain a classified set of number of pixels for a plurality of categories;
and selecting the pixel number classification set with the maximum pixel number as a vehicle component screening set.
In some embodiments, the generating a vehicle component significant image based on all images in the vehicle component screening set and a preset vehicle component standard image, the vehicle component standard image and the vehicle component significant image each corresponding to the vehicle component screening set in a one-to-one manner, includes:
calculating the image similarity between each image in the vehicle component screening set and a preset vehicle component standard image;
sorting all the images in the vehicle component screening set in descending order of image similarity to obtain a sorting result;
screening all images in the vehicle component screening set based on the sorting result to obtain a vehicle component preferred image set;
a vehicle component saliency image is generated based on all images in the vehicle component preferred image set.
In some embodiments, the generating a vehicle component saliency image based on all images of the vehicle component preferred image set comprises:
taking the image similarity between each image in the vehicle component preferred image set and a preset vehicle component standard image as an image initial weight corresponding to each image;
normalizing all the initial image weights to obtain an image normalization weight of each image in the vehicle component preferred image set;
and carrying out weighted summation on each image in the vehicle component preferred image set and the corresponding image normalization weight to obtain a vehicle component significant image.
An embodiment of the present application further provides an artificial intelligence-based vehicle component damage detection device, the device comprising:
the acquisition unit is used for acquiring images of the vehicle parts according to a preset mode to obtain an initial vehicle image set;
the obtaining unit is used for detecting the initial vehicle image set according to a preset semantic segmentation network to obtain vehicle component image sets of multiple categories;
the screening unit is used for screening the vehicle component image set to obtain a vehicle component screening set;
the generating unit is used for generating a vehicle component significant image based on all images in the vehicle component screening set and a preset vehicle component standard image, the vehicle component standard image and the vehicle component significant image being in one-to-one correspondence with the vehicle component screening set;
and the detection unit is used for detecting the vehicle component significant image according to a preset vehicle component detection model to obtain a vehicle component damage result.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a memory storing at least one instruction;
a processor executing instructions stored in the memory to implement the artificial intelligence based vehicle component damage detection method.
The embodiment of the application also provides a computer-readable storage medium, and at least one instruction is stored in the computer-readable storage medium and executed by a processor in an electronic device to implement the artificial intelligence based vehicle component damage detection method.
According to the method and the apparatus, the acquired initial vehicle image set is detected by the preset semantic segmentation network to obtain vehicle component image sets corresponding to the different parts of the vehicle, and vehicle component significant images that accurately reflect the damage condition of each vehicle part are generated by further screening and filtering the vehicle component image sets, so that the vehicle component detection model can rapidly detect each vehicle component and the detection efficiency of vehicle component damage is effectively improved.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of an artificial intelligence based vehicle component damage detection method to which the present application is directed.
FIG. 2 is a functional block diagram of a preferred embodiment of an artificial intelligence based vehicle component damage detection apparatus according to the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to a preferred embodiment of the artificial intelligence-based vehicle component damage detection method according to the present application.
Detailed Description
For a clearer understanding of the objects, features and advantages of the present application, reference is made to the following detailed description of the present application along with the accompanying drawings and specific examples. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, and the described embodiments are merely some, but not all embodiments of the present application.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The embodiments of the present application provide an artificial intelligence-based vehicle component damage detection method, which can be applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device and the like.
The electronic device may be any electronic product capable of human-computer interaction with a client, for example a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), a game machine, an Internet Protocol television (IPTV), an intelligent wearable device and the like.
The electronic device may also include a network device and/or a client device. The network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a cloud computing-based cloud consisting of a large number of hosts or network servers.
The network where the electronic device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN) and the like.
Fig. 1 is a flowchart illustrating a method for detecting damage to a vehicle component based on artificial intelligence according to a preferred embodiment of the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
S10, acquiring images of the vehicle parts according to a preset mode to obtain an initial vehicle image set.
In an optional embodiment, the acquiring the image of the vehicle component according to the preset mode to obtain the initial vehicle image set includes:
S101, carrying out all-around video shooting of the vehicle according to a preset mode to obtain a vehicle component video stream;
S102, sequentially extracting single-frame images from the vehicle component video stream, and taking all the obtained single-frame images as an initial vehicle image set.
In this optional embodiment, when a vehicle component is damaged in an accident and the specific damaged location cannot be readily determined, the vehicle owner may record all-around video of the vehicle body with a handheld RGB camera, a DV camera, a mobile phone or another device to obtain a vehicle component video stream.
In this optional embodiment, the vehicle owner should move around the vehicle body at a speed that is as constant as possible during recording and keep that speed within a reasonable range: moving too fast yields unclear video frames, while moving too slowly produces an excessive number of video frames and redundant computation in the subsequent detection process.
In this optional embodiment, a video processing tool may extract single frames from the vehicle component video stream frame by frame in chronological order, and all the obtained single-frame images are taken as the initial vehicle image set.
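As a concrete illustration of this frame-extraction step, the following is a minimal sketch assuming OpenCV is used as the video processing tool; the video path and the sampling stride are illustrative assumptions rather than values specified by the application.

```python
# Minimal sketch of step S10, assuming OpenCV as the video processing tool.
# The sampling stride is an illustrative assumption; the application itself
# only requires that single frames be extracted in chronological order.
import cv2

def extract_initial_vehicle_image_set(video_path, frame_stride=5):
    """Extract single frames from the vehicle component video stream."""
    capture = cv2.VideoCapture(video_path)
    images = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:                      # end of the video stream
            break
        if index % frame_stride == 0:   # keep every N-th frame in chronological order
            images.append(frame)
        index += 1
    capture.release()
    return images

# Usage: initial_vehicle_image_set = extract_initial_vehicle_image_set("walkaround.mp4")
```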
Therefore, the vehicle parts can be shot in an all-around mode through video recording of the vehicle, a vehicle owner does not need to shoot each vehicle part one by one, and accordingly the obtaining efficiency of the images of the vehicle parts is improved.
S11, detecting the initial vehicle image set according to a preset semantic segmentation network to obtain vehicle component image sets of multiple categories.
In an optional embodiment, the detecting the initial vehicle image set according to the preset semantic segmentation network to obtain a plurality of categories of vehicle component image sets includes:
S111, setting different labels for all images in the initial vehicle image set according to different vehicle component types to obtain an initial vehicle label set;
S112, training a preset semantic segmentation network according to the initial vehicle image set and the initial vehicle label set to obtain a vehicle component segmentation model;
S113, detecting all images in the initial vehicle image set based on the vehicle component segmentation model to obtain vehicle component image sets of multiple categories.
In this alternative embodiment, since the vehicle comprises many different types of components, all images in the initial vehicle image set are labeled according to vehicle component type, with a different label for each type of vehicle component. The labels may be numbers, letters or symbols; in this embodiment, the different vehicle component types are labeled sequentially with natural numbers.
In this optional embodiment, the categories of the vehicle components may include a front windshield, a rear windshield, left and right rearview mirrors, left and right front doors, left and right rear doors, a trunk, left and right front door glasses, left and right rear door glasses, a head, a tail, and other various common vehicle component categories, and the present scheme uses vehicle component images of all categories for which tags are set as an initial vehicle tag set.
In this optional embodiment, the preset semantic segmentation network may adopt a fully convolutional network (FCN). To enable the preset semantic segmentation network to detect the different types of vehicle component images, it needs to be trained with the initial vehicle image set and the initial vehicle label set to obtain a vehicle component segmentation model; the training process is the same as that of existing semantic segmentation networks such as U-Net and SegNet.
In this alternative embodiment, all the images in the initial vehicle image set may be detected based on the vehicle component segmentation model, so as to segment each vehicle component region image included in each image, and in this embodiment, all the segmented vehicle component region images belonging to the same vehicle component are taken as a vehicle component image set corresponding to the vehicle component, so that finally each type of vehicle component corresponds to one vehicle component image set.
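The segmentation step can be sketched as follows. This is not the application's trained model: a torchvision FCN stands in for the vehicle component segmentation model, and the number of component categories is an assumption.

```python
# Illustrative sketch of step S11: a torchvision FCN stands in for the trained
# vehicle component segmentation model; the category count is assumed.
import numpy as np
import torch
from torchvision.models.segmentation import fcn_resnet50

NUM_CLASSES = 14                     # assumed: 13 component categories + background
model = fcn_resnet50(num_classes=NUM_CLASSES)
model.eval()                         # in practice this model would first be trained

def segment_vehicle_components(image_tensor):
    """image_tensor: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))["out"][0]   # (NUM_CLASSES, H, W)
    label_map = logits.argmax(dim=0).cpu().numpy()
    # One binary mask per detected component category (label 0 = background).
    return {int(c): (label_map == c).astype(np.uint8)
            for c in np.unique(label_map) if c != 0}
```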
Therefore, each vehicle component region image in the initial vehicle image set can be segmented, the influence of the non-vehicle component region in the initial vehicle image set on the subsequent process is reduced, the calculation amount is reduced, and meanwhile, the accuracy of the subsequent detection process can be improved.
S12, screening the vehicle component image set to obtain a vehicle component screening set.
In an optional embodiment, the screening the vehicle component image set to obtain a vehicle component screening set includes:
S121, calculating the pixel number of each vehicle component image in the vehicle component image set according to a connected domain analysis method;
S122, screening the vehicle component image set based on the number of the pixels to obtain a vehicle component screening set.
In this optional embodiment, because the vehicle owner films each vehicle component continuously while moving around the vehicle, the images within each category's vehicle component image set contain inconsistent numbers of pixels, and some vehicle component images may cover only a partial region of the corresponding component. The vehicle component image set therefore needs to be screened to obtain the vehicle component images that contain the complete region of the corresponding category.
In this alternative embodiment, the number of pixels of each vehicle component image in the vehicle component image set may be counted according to a connected component analysis method. Since the connected component analysis is generally performed on a binary image, each vehicle component image in the vehicle component image set needs to be converted into a grayscale image before performing the connected component analysis on each vehicle component image in the vehicle component image set.
In this alternative embodiment, the connected component analysis method is used to find and mark neighboring pixels in the image that have the same pixel value. Since each vehicle component image in the vehicle component image set is obtained by segmenting through the vehicle component segmentation model, the pixel values of all pixels of each vehicle component image in the vehicle component image set are still the same after being converted into the gray-scale image, and in the scheme, the pixel number of all pixels of each vehicle component image in the vehicle component image set is obtained according to the connected component analysis method.
In this alternative embodiment, the vehicle component image set may be classified according to the number of pixels using the K-means clustering algorithm, which requires the number of classes K to be specified in advance. In this scheme K may be set to 3, that is, all images in the vehicle component image set are divided into three classes according to their pixel counts, the images of each class form the pixel number classification set of that class, and the pixel number classification set with the largest pixel counts is selected as the vehicle component screening set. Thus, each vehicle component image set ultimately has a corresponding vehicle component screening set.
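A minimal sketch of this screening step is given below, assuming OpenCV for the connected-component analysis and scikit-learn for the K-means clustering; K = 3 follows the scheme described above, and the function name is illustrative.

```python
# Sketch of step S12: count pixels per segmented component image via
# connected-component analysis, cluster the counts with K-means (K = 3),
# and keep the cluster with the largest pixel counts as the screening set.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def screen_component_images(component_masks):
    """component_masks: list of binary (uint8) masks of one component category."""
    pixel_counts = []
    for mask in component_masks:
        num_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        # Label 0 is the background; sum the areas of all foreground components.
        area = int(stats[1:, cv2.CC_STAT_AREA].sum()) if num_labels > 1 else 0
        pixel_counts.append(area)

    counts = np.asarray(pixel_counts, dtype=np.float32).reshape(-1, 1)
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(counts)
    # Keep the cluster whose mean pixel count is the largest.
    best = max(range(3), key=lambda c: counts[clusters == c].mean())
    return [m for m, c in zip(component_masks, clusters) if c == best]
```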
Therefore, through further screening of the vehicle component image set, images containing complete vehicle component areas can be reserved, and the detection efficiency of detecting damage of the vehicle components in the subsequent process is improved.
S13, generating a vehicle component significant image based on all images in the vehicle component screening set and a preset vehicle component standard image, wherein the vehicle component standard image and the vehicle component significant image are in one-to-one correspondence with the vehicle component screening set.
In an optional embodiment, the generating a vehicle component significant image based on all images in the vehicle component screening set and a preset vehicle component standard image, where the vehicle component standard image and the vehicle component significant image are in one-to-one correspondence with the vehicle component screening set, includes:
S131, calculating the image similarity between each image in the vehicle component screening set and a preset vehicle component standard image;
S132, sorting all the images in the vehicle component screening set in descending order of image similarity to obtain a sorting result;
S133, screening all images in the vehicle component screening set based on the sorting result to obtain a vehicle component preferred image set;
S134, generating a vehicle component significant image based on all the images in the vehicle component preferred image set.
In this alternative embodiment, a vehicle component standard image may be obtained in advance for each vehicle component, and the image similarity between each image in the vehicle component screening set and a preset vehicle component standard image is calculated by using a normalized cross-correlation matching algorithm. The normalized cross-correlation matching algorithm uses a vehicle component standard image as a template, traverses each pixel of each image in the vehicle component screening set, and compares whether each pixel is similar to the template, so that the image similarity between the vehicle component standard image and each image in the vehicle component screening set is obtained, wherein the value range is [0,1], and the closer to 1, the higher the similarity is.
In this optional embodiment, all images in the vehicle component screening set are sorted in descending order of image similarity, the first three images in the sorting result are retained as the vehicle component preferred image set, and all images in the vehicle component screening set that do not enter the vehicle component preferred image set are filtered out. Each vehicle component preferred image set corresponds one-to-one with its vehicle component screening set.
In this optional embodiment, the image similarity between each image in the vehicle component preferred image set and a preset vehicle component standard image is used as an image initial weight corresponding to each image, all the image initial weights are normalized to obtain an image normalization weight of each image in the vehicle component preferred image set, and then each image in the vehicle component preferred image set and the corresponding image normalization weight are weighted and summed to obtain a vehicle component significant image.
Illustratively, the vehicle component preferred image set comprises images A, B and C, whose image similarities to the preset vehicle component standard image are 0.4, 0.6 and 0.5 respectively; the corresponding image initial weights are therefore 0.4, 0.6 and 0.5, and after normalization the image normalization weights of A, B and C are approximately 0.27, 0.40 and 0.33 respectively. Images A, B and C are then weighted by their normalization weights and summed, and the resulting image is taken as the vehicle component significant image.
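The saliency-image generation described above (similarity, top-three selection, weight normalization and weighted summation) can be sketched as follows, assuming OpenCV's normalized cross-correlation template matching; resizing every image to the standard image's size is an added simplifying assumption.

```python
# Sketch of step S13: normalized cross-correlation similarity against the
# standard image, keep the three most similar images, normalize their
# similarities into weights, and form the weighted pixel-wise sum.
import cv2
import numpy as np

def build_component_saliency_image(screened_images, standard_image, keep=3):
    h, w = standard_image.shape[:2]
    resized = [cv2.resize(img, (w, h)) for img in screened_images]

    # TM_CCORR_NORMED gives a similarity in [0, 1]; identical sizes yield a 1x1 result.
    similarities = [float(cv2.matchTemplate(img, standard_image,
                                            cv2.TM_CCORR_NORMED)[0, 0])
                    for img in resized]

    order = np.argsort(similarities)[::-1][:keep]          # preferred image set
    weights = np.array([similarities[i] for i in order], dtype=np.float32)
    weights /= weights.sum()                               # image normalization weights

    saliency = np.zeros((h, w) + standard_image.shape[2:], dtype=np.float32)
    for weight, i in zip(weights, order):
        saliency += weight * resized[i].astype(np.float32)
    return saliency.astype(np.uint8)
```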
In this way, a high-quality body damage evidence-taking photo can be generated by further screening the vehicle component screening set, so that the accuracy of the vehicle damage assessment result is improved.
S14, detecting the vehicle part significant image according to a preset vehicle part detection model to obtain a vehicle part damage result.
In this optional embodiment, the preset vehicle component detection model may use a Single Shot MultiBox Detector (SSD) detection network. To enable the vehicle component detection model to detect various types of damage to vehicle body parts, the SSD detection network needs to be trained to obtain the vehicle component detection model; the training process is the same as that of existing target detection networks such as YOLO and FCOS.
In this optional embodiment, the trained vehicle component detection model may sequentially detect the vehicle component salient images corresponding to the respective parts of the vehicle, so as to identify common vehicle component damages such as deformation, cracks or scratches, fractures, impact dents, desoldering, local corrosion and the like.
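For illustration, an inference sketch follows, with torchvision's SSD300 standing in for the trained vehicle component detection model; the damage label set and the score threshold are assumptions based on the damage types listed above.

```python
# Illustrative sketch of step S14: torchvision's SSD300 stands in for the
# trained vehicle component detection model; the label set is assumed.
import torch
from torchvision.models.detection import ssd300_vgg16

DAMAGE_CLASSES = ["background", "deformation", "crack_or_scratch", "fracture",
                  "impact_dent", "desoldering", "local_corrosion"]   # assumed labels

detector = ssd300_vgg16(num_classes=len(DAMAGE_CLASSES))
detector.eval()                          # in practice the SSD would first be trained

def detect_component_damage(saliency_image_tensor, score_threshold=0.5):
    """saliency_image_tensor: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        prediction = detector([saliency_image_tensor])[0]
    return [(DAMAGE_CLASSES[int(label)], float(score), box.tolist())
            for label, score, box in zip(prediction["labels"],
                                         prediction["scores"],
                                         prediction["boxes"])
            if float(score) >= score_threshold]
```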
In the optional embodiment, the vehicle owner can upload the detection results of all parts of the vehicle to the Internet of vehicles server, so that the detection results can be used as the basis for damage assessment of the vehicle by the insurance claim settlement personnel, and the efficiency of the vehicle insurance claim settlement process is effectively improved.
Therefore, the damage conditions of all parts of the vehicle can be quickly detected, and the efficiency of the damage assessment process of the vehicle is improved.
Referring to fig. 2, fig. 2 is a functional block diagram of a preferred embodiment of the artificial intelligence based damage detection apparatus for vehicle components according to the present invention. The artificial intelligence-based vehicle component damage detection device 11 comprises an acquisition unit 110, an obtaining unit 111, a screening unit 112, a generation unit 113 and a detection unit 114. A module/unit as referred to herein is a series of computer readable instruction segments capable of being executed by the processor 13 and performing a fixed function, and is stored in the memory 12. In the present embodiment, the functions of the modules/units will be described in detail in the following embodiments.
In an alternative embodiment, the capturing unit 110 is configured to capture images of vehicle components according to a preset manner to obtain an initial vehicle image set.
In an optional embodiment, the acquiring the image of the vehicle component according to the preset mode to obtain the initial vehicle image set includes:
carrying out all-around video shooting of the vehicle according to a preset mode to obtain a vehicle component video stream;
and sequentially extracting single-frame images from the vehicle component video stream, and taking all the obtained single-frame images as an initial vehicle image set.
In this optional embodiment, when a vehicle component is damaged in an accident and the specific damaged location cannot be readily determined, the vehicle owner may record all-around video of the vehicle body with a handheld RGB camera, a DV camera, a mobile phone or another device to obtain a vehicle component video stream.
In this optional embodiment, the vehicle owner should move around the vehicle body at a speed that is as constant as possible during recording and keep that speed within a reasonable range: moving too fast yields unclear video frames, while moving too slowly produces an excessive number of video frames and redundant computation in the subsequent detection process.
In this optional embodiment, a video processing tool may extract single frames from the vehicle component video stream frame by frame in chronological order, and all the obtained single-frame images are taken as the initial vehicle image set.
In an alternative embodiment, the obtaining unit 111 is configured to detect the initial vehicle image set according to a preset semantic segmentation network to obtain vehicle component image sets of multiple categories.
In an optional embodiment, the detecting the initial vehicle image set according to the preset semantic segmentation network to obtain a plurality of categories of vehicle component image sets includes:
setting different labels for all images in the initial vehicle image set according to different vehicle component types to obtain an initial vehicle label set;
training a preset semantic segmentation network according to the initial vehicle image set and the initial vehicle label set to obtain a vehicle component segmentation model;
detecting all images in the initial vehicle image set based on the vehicle component segmentation model to obtain vehicle component image sets of multiple categories.
In this alternative embodiment, since the vehicle comprises many different types of components, all images in the initial vehicle image set are labeled according to vehicle component type, with a different label for each type of vehicle component. The labels may be numbers, letters or symbols; in this embodiment, the different vehicle component types are labeled sequentially with natural numbers.
In this optional embodiment, the categories of the vehicle components may include a front windshield, a rear windshield, left and right rearview mirrors, left and right front doors, left and right rear doors, a trunk, left and right front door glasses, left and right rear door glasses, a head, a tail, and other various common vehicle component categories, and the present scheme uses vehicle component images of all categories for which tags are set as an initial vehicle tag set.
In this optional embodiment, the preset semantic segmentation network may adopt a fully convolutional network (FCN). To enable the preset semantic segmentation network to detect the different types of vehicle component images, it needs to be trained with the initial vehicle image set and the initial vehicle label set to obtain a vehicle component segmentation model; the training process is the same as that of existing semantic segmentation networks such as U-Net and SegNet.
In this alternative embodiment, all the images in the initial vehicle image set may be detected based on the vehicle component segmentation model, so as to segment each vehicle component region image included in each image, and in this embodiment, all the segmented vehicle component region images belonging to the same vehicle component are taken as a vehicle component image set corresponding to the vehicle component, so that finally each type of vehicle component corresponds to one vehicle component image set.
In an alternative embodiment, the screening unit 112 is configured to screen the vehicle component image set to obtain a vehicle component screening set.
In an optional embodiment, the screening the set of vehicle component images to obtain a vehicle component screening set includes:
calculating the pixel number of each vehicle component image in the vehicle component image set according to a connected domain analysis method;
and screening the vehicle component image set based on the pixel number to obtain a vehicle component screening set.
In this optional embodiment, because the vehicle owner films each vehicle component continuously while moving around the vehicle, the images within each category's vehicle component image set contain inconsistent numbers of pixels, and some vehicle component images may cover only a partial region of the corresponding component. The vehicle component image set therefore needs to be screened to obtain the vehicle component images that contain the complete region of the corresponding category.
In this alternative embodiment, the number of pixels of each vehicle component image in the vehicle component image set may be counted according to a connected component analysis method. Since the connected component analysis is generally directed to a binary image, each vehicle component image in the vehicle component image set needs to be converted into a grayscale image before performing the connected component analysis on each vehicle component image in the vehicle component image set.
In this alternative embodiment, the connected component analysis method is used to find and mark neighboring pixels in the image that have the same pixel value. Since each vehicle component image in the vehicle component image set is obtained by segmenting through the vehicle component segmentation model, the pixel values of all pixels of each vehicle component image in the vehicle component image set are still the same after being converted into the gray-scale image, and in the scheme, the pixel number of all pixels of each vehicle component image in the vehicle component image set is obtained according to the connected component analysis method.
In this alternative embodiment, the vehicle component image set may be classified according to the number of pixels using the K-means clustering algorithm, which requires the number of classes K to be specified in advance. In this scheme K may be set to 3, that is, all images in the vehicle component image set are divided into three classes according to their pixel counts, the images of each class form the pixel number classification set of that class, and the pixel number classification set with the largest pixel counts is selected as the vehicle component screening set. Thus, each vehicle component image set ultimately has a corresponding vehicle component screening set.
In an optional embodiment, the generating unit 113 is configured to generate a vehicle component significant image based on all images in the vehicle component screening set and a preset vehicle component standard image, where the vehicle component standard image and the vehicle component significant image are in one-to-one correspondence with the vehicle component screening set.
In an optional embodiment, the generating a vehicle component significant image based on all images in the vehicle component screening set and a preset vehicle component standard image, where the vehicle component standard image and the vehicle component significant image are in one-to-one correspondence with the vehicle component screening set, includes:
calculating the image similarity between each image in the vehicle component screening set and a preset vehicle component standard image;
sorting all the images in the vehicle component screening set in descending order of image similarity to obtain a sorting result;
screening all images in the vehicle component screening set based on the sorting result to obtain a vehicle component preferred image set;
a vehicle component saliency image is generated based on all images in the vehicle component preferred image set.
In this alternative embodiment, a vehicle component standard image may be obtained in advance for each vehicle component, and the image similarity between each image in the vehicle component screening set and a preset vehicle component standard image is calculated by using a normalized cross-correlation matching algorithm. The normalized cross-correlation matching algorithm takes the vehicle component standard image as a template, traverses each pixel of each image in the vehicle component screening set, and compares whether each pixel is similar to the template, so that the image similarity between the vehicle component standard image and each image in the vehicle component screening set is obtained, wherein the value range is [0,1], and the closer to 1, the higher the similarity is.
In this optional embodiment, all images in the vehicle component screening set are sorted in descending order of image similarity, the first three images in the sorting result are retained as the vehicle component preferred image set, and all images in the vehicle component screening set that do not enter the vehicle component preferred image set are filtered out. Each vehicle component preferred image set corresponds one-to-one with its vehicle component screening set.
In this optional embodiment, the image similarity between each image in the vehicle component preferred image set and a preset vehicle component standard image is used as an image initial weight corresponding to each image, all the image initial weights are normalized to obtain an image normalization weight of each image in the vehicle component preferred image set, and then each image in the vehicle component preferred image set and the corresponding image normalization weight are weighted and summed to obtain a vehicle component significant image.
Illustratively, the vehicle component preferred image set comprises images A, B and C, whose image similarities to the preset vehicle component standard image are 0.4, 0.6 and 0.5 respectively; the corresponding image initial weights are therefore 0.4, 0.6 and 0.5, and after normalization the image normalization weights of A, B and C are approximately 0.27, 0.40 and 0.33 respectively. Images A, B and C are then weighted by their normalization weights and summed, and the resulting image is taken as the vehicle component significant image.
In an optional embodiment, the detecting unit 114 is configured to detect the significant image of the vehicle component according to a preset vehicle component detection model to obtain a vehicle component damage result.
In this optional embodiment, the preset vehicle component detection model may use a Single Shot MultiBox Detector (SSD) detection network. To enable the vehicle component detection model to detect various types of damage to vehicle body parts, the SSD detection network needs to be trained to obtain the vehicle component detection model; the training process is the same as that of existing target detection networks such as YOLO and FCOS.
In this optional embodiment, the trained vehicle component detection model may sequentially detect the vehicle component salient images corresponding to the respective parts of the vehicle, so as to identify common vehicle component damages such as deformation, cracks or scratches, fractures, impact dents, desoldering, local corrosion and the like.
In this optional embodiment, the vehicle owner can upload the detection results of all parts of the vehicle to the Internet of Vehicles server, so that the detection results can be used as a basis for damage assessment of the vehicle by insurance claim settlement personnel, which effectively improves the efficiency of the vehicle insurance claims process.
According to the technical scheme, the acquired initial vehicle image set can be detected by the preset semantic segmentation network to obtain vehicle component image sets corresponding to the different parts of the vehicle, and vehicle component significant images that accurately reflect the damage condition of each vehicle part are generated by further screening and filtering the vehicle component image sets, so that the vehicle component detection model can rapidly detect each vehicle component and the detection efficiency of vehicle component damage is effectively improved.
Please refer to fig. 3, which is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 1 comprises a memory 12 and a processor 13. The memory 12 is used for storing computer readable instructions, and the processor 13 is used for executing the computer readable instructions stored in the memory to implement the artificial intelligence based vehicle component damage detection method according to any one of the above embodiments.
In an alternative embodiment, the electronic device 1 further comprises a bus, a computer program stored in said memory 12 and executable on said processor 13, such as an artificial intelligence based vehicle component damage detection program.
Fig. 3 shows only the electronic device 1 with the memory 12 and the processor 13, and it will be understood by those skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
With reference to fig. 1, the memory 12 of the electronic device 1 stores a plurality of computer-readable instructions to implement an artificial intelligence based vehicle component damage detection method, and the processor 13 can execute the plurality of instructions to implement:
acquiring images of vehicle components according to a preset mode to obtain an initial vehicle image set;
detecting the initial vehicle image set according to a preset semantic segmentation network to obtain vehicle part image sets of multiple categories;
screening the vehicle component image set to obtain a vehicle component screening set;
generating a vehicle component significant image based on all images in the vehicle component screening set and a preset vehicle component standard image, wherein the vehicle component standard image and the vehicle component significant image are in one-to-one correspondence with the vehicle component screening set;
and detecting the vehicle part significant image according to a preset vehicle part detection model to obtain a vehicle part damage result.
Specifically, the specific implementation method of the instruction by the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not described herein again.
It will be understood by those skilled in the art that the schematic diagram is only an example of the electronic device 1, and does not constitute a limitation to the electronic device 1, the electronic device 1 may have a bus-type structure or a star-shaped structure, the electronic device 1 may further include more or less hardware or software than those shown in the figures, or different component arrangements, for example, the electronic device 1 may further include an input and output device, a network access device, etc.
It should be noted that the electronic device 1 is only an example, and other existing or future electronic products, such as those that may be adapted to the present application, should also be included in the scope of protection of the present application, and are included by reference.
Memory 12 includes at least one type of readable storage medium, which may be non-volatile or volatile. The readable storage medium includes flash memory, removable hard disks, multimedia cards, card type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 12 may in some embodiments be an internal storage unit of the electronic device 1, for example a removable hard disk of the electronic device 1. The memory 12 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the electronic device 1. The memory 12 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of an artificial intelligence-based vehicle component damage detection program, etc., but also to temporarily store data that has been output or is to be output.
The processor 13 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 13 is a Control Unit (Control Unit) of the electronic device 1, connects various components of the electronic device 1 by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (for example, executing a vehicle component damage detection program based on artificial intelligence, etc.) stored in the memory 12 and calling data stored in the memory 12.
The processor 13 executes the operating system of the electronic device 1 and various installed application programs. The processor 13 executes the application program to implement the steps of the various artificial intelligence based vehicle component damage detection method embodiments described above, such as the steps shown in fig. 1.
Illustratively, the computer program may be partitioned into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to accomplish the present application. The one or more modules/units may be a series of computer readable instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the electronic device 1. For example, the computer program may be divided into the acquisition unit 110, the obtaining unit 111, the screening unit 112, the generation unit 113 and the detection unit 114.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a computer device, or a network device) or a processor (processor) to execute the portions of the artificial intelligence based vehicle component damage detection method according to the embodiments of the present application.
The integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow in the method of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the steps of the embodiments of the methods described above may be implemented.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), random-access Memory and other Memory, etc.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The block chain referred by the application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one arrow is shown in FIG. 3, but this does not indicate only one bus or one type of bus. The bus is arranged to enable connection communication between the memory 12 and at least one processor 13 or the like.
The embodiment of the present application further provides a computer-readable storage medium (not shown), in which computer-readable instructions are stored, and the computer-readable instructions are executed by a processor in an electronic device to implement the method for detecting damage to a vehicle component based on artificial intelligence according to any one of the above embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means stated in the description may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present application without departing from the spirit and scope of the technical solutions of the present application.

Claims (10)

1. A vehicle component damage detection method based on artificial intelligence is characterized by comprising the following steps:
acquiring images of vehicle components according to a preset mode to obtain an initial vehicle image set;
detecting the initial vehicle image set according to a preset semantic segmentation network to obtain vehicle component image sets of multiple categories;
screening the vehicle component image set to obtain a vehicle component screening set;
generating a vehicle component salient image based on all images in the vehicle component screening set and a preset vehicle component standard image, wherein the vehicle component standard image and the vehicle component salient image each correspond one-to-one to the vehicle component screening set;
and detecting the vehicle component salient image according to a preset vehicle component detection model to obtain a vehicle component damage result.
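
Read as a processing pipeline, the five steps of claim 1 could be sketched as follows. This is a non-authoritative outline in Python: every helper name (extract_frames, segment_components, component_pixel_count, screen_by_pixel_count, select_preferred_images, build_salient_image, to_tensor) is a hypothetical placeholder rather than something taken from the application, and the individual steps are illustrated under the dependent claims below.

```python
def detect_component_damage(video_path, standard_images, seg_model, damage_model, categories):
    """Hypothetical end-to-end pipeline for the five steps of claim 1."""
    frames = extract_frames(video_path)                       # step 1: initial vehicle image set
    per_category = {name: [] for name in categories}          # step 2: per-category component images
    for frame in frames:
        masks = segment_components(seg_model, to_tensor(frame), categories)
        for name, mask in masks.items():
            if mask.any():
                per_category[name].append((frame, mask))
    results = {}
    for name, items in per_category.items():
        if not items:
            continue
        images = [img for img, _ in items]
        counts = [component_pixel_count(mask) for _, mask in items]
        screened = screen_by_pixel_count(images, counts)      # step 3: vehicle component screening set
        preferred = select_preferred_images(screened, standard_images[name])
        salient = build_salient_image(preferred)              # step 4: vehicle component salient image
        results[name] = damage_model(salient)                 # step 5: vehicle component damage result
    return results
```
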
2. The artificial intelligence based vehicle component damage detection method of claim 1, wherein the acquiring images of vehicle components according to a preset mode to obtain an initial vehicle image set comprises:
carrying out all-round video shooting of the vehicle according to the preset mode to obtain a vehicle component video stream;
and sequentially extracting single-frame images from the vehicle component video stream, and taking all extracted single-frame images as the initial vehicle image set.
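
A minimal sketch of the frame-extraction step of claim 2, assuming OpenCV is available and that "sequentially extracting single-frame images" keeps every step-th decoded frame; the sampling interval is an assumption and not fixed by the claim.

```python
import cv2

def extract_frames(video_path, step=5):
    """Decode the all-round vehicle video and keep every `step`-th frame
    as the initial vehicle image set. The sampling interval is an assumption."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```
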
3. The artificial intelligence based vehicle component damage detection method of claim 1, wherein the detecting the initial vehicle image set according to the preset semantic segmentation network to obtain vehicle component image sets of multiple categories comprises:
setting different labels for all images in the initial vehicle image set according to different vehicle component types to obtain an initial vehicle label set;
training a preset semantic segmentation network according to the initial vehicle image set and the initial vehicle label set to obtain a vehicle component segmentation model;
and detecting all images in the initial vehicle image set based on the vehicle component segmentation model to obtain vehicle component image sets of multiple categories.
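
Claim 3 covers labelling, training and detection. The sketch below illustrates only the detection step, assuming a semantic-segmentation network in the torchvision style (per-pixel class logits of shape [B, C, H, W] returned under an "out" key) has already been trained as the vehicle component segmentation model; the model family and output layout are assumptions.

```python
import torch

def segment_components(seg_model, image_tensor, categories):
    """Predict a per-pixel class map for one image and split it into one binary
    mask per vehicle component category. `categories` maps class index -> name."""
    seg_model.eval()
    with torch.no_grad():
        logits = seg_model(image_tensor.unsqueeze(0))["out"]   # [1, C, H, W]
    labels = logits.argmax(dim=1).squeeze(0)                   # [H, W] predicted class ids
    return {name: (labels == idx).cpu().numpy() for idx, name in enumerate(categories)}
```
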
4. The artificial intelligence based vehicle component damage detection method of claim 1, wherein the screening the set of vehicle component images to obtain a screened set of vehicle components comprises:
calculating the number of pixels of each vehicle component image in the vehicle component image set according to a connected-domain (connected-component) analysis method;
and screening the vehicle component image set based on the number of pixels to obtain a vehicle component screening set.
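
One way to realise the connected-domain analysis of claim 4 is OpenCV's connected-components routine. Using the area of the largest connected region as the pixel count of a component image is an interpretation for this sketch; the claim does not fix that choice.

```python
import cv2
import numpy as np

def component_pixel_count(mask):
    """Return the pixel area of the largest connected region in a binary
    component mask (0 if the mask is empty)."""
    mask_u8 = mask.astype(np.uint8) * 255
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask_u8, connectivity=8)
    if num_labels <= 1:                                # only the background label was found
        return 0
    return int(stats[1:, cv2.CC_STAT_AREA].max())      # skip label 0 (background)
```
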
5. The artificial intelligence based vehicle component damage detection method of claim 4, wherein the screening the vehicle component image set based on the number of pixels to obtain a vehicle component screening set comprises:
classifying the vehicle component image set based on the number of pixels to obtain pixel-number classification sets of a plurality of classes;
and selecting the pixel-number classification set with the largest number of pixels as the vehicle component screening set.
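
A possible reading of claim 5 is to bin the component images by their pixel counts and keep the bin that contains the largest counts. The sketch below uses equal-width bins; the number of bins and the binning rule are assumptions, since the claim only fixes the maximum-pixel-count criterion.

```python
import numpy as np

def screen_by_pixel_count(images, pixel_counts, n_bins=3):
    """Group component images into bins by their mask pixel count and keep
    the images in the bin containing the largest count."""
    if not images:
        return []
    counts = np.asarray(pixel_counts)
    edges = np.linspace(counts.min(), counts.max() + 1, n_bins + 1)
    bin_ids = np.digitize(counts, edges[1:-1])         # bin index 0 .. n_bins-1 per image
    top_bin = bin_ids[np.argmax(counts)]               # the bin holding the largest count
    return [img for img, b in zip(images, bin_ids) if b == top_bin]
```
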
6. The artificial intelligence based vehicle component damage detection method of claim 1, wherein the generating a vehicle component salient image based on all images in the vehicle component screening set and a preset vehicle component standard image, the vehicle component standard image and the vehicle component salient image each corresponding one-to-one to the vehicle component screening set, comprises:
calculating the image similarity between each image in the vehicle component screening set and the preset vehicle component standard image;
sorting all images in the vehicle component screening set in descending order of image similarity to obtain a sorting result;
screening all images in the vehicle component screening set based on the sorting result to obtain a vehicle component preferred image set;
and generating a vehicle component salient image based on all images in the vehicle component preferred image set.
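
Claim 6 does not fix the similarity measure or the screening threshold. The sketch below uses SSIM on grayscale images resized to the standard image's size and keeps the top_k best-ranked candidates; SSIM, the resizing step and top_k are all assumptions.

```python
import cv2
from skimage.metrics import structural_similarity

def select_preferred_images(images, standard_image, top_k=3):
    """Rank candidate component images by similarity to the preset standard image
    and keep the top_k, returned as (similarity, resized_image) pairs."""
    std_gray = cv2.cvtColor(standard_image, cv2.COLOR_BGR2GRAY)
    height, width = std_gray.shape
    scored = []
    for image in images:
        resized = cv2.resize(image, (width, height))
        gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
        scored.append((structural_similarity(gray, std_gray), resized))
    scored.sort(key=lambda pair: pair[0], reverse=True)      # descending image similarity
    return scored[:top_k]
```
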
7. The artificial intelligence based vehicle component damage detection method of claim 6, wherein the generating a vehicle component salient image based on all images in the vehicle component preferred image set comprises:
taking the image similarity between each image in the vehicle component preferred image set and the preset vehicle component standard image as the initial image weight of that image;
normalizing all the initial image weights to obtain a normalized image weight for each image in the vehicle component preferred image set;
and performing a weighted summation of each image in the vehicle component preferred image set with its corresponding normalized image weight to obtain a vehicle component salient image.
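
Claim 7's fusion step then amounts to normalising the similarity scores into weights and taking a per-pixel weighted sum. A minimal sketch, assuming the preferred images share a common size (for example because they were resized to the standard image's size during screening) and reusing the (similarity, image) pairs from the claim 6 sketch:

```python
import numpy as np

def build_salient_image(scored_images):
    """Fuse the preferred images into one vehicle component salient image by
    normalising their similarity scores and taking the weighted sum."""
    similarities = np.array([sim for sim, _ in scored_images], dtype=np.float64)
    weights = similarities / similarities.sum()               # normalized image weights
    stack = np.stack([img.astype(np.float64) for _, img in scored_images])
    salient = np.tensordot(weights, stack, axes=1)            # weighted sum over images
    return np.clip(salient, 0, 255).astype(np.uint8)
```

The resulting salient image is what the preset vehicle component detection model of claim 1 would then take as input to produce the vehicle component damage result.
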
8. An artificial intelligence based vehicle component damage detection apparatus, the apparatus comprising:
the acquisition unit is used for acquiring images of the vehicle parts according to a preset mode to obtain an initial vehicle image set;
the obtaining unit is used for detecting the initial vehicle image set according to a preset semantic segmentation network to obtain vehicle component image sets of multiple categories;
the screening unit is used for screening the vehicle component image set to obtain a vehicle component screening set;
the generating unit is used for generating a vehicle component salient image based on all images in the vehicle component screening set and a preset vehicle component standard image, wherein the vehicle component standard image and the vehicle component salient image each correspond one-to-one to the vehicle component screening set;
and the detection unit is used for detecting the vehicle component salient image according to a preset vehicle component detection model to obtain a vehicle component damage result.
9. An electronic device, characterized in that the electronic device comprises:
a memory storing computer readable instructions; and
a processor executing computer readable instructions stored in the memory to implement the artificial intelligence based vehicle component damage detection method of any of claims 1-7.
10. A computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the artificial intelligence based vehicle component damage detection method of any one of claims 1 to 7.
CN202210825740.3A 2022-07-13 2022-07-13 Vehicle component damage detection method based on artificial intelligence and related equipment Pending CN115131564A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210825740.3A CN115131564A (en) 2022-07-13 2022-07-13 Vehicle component damage detection method based on artificial intelligence and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210825740.3A CN115131564A (en) 2022-07-13 2022-07-13 Vehicle component damage detection method based on artificial intelligence and related equipment

Publications (1)

Publication Number Publication Date
CN115131564A true CN115131564A (en) 2022-09-30

Family

ID=83383725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210825740.3A Pending CN115131564A (en) 2022-07-13 2022-07-13 Vehicle component damage detection method based on artificial intelligence and related equipment

Country Status (1)

Country Link
CN (1) CN115131564A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557221A (en) * 2023-11-17 2024-02-13 德联易控科技(北京)有限公司 Method, device, equipment and readable medium for generating vehicle damage report

Similar Documents

Publication Publication Date Title
Kasaei et al. New morphology-based method for robust Iranian car plate detection and recognition
Abdullah et al. YOLO-based three-stage network for Bangla license plate recognition in Dhaka metropolitan city
US11113582B2 (en) Method and system for facilitating detection and identification of vehicle parts
Azad et al. New method for optimization of license plate recognition system with use of edge detection and connected component
CN108323209B (en) Information processing method, system, cloud processing device and computer storage medium
CN114387591A (en) License plate recognition method, system, equipment and storage medium
Raj et al. License plate recognition system using yolov5 and cnn
Rakhra et al. Classification and Prediction of License Plates Using Deeply Learned Convolutional Neural Networks
Salma et al. Development of ANPR framework for Pakistani vehicle number plates using object detection and OCR
CN104573680A (en) Image detection method, image detection device and traffic violation detection system
CN111178357A (en) License plate recognition method, system, device and storage medium
CN115063632A (en) Vehicle damage identification method, device, equipment and medium based on artificial intelligence
Karaimer et al. Detection and classification of vehicles from omnidirectional videos using multiple silhouettes
CN115810134A (en) Image acquisition quality inspection method, system and device for preventing car insurance from cheating
CN115131564A (en) Vehicle component damage detection method based on artificial intelligence and related equipment
CN111950546A (en) License plate recognition method and device, computer equipment and storage medium
CN115984786A (en) Vehicle damage detection method and device, terminal and storage medium
CN115909313A (en) Illegal parking board identification method and device based on deep learning
CN115222943A (en) Method for detecting damage of rearview mirror based on artificial intelligence and related equipment
CN113239738B (en) Image blurring detection method and blurring detection device
Ghosh et al. A vehicle number plate recognition system using region-of-interest based filtering method
Baviskar et al. Auto Number Plate Recognition
CN114972883B (en) Target detection sample generation method based on artificial intelligence and related equipment
CN111553368A (en) Fake license plate recognition method, fake license plate training method, fake license plate recognition device, fake license plate recognition equipment and storage medium
CN114943908A (en) Vehicle body damage evidence obtaining method, device, equipment and medium based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination