CN114998043A - Vehicle accessory damage assessment method and device, electronic equipment and storage medium - Google Patents

Vehicle accessory damage assessment method and device, electronic equipment and storage medium

Info

Publication number
CN114998043A
CN114998043A (application CN202210828958.4A)
Authority
CN
China
Prior art keywords
accessory
image
target
processed
target accessory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210828958.4A
Other languages
Chinese (zh)
Inventor
徐振博
朱志华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202210828958.4A
Publication of CN114998043A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/30Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02WCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
    • Y02W90/00Enabling technologies or technologies with a potential or indirect contribution to greenhouse gas [GHG] emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Accounting & Taxation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

According to the vehicle accessory damage assessment method and device, the electronic equipment and the storage medium of the application, a picture set of a target accessory is obtained; the picture set is input into a pre-trained recognition model, which outputs a recognition result of the target accessory; a judgment value of the target accessory is obtained according to the accessory integrity of the target accessory and the recognition accuracy of the recognition model; and, if the judgment value is less than or equal to a first preset threshold, a residual value result of the target accessory is obtained using a preset accessory residual value algorithm according to the judgment value, the accessory price, the accessory type and the accessory material of the target accessory. In this way, the residual value of the target accessory is calculated automatically, the accuracy of the residual value calculation is improved, and the claim settlement cost is accordingly reduced.

Description

Vehicle accessory damage assessment method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a vehicle accessory damage assessment method and device, electronic equipment and a storage medium.
Background
In the vehicle insurance claim settlement process, vehicle damage assessment is an important link. Vehicle damage assessment refers to the verification and confirmation of the damage condition of a vehicle at the accident scene, and includes: taking accident pictures, checking the damaged parts of the vehicle, confirming the parts to be replaced or repaired, and giving the final loss amount. The residual-value deduction is a component of the loss amount: after a part is replaced, the removed damaged part can be handled in two ways, namely old-part recovery or residual-value conversion, and the residual-value deduction mentioned above is the amount obtained from the residual-value conversion.
The prior art lacks a unified standard for the residual-value deduction and mostly relies on the experience of loss adjusters. Because subjective judgments differ greatly, there is a risk of claims leakage, the accuracy of residual value calculation is low, and the claim settlement cost therefore rises.
Disclosure of Invention
The application aims to provide a vehicle accessory damage assessment method, a vehicle accessory damage assessment device, electronic equipment and a storage medium, so as to solve the technical problem in the prior art that the accuracy of residual value calculation for vehicle accessories is low.
The technical scheme of the application is as follows: there is provided a vehicle accessory damage assessment method comprising:
acquiring a picture set of a target accessory, wherein the picture set comprises a plurality of images to be processed, and a shooting area of at least one image to be processed covers a damaged part of the target accessory;
inputting the picture set of the target accessory into a pre-trained recognition model, and outputting a recognition result of the target accessory, wherein the recognition result comprises accessory types, accessory materials and accessory integrity, and the recognition model is obtained by training according to a sample accessory image labeled with the sample types, the sample materials and the sample integrity;
acquiring a judgment value of the target accessory according to the accessory integrity of the target accessory and the recognition accuracy of the recognition model;
and if the judgment value is less than or equal to a first preset threshold, acquiring a residual value result of the target accessory by using a preset accessory residual value algorithm according to the judgment value, the accessory price, the accessory type and the accessory material of the target accessory.
In some embodiments, the obtaining a set of pictures of the target accessory comprises:
acquiring a plurality of shot images of the target accessory;
performing target detection on the shot images by adopting a target detection algorithm to obtain detected shot images, wherein each detected shot image comprises a bounding box of the target accessory, and the bounding box frames the circumscribed region of the target accessory;
and cutting the detected shot image according to the bounding box to obtain the corresponding image to be processed, and constructing the picture set of the target accessory from the plurality of images to be processed.
In some embodiments, the images to be processed are divided into at least one category according to the shooting angle of the target accessory;
the inputting the picture set of the target accessory into a pre-trained recognition model and outputting the recognition result of the target accessory comprises:
dividing the image to be processed into different areas according to a preset dividing mode, respectively extracting first features of the different areas in the image to be processed, and outputting a first image feature matrix of the image to be processed, wherein the first image feature matrix comprises the first features of the different areas;
multiplying the first image characteristic matrix by a preset weight matrix to obtain a second image characteristic of the image to be processed, wherein the weight matrix comprises weights of different areas, and the second image characteristic comprises second characteristics of different areas;
acquiring a feature matrix of the target accessory, wherein the feature matrix comprises second features of different areas of different shooting angles of the target accessory;
and outputting the identification result according to the feature matrix of the target accessory.
In some embodiments, before obtaining the judgment value of the target accessory according to the accessory integrity of the target accessory and the recognition accuracy of the recognition model, the method further includes:
acquiring a data set, wherein the data set comprises picture sets of different accessories, the picture sets comprise a plurality of images to be processed, the images to be processed are divided into at least one type according to the shooting angle of the accessories, and the shooting area of at least one image to be processed covers the damage part of the accessory;
dividing the data set into a training set and a test set, and labeling the sample type, sample material and sample integrity of the images to be processed in the picture set of each accessory in the training set;
training a recognition model by using the training set to obtain the trained recognition model, wherein the activation function of the recognition model is f(x) = max(0, w^T x + b), where x is the first image feature matrix of the image to be processed, w is the weight matrix, and b is a preset parameter;
and testing the trained recognition model by using a test set, and acquiring the recognition accuracy of the recognition model according to the recognition result of the trained recognition model.
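For illustration, the activation function of the training step above can be sketched in NumPy as follows (the feature values, weights, and bias used here are hypothetical, not values from the application):

```python
import numpy as np

def activation(x, w, b):
    """ReLU-style activation f(x) = max(0, w^T x + b): a weighted sum of the
    feature vector plus a preset parameter, clamped at zero from below."""
    return np.maximum(0.0, np.dot(w.T, x) + b)

# Hypothetical first-image feature vector, weight vector and bias.
x = np.array([0.5, -1.0, 2.0])
w = np.array([1.0, 0.5, 0.25])
b = -0.1
y = activation(x, w, b)  # w^T x = 0.5, so y = max(0, 0.4) = 0.4
```

The max(0, ·) clamp is what introduces the nonlinearity mentioned in the description of the activation function layer.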
In some embodiments, after obtaining the set of pictures of the target accessory, the method further includes:
performing noise reduction processing on each image to be processed in the image set to obtain a noise-reduced image set;
performing data enhancement on the image to be processed in the image set after the noise reduction processing to obtain a data-enhanced image set;
splicing the images to be processed in the image set after the data enhancement to obtain a panoramic spliced image of the target accessory;
and acquiring a plurality of standard images to be processed of the target accessory according to the panoramic mosaic, and replacing the images to be processed according to the standard images to be processed so as to update the image set of the target accessory, wherein each standard image to be processed corresponds to a preset shooting angle.
In some embodiments, the obtaining a residual value result of the target accessory by using a preset accessory residual value algorithm according to the judgment value, the accessory price, the accessory type, and the accessory material of the target accessory includes:
acquiring a corresponding type preset value according to the type of the accessory;
acquiring a corresponding material preset value according to the accessory material;
and acquiring a residual value result of the target accessory by using a preset accessory residual value algorithm according to the judgment value of the target accessory, the accessory price, the type preset value corresponding to the accessory type and the material preset value corresponding to the accessory material.
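The residual value algorithm itself is not spelled out in this excerpt. As a purely illustrative sketch, one plausible composition multiplies the judgment value and the accessory price by the preset values looked up for the type and material; the coefficient tables and the multiplicative form below are assumptions, not the application's disclosed formula:

```python
# Hypothetical preset-value tables; the real coefficients are not disclosed here.
TYPE_PRESETS = {"bumper": 0.6, "headlight": 0.8, "front cover plate": 0.7}
MATERIAL_PRESETS = {"plastic": 0.3, "iron": 0.5, "aluminum": 0.7}

def residual_value(judgment_value, price, part_type, material):
    """Sketch of a preset accessory residual value algorithm: combine the
    judgment value, accessory price, and the type/material preset values.
    The multiplicative combination is an illustrative assumption."""
    return judgment_value * price * TYPE_PRESETS[part_type] * MATERIAL_PRESETS[material]

value = residual_value(0.5, 1000.0, "bumper", "plastic")  # hypothetical inputs
```

Whatever the actual formula, the inputs match the claim: judgment value, accessory price, and the two preset values obtained from the accessory type and material.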
In some embodiments, after obtaining the judgment value of the target accessory according to the accessory integrity of the target accessory and the recognition accuracy of the recognition model, the method further includes:
and if the judgment value is larger than the first preset threshold value, outputting a recovery result according to the integrity of the target accessory.
Another technical scheme of the application is as follows: there is provided a vehicle accessory damage assessment device comprising:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a picture set of a target accessory, the picture set comprises a plurality of images to be processed, and a shooting area of at least one image to be processed covers a damaged part of the target accessory;
the recognition module is used for inputting the picture set of the target accessory into a pre-trained recognition model and outputting a recognition result of the target accessory, wherein the recognition result comprises the accessory type, accessory material and accessory integrity, and the recognition model is obtained by training according to sample accessory images labeled with the sample type, sample material and sample integrity;
the first calculation module is used for acquiring a judgment value of the target accessory according to the accessory integrity of the target accessory and the recognition accuracy of the recognition model;
and the second calculation module is used for acquiring a residual value result of the target accessory by using a preset accessory residual value algorithm according to the judgment value, the accessory price, the accessory type and the accessory material of the target accessory if the judgment value is less than or equal to a first preset threshold.
Another technical scheme of the application is as follows: an electronic device is provided that includes a processor, and a memory coupled to the processor, the memory storing program instructions executable by the processor; the processor, when executing the program instructions stored by the memory, implements the vehicle accessory impairment methodology described above.
Another technical scheme of the application is as follows: there is provided a storage medium having stored therein program instructions that, when executed by a processor, implement a vehicle accessory damage assessment method as described above.
The beneficial effect of the application lies in the following: according to the vehicle accessory damage assessment method, the vehicle accessory damage assessment device, the electronic equipment and the storage medium, a picture set of the target accessory is obtained; the picture set is input into a pre-trained recognition model, which outputs a recognition result of the target accessory; a judgment value of the target accessory is obtained according to the accessory integrity of the target accessory and the recognition accuracy of the recognition model; and, if the judgment value is less than or equal to a first preset threshold, a residual value result of the target accessory is obtained using a preset accessory residual value algorithm according to the judgment value, the accessory price, the accessory type and the accessory material of the target accessory. In this way, automatic calculation of the residual value of the target accessory is achieved, the accuracy of the residual value calculation is improved, and the claim settlement cost is accordingly reduced.
Drawings
FIG. 1 is a schematic flow chart of a method for damage assessment of a vehicle accessory according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a recognition model according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an image partitioning method according to an embodiment of the present disclosure;
FIG. 4 is a schematic view of a vehicle accessory damage assessment device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the application;
fig. 6 is a schematic structural diagram of a storage medium according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. All directional indications (such as up, down, left, right, front, and rear … …) in the embodiments of the present application are only used to explain the relative positional relationship between the components, the movement, and the like in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indication is changed accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
An embodiment of the application provides a vehicle accessory damage assessment method. The execution subject of the method includes, but is not limited to, at least one of a server, a terminal, and other electronic devices that can be configured to execute the method provided by the embodiments of the present application. In other words, the vehicle accessory damage assessment method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to: a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Fig. 1 is a schematic flow chart of a method for determining damage of a vehicle accessory according to an embodiment of the present application. It should be noted that the method of the present application is not limited to the flow sequence shown in fig. 1 if the results are substantially the same. In this embodiment, the vehicle accessory damage assessment method includes the steps of:
s10, acquiring a picture set of the target accessory, wherein the picture set comprises a plurality of images to be processed, and the shooting area of at least one image to be processed covers the damaged part of the target accessory.
In an embodiment of the present application, the target accessory is a damaged accessory of a vehicle having a damaged portion. In some embodiments, the image to be processed in the picture set may include at least one of a still picture and a video stream, for example, the still picture contains the target accessory, or each frame of video of the video stream contains the target accessory.
In some embodiments, step S10 specifically includes the following steps:
s11, acquiring a plurality of captured images of the target accessory;
the shot image can be a still picture or a video stream shot by a loser through a mobile terminal.
S12, performing target detection on the shot image by adopting a target detection algorithm to obtain a detected shot image, wherein the detected shot image comprises a boundary box of the target accessory, and the boundary box is an external area of the target accessory selected by a frame;
in some embodiments, the target detection algorithm is the Mask R-CNN algorithm, which performs target detection on the shot image and outputs a bounding box of the target accessory. That is, if the shot image is a still picture, the target accessory in the detected picture is framed by the bounding box, and the image inside the bounding box is the region image where the target accessory is located; if the shot image is a video stream, the target accessory is framed by a bounding box in each detected video frame, so that the region image where the target accessory is located can be extracted from each frame.
S13, cutting the detected shot image according to the bounding box to obtain the corresponding image to be processed, and constructing the picture set of the target accessory from the plurality of images to be processed;
in some embodiments, the bounding box may frame the minimum circumscribed rectangle of the target accessory to be identified, so as to eliminate the influence of the environment as much as possible while still completely capturing the rich detail features of the target accessory.
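The crop of step S13 reduces, in the simplest case, to array slicing once the detector has returned the box. A minimal sketch, assuming the bounding box is given as pixel coordinates (left, top, right, bottom) — the array shapes and box values below are hypothetical:

```python
import numpy as np

def crop_to_bbox(image, bbox):
    """Cut a detected shot image down to its bounding box.
    `image` is an H x W x C array; `bbox` is (left, top, right, bottom)."""
    left, top, right, bottom = bbox
    return image[top:bottom, left:right]

# Hypothetical 100 x 200 RGB shot image and a detected box around the accessory.
shot = np.zeros((100, 200, 3), dtype=np.uint8)
to_process = crop_to_bbox(shot, (50, 20, 150, 80))  # 60 x 100 x 3 region image
```

The resulting region image is what the description calls the image to be processed.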
Because the shooting angles chosen by the damaged party holding a mobile terminal cannot be unified, and the damaged party tends to choose the angle that best displays the damaged part in order to capture it more clearly, the shooting angles of the target accessory differ greatly between images. As an implementation manner, step S10 is therefore followed by the following steps:
s14, performing noise reduction processing on each image to be processed in the image set to obtain a noise-reduced image set;
in order to improve the clarity of the images to be processed, a noise reduction operation is performed on each image to be processed in the picture set. Specifically, each image to be processed may be denoised with a filter, for example a median filter.
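As an illustrative sketch of the median filtering mentioned above, a hand-rolled 3x3 filter in NumPy is shown below; a production system would more likely call a library routine such as OpenCV's medianBlur:

```python
import numpy as np

def median_filter_3x3(image):
    """Replace each pixel with the median of its 3x3 neighborhood
    (edges handled by replicating the border pixels)."""
    padded = np.pad(image, 1, mode="edge")
    out = np.empty_like(image)
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            out[r, c] = np.median(padded[r:r + 3, c:c + 3])
    return out

# A flat grayscale patch with one speckle of noise: the filter removes it.
noisy = np.full((5, 5), 10, dtype=np.uint8)
noisy[2, 2] = 255
denoised = median_filter_3x3(noisy)
```

Median filtering suits this step because it removes isolated speckle noise without blurring the edges that the later feature extraction relies on.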
S15, performing data enhancement on the image to be processed in the image set after the noise reduction processing to obtain an image set after the data enhancement;
the image to be processed can be subjected to data enhancement by utilizing horizontal overturning, so that the image to be processed with more shooting angles can be obtained.
S16, splicing the images to be processed in the image set after the data enhancement to obtain a panoramic spliced image of the target accessory;
as an implementation manner, an existing panoramic three-dimensional model technique may be used for the stitching: the panoramic three-dimensional model and stitching information of the target accessory may be obtained in advance, or obtained while the picture set is acquired, and each image to be processed is then matched against the panoramic three-dimensional model by a phase correlation algorithm during stitching, so as to obtain the panoramic stitched image of the target accessory. The obtained panoramic stitched image is a three-dimensional panorama of the target accessory.
S17, obtaining a plurality of standard images to be processed of the target accessory according to the panoramic stitching image, and replacing the images to be processed according to the standard images to be processed to update the image set of the target accessory, wherein each standard image to be processed corresponds to a preset shooting angle.
After the three-dimensional panoramic stitched image is obtained, a standard image to be processed corresponding to each preset shooting angle can be cropped from the panoramic stitched image, and these standard images replace the original images to be processed in the picture set. The subsequent identification step then identifies the target accessory from the standard images to be processed.
And S20, inputting the picture set of the target accessory into a pre-trained recognition model, and outputting a recognition result of the target accessory, wherein the recognition result comprises accessory types, accessory materials and accessory integrity, and the recognition model is obtained by training according to a sample accessory image labeled with sample types, sample materials and sample integrity.
In some embodiments, the recognition model is a convolutional neural network (CNN) and may include several convolutional layers and several fully connected layers. A convolutional layer (Conv) is a layered structure composed of several convolution units within a convolutional neural network. A convolutional neural network is a feed-forward neural network comprising at least two neural network layers; each layer contains several neurons, neurons within the same layer are not interconnected, and information is transmitted between layers in one direction only. In a fully connected layer (FC), each node is connected to all nodes of the previous layer; it integrates the features extracted by the preceding layers and acts as the "classifier" of the neural network model.
In some embodiments, the recognition model may further include a batch normalization layer, an activation function layer, and a pooling layer. A batch normalization layer (BN) unifies scattered data so that the inputs to the neural network model follow a common specification, which makes it easier for the model to find regularities in the data and helps optimization. An activation function layer (AF) is a layered structure of functions that run on the neurons of the model and map neuron inputs to outputs; by introducing a nonlinear function, the model's output can approximate arbitrary nonlinear functions. A pooling layer (also called a sampling layer), placed after a convolutional layer, extracts features from its input a second time: it preserves the main features of the previous layer's values while reducing the parameters and computation of the next layer. A pooling layer is composed of several feature maps; each feature map of the convolutional layer corresponds to one feature map of the pooling layer, so the number of feature maps is unchanged, and features with spatial invariance are obtained by reducing the resolution of the feature maps.
Specifically, fig. 2 is a schematic structural diagram of a recognition model provided in an exemplary embodiment of the present application. Referring to fig. 2, the recognition model includes an input layer, two convolution modules, and three fully-connected modules. Each convolution module includes at least one convolution layer and may further include a batch normalization layer, an activation function layer, or a pooling layer; each fully-connected module includes at least one fully-connected layer, and the three fully-connected modules correspond to the accessory type, the accessory material, and the accessory integrity, respectively. The convolution layer of the first convolution module extracts features from each image to be processed in the picture set of the target accessory; the convolution layer of the second convolution module performs a convolution operation on the output of the first convolution module. The first fully-connected module obtains, from the output of the second convolution module, a probability value of the target accessory for each preset accessory type, where the preset accessory types include headlights, bumpers, turn signals, front cover plates, rear cover plates, side trim panels, panel assemblies, and the like. The second fully-connected module obtains, from the output of the second convolution module, a probability value of the target accessory for each preset accessory material, where the preset accessory materials include copper, iron, aluminum, plastic, rubber, and the like. The third fully-connected module obtains the integrity of the target accessory from the output of the second convolution module; for example, the integrity of the target accessory may be 90%, 89%, 80%, and so on, where a new accessory has an integrity of 100%, and a damaged target accessory has an integrity of less than 100% due to deformation or loss of the damaged portion.
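As an illustrative aid (not part of the patent disclosure), the three-head structure described above can be sketched in a few lines of NumPy. The feature vector, head dimensions, and random weights below are hypothetical stand-ins for a trained network:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
feat = rng.normal(size=64)  # stand-in for the second convolution module's output

CATEGORIES = ["headlight", "bumper", "turn signal", "front cover",
              "rear cover", "side trim", "panel assembly"]
MATERIALS = ["copper", "iron", "aluminum", "plastic", "rubber"]

# Random weights as placeholders for trained fully-connected parameters.
W_cat = rng.normal(size=(len(CATEGORIES), 64)); b_cat = np.zeros(len(CATEGORIES))
W_mat = rng.normal(size=(len(MATERIALS), 64)); b_mat = np.zeros(len(MATERIALS))
W_int = rng.normal(size=(1, 64)); b_int = np.zeros(1)

cat_probs = softmax(W_cat @ feat + b_cat)   # first head: accessory-type probabilities
mat_probs = softmax(W_mat @ feat + b_mat)   # second head: material probabilities
z_int = (W_int @ feat + b_int)[0]
integrity = 1.0 / (1.0 + np.exp(-z_int))    # third head: integrity squashed into [0, 1]
```

The sigmoid on the third head is one plausible way to bound the integrity output; the patent does not specify the exact output activation.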
As an embodiment, the images to be processed are divided into at least one category according to the shooting angle of the target accessory, where the shooting angles may include, but are not limited to, a front view angle, a left view angle, a right view angle, a top view angle, a bottom view angle, and the like; correspondingly, step S20 specifically includes the following steps:
S21, dividing the image to be processed into different areas according to a preset dividing manner, extracting first features of the different areas in the image to be processed respectively, and outputting a first image feature matrix of the image to be processed, where the first image feature matrix includes the first features of the different areas;
As an embodiment, the preset dividing manner may be as shown in fig. 3: the image to be processed is divided into image blocks of the same size, each image block corresponding to one area. When feature extraction is performed on the image to be processed, feature extraction is performed on each image block separately to obtain the first feature of each image block, giving the first image feature matrix of the ith image to be processed F_i = [F_i^1, F_i^2, …, F_i^j, …, F_i^N], where N is the number of image blocks, F_i^j is the first feature of the jth image block (1 ≤ j ≤ N), the ith image to be processed corresponds to the ith shooting angle, 1 ≤ i ≤ M, and M is the number of shooting angles.
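The block partition and per-block feature extraction of step S21 can be illustrated with a toy stand-in. Here the mean intensity of each block replaces a learned convolutional feature, and the 8×8 image and 4×4 grid are arbitrary choices for demonstration:

```python
import numpy as np

def first_feature_matrix(image, blocks_per_side=4):
    """Split a square image into equal-size blocks and compute one simple
    feature per block (mean intensity), yielding F_i = [F_i^1, ..., F_i^N]."""
    h, w = image.shape
    bh, bw = h // blocks_per_side, w // blocks_per_side
    feats = []
    for r in range(blocks_per_side):
        for c in range(blocks_per_side):
            block = image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            feats.append(block.mean())
    return np.array(feats)  # N = blocks_per_side ** 2 entries

img = np.arange(64, dtype=float).reshape(8, 8)
F = first_feature_matrix(img)  # 16 per-block features for this image
```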
S22, multiplying the first image feature matrix by a preset weight matrix to obtain a second image feature of the image to be processed, wherein the weight matrix comprises weights of different areas, and the second image feature comprises second features of different areas;
Here, each weight element in the weight matrix corresponds to one image block, the weight matrix being W_i = [w_i^1, w_i^2, …, w_i^j, …, w_i^N], where N is the number of image blocks and w_i^j (1 ≤ j ≤ N) is the weight of the corresponding area. The second image feature of the ith image to be processed is B_i = [B_i^1, B_i^2, …, B_i^j, …, B_i^N], where N is the number of image blocks, B_i^j is the second feature of the corresponding area, and B_i^j = F_i^j · w_i^j, 1 ≤ j ≤ N.
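The element-wise weighting B_i^j = F_i^j · w_i^j of step S22 reduces to a single vector product; the feature and weight values below are made up for illustration:

```python
import numpy as np

# F_i: first features of the N regions of one image (hypothetical values);
# w_i: per-region weights from the preset weight matrix (hypothetical values).
F_i = np.array([0.2, 0.5, 0.9, 0.1])
w_i = np.array([1.0, 2.0, 0.5, 1.0])

B_i = F_i * w_i  # second image feature: B_i^j = F_i^j * w_i^j, element-wise
```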
S23, acquiring a feature matrix of the target accessory, wherein the feature matrix comprises second features of different areas of different shooting angles of the target accessory;
The second image features of the different images to be processed of the target accessory are merged to obtain the feature matrix of the target accessory T = [B_1^1, …, B_1^N, …, B_i^1, …, B_i^N, …, B_M^1, …, B_M^N].
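Merging the second image features of the M shooting angles is a plain concatenation; a sketch with hypothetical M = 3 views and N = 4 regions:

```python
import numpy as np

M, N = 3, 4  # M shooting angles, N regions per image (illustrative sizes)
rng = np.random.default_rng(1)
second_features = [rng.normal(size=N) for _ in range(M)]  # B_1, ..., B_M

# T = [B_1^1..B_1^N, ..., B_M^1..B_M^N]: one flat feature matrix per accessory.
T = np.concatenate(second_features)
```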
S24, outputting the recognition result according to the feature matrix of the target accessory.
In some embodiments, the feature matrix of the target accessory is subjected to a convolution operation and a fully-connected operation in sequence, and the recognition result is output.
S30, obtaining the judgment value of the target accessory according to the accessory integrity of the target accessory and the identification accuracy of the identification model.
As an embodiment, the judgment value of the target accessory may be calculated as follows: judgment value X = integrity a / accuracy b. For example, if the integrity a output by the recognition model is 80% and the recognition accuracy of the recognition model is 90%, the corresponding judgment value is 80% / 90% ≈ 89%.
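The calculation above is a single division; a minimal helper makes the rounding in the worked example explicit:

```python
def judgment_value(integrity, accuracy):
    """X = a / b, where a is the integrity output by the recognition model
    and b is the model's recognition accuracy (both as fractions)."""
    return integrity / accuracy

X = judgment_value(0.80, 0.90)  # 0.888..., i.e. about 89% as in the example
```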
In this embodiment, the judgment value is used to indicate whether the target accessory undergoes recycling processing or scrapping processing.
As another embodiment, step S20 is preceded by the following steps:
S31, acquiring a data set, where the data set includes picture sets of different accessories, each picture set includes a plurality of images to be processed, the images to be processed are divided into at least one category according to the shooting angle of the accessory, and the shooting area of at least one image to be processed covers the damaged portion of the accessory;
S32, dividing the data set into a training set and a test set, and labeling the images to be processed in the picture set of each accessory in the training set with the sample type, sample material, and sample integrity;
S33, training the recognition model by using the training set to obtain the trained recognition model, where the activation function of the recognition model is f(x) = max(0, w^T x + b), where x is the first image feature matrix of the image to be processed, w is the weight matrix, and b is a preset parameter;
and S34, testing the trained recognition model by using a test set, and acquiring the recognition accuracy of the recognition model according to the recognition result of the trained recognition model.
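The activation function f(x) = max(0, w^T x + b) used in step S33 is a ReLU applied to an affine map; a minimal sketch with made-up inputs:

```python
import numpy as np

def activation(x, w, b):
    """f(x) = max(0, w^T x + b): ReLU over an affine combination of features."""
    return np.maximum(0.0, w @ x + b)

# Hypothetical feature vector, weights, and bias for illustration only.
x = np.array([1.0, -2.0, 0.5])
w = np.array([0.3, 0.1, 0.4])
b = -0.1
y = activation(x, w, b)  # w.x + b = 0.3 - 0.2 + 0.2 - 0.1 = 0.2
```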
As a preferred embodiment, in order to label the integrity more accurately, in step S31, when the data set is acquired, for an image whose shooting area covers a damaged portion, the shooting area of the accessory is additionally scanned by infrared while the image is captured, so as to obtain surface structure information of the damaged portion of the accessory. The degree of damage of the damaged portion is then determined from the captured image and the surface structure information, from which the integrity of the accessory is derived for labeling.
And S40, if the judgment value is less than or equal to the first preset threshold value, acquiring a residual value result of the target accessory by using a preset accessory residual value algorithm according to the judgment value, the accessory price, the accessory type and the accessory material of the target accessory.
When the judgment value is less than or equal to the first preset threshold value, the integrity of the target accessory is low and the recovery cost is high, so scrap-deduction processing is performed, and the residual value result of the target accessory is obtained according to the judgment value, the accessory price corresponding to the target accessory, the accessory type output by the recognition model, and the accessory material output by the recognition model.
As an embodiment, step S40 specifically includes the following steps:
S41, acquiring a corresponding type preset value according to the accessory type;
In this embodiment, the type preset value is a correction parameter, and different accessory types correspond to different type preset values. A type parameter table recording the correspondence between accessory types and type preset values may be preset, and the type preset value may be looked up in the type parameter table according to the accessory type.
S42, acquiring a corresponding material preset value according to the accessory material;
In this embodiment, the material preset value is also a correction parameter, and different accessory materials correspond to different material preset values. A material parameter table recording the correspondence between accessory materials and material preset values may be preset, and the corresponding material preset value may be looked up in the material parameter table according to the accessory material.
And S43, obtaining a residual value result of the target accessory by using a preset accessory residual value algorithm according to the judgment value of the target accessory, the accessory price, the class preset value corresponding to the accessory class and the material preset value corresponding to the accessory material.
As an embodiment, the preset accessory residual value algorithm is: residual value result r = judgment value X × type preset value c × accessory price e + material preset value d. That is, the residual value result of the target accessory is the product of the judgment value, the type preset value, and the accessory price, plus the material preset value.
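The residual-value formula r = X·c·e + d is straightforward to express in code; the preset values and price below are invented for illustration, as the patent leaves them to the parameter tables:

```python
def residual_value(judgment, type_preset, price, material_preset):
    """r = X * c * e + d, per the preset accessory residual-value algorithm:
    judgment value X, type preset value c, accessory price e, material preset d."""
    return judgment * type_preset * price + material_preset

# Hypothetical inputs: X = 0.6, c = 0.8, e = 1000.0, d = 50.0
r = residual_value(0.6, 0.8, 1000.0, 50.0)  # 0.6 * 0.8 * 1000 + 50 = 530.0
```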
In some embodiments, step S30 is followed by the following steps:
and S50, if the judgment value is larger than the first preset threshold value, outputting a recovery result according to the integrity of the target accessory.
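The branch between steps S40 and S50 amounts to a one-line routing function. The threshold value 0.7 below is an arbitrary illustration, since the text does not fix the first preset threshold:

```python
def disposition(judgment, threshold=0.7):
    """Route the accessory: scrap-deduction / residual-value processing (S40)
    when X <= threshold, recycling processing (S50) otherwise.
    The threshold value is hypothetical."""
    return "residual-value" if judgment <= threshold else "recycle"
```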
As shown in fig. 4, an embodiment of the present application provides a vehicle accessory damage assessment apparatus, where the apparatus 40 includes an obtaining module 41, an identification module 42, a first calculating module 43, and a second calculating module 44, where the obtaining module 41 is configured to obtain a picture set of a target accessory, where the picture set includes a plurality of images to be processed, and a shooting area of at least one of the images to be processed covers a damaged portion of the target accessory; the identification module 42 is configured to input the picture set of the target accessory into a pre-trained identification model, and output an identification result of the target accessory, where the identification result includes an accessory type, an accessory material, and an accessory integrity, and the identification model is obtained by training according to a sample accessory image labeled with a sample type, a sample material, and a sample integrity; a first calculating module 43, configured to obtain a determination value of the target accessory according to the accessory integrity of the target accessory and the recognition accuracy of the recognition model; a second calculating module 44, configured to, if the determination value is smaller than or equal to the first preset threshold, obtain a residual value result of the target accessory by using a preset accessory residual value algorithm according to the determination value, the accessory price, the accessory type, and the accessory material of the target accessory.
In some embodiments, the acquisition module 41 is further configured to acquire a plurality of captured images of the target accessory; perform target detection on the captured images using a target detection algorithm to obtain detected captured images, where each detected captured image includes a bounding box of the target accessory, the bounding box framing the circumscribed region of the target accessory; and crop the detected captured images according to the bounding boxes to obtain the corresponding images to be processed, and construct the picture set of the target accessory from the plurality of images to be processed.
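The crop-to-bounding-box step can be sketched as simple array slicing. The (x_min, y_min, x_max, y_max) coordinate convention is an assumption, as a real detector (the patent names no specific algorithm) would supply the box:

```python
import numpy as np

def crop_to_bbox(image, bbox):
    """Crop a detected captured image to the accessory's bounding box.
    bbox = (x_min, y_min, x_max, y_max) in pixels (hypothetical convention)."""
    x0, y0, x1, y1 = bbox
    return image[y0:y1, x0:x1]

img = np.zeros((100, 120))                 # stand-in captured image (H=100, W=120)
patch = crop_to_bbox(img, (10, 20, 70, 90))  # image to be processed
```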
In some embodiments, the images to be processed are divided into at least one category according to the shooting angle of the target accessory; correspondingly, the identifying module 42 is further configured to divide the image to be processed into different regions according to a preset dividing manner, extract first features of the different regions in the image to be processed respectively, and output a first image feature matrix of the image to be processed, where the first image feature matrix includes first features of the different regions; multiplying the first image feature matrix by a preset weight matrix to obtain a second image feature of the image to be processed, wherein the weight matrix comprises weights of different areas, and the second image feature comprises second features of different areas; acquiring a feature matrix of the target accessory, wherein the feature matrix comprises second features of different areas of different shooting angles of the target accessory; and outputting the identification result according to the feature matrix of the target accessory.
In some embodiments, the identification module 42 is further configured to acquire a data set, where the data set includes picture sets of different accessories, each picture set includes a plurality of images to be processed, the images to be processed are divided into at least one category according to the shooting angle of the accessory, and the shooting area of at least one image to be processed covers the damaged portion of the accessory; divide the data set into a training set and a test set, and label the images to be processed in the picture set of each accessory in the training set with the sample type, sample material, and sample integrity; train the recognition model using the training set to obtain the trained recognition model, where the activation function of the recognition model is f(x) = max(0, w^T x + b), where x is the first image feature matrix of the image to be processed, w is the weight matrix, and b is a preset parameter; and test the trained recognition model using the test set, and obtain the recognition accuracy of the recognition model according to the recognition results of the trained recognition model.
In some embodiments, the obtaining module 41 is further configured to perform noise reduction processing on each to-be-processed image in the picture set, so as to obtain a noise-reduced picture set; performing data enhancement on the image to be processed in the image set after the noise reduction processing to obtain a data-enhanced image set; splicing the images to be processed in the image set after the data enhancement to obtain a panoramic spliced image of the target accessory; and acquiring a plurality of standard images to be processed of the target accessory according to the panoramic mosaic, and replacing the images to be processed according to the standard images to be processed to update the image set of the target accessory, wherein each standard image to be processed corresponds to a preset shooting angle.
In some embodiments, the second calculating module 44 is further configured to obtain a corresponding category preset value according to the category of the accessory; acquiring a corresponding material preset value according to the accessory material; and acquiring a residual value result of the target accessory by using a preset accessory residual value algorithm according to the judgment value of the target accessory, the accessory price, the type preset value corresponding to the accessory type and the material preset value corresponding to the accessory material.
In some embodiments, the second calculating module 44 is further configured to output a recycling result according to the integrity of the target component if the determination value is greater than the first preset threshold.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device 50 includes a processor 51 and a memory 52 coupled to the processor 51.
The memory 52 stores program instructions for implementing the vehicle accessory damage assessment method of any of the embodiments described above.
The processor 51 is operable to execute program instructions stored in the memory 52 to perform vehicle accessory damage assessment.
The processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip having signal processing capabilities. The processor 51 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a storage medium according to an embodiment of the present application. The storage medium 60 of the embodiment of the present application stores program instructions 61 capable of implementing all the methods described above, where the program instructions 61 may be stored in the storage medium in the form of a software product, and include several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a mobile hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or terminal devices, such as a computer, a server, a mobile phone, and a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only a logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings are included in the scope of the present disclosure.
While the foregoing is directed to embodiments of the present application, it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims.

Claims (10)

1. A method of damage assessment of a vehicle accessory, comprising:
acquiring a picture set of a target accessory, wherein the picture set comprises a plurality of images to be processed, and a shooting area of at least one image to be processed covers a damaged part of the target accessory;
inputting the picture set of the target accessory into a pre-trained recognition model, and outputting a recognition result of the target accessory, wherein the recognition result comprises accessory types, accessory materials and accessory integrity, and the recognition model is obtained by training according to a sample accessory image labeled with the sample types, the sample materials and the sample integrity;
acquiring a judgment value of the target accessory according to the accessory integrity of the target accessory and the identification accuracy of the identification model;
and if the judgment value is smaller than or equal to the first preset threshold value, acquiring a residual value result of the target accessory by using a preset accessory residual value algorithm according to the judgment value, the accessory price, the accessory type and the accessory material of the target accessory.
2. The vehicle accessory damage assessment method of claim 1, wherein said obtaining a set of pictures of a target accessory comprises:
acquiring a plurality of shot images of the target accessory;
performing target detection on the captured images by using a target detection algorithm to obtain detected captured images, wherein each detected captured image comprises a bounding box of the target accessory, the bounding box framing the circumscribed region of the target accessory;
and cutting the detected shot image according to the boundary frame to obtain the corresponding image to be processed, and constructing the picture set of the target accessory according to a plurality of images to be processed.
3. The vehicle accessory damage assessment method according to claim 1, wherein the images to be processed are classified into at least one class according to the photographing angle of the target accessory;
the inputting the picture set of the target accessory into a pre-trained recognition model and outputting the recognition result of the target accessory comprises:
dividing the image to be processed into different areas according to a preset dividing mode, respectively extracting first features of the different areas in the image to be processed, and outputting a first image feature matrix of the image to be processed, wherein the first image feature matrix comprises the first features of the different areas;
multiplying the first image feature matrix by a preset weight matrix to obtain a second image feature of the image to be processed, wherein the weight matrix comprises weights of different areas, and the second image feature comprises second features of different areas;
acquiring a feature matrix of the target accessory, wherein the feature matrix comprises second features of different areas of different shooting angles of the target accessory;
and outputting the identification result according to the feature matrix of the target accessory.
4. The vehicle accessory damage assessment method according to claim 3, wherein before obtaining the determination value of the target accessory based on the accessory integrity of the target accessory and the recognition accuracy of the recognition model, further comprising:
acquiring a data set, wherein the data set comprises picture sets of different accessories, the picture sets comprise a plurality of images to be processed, the images to be processed are divided into at least one type according to the shooting angle of the accessories, and the shooting area of at least one image to be processed covers the damage part of the accessory;
dividing the data set into a training set and a testing set, and labeling the type, material and integrity of a sample of the image to be processed of the picture set of each accessory in the training set;
training a recognition model by using the training set to obtain a trained recognition model, wherein an activation function of the recognition model is f(x) = max(0, w^T x + b), wherein x is the first image feature matrix of the image to be processed, w is the weight matrix, and b is a preset parameter;
and testing the trained recognition model by using a test set, and acquiring the recognition accuracy of the recognition model according to the recognition result of the trained recognition model.
5. The vehicle accessory damage assessment method of claim 3, wherein after said obtaining a set of pictures of the target accessory, further comprising:
performing noise reduction processing on each image to be processed in the image set to obtain a noise-reduced image set;
performing data enhancement on the image to be processed in the image set after the noise reduction processing to obtain a data-enhanced image set;
splicing the images to be processed in the image set after the data enhancement to obtain a panoramic spliced image of the target accessory;
and acquiring a plurality of standard images to be processed of the target accessory according to the panoramic mosaic, and replacing the images to be processed according to the standard images to be processed to update the image set of the target accessory, wherein each standard image to be processed corresponds to a preset shooting angle.
6. The method of claim 1, wherein the obtaining a residual value result of the target component by a predetermined component residual value algorithm according to the determination value of the target component, a component price, a component type, and a component material comprises:
acquiring a corresponding type preset value according to the type of the accessory;
acquiring a corresponding material preset value according to the accessory material;
and acquiring a residual value result of the target accessory by using a preset accessory residual value algorithm according to the judgment value of the target accessory, the accessory price, the type preset value corresponding to the accessory type and the material preset value corresponding to the accessory material.
7. The vehicle accessory damage assessment method according to claim 1, wherein after obtaining the determination value of the target accessory according to the accessory integrity of the target accessory and the recognition accuracy of the recognition model, further comprising:
and if the judgment value is larger than the first preset threshold value, outputting a recovery result according to the integrity of the target accessory.
8. A vehicle accessory damage assessment device, comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a picture set of a target accessory, the picture set comprises a plurality of images to be processed, and a shooting area of at least one image to be processed covers a damaged part of the target accessory;
the identification module is used for inputting the picture set of the target accessory into a pre-trained identification model and outputting an identification result of the target accessory, wherein the identification result comprises accessory types, accessory materials and accessory integrity, and the identification model is obtained by training according to a sample accessory image marked with sample types, sample materials and sample integrity;
the first calculation module is used for acquiring a judgment value of the target accessory according to the accessory integrity of the target accessory and the identification accuracy of the identification model;
and the second calculation module is used for acquiring a residual value result of the target accessory by using a preset accessory residual value algorithm according to the judgment value, the accessory price, the accessory type and the accessory material of the target accessory if the judgment value is less than or equal to the first preset threshold value.
9. An electronic device comprising a processor, and a memory coupled to the processor, the memory storing program instructions executable by the processor; the processor, when executing the program instructions stored in the memory, implements the vehicle accessory damage assessment method of any of claims 1-7.
10. A storage medium having stored therein program instructions which, when executed by a processor, implement a vehicle accessory damage assessment method according to any one of claims 1 to 7.
CN202210828958.4A 2022-07-15 2022-07-15 Vehicle accessory damage assessment method and device, electronic equipment and storage medium Pending CN114998043A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210828958.4A CN114998043A (en) 2022-07-15 2022-07-15 Vehicle accessory damage assessment method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210828958.4A CN114998043A (en) 2022-07-15 2022-07-15 Vehicle accessory damage assessment method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114998043A true CN114998043A (en) 2022-09-02

Family

ID=83021876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210828958.4A Pending CN114998043A (en) 2022-07-15 2022-07-15 Vehicle accessory damage assessment method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114998043A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination