CN116935293A - Automatic processing method and system for remote video exploration and damage assessment - Google Patents



Publication number
CN116935293A
Authority
CN
China
Prior art keywords: vehicle, image, algorithm, neural network, damage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311193673.9A
Other languages
Chinese (zh)
Other versions
CN116935293B (en)
Inventor
高云
Current Assignee
Guoren Property Insurance Co ltd
Original Assignee
Guoren Property Insurance Co ltd
Priority date
Filing date
Publication date
Application filed by Guoren Property Insurance Co ltd
Priority: CN202311193673.9A
Publication of CN116935293A
Application granted
Publication of CN116935293B
Legal status: Active
Anticipated expiration


Classifications

    • G06V20/40: Scenes; scene-specific elements in video content
    • G06Q40/08: Insurance
    • G06V10/56: Extraction of image or video features relating to colour
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Medical Informatics (AREA)
  • Finance (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an automatic processing method and system for remote video exploration and damage assessment, comprising the following steps: S1: a data acquisition module acquires accident-scene image data; S2: the data are processed with the Felzenszwalb algorithm to extract the image of the damaged part of the vehicle; S3: a trained improved convolutional neural network model comprehensively judges the damage degree from the features of the damaged-part image extracted by the Felzenszwalb algorithm together with the service life of the vehicle, the vehicle brand and the historical risk conditions; S4: the improved convolutional neural network gives a payout scheme according to the damage-degree judgment result combined with the insurance rules; S5: end. By extracting features and judging the damage degree with the Felzenszwalb algorithm combined with the improved convolutional neural network, and by giving the payout scheme from the damage-degree judgment result, the application realizes automatic insurance processing and greatly enhances the degree of automation of insurance claims.

Description

Automatic processing method and system for remote video exploration and damage assessment
Technical Field
The application relates to the technical field of automatic insurance processing, and in particular to an automatic processing method and system for remote video exploration and damage assessment.
Background
With the development of technology, especially the advancement of computer vision and deep learning technology, remote video exploration and loss assessment has become an important trend in the insurance industry. By using advanced image processing and machine learning techniques, insurance companies can quickly and accurately judge the damage level of vehicles, thereby improving the efficiency and fairness of damage assessment.
In the conventional damage-assessment process, insurance companies need to send specialized assessors to the scene for investigation, which is time-consuming and labor-intensive. Moreover, owing to human factors, the assessment results may show some subjectivity and inconsistency. Therefore, how to improve the efficiency and fairness of damage assessment by technological means has become an important task for the insurance industry.
Currently, some studies and applications have attempted remote video exploration and damage assessment using image processing and machine learning techniques, for example image segmentation algorithms and machine learning classifiers. These methods improve the efficiency and accuracy of damage assessment to some extent, but still present problems and challenges.
First, existing methods rely primarily on image information and ignore other factors that may affect the assessment result, such as the age of the vehicle, the vehicle brand and the historical risk conditions. This may impair the fairness and accuracy of the assessment results.
Second, in existing methods the damage-degree judgment cannot be systematically adjusted according to factors such as the service life of the vehicle and the vehicle brand information. Moreover, how to perform effective training and judgment with limited or unsupervised data is a problem still to be solved.
Finally, existing damage-extraction methods are relatively inaccurate and cannot select a targeted segmentation method according to the characteristics of the damage. Furthermore, existing methods generally only output a damage-degree judgment and cannot directly produce a payout scheme; the insurance company therefore needs additional steps to formulate a payout scheme from the damage degree and the insurance rules, which increases the complexity and workload of damage assessment.
Therefore, how to design a remote video exploration and damage-assessment method that comprehensively considers image and non-image information, can work with limited or unsupervised data, and directly produces a payout scheme is an important research topic for the insurance industry.
Disclosure of Invention
In view of the problems in the prior art, the invention provides an automatic processing method and system for remote video exploration and damage assessment. The method adopts the Felzenszwalb algorithm combined with an improved convolutional neural network algorithm to extract features and judge the damage degree, and uses the improved convolutional neural network to give a payout scheme from the damage-degree judgment result combined with the insurance rules, thereby realizing automatic insurance processing and greatly enhancing the degree of claims automation.
The invention discloses a remote video exploration damage assessment automatic processing method, which comprises the following steps:
s1: the data acquisition module acquires accident scene image data;
s2: processing the data by using a Felzenszwalb algorithm, and extracting a damaged part image of the vehicle;
s21: the Felzenszwalb algorithm first builds a graph: the image is converted into a graph in which each pixel corresponds to a vertex and each edge corresponds to the similarity between two pixels; the weight of an edge is obtained by calculating the difference between the pixels;
s22: sorting edges: all edges are sorted by weight, so that the edge with the smallest weight, i.e. the most similar pixel pair, is considered first;
s23: initializing components: each pixel is treated as a separate component;
s24: merging components: the sorted edges are traversed; for each edge, if the two pixels it connects belong to different components and the difference between the two components is less than a given threshold, the two components are merged into a new component;
s3: the trained improved convolutional neural network algorithm model comprehensively judges the damage degree according to the characteristics of the image of the damage part extracted by the Felzenszwalb algorithm, the service life of the vehicle, the brand of the vehicle and the historical risk situation; the improved convolutional neural network algorithm employed is as follows:
a^(l) = tanh( W^(l) · ( w ⊙ x^(l) ) + b^(l) )
wherein a^(l) represents the output of layer l of the neural network, i.e. the result of layer l after the activation function; tanh represents the activation function; x^(l) represents the input of layer l; W^(l) represents the weight matrix of layer l; b^(l) represents the bias vector of layer l; and w represents the weight-factor vector (⊙ denotes the element-wise product) used to adjust the degree of influence of different input features on the damage-degree judgment, including w_year, w_brand and w_history, which represent the weights of the service life of the vehicle, the vehicle brand and the historical risk conditions respectively; the value of each feature is multiplied by its corresponding weight factor to obtain the weighted feature;
s4: the improved convolutional neural network gives a payout scheme according to the damage-degree judgment result combined with the insurance rules;
s5: end.
Preferably, the trained convolutional neural network model comprehensively judges the damage degree from the features of the damaged-part image extracted by the Felzenszwalb algorithm together with the service life of the vehicle, the vehicle brand and the historical risk conditions; the service life, brand and historical risk conditions are encoded as numerical values or vectors and combined with the damaged-part image features extracted by the Felzenszwalb algorithm as the input of the convolutional neural network.
Preferably, the comprehensive damage-degree judgment comprises three categories: when information is input into the trained CNN, each neuron of the output layer gives an activation value, which is regarded as the probability of the damage-degree category corresponding to that neuron; the neuron with the highest activation value is selected, and its category is taken as the damage-degree judgment result.
Preferably, the method further comprises preprocessing the image before extracting the damaged part image of the vehicle by utilizing the Felzenszwalb algorithm, and comprises adopting histogram equalization to perform noise reduction processing.
The application also provides a remote video exploration damage assessment automatic processing system, which comprises:
the image acquisition module is used for acquiring accident scene image data by adopting a CCD camera or a mobile phone data acquisition module;
the vehicle damage part image extraction module is used for processing data by utilizing a Felzenszwalb algorithm to extract a vehicle damage part image;
the graph construction module first builds a graph using the Felzenszwalb algorithm, converting the image into a graph in which each pixel corresponds to a vertex, each edge corresponds to the similarity between two pixels, and the weight of an edge is obtained by calculating the difference between the pixels;
an edge-sorting module, for sorting the edges: all edges are sorted by weight, so that the edge with the smallest weight, i.e. the most similar pixel pair, is considered first;
a component-initialization module: each pixel is initially treated as a separate component;
a component-merging module: the sorted edges are traversed; for each edge, if the two pixels it connects belong to different components and the difference between the two components is less than a given threshold, the two components are merged into a new component;
the convolutional neural network damage degree judging module is used for comprehensively judging the damage degree according to the characteristics of images of damaged parts extracted by a Felzenszwalb algorithm, the service life of the vehicle, the brand of the vehicle and the historical risk conditions by a trained improved convolutional neural network algorithm model; the improved convolutional neural network algorithm employed is as follows:
a^(l) = tanh( W^(l) · ( w ⊙ x^(l) ) + b^(l) )
wherein a^(l) represents the output of layer l of the neural network, i.e. the result of layer l after the activation function; tanh represents the activation function; x^(l) represents the input of layer l; W^(l) represents the weight matrix of layer l; b^(l) represents the bias vector of layer l; and w represents the weight-factor vector (⊙ denotes the element-wise product) used to adjust the degree of influence of different input features on the damage-degree judgment, including w_year, w_brand and w_history, which represent the weights of the service life of the vehicle, the vehicle brand and the historical risk conditions respectively; the value of each feature is multiplied by its corresponding weight factor to obtain the weighted feature;
the payout-scheme generation module, in which the improved convolutional neural network gives a payout scheme according to the damage-degree judgment result combined with the insurance rules;
an end module.
Preferably, the trained convolutional neural network model comprehensively judges the damage degree from the features of the damaged-part image extracted by the Felzenszwalb algorithm together with the service life of the vehicle, the vehicle brand and the historical risk conditions; the service life, brand and historical risk conditions are encoded as numerical values or vectors and combined with the damaged-part image features extracted by the Felzenszwalb algorithm as the input of the convolutional neural network.
Preferably, the comprehensive damage-degree judgment comprises three categories: when information is input into the trained CNN, each neuron of the output layer gives an activation value, which is regarded as the probability of the damage-degree category corresponding to that neuron; the neuron with the highest activation value is selected, and its category is taken as the damage-degree judgment result.
Preferably, the method further comprises preprocessing the image before extracting the damaged part image of the vehicle by utilizing the Felzenszwalb algorithm, and comprises adopting histogram equalization to perform noise reduction processing.
The application provides a remote video exploration damage assessment automatic processing method and a system, which can realize the following beneficial technical effects:
1. By adopting the Felzenszwalb algorithm combined with the improved convolutional neural network algorithm to extract features and judge the damage degree, the application improves the accuracy of insurance claim settlement; by using the improved convolutional neural network to give the payout scheme from the damage-degree judgment result, it greatly enhances the degree of claims automation, improves the efficiency of automatic claims processing, and realizes real-time automatic settlement.
2. The present application employs an improved convolutional neural network in which a weight-factor vector adjusts the degree of influence of different input features on the damage-degree judgment: w_year, w_brand and w_history respectively represent the weights of the service life of the vehicle, the vehicle brand and the historical risk conditions, and the value of each feature is multiplied by its corresponding weight factor to obtain a weighted feature. This greatly improves judgment accuracy; for example, when the service life of the vehicle has a larger influence on the payout, its influence can be strengthened by adjusting w_year, improving both computational efficiency and settlement accuracy.
3. According to the invention, the trained convolutional neural network model comprehensively judges the damage degree from the features of the damaged-part image extracted by the Felzenszwalb algorithm together with the service life of the vehicle, the vehicle brand and the historical risk conditions; encoding the service life, brand and historical risk conditions as numerical values or vectors and combining them with the extracted image features as the input of the convolutional neural network greatly improves judgment accuracy and realizes comprehensive multi-factor automatic claims settlement.
4. The original Felzenszwalb algorithm mainly considers the brightness difference between pixels; the invention introduces colour-space information such as RGB or HSV colour differences, improving the accuracy of image segmentation and greatly improving the degree of automation of damage assessment.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of the steps of the automatic processing method for remote video exploration and damage assessment according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1:
in order to solve the above-mentioned problems mentioned in the prior art, as shown in fig. 1: the invention provides a remote video exploration damage assessment automatic processing method, which comprises the following steps:
s1: the data acquisition module acquires accident scene image data;
s2: processing the data by using a Felzenszwalb algorithm, and extracting a damaged part image of the vehicle;
the Felzenszwalb algorithm is a graph-based image segmentation algorithm that divides images by measuring differences between regions. In our case, we can use this algorithm to identify and extract images of damaged portions of the vehicle.
The following is the basic steps for image extraction using the Felzenszwalb algorithm:
converting the video frames into images: first, we need to convert each frame in the video data into a still image. This may be achieved by various video processing libraries (e.g., openCV).
Preprocessing an image: to improve the performance of the algorithm, some pre-processing of the image, such as graying, noise reduction, etc., may be required.
Image segmentation was performed using the Felzenszwalb algorithm: we can then segment the preprocessed image using the Felzenszwalb algorithm. This algorithm will divide the image into many small areas or "super-pixels".
Extracting an image of the damaged region: finally, we can extract the image of the damaged region by analyzing the segmented image. Specifically, we can assume that the damaged region differs visibly from the surrounding image, so it can be identified by comparing the differences between the superpixels.
The Felzenszwalb algorithm is a graph-based image segmentation algorithm. The basic idea is to consider the image as a Graph (Graph) in which each pixel corresponds to a vertex and the edges between each vertex correspond to the similarity between pixels. The image is then divided by calculating the weights of the edges (i.e., the differences between pixels).
The following is the basic steps for image segmentation using the Felzenszwalb algorithm:
construction diagram: first, we need to convert the image into a graph. In this figure, each pixel corresponds to a vertex, and edges between each vertex correspond to the similarity between pixels. The weight of an edge can be obtained by calculating the difference between pixels.
In image processing, the similarity between pixels can be generally obtained by calculating the difference between pixels. The smaller the difference, the higher the similarity; the larger the difference, the lower the similarity. In this case we can use the color difference or the luminance difference as the difference between pixels.
For example, if we choose the RGB color space, then there will be three channel values per pixel: red (R), green (G) and blue (B). The color difference between pixel i and pixel j can be calculated as:
Dif(i, j) = sqrt((Ri - Rj)^2 + (Gi - Gj)^2 + (Bi - Bj)^2)
where (Ri, Gi, Bi) and (Rj, Gj, Bj) are the RGB values of pixel i and pixel j, respectively.
We can then convert this color difference to a similarity. One common approach is to use a gaussian function as follows:
Sim(i, j) = exp(-Dif(i, j) / (2 * sigma^2))
wherein sigma is a parameter for controlling the decay rate of the similarity. The larger the sigma, the slower the similarity decay; the smaller the sigma, the faster the similarity decays.
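As a minimal sketch (in Python, with illustrative pixel values), the two formulas above can be implemented directly; note that `similarity` follows the text's expression exp(-Dif / (2 * sigma^2)) verbatim:

```python
import math

def color_difference(p, q):
    """Euclidean RGB distance between two pixels, i.e. Dif(i, j) above."""
    (r1, g1, b1), (r2, g2, b2) = p, q
    return math.sqrt((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2)

def similarity(p, q, sigma=10.0):
    """Similarity as written in the text: Sim = exp(-Dif / (2 * sigma^2)).
    Larger sigma makes the similarity decay more slowly."""
    return math.exp(-color_difference(p, q) / (2 * sigma ** 2))

red, dark_red = (255, 0, 0), (200, 0, 0)
print(color_difference(red, dark_red))   # 55.0
print(similarity(red, red))              # identical pixels -> 1.0
```

Identical pixels give similarity 1.0, and the similarity falls toward 0 as the colour difference grows, which is the behaviour the merge criterion below relies on.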
In the Felzenszwalb algorithm, we can take this similarity as the weight of the edge and then use this weight to decide whether to merge the components where the two pixels are located. Specifically, if the similarity of two pixels is above a threshold, then the components in which the two pixels are located are merged; otherwise, the independence of the two components is maintained.
Sorting edges: we then need to sort all edges by weight, so that the edge with the smallest weight (i.e., the most similar pixel pair) is considered first.
Initializing a component: next, we need to initialize a set of components. At the beginning, each pixel is considered a separate component.
Combining component: we then begin traversing the ordered edges. For each edge, if the two pixels it connects belong to different components and the difference between the two components is less than a given threshold, we merge the two components into one new component.
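The four steps above (build the graph, sort edges, initialize components, merge) can be sketched with a union-find structure. This is a simplified fixed-threshold variant as the text describes it; the full Felzenszwalb algorithm instead uses an adaptive, component-size-dependent merge threshold. All pixel values here are illustrative grayscale intensities:

```python
def segment(pixels, width, threshold):
    """Simplified Felzenszwalb-style segmentation: 4-connected grid graph,
    edges sorted by weight (pixel difference), components merged when the
    connecting edge weight is below a fixed threshold."""
    n = len(pixels)
    parent = list(range(n))          # union-find forest, one tree per component

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Build edges between horizontal and vertical neighbours
    edges = []
    for i in range(n):
        x = i % width
        if x + 1 < width:
            edges.append((abs(pixels[i] - pixels[i + 1]), i, i + 1))
        if i + width < n:
            edges.append((abs(pixels[i] - pixels[i + width]), i, i + width))

    for w, a, b in sorted(edges):    # smallest difference first
        ra, rb = find(a), find(b)
        if ra != rb and w < threshold:
            parent[ra] = rb          # merge the two components

    return [find(i) for i in range(n)]   # component label per pixel

# A 4x2 grayscale image: dark left half, bright right half
img = [10, 12, 200, 205,
       11, 13, 202, 204]
labels = segment(img, width=4, threshold=20)
```

With a threshold of 20, the dark and bright halves each collapse into one component, yielding two segments, which is the "superpixel" output the damage-extraction step then analyses.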
S21: the Felzenszwalb algorithm first builds a graph: the image is converted into a graph in which each pixel corresponds to a vertex and each edge corresponds to the similarity between two pixels; the weight of an edge is obtained by calculating the difference between the pixels;
S22: sorting edges: all edges are sorted by weight, so that the edge with the smallest weight, i.e. the most similar pixel pair, is considered first;
S23: initializing components: each pixel is treated as a separate component;
S24: merging components: the sorted edges are traversed; for each edge, if the two pixels it connects belong to different components and the difference between the two components is less than a given threshold, the two components are merged into a new component;
s3: the trained improved convolutional neural network algorithm model comprehensively judges the damage degree according to the characteristics of the image of the damage part extracted by the Felzenszwalb algorithm, the service life of the vehicle, the brand of the vehicle and the historical risk situation; the improved convolutional neural network algorithm employed is as follows:
a^(l) = tanh( W^(l) · ( w ⊙ x^(l) ) + b^(l) )
wherein a^(l) represents the output of layer l of the neural network, i.e. the result of layer l after the activation function; tanh represents the activation function; x^(l) represents the input of layer l; W^(l) represents the weight matrix of layer l; b^(l) represents the bias vector of layer l; and w represents the weight-factor vector (⊙ denotes the element-wise product) used to adjust the degree of influence of different input features on the damage-degree judgment, including w_year, w_brand and w_history, which represent the weights of the service life of the vehicle, the vehicle brand and the historical risk conditions respectively; the value of each feature is multiplied by its corresponding weight factor to obtain the weighted feature;
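A minimal sketch of one layer of the described network, assuming the weight factors are applied element-wise to the input features before the affine transform and tanh activation; all numeric values (features, weights, biases) are illustrative:

```python
import math

def improved_layer(x, W, b, w_factors):
    """One layer of the 'improved' network: inputs are first scaled
    element-wise by the weight-factor vector (e.g. w_year, w_brand,
    w_history for non-image features), then W·(w ⊙ x) + b and tanh."""
    weighted = [wf * xi for wf, xi in zip(w_factors, x)]          # w ⊙ x
    z = [sum(wij * xj for wij, xj in zip(row, weighted)) + bi     # affine
         for row, bi in zip(W, b)]
    return [math.tanh(zi) for zi in z]                            # activation

# Hypothetical non-image features: service life, brand code, claim count
x = [5.0, 1.0, 2.0]
w_factors = [0.5, 0.2, 0.3]          # w_year, w_brand, w_history
W = [[0.1, 0.4, -0.2], [0.3, -0.1, 0.2]]
b = [0.0, 0.1]
out = improved_layer(x, W, b, w_factors)   # two activations in (-1, 1)
```

Increasing `w_factors[0]` (w_year) amplifies the contribution of the service-life feature before the layer ever sees it, which is how the patent describes strengthening that factor's influence on the judgment.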
Convolutional neural networks (Convolutional Neural Network, CNN) are a deep learning algorithm commonly used for image recognition and processing. In this case, we can use CNN to process the extracted vehicle damage image and determine the extent of damage.
The following is a basic procedure for damage level determination using CNN:
image preprocessing: first, we need to pre-process the extracted lesion image, such as resizing, normalizing, etc., to facilitate CNN processing.
Feature extraction: then, we input the preprocessed image into the CNN. CNNs automatically extract features of images through a series of convolution, pooling, and full-connection layers.
Judging the damage degree: finally, we can get the judging result of the damage degree through the output layer of CNN. In particular, each neuron of the output layer corresponds to a possible degree of impairment, and the activation value of the neuron represents the probability of that degree of impairment.
In this case, we not only use a convolutional neural network (CNN) to process the extracted images, but also consider other factors, such as the original condition of the vehicle, its age, the vehicle brand and the historical risk conditions. These factors can be used as additional inputs to the CNN, together with the image features, for the damage-degree judgment.
The method comprises the following specific steps:
extracting image features: first, we input the preprocessed lesion image into the CNN, extracting features of the image through a series of convolution, pooling and full-join layers.
Additional information processing: then, we need to deal with other factors. For example, we can encode the age of the vehicle, the brand of the vehicle, and the historical risk occurrence as values or vectors, which are then used as inputs to the CNN along with image features.
Judging the damage degree: finally, we obtain the judging result of the damage degree through the output layer of the CNN. In particular, each neuron of the output layer corresponds to a possible degree of impairment, and the activation value of the neuron represents the probability of that degree of impairment.
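The encoding step above can be sketched as follows, assuming a normalized age, a one-hot brand vector, and a raw claim count; the brand list and scaling are hypothetical choices, not specified by the text:

```python
def encode_extra_features(age_years, brand, history_claims,
                          known_brands=("BrandA", "BrandB", "BrandC")):
    """Encode non-image factors as a numeric vector: normalized service
    life, one-hot vehicle brand, historical claim count."""
    one_hot = [1.0 if brand == b else 0.0 for b in known_brands]
    return [age_years / 10.0] + one_hot + [float(history_claims)]

image_features = [0.12, 0.87, 0.33]        # illustrative CNN image features
extra = encode_extra_features(5, "BrandB", 2)
cnn_input = image_features + extra         # concatenated network input
```

The concatenated vector is what the text means by using the encoded factors "as inputs to the CNN along with image features".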
In Convolutional Neural Networks (CNNs), the output layer is typically used to perform classification or regression tasks. In this case, we can consider the judgment of the damage degree as a classification task, namely dividing the damage degree into several categories such as "mild", "moderate" and "severe".
The method comprises the following specific steps:
setting the damage degree category: first, we need to set the category of damage level. For example, we can set three categories: "mild", "moderate" and "severe".
Setting an output layer: then, we need to set a corresponding number of neurons at the output layer of the CNN. In this example, we need to set three neurons, one for each category of lesion level.
Training CNN: next, we need to train the CNN using training data labeled with the extent of the impairment. During the training process, the CNN will learn how to predict the extent of damage based on the input image features and additional information.
Judging the damage degree: finally, when we input new images and information into the trained CNN, each neuron of the output layer will give an activation value. This activation value may be considered as a probability of the class of impairment degree to which the neuron corresponds. The neuron with the highest activation value can be selected, and the corresponding category is used as the judgment result of the damage degree.
For example, if the activation value of a "light" neuron is 0.1, the activation value of a "medium" neuron is 0.3, and the activation value of a "heavy" neuron is 0.6, then we can determine that the degree of damage to this vehicle is "heavy".
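The selection of the highest-activation neuron can be sketched directly; the category names follow the "mild"/"moderate"/"severe" classes above:

```python
def classify_damage(activations, classes=("mild", "moderate", "severe")):
    """Return the damage-degree category of the output neuron with the
    highest activation value."""
    best = max(range(len(activations)), key=lambda i: activations[i])
    return classes[best]

print(classify_damage([0.1, 0.3, 0.6]))   # "severe", matching the example
```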
S4: improving a convolutional neural network, and giving a pay scheme according to the damage degree judgment result and the insurance rule; in the insurance industry, the development of payment schemes typically depends on a number of factors, including the extent of damage, insurance clauses, age of the vehicle, brand of vehicle, historical risk, etc. In this case, we can give a pay plan in combination with insurance rules by:
Judging the damage degree: first, we analyze the damage image using a neural network to obtain a determination of the damage degree. The result may be a continuous value indicating the severity of the damage, or a classification result indicating the type of damage.
Insurance rule application: then, we convert the damage-degree judgment result into a payout amount according to the insurance rules. The insurance rules may include fixed payout criteria, such as: for mild damage, pay 10% of the vehicle value; for moderate damage, pay 30% of the vehicle value; for severe damage, pay 50% of the vehicle value. The insurance rules may also take into account factors such as the age of the vehicle, the brand of the vehicle, and the historical claim record; for example, the payout amount may be reduced for vehicles older than 5 years.
And (3) making a payment scheme: finally, we compare the payable amount with the insurance clauses to obtain the final payment scheme. For example, if the payable amount exceeds the maximum payout limit of the insurance clauses, the final payable amount is that maximum limit; if the payable amount is below the claim-free amount (deductible) specified in the insurance clauses, the final payable amount is 0.
S5: and (5) ending.
In some embodiments, the trained convolutional neural network algorithm model comprehensively judges the damage degree according to the characteristics of the image of the damage part extracted by the Felzenszwalb algorithm, the service life of the vehicle, the brand of the vehicle and the historical risk conditions, codes the service life of the vehicle, the brand of the vehicle and the historical risk conditions into numerical values or vectors, and combines the image characteristics of the damage part extracted by the Felzenszwalb algorithm together to serve as the input of the convolutional neural network.
Non-image information such as the age of the vehicle, the brand of the vehicle, and the historical claim record needs to be taken as input to the CNN along with the image features. This requires us to encode this non-image information, converting it into numbers or vectors. The method comprises the following specific steps:
service life of vehicle: this is a numerical information that can be directly used as input. For example, if a car is used for 3 years, we can directly take 3 as input.
Vehicle brand: this is a class of information that we can use One-Hot Encoding (One-Hot Encoding) to convert into vectors. For example, if we have three brands "A", "B", and "C", we can code "A" as [1, 0, 0], "B" as [0, 1, 0], and "C" as [0, 0, 1].
Historical risk conditions: this is a numerical information that can be directly used as input. For example, if a car is at risk 2 times in the last 5 years, we can take 2 directly as input.
Combining information: finally, we need to combine this information with the image features as input to the CNN. In particular, we can stitch this information behind the vector of image features to form a longer vector. For example, if the image feature is a 100-dimensional vector, the vehicle life is 3, the vehicle brand is "A", and the history is 2, then we can combine this information into a 104-dimensional vector: [ image feature, 3, 1, 0, 0, 2].
The above is a specific step of encoding the age of the vehicle, the brand of the vehicle and the historical risk situations into values or vectors and using the values or vectors together with image features as input to the CNN.
In some embodiments, the comprehensively judging the degree of damage includes three categories: when information is input into the trained CNN, each neuron of the output layer gives an activation value, the activation value is regarded as the probability of the category of the damage degree corresponding to the neuron, the neuron with the highest activation value is selected, and the category corresponding to the neuron is used as the judgment result of the damage degree.
In some embodiments, the processing of the data using the Felzenszwalb algorithm further includes pre-processing the image before extracting the vehicle damage image, including noise reduction using histogram equalization.
Example 2
The application also provides a remote video exploration damage assessment automatic processing system, which comprises:
the image acquisition module is used for acquiring accident scene image data by adopting a CCD camera or a mobile phone data acquisition module;
the vehicle damage part image extraction module is used for processing data by utilizing a Felzenszwalb algorithm to extract a vehicle damage part image;
the image construction module first constructs a graph by using the Felzenszwalb algorithm, converting the image into a graph in which each pixel corresponds to a vertex and the edges between vertices correspond to the similarity between pixels; the weight of an edge is obtained by calculating the difference between the pixels;
in the Felzenszwalb algorithm, the image is seen as a graph, each pixel being a node, and each edge connecting two pixels. During execution of the algorithm, pixels are grouped according to differences between pixels (e.g., brightness differences or color differences) to form different components.
Determining whether two pixels belong to the same component is typically accomplished by looking up their "root" node. During execution of the algorithm, each component has a "root" node, and all the "root" nodes of pixels belonging to the same component are identical.
In particular, we can use a data structure called a union-find (disjoint-set) to maintain and query the "root" node of a pixel. In the union-find structure, each node has a pointer toward its "root" node. When we need to query the "root" node of a pixel, we follow this pointer upward until the "root" node is found. When we need to merge two components, we point the "root" node of one component at the "root" node of the other component.
Thus, it is determined whether two pixels belong to the same component, i.e., find their "root" nodes, see if the two "root" nodes are identical. If so, then the two pixels belong to the same component; if different, the two pixels belong to different components.
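The union-find maintenance described above can be sketched as follows (a generic implementation for illustration, not code from the patent):

```python
# Sketch of union-find: find() follows parent pointers to the root node;
# union() points the root of one component at the root of the other.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))  # each pixel starts as its own root

    def find(self, i):
        # Follow the pointers upward until the root is reached
        # (with path compression to keep later lookups fast).
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[ri] = rj  # merge the two components

uf = UnionFind(4)
uf.union(0, 1)
same = uf.find(0) == uf.find(1)       # pixels 0 and 1 now share a root
different = uf.find(0) == uf.find(2)  # pixel 2 is still a separate component
```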
And a sequencing module for sequencing the edges: ordering all edges according to the weight, wherein the edge with the minimum weight, namely the pixel with the highest similarity, is considered firstly;
Initializing a component module: each pixel is considered a separate component;
and (3) combining the component modules: traversing the ordered edges, for each edge, merging the two components into a new component if the two pixels it connects belong to different components and the difference between the two components is less than a given threshold;
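A toy version of the build/sort/merge pipeline of the three modules above can be sketched as follows. It uses a fixed merge threshold rather than the adaptive criterion of the full Felzenszwalb algorithm, so it illustrates the steps, not the patented method itself:

```python
# Mini graph-based segmentation: build edges between 4-connected neighbors,
# sort them by brightness difference, and merge components greedily.
import numpy as np

image = np.array([[0.0, 0.1, 0.9],
                  [0.0, 0.1, 0.9]])
h, w = image.shape
parent = list(range(h * w))  # one component per pixel initially

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

# Build edges between neighboring pixels, weighted by brightness difference.
edges = []
for y in range(h):
    for x in range(w):
        if x + 1 < w:
            edges.append((abs(image[y, x] - image[y, x + 1]), y * w + x, y * w + x + 1))
        if y + 1 < h:
            edges.append((abs(image[y, x] - image[y + 1, x]), y * w + x, (y + 1) * w + x))

edges.sort()     # smallest weight, i.e. most similar pixel pair, first
THRESHOLD = 0.5  # fixed threshold (a simplification of the adaptive one)
for weight, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj and weight < THRESHOLD:
        parent[ri] = rj  # merge the two components

components = {find(i) for i in range(h * w)}
```

On this toy image the dark left region and the bright right column end up as two separate components.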
the convolutional neural network damage degree judging module is used for comprehensively judging the damage degree according to the characteristics of images of damaged parts extracted by a Felzenszwalb algorithm, the service life of the vehicle, the brand of the vehicle and the historical risk conditions by a trained improved convolutional neural network algorithm model; the improved convolutional neural network algorithm employed is as follows:
x_{l+1} = tanh(W_l * (w ∘ x_l) + b_l),

wherein x_{l+1} represents the output of layer l+1 of the neural network, i.e. the result of layer l after processing by the activation function; tanh represents the activation function; x_l represents the input of layer l of the neural network; W_l represents the weight matrix of layer l; b_l represents the bias vector of layer l; w represents a weight-factor vector, with ∘ denoting element-wise multiplication, used for adjusting the influence degree of different input features on the damage-degree judgment; w includes w_year, w_brand and w_history, which represent the weights of the vehicle service life, the vehicle brand and the historical risk conditions respectively, and the value of each feature is multiplied by the corresponding weight factor to obtain a weighted feature;
In Convolutional Neural Networks (CNNs), the extraction of image features is mainly achieved by convolutional layers, pooling layers and fully connected layers. The method comprises the following specific steps:
convolution layer: the convolution layer is a core component of the CNN that extracts local features of the image through convolution operations. Specifically, the convolution layer will slide a small window (i.e., convolution kernel) over the image, and at each location, the dot product of the pixels in the window and the convolution kernel will be calculated, resulting in a new pixel value. This process may extract low-level features such as edges, textures, etc. of the image.
Introducing a weight factor: in the formulation of the neural network, we can introduce a weighting factor for adjusting the importance of the different inputs. For example, we can consider the age of a vehicle to have a greater impact on the extent of damage than the brand of the vehicle, and therefore we can give greater weight to the input of the age of the vehicle. This weighting factor can be learned and optimized as a parameter of the neural network by training data.
The specific formula can be expressed as:

x_{l+1} = f(W_l * (w ∘ x_l) + b_l)

where:

x_{l+1}: represents the output of layer l+1 of the neural network, i.e. the result of layer l after being processed by the activation function.

f: represents an activation function, such as ReLU, Sigmoid or tanh. The role of the activation function is to introduce nonlinearity so that the neural network can fit complex functions.

W_l: represents the weight matrix of layer l. The weight matrix is the main parameter of the neural network; it determines how the input signals are transformed and combined.

x_l: represents the input of layer l of the neural network, i.e. the output of layer l-1.

*: represents the matrix multiplication operation, i.e. multiplying the input signal by the weight matrix.

b_l: represents the bias vector of layer l. The bias is also a parameter of the neural network that can adjust the activation threshold of the neurons.

w: represents the weight-factor vector used for adjusting the importance of the different inputs, with ∘ denoting element-wise multiplication. The weight factors can be treated as parameters of the neural network and are learned and optimized through the training data.
Pooling layer: the pooling layer, typically after the convolution layer, acts to reduce the spatial size of the image, reducing the computational effort while maintaining the main features of the image. Common pooling operations are maximum pooling (taking the maximum value within a window) and average pooling (taking the average value within a window).
In neural networks, weighting factors may be used to adjust the importance of different input features. In this case, we can set a weight factor w_year for the age of the vehicle. This weighting factor may characterize the extent to which the age of the vehicle affects the damage level determination.
Specifically, we can multiply the value of the age of the vehicle by the weight factor w_year to obtain a weighted age characteristic. This weighted feature is input into the neural network along with other features.
For example, assuming that the service life of a vehicle is 3 years and the weight factor w_year is 2, the weighted service life characteristic is 3×2=6. This means that the influence of the age of the vehicle on the judgment of the degree of damage is amplified by a factor of 2.
In the training process of the neural network, the weight factor w_year is a learnable parameter, and can be automatically adjusted through an optimization algorithm (such as gradient descent), so that the output of the neural network is as close as possible to the real label of the training data. Thus, the neural network can automatically learn the real influence degree of the service life of the vehicle on the damage degree judgment without the need of preset.
In the solution of the present application, "w" represents a weight factor, which is used to adjust the degree of influence of different input features on the damage degree judgment. Specifically, our input features include image features extracted by the Felzenszwalb algorithm, as well as non-image information such as age of the vehicle, brand of the vehicle, and historical risk issues.
Herein, "w" may be considered as a vector in which each element corresponds to a weight of an input feature. For example, w_year, w_brand, and w_history may represent weights for vehicle age, vehicle brand, and historical risk situations, respectively. These weighting factors may be learned and optimized through a training process of the neural network.
In the input of the neural network, the value of each feature is multiplied by a corresponding weight factor to obtain a weighted feature. These weighted features are input together into the neural network for calculation of the determination of the degree of impairment.
For example, if the service life of a vehicle is 3 years and the weight factor w_year is 2, the weighted service life characteristic is 3×2=6. This means that the influence of the age of the vehicle on the judgment of the degree of damage is amplified by a factor of 2.
Fully connected layer: the fully connected layer typically occupies the last few layers of the CNN; it connects every neuron of the previous layer to each of its own neurons, thus enabling global integration of features. Before the fully connected layer there is typically a flattening operation that converts the two-dimensional image features into a one-dimensional vector.
The above is the basic step of extracting image features in CNN. It should be noted that this is just one possible method, and the actual implementation may be adjusted according to the specific application scenario and requirement. For example, some modern CNN architectures may also include other types of layers, such as normalization layers, activation layers, dropout layers, and the like.
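The three stages just described (convolution, max pooling, flattening) can be sketched on a toy image; the kernel and sizes here are illustrative only:

```python
# Toy CNN feature-extraction pipeline: valid convolution -> 2x2 max
# pooling -> flatten for a fully connected layer.
import numpy as np

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])  # simple vertical-edge filter

# Convolution layer ("valid" mode): dot product of the kernel with each window.
conv = np.empty((4, 4))
for y in range(4):
    for x in range(4):
        conv[y, x] = np.sum(image[y:y + 2, x:x + 2] * kernel)

# Max-pooling layer: maximum over non-overlapping 2x2 windows.
pooled = conv.reshape(2, 2, 2, 2).max(axis=(1, 3))

# Flattening: convert the 2-D feature map into a 1-D vector.
flat = pooled.reshape(-1)
```

On this ramp image every window has the same horizontal gradient, so every convolution response is -2.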
The pay plan generating module is used for improving the judgment result of the convolutional neural network according to the damage degree and combining with the insurance rule to give a pay plan; we can improve Convolutional Neural Networks (CNNs) by:
judging the damage degree: first, we analyze the damage image using the CNN to obtain a judgment of the damage degree. The result may be a continuous value indicating the severity of the damage, or a classification result indicating the type of damage.
Insurance rule coding: then we need to encode the insurance rules into a form that can be processed by the CNN. For example, we can convert various payoff criteria in the insurance rules into a list of payoff proportions, which is then input into the CNN as an additional input feature.
Predicting the payment scheme: finally, we need to modify the output layer of the CNN so that it can directly predict the pay-off scheme. Specifically, we can design the output layer of the CNN as a regression layer, outputting a continuous value representing the payoff amount. During the training process, we can use the true value of the payoff amount as the target value, and adjust the parameters of the CNN by an optimization algorithm (e.g., gradient descent) so that the predicted payoff amount is as close as possible to the true value.
For example, assume that for a given damage image the CNN's damage-degree judgment is 0.8 (indicating a high damage degree) and the payout ratio under the insurance rules is 50%; the output layer of the CNN should then predict a payout amount close to 50% of the vehicle value.
In this way, we can give a pay plan directly using CNN without requiring additional steps to formulate a pay plan based on the extent of damage and insurance rules. This not only improves the efficiency of loss assessment, but also makes the formulation of the pay plan more fair and transparent.
In this case, we can give a pay plan using Convolutional Neural Network (CNN) and insurance rules by:
judging the damage degree: first, we analyze the damage image using the CNN to obtain a judgment of the damage degree. For example, the output of the CNN may be a value between 0 and 1, indicating the severity of the damage. Assume that in one specific case the CNN output is 0.7, indicating a relatively high damage degree.
Insurance rule application: then, we need to convert the judgment result of the damage degree into the payoff amount according to the insurance rule. Assuming that the insurance rules dictate that 40% of the vehicle value is paid for vehicles with damage levels between 0.6 and 0.8. Assuming that the vehicle has a value of $ 10,000, the payoff amount is $ 10,000 x 40% = 4,000.
And (3) making a payment scheme: finally, we need to consider other conditions in the insurance clauses, such as the no-claim and the maximum payout limit. Assuming that the insurance clause specifies a claim of $500, the maximum payout limit is $5,000. Then, based on the claim free amount, we need to deduct $500 from the payoff amount, resulting in a final payoff amount of $4,000-500=3,500. Since this amount does not exceed the maximum payout limit, the final payout scheme is to pay $3,500.
The above is a specific claim scheme making process. It should be noted that the payment scheme formulation process may be adjusted according to specific damage conditions, insurance rules and insurance clauses.
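The arithmetic of this example can be sketched as a small function (the bands, ratios, deductible and limit are the assumed figures from the text, not actual policy terms):

```python
# Map a damage score in [0, 1] to a final payout: rule band -> ratio,
# minus the deductible, clamped to [0, maximum payout limit].
def payout(damage_score, vehicle_value, deductible=500.0, limit=5000.0):
    if 0.6 <= damage_score < 0.8:
        ratio = 0.40  # assumed band from the example
    elif damage_score >= 0.8:
        ratio = 0.50
    else:
        ratio = 0.10
    amount = vehicle_value * ratio - deductible
    return min(max(amount, 0.0), limit)  # never negative, never above the cap

final = payout(0.7, 10_000.0)  # the worked example: 10,000 * 40% - 500
```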
And (5) ending the module.
In some embodiments, the trained convolutional neural network algorithm model comprehensively judges the damage degree according to the characteristics of the image of the damage part extracted by the Felzenszwalb algorithm, the service life of the vehicle, the brand of the vehicle and the historical risk conditions, codes the service life of the vehicle, the brand of the vehicle and the historical risk conditions into numerical values or vectors, and combines the image characteristics of the damage part extracted by the Felzenszwalb algorithm together to serve as the input of the convolutional neural network.
In some embodiments, the comprehensively judging the degree of damage includes three categories: when information is input into the trained CNN, each neuron of the output layer gives an activation value, the activation value is regarded as the probability of the category of the damage degree corresponding to the neuron, the neuron with the highest activation value is selected, and the category corresponding to the neuron is used as the judgment result of the damage degree.
In some embodiments, the processing of the data using the Felzenszwalb algorithm further includes pre-processing the image before extracting the vehicle damage image, including noise reduction using histogram equalization. In the original Felzenszwalb algorithm, image segmentation is mainly based on the difference in brightness between pixels. If we want to introduce color-space information, we can consider converting the image from gray space to the RGB or HSV color space, and then calculate the differences between pixels in that color space.
The method comprises the following specific steps:
color space conversion: first, we need to convert the image from gray space to color space. For example, if we choose the RGB color space, then there will be three channel values per pixel: red (R), green (G) and blue (B).
Calculating the color difference: then we need to calculate the difference between pixels in the color space. One common method is to calculate the Euclidean distance between pixels. For example, if the RGB value of pixel i is (Ri, Gi, Bi) and the RGB value of pixel j is (Rj, Gj, Bj), the color difference between pixel i and pixel j can be calculated as:
Dif(i, j) = sqrt((Ri - Rj)^2 + (Gi - Gj)^2 + (Bi - Bj)^2)
image segmentation: finally, we can use the Felzenszwalb algorithm for image segmentation. In this process we replace the original brightness difference with a color difference, the other steps remain unchanged.
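The color-difference formula Dif(i, j) above can be sketched directly in numpy:

```python
# Euclidean distance between two RGB pixels, as in Dif(i, j).
import numpy as np

def color_diff(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum((p - q) ** 2)))

d = color_diff((255, 0, 0), (0, 0, 0))  # pure red vs. black
```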
The application provides a remote video exploration damage assessment automatic processing method and a system, which can realize the following beneficial technical effects:
1. According to the application, features are extracted and the damage degree is judged by adopting the Felzenszwalb algorithm in combination with the improved convolutional neural network algorithm, which improves the accuracy of insurance claim settlement; generating the payment scheme from the damage-degree judgment result with the improved convolutional neural network greatly enhances the degree of automation of insurance claims, improves the processing efficiency of automatic claim settlement, and realizes real-time automatic claim settlement.
2. The present application employs an improved convolutional neural network in which w represents a weight-factor vector used for adjusting the influence of different input features on the damage-degree judgment, including w_year, w_brand and w_history, which represent the weights of the vehicle service life, the vehicle brand and the historical risk conditions respectively; the value of each feature is multiplied by the corresponding weight factor to obtain a weighted feature, which greatly improves the accuracy of the insurance damage judgment. For example, the service life of the vehicle has a relatively high influence on the payment, and by adjusting w_year the influence degree of the vehicle age is strengthened, improving both the calculation efficiency and the accuracy of the insurance calculation.
3. According to the application, the damage degree is comprehensively judged by the trained convolutional neural network algorithm model according to the characteristics of the image of the damaged part extracted by the Felzenszwalb algorithm, the service life of the vehicle, the brand of the vehicle and the historical risk situation; the service life of the vehicle, the brand of the vehicle and the historical risk situation are encoded into numerical values or vectors and combined with the image characteristics of the damaged part extracted by the Felzenszwalb algorithm as the input of the convolutional neural network, so that the judgment accuracy is greatly improved and automated insurance claim settlement based on comprehensive multi-factor judgment is realized.
4. The original Felzenszwalb algorithm mainly considers the brightness difference among pixels; the application introduces color-space information such as the RGB or HSV space and uses color differences instead, which improves the accuracy of image segmentation and greatly improves the degree of automation of remote video survey and damage assessment.
The above describes in detail an automatic processing method and system for remote video exploration and damage assessment; specific examples are applied herein to illustrate the principles and embodiments of the present application, and the above examples are only intended to help understand its core idea. Likewise, as will be apparent to those skilled in the art in light of the present teachings, the present disclosure should not be limited to the specific embodiments and applications described herein.

Claims (8)

1. The automatic processing method for remote video exploration damage assessment is characterized by comprising the following steps:
s1: the data acquisition module acquires accident scene image data;
s2: processing the data by using a Felzenszwalb algorithm, and extracting a damaged part image of the vehicle;
s21: the Felzenszwalb algorithm firstly builds a graph, converts an image into a graph, in the graph, each pixel corresponds to a vertex, edges between each vertex correspond to similarity between pixels, and the weight of the edges is obtained by calculating differences between the pixels;
s22: sequencing edges: ordering all edges according to the weight, wherein the edge with the minimum weight, namely the pixel with the highest similarity, is considered firstly;
S23: initializing a component: each pixel is considered a separate component;
s24: combining component: traversing the ordered edges, for each edge, merging the two components into a new component if the two pixels it connects belong to different components and the difference between the two components is less than a given threshold;
s3: the trained improved convolutional neural network algorithm model comprehensively judges the damage degree according to the characteristics of the image of the damage part extracted by the Felzenszwalb algorithm, the service life of the vehicle, the brand of the vehicle and the historical risk situation; the improved convolutional neural network algorithm employed is as follows:
x_{l+1} = tanh(W_l * (w ∘ x_l) + b_l),

wherein x_{l+1} represents the output of layer l+1 of the neural network, i.e. the result of layer l after processing by the activation function; tanh represents the activation function; x_l represents the input of layer l of the neural network; W_l represents the weight matrix of layer l; b_l represents the bias vector of layer l; w represents a weight-factor vector, with ∘ denoting element-wise multiplication, used for adjusting the influence degree of different input features on the damage-degree judgment; w includes w_year, w_brand and w_history, which represent the weights of the vehicle service life, the vehicle brand and the historical risk conditions respectively, and the value of each feature is multiplied by the corresponding weight factor to obtain a weighted feature;
S4: improving a convolutional neural network, and giving a pay scheme according to the damage degree judgment result and the insurance rule;
s5: and (5) ending.
2. The automatic processing method for remote video exploration and damage assessment according to claim 1, wherein the trained convolutional neural network algorithm model comprehensively judges damage degree according to the characteristics of images of damaged parts extracted by a Felzenszwalb algorithm, vehicle service life, vehicle brands and historical risk conditions, encodes the vehicle service life, the vehicle brands and the historical risk conditions into numerical values or vectors, and uses the image characteristics of the damaged parts extracted by the Felzenszwalb algorithm as input of the convolutional neural network.
3. The method of claim 1, wherein the comprehensively determining the damage level comprises three categories: when information is input into the trained CNN, each neuron of the output layer gives an activation value, the activation value is regarded as the probability of the category of the damage degree corresponding to the neuron, the neuron with the highest activation value is selected, and the category corresponding to the neuron is used as the judgment result of the damage degree.
4. The automated processing method for remote video exploration and damage assessment according to claim 1, wherein the processing of the data using the Felzenszwalb algorithm further comprises preprocessing the image before extracting the image of the damaged portion of the vehicle, including noise reduction using histogram equalization.
5. A remote video survey impairment automation processing system, comprising:
the image acquisition module is used for acquiring accident scene image data by adopting a CCD camera or a mobile phone data acquisition module;
the vehicle damage part image extraction module is used for processing data by utilizing a Felzenszwalb algorithm to extract a vehicle damage part image;
the image construction module is used for constructing a graph by the Felzenszwalb algorithm, converting the image into a graph in which each pixel corresponds to a vertex and the edges between vertices correspond to the similarity between pixels, the weight of an edge being obtained by calculating the difference between the pixels;
and a sequencing module for sequencing the edges: ordering all edges according to the weight, wherein the edge with the minimum weight, namely the pixel with the highest similarity, is considered firstly;
initializing a component module: each pixel is considered a separate component;
And (3) combining the component modules: traversing the ordered edges, for each edge, merging the two components into a new component if the two pixels it connects belong to different components and the difference between the two components is less than a given threshold;
the convolutional neural network damage degree judging module is used for comprehensively judging the damage degree according to the characteristics of images of damaged parts extracted by a Felzenszwalb algorithm, the service life of the vehicle, the brand of the vehicle and the historical risk conditions by a trained improved convolutional neural network algorithm model; the improved convolutional neural network algorithm employed is as follows:
x_{l+1} = tanh(W_l * (w ∘ x_l) + b_l),

wherein x_{l+1} represents the output of layer l+1 of the neural network, i.e. the result of layer l after processing by the activation function; tanh represents the activation function; x_l represents the input of layer l of the neural network; W_l represents the weight matrix of layer l; b_l represents the bias vector of layer l; w represents a weight-factor vector, with ∘ denoting element-wise multiplication, used for adjusting the influence degree of different input features on the damage-degree judgment; w includes w_year, w_brand and w_history, which represent the weights of the vehicle service life, the vehicle brand and the historical risk conditions respectively, and the value of each feature is multiplied by the corresponding weight factor to obtain a weighted feature;
The pay plan generating module is used for improving the judgment result of the convolutional neural network according to the damage degree and combining with the insurance rule to give a pay plan;
and (5) ending the module.
6. The remote video exploration damage assessment automatic processing system according to claim 5, wherein the trained convolutional neural network algorithm model comprehensively judges damage degree according to the characteristics of images of damaged parts extracted by the Felzenszwalb algorithm, the service life of vehicles, the brands of vehicles and the historical risk conditions, encodes the service life of vehicles, the brands of vehicles and the historical risk conditions into numerical values or vectors, and uses the image characteristics of the damaged parts extracted by the Felzenszwalb algorithm as input of the convolutional neural network.
7. The automated remote video exploration and damage handling system of claim 5, wherein said comprehensive determination of damage level comprises three categories: when information is input into the trained CNN, each neuron of the output layer gives an activation value, the activation value is regarded as the probability of the category of the damage degree corresponding to the neuron, the neuron with the highest activation value is selected, and the category corresponding to the neuron is used as the judgment result of the damage degree.
8. The automated processing system for remote video exploration and impairment according to claim 5, wherein the processing of the data using a Felzenszwalb algorithm further comprises preprocessing the image prior to extracting the image of the impairment portion of the vehicle, comprising noise reduction using histogram equalization.
CN202311193673.9A 2023-09-15 2023-09-15 Automatic processing method and system for remote video exploration and damage assessment Active CN116935293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311193673.9A CN116935293B (en) 2023-09-15 2023-09-15 Automatic processing method and system for remote video exploration and damage assessment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311193673.9A CN116935293B (en) 2023-09-15 2023-09-15 Automatic processing method and system for remote video exploration and damage assessment

Publications (2)

Publication Number Publication Date
CN116935293A true CN116935293A (en) 2023-10-24
CN116935293B CN116935293B (en) 2024-01-02

Family

ID=88380728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311193673.9A Active CN116935293B (en) 2023-09-15 2023-09-15 Automatic processing method and system for remote video exploration and damage assessment

Country Status (1)

Country Link
CN (1) CN116935293B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268783A (en) * 2014-05-30 2015-01-07 翱特信息系统(中国)有限公司 Vehicle loss assessment method and device and terminal device
WO2019184899A1 (en) * 2018-03-26 2019-10-03 苏州山水树儿信息技术有限公司 Vehicle collision damage assessment method and system based on historical cases
CN111886619A (en) * 2018-03-26 2020-11-03 苏州山水树儿信息技术有限公司 Vehicle collision damage assessment method and system based on historical case
CN110009508A (en) * 2018-12-25 2019-07-12 阿里巴巴集团控股有限公司 Automatic vehicle insurance compensation method and system
CN111626601A (en) * 2020-05-25 2020-09-04 泰康保险集团股份有限公司 Scheduling system, method, equipment and storage medium for public estimation exploration loss assessment task
CN115905385A (en) * 2022-11-17 2023-04-04 格林美股份有限公司 Method, device and equipment for automatically identifying and encoding quality state of scraped car frame

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PEDRO F. FELZENSZWALB ET AL.: "Efficient Graph-Based Image Segmentation", International Journal of Computer Vision, vol. 59, no. 2, pp. 167-181, XP055013351, DOI: 10.1023/B:VISI.0000022288.19776.77 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876232A (en) * 2024-03-11 2024-04-12 国任财产保险股份有限公司 Intelligent traffic accident insurance processing method and system based on large model
CN117876232B (en) * 2024-03-11 2024-05-28 国任财产保险股份有限公司 Intelligent traffic accident insurance processing method and system based on large model

Also Published As

Publication number Publication date
CN116935293B (en) 2024-01-02

Similar Documents

Publication Publication Date Title
CN111914907B (en) Hyperspectral image classification method based on deep learning space-spectrum combined network
US20210081698A1 (en) Systems and methods for physical object analysis
CN116935293B (en) Automatic processing method and system for remote video exploration and damage assessment
CN112163628A (en) Method for improving target real-time identification network structure suitable for embedded equipment
CN111126115A (en) Violence sorting behavior identification method and device
CN112861690A (en) Multi-method fused remote sensing image change detection method and system
CN110245620B (en) Non-maximization inhibition method based on attention
US20230222643A1 (en) Semantic deep learning and rule optimization for surface corrosion detection and evaluation
CN114529730A (en) Convolutional neural network ground material image classification method based on LBP (local binary pattern) features
CN114332559A (en) RGB-D significance target detection method based on self-adaptive cross-modal fusion mechanism and depth attention network
CN112365451B (en) Method, device, equipment and computer readable medium for determining image quality grade
Yuan et al. Locally and multiply distorted image quality assessment via multi-stage CNNs
CN113628143A (en) Weighted fusion image defogging method and device based on multi-scale convolution
CN112686498A (en) Enterprise credit rating method based on deep convolutional network
CN112270404A (en) Detection structure and method for bulge defect of fastener product based on ResNet64 network
Singh et al. Multiscale reflection component based weakly illuminated nighttime image enhancement
CN116452472A (en) Low-illumination image enhancement method based on semantic knowledge guidance
CN116309270A (en) Binocular image-based transmission line typical defect identification method
CN111754459B (en) Dyeing fake image detection method based on statistical depth characteristics and electronic device
CN115457015A (en) Image no-reference quality evaluation method and device based on visual interactive perception double-flow network
CN115049611A (en) Continuous casting billet crack defect identification method based on improved yolov5
CN114648738A (en) Image identification system and method based on Internet of things and edge calculation
CN112418085A (en) Facial expression recognition method under partial shielding working condition
CN113538199B (en) Image steganography detection method based on multi-layer perception convolution and channel weighting
CN117253184B (en) Foggy day image crowd counting method guided by foggy priori frequency domain attention characterization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant