CN117252712A - Product claim settlement method, device, equipment and storage medium based on transfer learning - Google Patents


Info

Publication number
CN117252712A
CN117252712A (application CN202311160060.5A)
Authority
CN
China
Prior art keywords
product
prediction model
data
product data
settlement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311160060.5A
Other languages
Chinese (zh)
Inventor
徐振博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202311160060.5A
Publication of CN117252712A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 Insurance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/096 Transfer learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Finance (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Technology Law (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

The application discloses a product claim settlement method, device, equipment and storage medium based on transfer learning, belonging to the technical field of artificial intelligence and the field of insurance finance. In the method, a first claim settlement prediction model is trained on first product data; its hidden-layer parameters are extracted to construct a feature extractor, which replaces the hidden layer of a new prediction model. The new prediction model is then trained on second product data, whose data volume is smaller than that of the first product data, with the hidden-layer parameters frozen during training. When a product awaiting claim settlement is received, its product data is imported into the trained second claim settlement prediction model and the claim settlement result is output. The present application also relates to blockchain technology; the product data may be stored on blockchain nodes. The method accelerates the training of the product claim prediction model and reduces the dependence on product data during training, while preserving the prediction accuracy of the model.

Description

Product claim settlement method, device, equipment and storage medium based on transfer learning
Technical Field
The application belongs to the technical field of artificial intelligence and the field of insurance finance, and particularly relates to a product claim settlement method, device, equipment and storage medium based on transfer learning.
Background
A product claim refers to the process by which, when a risk event covered by an insurance product occurs, the insured files a claim request with the insurance company and receives the corresponding compensation. In short, under the terms of an insurance contract, when the insured suffers an unexpected loss or a risk event that meets the policy conditions, an application can be made to the insurance company for economic compensation or other related insurance services.
At present, intelligent claim settlement is often realized with product claim prediction models trained in advance, so as to improve claim settlement efficiency. However, the accuracy of such a model is strongly affected by the number of training samples, and it is low for products with few samples. For example, the volume of claim data for automobile products is large, so the predictions of an automobile insurance claim model are accurate; for some non-automobile products, such as forestry or livestock breeding, claim data is scarce, so a model trained directly on that data has low accuracy and its claim predictions have little reference value.
Disclosure of Invention
The embodiments of the present application aim to provide a product claim settlement method, device, equipment and storage medium based on transfer learning, so as to solve the technical problem of low accuracy of product claim settlement prediction models for products with a small number of training samples.
In order to solve the above technical problems, the embodiments of the present application provide a product claim settlement method based on transfer learning, which adopts the following technical scheme:
a method of product claims settlement based on transfer learning, comprising:
acquiring first product data and training a first claim prediction model based on the first product data;
acquiring hidden layer parameters of the first claim settlement prediction model, and constructing a feature extractor based on the hidden layer parameters;
replacing a hidden layer of a preset initial prediction model by using a feature extractor to obtain a new prediction model;
acquiring second product data, training a new prediction model based on the second product data to obtain a second claim-settlement prediction model, wherein the data volume of the second product data is smaller than that of the first product data, and freezing hidden layer parameters of the new prediction model when the new prediction model is trained;
receiving a claim settlement instruction, acquiring product data of the product awaiting claim settlement, importing that product data into the trained second claim settlement prediction model, and outputting the claim settlement result for the product.
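The five steps above can be sketched end-to-end with a toy model. Everything below is illustrative: the two-unit hidden layer, the data, and the plain-Python SGD stand in for a real deep-learning framework and real insurance features.

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def hidden_features(hidden_w, x):
    """Frozen hidden layer: a ReLU feature extractor taken from the
    first (large-data) claim prediction model."""
    return [max(0.0, dot(row, x)) for row in hidden_w]

def train_output_layer(hidden_w, data, epochs=300, lr=0.05):
    """Hidden parameters stay frozen; only the output layer is fit
    to the small second-product dataset."""
    out_w = [0.0] * len(hidden_w)
    for _ in range(epochs):
        for x, y in data:
            h = hidden_features(hidden_w, x)
            err = dot(out_w, h) - y
            out_w = [w - lr * err * hi for w, hi in zip(out_w, h)]
    return out_w

# Steps 1-2: pretend these weights were learned on the large first-product
# dataset and extracted as the feature extractor.
frozen_hidden = [[1.0, 0.5], [-0.5, 1.0]]

# Step 4: a tiny second-product dataset -- far less data than the first.
second_product = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.5), ([1.0, 1.0], 1.5)]
out_w = train_output_layer(frozen_hidden, second_product)

# Step 5: score a product awaiting claim settlement.
prediction = dot(out_w, hidden_features(frozen_hidden, [1.0, 1.0]))
```

Because the hidden weights never change, only the output layer has to be learned from the scarce second-product data, which is the efficiency the method claims.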
Further, obtaining hidden layer parameters of the first claim prediction model, and constructing a feature extractor based on the hidden layer parameters, specifically including:
acquiring associated product features of the first product data and the second product data;
determining hidden layer parameters associated with associated product features in the first claim prediction model to obtain associated parameters;
a feature extractor is constructed based on the correlation parameters.
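The selection above can be sketched in a few lines; the feature names and weight values are hypothetical, chosen only to show associated parameters surviving into the feature extractor.

```python
# Hypothetical hidden-layer parameters of the first claim prediction model,
# keyed by the product feature each unit is associated with.
first_model_hidden = {
    "claim_amount": {"weights": [0.8, 0.1], "bias": 0.0},
    "policy_age":   {"weights": [0.2, 0.9], "bias": 0.1},
    "vehicle_make": {"weights": [1.5, 0.3], "bias": 0.2},  # auto-only feature
}

# Features present in both the first and the second product data.
associated_features = {"claim_amount", "policy_age"}

# Keep only the associated parameters; these form the feature extractor.
feature_extractor = {name: params
                     for name, params in first_model_hidden.items()
                     if name in associated_features}
```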
Further, the new prediction model includes an input layer, a hidden layer, and an output layer, and second product data is obtained, and the new prediction model is trained based on the second product data, so as to obtain a second claim settlement prediction model, which specifically includes:
freezing associated parameters in the new prediction model hidden layer parameters, and determining non-associated parameters in the new prediction model hidden layer parameters;
training a new prediction model based on the second product data to obtain a claim settlement prediction result of the second product;
comparing the claim settlement prediction result of the second product with a preset second product standard processing result to obtain a claim settlement prediction error of the second product;
and optimizing the non-associated parameters in the input layer parameters and the hidden layer parameters and the output layer parameters based on the second product claim prediction error until the model is fitted to obtain a second claim prediction model.
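The optimization step above can be illustrated with a single gradient update in which the associated (frozen) hidden parameters are skipped while input-layer, output-layer, and non-associated hidden parameters move; the parameter names and values are made up.

```python
# Parameter store: associated hidden parameters are frozen, the rest are free.
params = {"hidden_assoc": 0.80, "hidden_free": 0.10,
          "input_w": 0.50, "output_w": 0.30}
frozen = {"hidden_assoc"}

def sgd_step(params, grads, lr=0.1):
    """Apply one gradient step, leaving frozen parameters untouched."""
    return {name: (value if name in frozen
                   else value - lr * grads.get(name, 0.0))
            for name, value in params.items()}

grads = {name: 1.0 for name in params}  # stand-in error gradients
updated = sgd_step(params, grads)
```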
Further, training a new prediction model based on the second product data to obtain a claim settlement prediction result of the second product, which specifically includes:
acquiring second product data and preprocessing the second product data;
carrying out data division on the preprocessed second product data to obtain a second product data training set and a second product data verification set;
training the new prediction model based on the second product data training set to obtain a claim settlement prediction result of the second product;
training the new prediction model based on the second product data training set to obtain a claim settlement prediction result of the second product, which specifically comprises the following steps:
sequentially receiving training data in the second product data training set through the input layer, and sequentially transmitting the training data to the hidden layer of the new prediction model;
extracting features of the training data through the hidden layer to obtain feature representations corresponding to the training data;
and carrying out feature conversion on the feature representation corresponding to the training data through the activation function in the output layer to obtain the claim settlement prediction result of the second product.
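The three steps above correspond to a compact forward pass. A sigmoid is assumed here as the output-layer activation, and the weights are illustrative, not taken from any real model.

```python
import math

def sigmoid(z):
    """Output-layer activation converting a feature score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_w, output_w):
    # Input layer: pass the training record through unchanged.
    # Hidden layer: extract a feature representation (ReLU units here).
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in hidden_w]
    # Output layer: convert the feature representation with its activation.
    return sigmoid(sum(w * hi for w, hi in zip(output_w, h)))

prob = forward([1.0, 2.0], [[0.5, -0.2], [0.1, 0.3]], [1.0, -0.5])
```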
Further, tuning the non-associated parameters of the input layer parameters and the hidden layer parameters and the output layer parameters based on the second product claim prediction error until the model is fitted to obtain a second claim prediction model, which specifically comprises:
propagating the second product claim prediction error through the input layer, the hidden layer and the output layer;
respectively acquiring an input layer error, a hidden layer error and an output layer error, and respectively comparing the input layer error, the hidden layer error and the output layer error with a second product error threshold of a preset standard;
when the error of any network layer in the new prediction model is larger than a second product error threshold of a preset standard, continuously adjusting the input layer parameter, the uncorrelated parameter and the output layer parameter in the hidden layer parameter in the new prediction model until the errors of all network layers are smaller than or equal to the second product error threshold of the preset standard, and obtaining a second claim settlement prediction model for completing training.
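The stopping rule above reduces to a per-layer threshold check: training continues while any layer's error exceeds the preset second-product error threshold. The error values below are illustrative.

```python
def training_converged(layer_errors, threshold):
    """True once every network layer's error is <= the preset threshold."""
    return all(err <= threshold for err in layer_errors.values())

threshold = 0.05
errors = {"input": 0.04, "hidden": 0.09, "output": 0.02}
keep_training = not training_converged(errors, threshold)  # hidden still high
```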
Further, the first claim prediction model includes a data input layer, a feature hiding layer, and a prediction output layer, obtains first product data, and trains the first claim prediction model based on the first product data, and specifically includes:
acquiring first product data, preprocessing the first product data, and dividing the preprocessed first product data to obtain a first product data training set and a first product data verification set;
training a first claim prediction model based on a first product data training set;
Verifying the trained first claim settlement prediction model based on the first product data verification set, and outputting the verified first claim settlement prediction model;
training the first claim settlement prediction model based on the first product data training set specifically comprises:
importing the first product data training set into a first claim prediction model;
sequentially receiving product data in the first product data training set through the data input layer, and sequentially transmitting the product data to the feature hiding layer;
carrying out feature processing on the product data through the feature hiding layer to obtain feature representation corresponding to the product data;
performing feature conversion on the feature representation corresponding to the product data through an activation function of the output layer to obtain a claim settlement prediction result of the first product;
and iteratively updating the first claim settlement prediction model based on the claim settlement prediction result of the first product and a preset first product standard processing result until the model is fitted to obtain a trained first claim settlement prediction model.
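The preprocessing-and-split step described above can be sketched as follows; the 80/20 ratio, the fixed seed, and the integer records standing in for preprocessed claim data are all assumptions.

```python
import random

def split_product_data(records, train_frac=0.8, seed=42):
    """Shuffle preprocessed product data and divide it into a
    training set and a validation set."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

records = list(range(100))  # stand-ins for preprocessed claim records
train_set, val_set = split_product_data(records)
```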
Further, based on the claim prediction result of the first product and a preset first product standard processing result, iteratively updating the first claim prediction model until the model is fitted to obtain a trained first claim prediction model, which specifically includes:
Comparing the claim settlement prediction result of the first product with a preset first product standard processing result to obtain a claim settlement prediction error of the first product;
propagating the first product claim prediction error through the data input layer, the feature hiding layer and the prediction output layer;
respectively acquiring a data input layer error, a feature hiding layer error and a prediction output layer error, and respectively comparing the data input layer error, the feature hiding layer error and the prediction output layer error with a preset standard first product error threshold;
when the error of any network layer in the first claim settlement prediction model is larger than a first product error threshold value of a preset standard, continuously adjusting the model parameters of the first claim settlement prediction model until the error of all network layers in the first claim settlement prediction model is smaller than or equal to the first product error threshold value of the preset standard, and obtaining the first claim settlement prediction model which is completely trained.
In order to solve the above technical problems, the embodiments of the present application further provide a product claim settlement device based on transfer learning, which adopts the following technical scheme:
a product claim settlement device based on transfer learning, comprising:
the first claim settlement prediction module is used for acquiring first product data and training a first claim settlement prediction model based on the first product data;
The feature extractor construction module is used for acquiring hidden layer parameters of the first claim settlement prediction model and constructing a feature extractor based on the hidden layer parameters;
the hidden layer replacing module is used for replacing a hidden layer of a preset initial prediction model by using the feature extractor to obtain a new prediction model;
the second claim settlement prediction module is used for acquiring second product data, training a new prediction model based on the second product data to obtain a second claim settlement prediction model, wherein the data volume of the second product data is smaller than that of the first product data, and freezing hidden layer parameters of the new prediction model when the new prediction model is trained;
the product claim settlement module is used for receiving the claim settlement instruction, acquiring the product data of the product awaiting claim settlement, importing that product data into the trained second claim settlement prediction model, and outputting the claim settlement result for the product.
In order to solve the above technical problems, the embodiments of the present application further provide a computer device, which adopts the following technical schemes:
a computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, implement the steps of the product claim settlement method based on transfer learning described above.
In order to solve the above technical problems, embodiments of the present application further provide a computer readable storage medium, which adopts the following technical solutions:
a computer readable storage medium having stored thereon computer readable instructions which when executed by a processor implement the steps of a method of claim settlement of a product based on transfer learning as claimed in any one of the preceding claims.
Compared with the prior art, the embodiment of the application has the following main beneficial effects:
the application discloses a product claim settlement method, device, equipment and storage medium based on transfer learning, and belongs to the technical field of artificial intelligence and the field of insurance production finance. According to the method, when the second product data with smaller data quantity is trained, model training is achieved through a migration learning mode, the first product data is used for training the first product data, the first product data is the product data with larger data quantity of the second product data, then the feature extractor is built by using the hidden layer parameters of the first product data, the feature extractor is used for replacing the hidden layer of the second product data, when the second product data is trained, the hidden layer parameters of the second product data are frozen, only parameters of other network layers in the model are required to be optimized, the training process of the product data with smaller training sample quantity can be quickened, dependence on the product data in the model training process is reduced, and meanwhile accuracy and generalization capability of the product data on a new task are guaranteed.
Drawings
For a clearer description of the solution in the present application, a brief description will be given below of the drawings that are needed in the description of the embodiments of the present application, it being obvious that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 illustrates an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 illustrates a flow chart of one embodiment of a method of product claims settlement based on transfer learning according to the present application;
FIG. 3 illustrates a schematic diagram of one embodiment of a product claim device based on transfer learning according to the present application;
fig. 4 shows a schematic structural diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the description of the figures above are intended to cover non-exclusive inclusions. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to better understand the technical solutions of the present application, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping class application, a search class application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, electronic book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a background server that provides support for pages displayed on the terminal devices 101, 102, 103, and may be a stand-alone server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
It should be noted that, the product claim settlement method based on the transfer learning provided in the embodiments of the present application is generally executed by a server, and accordingly, the product claim settlement device based on the transfer learning is generally disposed in the server.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow chart of one embodiment of a method of product claims settlement based on transfer learning according to the present application is shown. The embodiments of the application can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
At present, intelligent claim settlement is often realized with product claim prediction models trained in advance, so as to improve claim settlement efficiency. However, the accuracy of such a model is strongly affected by the number of training samples, and it is low for products with few samples. For example, the volume of claim data for automobile products is large, so the predictions of an automobile insurance claim model are accurate; for some non-automobile products, such as forestry or livestock breeding, claim data is scarce, so a model trained directly on that data has low accuracy and its claim predictions have little reference value.
In order to solve the technical problems, the application discloses a product claim settlement method, device, equipment and storage medium based on transfer learning, which belong to the technical field of artificial intelligence and the field of insurance production finance.
The product claim settlement method based on transfer learning comprises the following steps:
s201, acquiring first product data, and training a first claim settlement prediction model based on the first product data.
In this embodiment, the first product is a product having a larger amount of claim data. For example, in a specific embodiment of the present application, the first product is an automobile claim settlement product, the first product data is automobile insurance claim settlement data, and these claim settlement data are used to train a first product prediction model, i.e. a claim settlement prediction model of an automobile insurance product, which can predict claim settlement results of the automobile product.
S202, acquiring hidden layer parameters of the first claim settlement prediction model, and constructing a feature extractor based on the hidden layer parameters.
In this embodiment, the first product prediction model includes an input layer, a hidden layer, and an output layer, and the parameters of the hidden layer include weights and biases. Specifically, the model parameters of the hidden layer are extracted individually, and a feature extractor is built from these parameters for subsequent feature processing of the second product data; the hidden layer thus becomes a separate component responsible for extracting useful features from the input data.
When the second product prediction model is constructed, the model parameters of the hidden layer are acquired from the first product model by way of transfer learning to construct the feature extractor. Transfer learning is a machine learning method that applies knowledge and experience learned in one task to a related task, improving performance on the target task by transferring knowledge from a source domain (an existing task) into a target domain (a new task); the transfer may operate on features, model parameters, neural network layers, and so on. Its advantages are that the data and knowledge of the existing task can be reused, dependence on large amounts of labeled data is reduced, model training is accelerated, and accuracy and generalization ability on the target task are improved.
In the above embodiment, when model training is performed, the hidden layer parameters of the first claim prediction model are obtained through transfer learning, and the feature extractor is constructed based on these parameters, which accelerates the model training process while preserving prediction accuracy.
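At its core, this kind of parameter transfer amounts to copying the learned hidden-layer weights from the source model into the new model, as in this sketch (the weight values are made up):

```python
# Source model: hidden weights learned on the large first-product dataset.
source_model = {"hidden": [[0.7, -0.3], [0.2, 0.9]]}

# Target model: starts untrained; its hidden layer is replaced by a deep
# copy of the source hidden weights (the feature extractor).
target_model = {"hidden": [[0.0, 0.0], [0.0, 0.0]], "output": [0.0, 0.0]}
target_model["hidden"] = [row[:] for row in source_model["hidden"]]

# The copy is independent: later fine-tuning of the target model
# cannot corrupt the source model's parameters.
target_model["hidden"][0][0] = 99.0
```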
S203, replacing a hidden layer of a preset initial prediction model by using a feature extractor to obtain a new prediction model;
in this embodiment, a preset new prediction model is obtained, a feature extractor is used to replace a hidden layer of a preset initial prediction model, and when the new prediction model is trained, hidden layer parameters of the new prediction model are frozen, so that second product data are only used for completing training of a prediction task, dependence on product data in the model training process is reduced, and meanwhile accuracy and generalization capability of the product claim settlement prediction model on the new task are guaranteed.
In the above embodiments, the claim data available for non-automotive products is typically far less than that for automotive products, while the automotive prediction model has been sufficiently trained on automotive product data and its parameters already encode a learned representation of vehicle insurance claims. When the second product prediction model is constructed, building the feature extractor from the hidden-layer parameters of the first product model transfers this prior knowledge, carrying the useful feature representations over to the non-automotive claim settlement task. This improves the performance of the non-automotive prediction model; when the second claim settlement prediction model is trained, the second product data is used only to complete the training of the prediction task, reducing the dependence on product data during model training.
S204, obtaining second product data, training a new prediction model based on the second product data to obtain a second claim settlement prediction model, wherein the data size of the second product data is smaller than that of the first product data, and freezing hidden layer parameters of the new prediction model when the new prediction model is trained.
Wherein the first product prediction model and the second product prediction model may be constructed based on a machine learning algorithm or a deep learning algorithm.
In this embodiment, the second product is a product for which only a small amount of claim data exists. For example, in one specific embodiment of the present application, the second product refers to certain non-vehicle products, such as claim products related to forestry, livestock breeding, and the like, and the second product data are the claim data of those products. These claim data are used to train the second product prediction model: through transfer learning, the second product prediction model performs feature processing on the claim data of the non-vehicle products using the feature representations obtained from the first product model, and those feature representations are then used to predict the claim settlement results of the non-vehicle products.
S205, receiving an instruction of claim settlement, obtaining product data of a product to be claim settled, importing the product data of the product to be claim settled into a trained second claim settlement prediction model, and outputting a result of claim settlement of the product to be claim settled.
In this embodiment, when a claim settlement instruction is received, the product data of the product to be claimed are obtained and input into the trained second product prediction model. The second product prediction model performs feature processing on the product data using the feature representations obtained through transfer learning, then performs prediction on the resulting features, and outputs the corresponding claim settlement result.
In this embodiment, model training is realized through transfer learning, which accelerates the training process of a product claim settlement prediction model with a small number of training samples, reduces dependence on product data in the model training process, and at the same time ensures the accuracy and generalization capability of the product claim settlement prediction model on a new task.
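Steps S201 to S205 can be sketched end to end as follows. This is a minimal illustration only, not the claimed implementation: the `TinyNet` class, the network sizes, and the synthetic data are all hypothetical; it shows the hidden layer of a source model being reused as a frozen feature extractor in a target model trained on far less data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyNet:
    """Input layer -> one hidden layer (tanh) -> sigmoid output (claim probability)."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))  # hidden-layer parameters
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def hidden(self, X):               # the part reused as a feature extractor
        return np.tanh(X @ self.W1 + self.b1)

    def forward(self, X):
        return sigmoid(self.hidden(X) @ self.W2 + self.b2).ravel()

    def train(self, X, y, epochs=300, lr=0.5, freeze_hidden=False):
        n = len(X)
        for _ in range(epochs):
            H = self.hidden(X)
            p = sigmoid(H @ self.W2 + self.b2).ravel()
            d_out = ((p - y) * p * (1.0 - p))[:, None] / n  # MSE gradient at output
            if not freeze_hidden:                            # backprop into hidden layer
                d_h = (d_out @ self.W2.T) * (1.0 - H ** 2)
                self.W1 -= lr * X.T @ d_h
                self.b1 -= lr * d_h.sum(axis=0)
            self.W2 -= lr * H.T @ d_out
            self.b2 -= lr * d_out.sum(axis=0)

# S201: train the first claim prediction model on plentiful first product data.
X1 = rng.normal(size=(500, 4)); y1 = (X1[:, 0] + X1[:, 1] > 0).astype(float)
first_model = TinyNet(4, 8)
first_model.train(X1, y1)

# S202/S203: build the new model and transplant the hidden layer parameters.
new_model = TinyNet(4, 8)
new_model.W1, new_model.b1 = first_model.W1.copy(), first_model.b1.copy()

# S204: train on scarce second product data with the hidden layer frozen.
X2 = rng.normal(size=(30, 4)); y2 = (X2[:, 0] + X2[:, 1] > 0).astype(float)
W1_before = new_model.W1.copy()
new_model.train(X2, y2, freeze_hidden=True)

# S205: predict a claim settlement result for a product to be claimed.
claim_prob = new_model.forward(rng.normal(size=(1, 4)))[0]
```

Because the hidden layer is frozen in S204, only the output-layer parameters are optimized on the small second data set, which is the source of the training speed-up described above.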
Further, the first claim prediction model includes a data input layer, a feature hiding layer, and a prediction output layer, obtains first product data, and trains the first claim prediction model based on the first product data, and specifically includes:
acquiring first product data, preprocessing the first product data, and dividing the preprocessed first product data to obtain a first product data training set and a first product data verification set;
Training a first claim prediction model based on a first product data training set;
verifying the trained first claim settlement prediction model based on the first product data verification set, and outputting the verified first claim settlement prediction model;
training the first claim settlement prediction model based on the first product data training set specifically comprises:
importing the first product data training set into a first claim prediction model;
sequentially receiving product data in the first product data training set through the data input layer, and sequentially transmitting the product data to the feature hiding layer;
carrying out feature processing on the product data through the feature hiding layer to obtain feature representation corresponding to the product data;
performing feature conversion on the feature representation corresponding to the product data through an activation function of the output layer to obtain a claim settlement prediction result of the first product;
and iteratively updating the first claim settlement prediction model based on the claim settlement prediction result of the first product and a preset first product standard processing result until the model is fitted to obtain a trained first claim settlement prediction model.
In a specific embodiment of the present application, the first product prediction model is a claim settlement prediction model for a vehicle insurance product. When this model is trained, a vehicle insurance data set is obtained, which includes policy information, vehicle attributes and historical claim records. The vehicle insurance data set is preprocessed, the preprocessing including deduplication, missing value filling, data normalization, and the like. The preprocessed vehicle insurance data set is then divided into a first training data set and a first verification data set; the claim settlement prediction model of the vehicle insurance product is trained on the first training data set, the trained model is verified on the first verification data set, and the model that passes verification is output.
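The preprocessing and division described above can be sketched as follows. The column names and records are purely illustrative, and the concrete cleaning choices (mean imputation, z-score normalization, an 80/20 split) are one reasonable reading of the embodiment, not a prescribed implementation:

```python
import numpy as np
import pandas as pd

# Hypothetical vehicle-insurance records: policy information, vehicle attributes,
# and a historical claim label. Column names are illustrative only.
df = pd.DataFrame({
    "policy_id":   [1, 1, 2, 3, 4, 5, 6, 7],
    "vehicle_age": [3, 3, 5, np.nan, 2, 8, 1, 4],
    "premium":     [900.0, 900.0, 1200.0, 800.0, np.nan, 1500.0, 700.0, 1000.0],
    "claimed":     [0, 0, 1, 0, 1, 1, 0, 0],
})

df = df.drop_duplicates()                                     # deduplication
df = df.fillna(df.mean(numeric_only=True))                    # missing-value filling
feats = ["vehicle_age", "premium"]
df[feats] = (df[feats] - df[feats].mean()) / df[feats].std()  # normalization

# Division into a first training set and a first verification set.
train_set = df.sample(frac=0.8, random_state=42)
valid_set = df.drop(train_set.index)
```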
The first claim settlement prediction model includes a data input layer, a feature hiding layer, and a prediction output layer. The data input layer converts the data into a format the model can process and transmits the information to the hidden layer for feature extraction and representation learning. The feature hiding layer is a series of network layers between the input and output layers, each comprising a plurality of neurons or nodes, which learn complex patterns and feature representations in the data; its function is to extract higher-level feature representations from the input data, enabling the model to better understand and capture the intrinsic structure and patterns of the data. The prediction output layer receives the output of the hidden layer and generates the final prediction result as the output of the model. The number of neurons in the output layer depends on the number of predicted categories or target dimensions, and the output layer converts the model output into a probability distribution or an appropriate numerical range through an activation function (such as a softmax or sigmoid function) so that the result can be interpreted or post-processed.
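As a small illustration of the output-layer conversion just described (the logit values below are made up), a softmax activation maps the final linear output to a probability distribution over claim categories:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])   # hypothetical output-layer pre-activations
probs = softmax(logits)               # interpretable as a claim-category distribution
```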
In the above embodiment, the input layer is responsible for receiving the original data or features, the hidden layer is responsible for extracting abstract features from the input data, and the output layer is responsible for generating the final prediction result, and the combination and the cooperative work of the network layers enable the claim prediction model of the vehicle insurance product to effectively learn and predict the input data.
Further, based on the claim prediction result of the first product and a preset first product standard processing result, iteratively updating the first claim prediction model until the model is fitted to obtain a trained first claim prediction model, which specifically includes:
comparing the claim settlement prediction result of the first product with a preset first product standard processing result to obtain a claim settlement prediction error of the first product;
transmitting the first product claim settlement prediction error through the data input layer, the feature hiding layer and the prediction output layer;
respectively acquiring a data input layer error, a feature hiding layer error and a prediction output layer error, and respectively comparing the data input layer error, the feature hiding layer error and the prediction output layer error with a preset standard first product error threshold;
when the error of any network layer in the first claim settlement prediction model is larger than the preset standard first product error threshold, continuously adjusting the model parameters of the first claim settlement prediction model until the errors of all network layers in the first claim settlement prediction model are smaller than or equal to the preset standard first product error threshold, obtaining a fully trained first claim settlement prediction model.
In this embodiment, the present application iteratively updates the claim settlement prediction model of the vehicle insurance product using a back propagation algorithm. First, the loss function of the first claim settlement prediction model is obtained, and the error between the claim settlement prediction result of the first product and the first product standard processing result is calculated using that loss function, where the first product standard processing result is the preset labeling result generated when the training data in the first training data set were labeled. Through backward error propagation and error comparison, it is determined whether the error of any network layer of the model is larger than the preset first error threshold; if so, the parameters of all network layers of the first claim settlement prediction model are adjusted and optimized until the prediction errors of all network layers are smaller than or equal to the first error threshold, at which point the model is fitted and the claim settlement prediction model of the vehicle insurance product is obtained.
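The iterative update can be sketched as a loop that keeps adjusting parameters until the error falls below a threshold. For brevity this sketch uses a single scalar loss rather than the per-layer errors of the embodiment, and the data, learning rate, and threshold are hypothetical:

```python
import numpy as np

# Tiny logistic-regression stand-in for the claim settlement prediction model.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -1.0, 0.5]) > 0).astype(float)  # synthetic labels

w = np.zeros(3)
b = 0.0
threshold = 0.1      # preset standard error threshold (illustrative)
lr = 0.5
losses = []
for epoch in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    loss = np.mean((p - y) ** 2)          # error vs. the standard processing result
    losses.append(loss)
    if loss <= threshold:                 # stop once the error is small enough
        break
    grad = (p - y) * p * (1.0 - p)        # back-propagated MSE gradient
    w -= lr * X.T @ grad / len(X)
    b -= lr * grad.mean()
```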
Further, obtaining hidden layer parameters of the first claim prediction model, and constructing a feature extractor based on the hidden layer parameters, specifically including:
acquiring associated product features of the first product data and the second product data;
determining hidden layer parameters associated with associated product features in the first claim prediction model to obtain associated parameters;
a feature extractor is constructed based on the correlation parameters.
In this embodiment, the present application acquires the hidden layer model parameters from the first product model by means of transfer learning to construct a feature extractor, and uses the feature extractor to replace the hidden layer of the new product prediction model, thereby obtaining the second product prediction model. Specifically, the associated product features of the first product data and the second product data are analyzed; associated product features are features or properties the two products share, such as the same type of business or the same manner of processing. The hidden layer parameters associated with those features in the first claim settlement prediction model are then determined: by monitoring how the hidden layer parameters change and are adjusted during training of the second product prediction model, it can be judged which parameters are related to the associated product features, yielding the associated parameters. The hidden layer parameters include weights, biases, learning rates and the like, and the feature extractor is constructed based on the associated parameters.
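One hypothetical way to realize this selection is to keep only the hidden units whose incoming weights carry most of their magnitude on the input features both products share. The selection rule, sizes, and names below are illustrative assumptions, not the patented procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Trained hidden-layer weights of the first model: 5 input features x 8 hidden units.
W1 = rng.normal(size=(5, 8))
b1 = rng.normal(size=8)

shared = [0, 1, 2]   # indices of input features both products have (illustrative)

# Score each hidden unit by how much of its weight mass sits on the shared features.
mass_shared = np.abs(W1[shared, :]).sum(axis=0)
mass_total = np.abs(W1).sum(axis=0)
associated = mass_shared / mass_total > 0.5        # boolean mask over hidden units

# The feature extractor keeps only the associated parameters,
# restricted to the shared input features.
W_extract = W1[np.ix_(shared, np.where(associated)[0])]
b_extract = b1[associated]

def extract_features(X_shared):
    """Feature extractor built from the associated parameters."""
    return np.tanh(X_shared @ W_extract + b_extract)

feats = extract_features(rng.normal(size=(4, len(shared))))
```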
In the above specific embodiment, in transfer learning, the hidden layer parameters of the claim settlement prediction model of the vehicle insurance product are used to construct the feature extractor. That model has been trained on the vehicle insurance claim settlement task and has already learned the feature representations relevant to it, which can be regarded as the "knowledge" possessed by the claim settlement prediction model of the vehicle insurance product. Through transfer learning, the constructed feature extractor obtains these feature representations, that is, the corresponding "knowledge", which is then applied to the training and prediction task of the second product prediction model, so that the second product prediction model makes use, to some extent, of the "knowledge" and feature representations of the claim settlement prediction model of the vehicle insurance product.
In this embodiment, the feature extractor is constructed through transfer learning, and the feature extractor replaces the hidden layer of the new product prediction model to obtain the second product prediction model. This accelerates the training process of a product claim settlement prediction model with a small number of training samples, reduces dependence on product data in the model training process, and at the same time ensures the accuracy and generalization capability of the product claim settlement prediction model on a new task.
Further, the new prediction model includes an input layer, a hidden layer, and an output layer, and second product data is obtained, and the new prediction model is trained based on the second product data, so as to obtain a second claim settlement prediction model, which specifically includes:
freezing associated parameters in the new prediction model hidden layer parameters, and determining non-associated parameters in the new prediction model hidden layer parameters;
training a new prediction model based on the second product data to obtain a claim settlement prediction result of the second product;
comparing the claim settlement prediction result of the second product with a preset second product standard processing result to obtain a claim settlement prediction error of the second product;
and optimizing the input layer parameters, the non-associated parameters in the hidden layer parameters, and the output layer parameters based on the second product claim settlement prediction error until the model is fitted, obtaining the second claim settlement prediction model.
Further, training a new prediction model based on the second product data to obtain a claim settlement prediction result of the second product, which specifically includes:
acquiring second product data and preprocessing the second product data;
carrying out data division on the preprocessed second product data to obtain a second product data training set and a second product data verification set;
Training the new prediction model based on the second product data training set to obtain a claim settlement prediction result of the second product;
training the new prediction model based on the second product data training set to obtain a claim settlement prediction result of the second product, which specifically comprises the following steps:
sequentially receiving training data in the second product data training set through the input layer, and sequentially transmitting the training data to the hidden layer of the new prediction model;
extracting features of the training data through the hidden layer to obtain feature representations corresponding to the training data;
and carrying out feature conversion on the feature representation corresponding to the training data through the activation function in the output layer to obtain the claim settlement prediction result of the second product.
In this embodiment, after the feature extractor is used to replace the hidden layer of the new product prediction model to obtain the second product prediction model, the associated parameters among the hidden layer parameters of the new prediction model are frozen and the non-associated parameters are determined, and the second claim settlement prediction model is trained with the second product data to obtain a claim settlement prediction model for the second product. It should be noted that during training of the second claim settlement prediction model, when model parameter tuning is performed, the frozen associated parameters in the hidden layer do not change; only the input layer parameters, the non-associated parameters in the hidden layer, and the output layer parameters of the model are adjusted.
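Freezing only the associated parameters can be sketched with a gradient mask: entries marked as associated receive no update, while non-associated entries keep learning. The mask layout, sizes, and single update step below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hidden-layer weights of the new prediction model (4 inputs x 6 hidden units).
W_hidden = rng.normal(size=(4, 6))
associated_mask = np.zeros_like(W_hidden, dtype=bool)
associated_mask[:, :3] = True        # first 3 units came from the feature extractor

frozen_before = W_hidden[associated_mask].copy()
nonassoc_before = W_hidden[~associated_mask].copy()

# One illustrative gradient step: zero the gradient on frozen (associated) entries.
grad = rng.normal(size=W_hidden.shape)
grad[associated_mask] = 0.0          # frozen parameters receive no update
W_hidden -= 0.1 * grad
```

After the step, the associated (frozen) entries are bit-for-bit unchanged while the non-associated entries have moved, which is exactly the behavior the embodiment requires of parameter tuning.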
Further, optimizing the input layer parameters, the non-associated parameters in the hidden layer parameters, and the output layer parameters based on the second product claim settlement prediction error until the model is fitted to obtain the second claim settlement prediction model specifically comprises:
transmitting the second product claim settlement prediction error through the input layer, the hidden layer and the output layer;
respectively acquiring an input layer error, a hidden layer error and an output layer error, and respectively comparing the input layer error, the hidden layer error and the output layer error with a second product error threshold of a preset standard;
when the error of any network layer in the new prediction model is larger than the preset standard second product error threshold, continuously adjusting the input layer parameters, the non-associated parameters in the hidden layer parameters, and the output layer parameters in the new prediction model until the errors of all network layers are smaller than or equal to the preset standard second product error threshold, obtaining a fully trained second claim settlement prediction model.
In this embodiment, when the second claim settlement prediction model is trained, the parameters of the second product prediction model are adjusted according to the errors of each network layer in the model and the standard second product error threshold. It should be noted that before training starts, the associated parameters in the hidden layer parameters need to be frozen so that they cannot change, which preserves the model's feature extraction capability, prevents overfitting of the hidden layer, reduces dependence on product data in the model training process, and accelerates model training. During parameter adjustment, the input layer parameters, the non-associated hidden layer parameters and the output layer parameters of the new prediction model can be adjusted until the model is fitted, yielding a fully trained second claim settlement prediction model.
In the above embodiment, the application discloses a product claim settlement method based on transfer learning, which belongs to the technical field of artificial intelligence and the field of insurance finance. In the method, when a model is trained on the second product data, whose data amount is smaller, model training is realized through transfer learning: the first claim settlement prediction model is first trained with the first product data, whose data amount is larger than that of the second product data; a feature extractor is then built from the hidden layer parameters of the first claim settlement prediction model and used to replace the hidden layer of the second claim settlement prediction model; and when the second claim settlement prediction model is trained, its hidden layer parameters are frozen so that only the parameters of the other network layers in the model need to be optimized. This accelerates the training process of a product claim settlement prediction model with a small number of training samples, reduces dependence on product data in the model training process, and at the same time ensures the accuracy and generalization capability of the product claim settlement prediction model on a new task.
In this embodiment, the electronic device (for example, the server shown in fig. 1) on which the product claim settlement method based on the transfer learning operates may receive the instruction or acquire the data through a wired connection manner or a wireless connection manner. It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, wiFi connections, bluetooth connections, wiMAX connections, zigbee connections, UWB (ultra wideband) connections, and other now known or later developed wireless connection means.
It is emphasized that to further ensure the privacy and security of the automotive and non-automotive data, the automotive and non-automotive data may also be stored in a blockchain node.
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association by cryptographic means, each data block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Those skilled in the art will appreciate that implementing all or part of the processes of the methods of the embodiments described above may be accomplished by way of computer readable instructions stored on a computer readable storage medium, which, when executed, may comprise the processes of the embodiments of the methods described above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
With further reference to fig. 3, as an implementation of the method shown in fig. 2, the application provides an embodiment of a product claim settlement apparatus based on transfer learning, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 3, a product claim settlement apparatus 300 based on transfer learning according to the embodiment includes:
A first claim prediction module 301, configured to obtain first product data, and train a first claim prediction model based on the first product data;
a feature extractor construction module 302, configured to obtain hidden layer parameters of the first claim prediction model, and construct a feature extractor based on the hidden layer parameters;
a hidden layer replacing module 303, configured to replace a hidden layer of a preset initial prediction model with a feature extractor, so as to obtain a new prediction model;
the second claim prediction module 304 is configured to obtain second product data, and train a new prediction model based on the second product data to obtain a second claim prediction model, where a data amount of the second product data is smaller than a data amount of the first product data, and freeze hidden layer parameters of the new prediction model when training the new prediction model;
the product claim settlement module 305 is configured to receive the claim settlement instruction, obtain product data of the product to be claimed, import the product data of the product to be claimed into the trained second claim settlement prediction model, and output a claim settlement result of the product to be claimed.
Further, the feature extractor construction module 302 specifically includes:
the associated product feature sub-module is used for acquiring associated product features of the first product data and the second product data;
The associated parameter obtaining sub-module is used for determining hidden layer parameters associated with the associated product features in the first claim settlement prediction model to obtain associated parameters;
the feature extractor construction sub-module is used for constructing the feature extractor based on the association parameters.
Further, the new prediction model includes an input layer, a hidden layer, and an output layer, and the second claim prediction module 304 specifically includes:
the parameter freezing sub-module is used for freezing associated parameters in the new prediction model hidden layer parameters and determining non-associated parameters in the new prediction model hidden layer parameters;
the model training sub-module is used for training a new prediction model based on the second product data to obtain a claim settlement prediction result of the second product;
the prediction error sub-module is used for comparing the claim settlement prediction result of the second product with a preset second product standard processing result to obtain a second product claim settlement prediction error;
and the model iteration sub-module is used for optimizing the input layer parameters, the non-associated parameters in the hidden layer parameters, and the output layer parameters based on the second product claim settlement prediction error until the model is fitted to obtain the second claim settlement prediction model.
Further, the model training submodule specifically includes:
The preprocessing unit is used for acquiring second product data and preprocessing the second product data;
the data dividing unit is used for carrying out data division on the preprocessed second product data to obtain a second product data training set and a second product data verification set;
the model training unit is used for training the new prediction model based on the second product data training set to obtain a claim settlement prediction result of the second product;
the model training unit specifically comprises:
the data transmission subunit is used for sequentially receiving training data in the second product data training set through the input layer and sequentially transmitting the training data to the hidden layer of the new prediction model;
the feature extraction subunit is used for carrying out feature extraction on the training data through the hidden layer to obtain feature representation corresponding to the training data;
and the feature conversion subunit is used for carrying out feature conversion on the feature representation corresponding to the training data through the activation function in the output layer to obtain the claim prediction result of the second product.
Further, the model iteration submodule specifically includes:
the error calculation unit is used for transmitting the second product claim settlement prediction error through the input layer, the hidden layer and the output layer;
The error comparison unit is used for respectively acquiring an input layer error, a hidden layer error and an output layer error and respectively comparing the input layer error, the hidden layer error and the output layer error with a second product error threshold value of a preset standard;
and the parameter tuning unit is used for continuously adjusting the input layer parameters, the non-associated parameters and the output layer parameters in the new prediction model when the error of any network layer in the new prediction model is larger than the second product error threshold of the preset standard until the errors of all network layers are smaller than or equal to the second product error threshold of the preset standard, so as to obtain a second claim prediction model for finishing training.
Further, the first claim prediction model includes a data input layer, a feature hiding layer, and a prediction output layer, and the first claim prediction module 301 specifically includes:
the first product data dividing sub-module is used for acquiring first product data, preprocessing the first product data, and dividing the preprocessed first product data to obtain a first product data training set and a first product data verification set;
the first claim settlement prediction model training sub-module is used for training the first claim settlement prediction model based on the first product data training set;
The first claim settlement prediction model verification sub-module is used for verifying the first claim settlement prediction model which is trained based on the first product data verification set and outputting the verified first claim settlement prediction model;
the first claim prediction model training submodule specifically comprises:
a first product data importing unit for importing a first product data training set into a first claim prediction model;
the product data transmission unit is used for sequentially receiving the product data in the first product data training set through the data input layer and sequentially transmitting the product data to the characteristic hiding layer;
the product data feature processing unit is used for carrying out feature processing on the product data through the feature hiding layer to obtain feature representation corresponding to the product data;
the product data feature conversion unit is used for carrying out feature conversion on the feature representation corresponding to the product data through the activation function of the output layer to obtain a claim settlement prediction result of the first product;
and the first claim settlement prediction model iteration updating unit is used for carrying out iteration updating on the first claim settlement prediction model based on the claim settlement prediction result of the first product and a preset first product standard processing result until the model is fitted to obtain a trained first claim settlement prediction model.
Further, the first claim prediction module iteration update unit specifically includes:
the first product claim settlement prediction error calculation subunit is used for comparing the claim settlement prediction result of the first product with a preset first product standard processing result to obtain a first product claim settlement prediction error;
a first product claim settlement prediction error transmission subunit, configured to transmit the first product claim settlement prediction error through the data input layer, the feature hiding layer, and the prediction output layer;
the first product claim settlement prediction error comparison subunit is used for respectively acquiring a data input layer error, a feature hiding layer error and a prediction output layer error, and respectively comparing the data input layer error, the feature hiding layer error and the prediction output layer error with the preset standard first product error threshold;
and the first claim prediction model parameter optimization subunit is used for continuously adjusting the model parameters of the first claim prediction model when the error of any network layer in the first claim prediction model is larger than the first product error threshold of the preset standard, until the errors of all network layers in the first claim prediction model are smaller than or equal to the first product error threshold of the preset standard, so as to obtain the first claim prediction model which completes training.
In the above embodiment, the application discloses a product claim settlement apparatus based on transfer learning, which belongs to the technical field of artificial intelligence and the field of insurance finance. With the apparatus, when a model is trained on the second product data, whose data amount is smaller, model training is realized through transfer learning: the first claim settlement prediction model is first trained with the first product data, whose data amount is larger than that of the second product data; a feature extractor is then built from the hidden layer parameters of the first claim settlement prediction model and used to replace the hidden layer of the second claim settlement prediction model; and when the second claim settlement prediction model is trained, its hidden layer parameters are frozen so that only the parameters of the other network layers in the model need to be optimized. This accelerates the training process of a product claim settlement prediction model with a small number of training samples, reduces dependence on product data in the model training process, and at the same time ensures the accuracy and generalization capability of the product claim settlement prediction model on a new task.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 4, fig. 4 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42 and a network interface 43 communicatively connected to each other via a system bus. It should be noted that only a computer device 4 having components 41-43 is shown in the figure, but it should be understood that not all of the illustrated components are required to be implemented, and that more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device here is a device capable of automatically performing numerical calculation and/or information processing in accordance with preset or stored instructions, the hardware of which includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, etc.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the computer device 4. Of course, the memory 41 may also comprise both an internal storage unit of the computer device 4 and an external storage device. In this embodiment, the memory 41 is generally used to store an operating system and various application software installed on the computer device 4, such as computer readable instructions of the product claim settlement method based on transfer learning. Further, the memory 41 may be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute computer readable instructions stored in the memory 41 or process data, for example, execute computer readable instructions of the product claim settlement method based on transfer learning.
The network interface 43 may comprise a wireless network interface or a wired network interface, which network interface 43 is typically used for establishing a communication connection between the computer device 4 and other electronic devices.
In the above embodiment, the application discloses a computer device, which belongs to the technical field of artificial intelligence and the field of insurance finance. When a model must be trained on second product data with a smaller data volume, training is achieved through transfer learning: a first claim settlement prediction model is first trained on the first product data, whose data volume is larger than that of the second product data; a feature extractor is then built from the hidden layer parameters of the first claim settlement prediction model and used to replace the hidden layer of a preset initial prediction model; when the resulting new prediction model is trained on the second product data, its hidden layer parameters are frozen, so only the parameters of the other network layers need to be optimized. This speeds up the training process for products with a smaller number of training samples, reduces the dependence on large amounts of product data during model training, and at the same time preserves the model's accuracy and generalization capability on the new task.
The present application also provides another embodiment, namely a computer-readable storage medium storing computer readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the product claim settlement method based on transfer learning described above.
In the above embodiment, the application discloses a computer readable storage medium, which belongs to the technical field of artificial intelligence and the field of insurance finance. When a model must be trained on second product data with a smaller data volume, training is achieved through transfer learning: a first claim settlement prediction model is first trained on the first product data, whose data volume is larger than that of the second product data; a feature extractor is then built from the hidden layer parameters of the first claim settlement prediction model and used to replace the hidden layer of a preset initial prediction model; when the resulting new prediction model is trained on the second product data, its hidden layer parameters are frozen, so only the parameters of the other network layers need to be optimized. This speeds up the training process for products with a smaller number of training samples, reduces the dependence on large amounts of product data during model training, and at the same time preserves the model's accuracy and generalization capability on the new task.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, though in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk or optical disk) and comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the method described in the embodiments of the present application.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments described above are only some, not all, of the embodiments of the present application. The drawings show preferred embodiments of the application but do not limit its patent scope. The application may be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their elements. All equivalent structures made using the specification and drawings of the application, applied directly or indirectly in other related technical fields, likewise fall within the protection scope of the application.

Claims (10)

1. A method for product claims settlement based on transfer learning, comprising:
acquiring first product data and training a first claim prediction model based on the first product data;
acquiring hidden layer parameters of the first claim prediction model, and constructing a feature extractor based on the hidden layer parameters;
replacing a hidden layer of a preset initial prediction model by using the feature extractor to obtain a new prediction model;
acquiring second product data, and training the new prediction model based on the second product data to obtain a second claim settlement prediction model, wherein the data volume of the second product data is smaller than the data volume of the first product data, and freezing hidden layer parameters of the new prediction model when the new prediction model is trained;
receiving an instruction for claim settlement, acquiring product data of a product to be claim settled, importing the product data of the product to be claim settled into the trained second claim settlement prediction model, and outputting a result for claim settlement of the product to be claim settled.
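The settlement step of claim 1 (receive an instruction, fetch the product data, run the trained second model, output a result) can be sketched as below. This is a hedged illustration only: the scoring function stands in for the trained second claim settlement prediction model, and the names (`settle`, `product_db`), weights and decision threshold are assumptions, not part of the claimed subject matter:

```python
# Illustrative sketch of the claimed settlement flow with a stand-in model.

def second_claim_model(features):
    # toy "trained" model: weighted sum followed by a decision threshold
    weights = [0.6, 0.4]
    score = sum(w * x for w, x in zip(weights, features))
    return "approve" if score >= 0.5 else "reject"

def settle(instruction, product_db):
    product_data = product_db[instruction["product_id"]]  # acquire product data
    return second_claim_model(product_data)               # output claim result

db = {"P1": [0.9, 0.8], "P2": [0.1, 0.2]}
print(settle({"product_id": "P1"}, db))  # approve
print(settle({"product_id": "P2"}, db))  # reject
```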
2. The method of claim 1, wherein obtaining hidden layer parameters of the first claim prediction model and constructing a feature extractor based on the hidden layer parameters, comprises:
acquiring associated product features of the first product data and the second product data;
determining hidden layer parameters associated with the associated product features in the first claim prediction model to obtain associated parameters;
the feature extractor is constructed based on the association parameters.
3. The method of claim 2, wherein the new prediction model comprises an input layer, a hidden layer and an output layer, and the acquiring second product data and training the new prediction model based on the second product data to obtain a second claim settlement prediction model specifically comprises:
freezing associated parameters in the new prediction model hidden layer parameters, and determining non-associated parameters in the new prediction model hidden layer parameters;
training the new prediction model based on the second product data to obtain a claim settlement prediction result of the second product;
comparing the claim prediction result of the second product with a preset second product standard processing result to obtain a claim prediction error of the second product;
and optimizing the input layer parameters, the non-associated parameters in the hidden layer parameters, and the output layer parameters based on the second product claim prediction error until the model is fitted, so as to obtain the second claim settlement prediction model.
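The selective freezing in claim 3 — associated hidden-layer parameters held fixed while non-associated parameters keep updating — can be sketched as below. All parameter names, the mock gradient step and the `optimise` helper are illustrative assumptions:

```python
# Hedged sketch: only the *associated* hidden-layer parameters are frozen;
# the remaining (non-associated) parameter still receives updates.

hidden = {"assoc_w1": 1.0, "assoc_w2": 2.0, "free_w3": 0.75}
associated = {"assoc_w1", "assoc_w2"}  # parameters tied to shared features

def optimise(params, frozen, step=0.25):
    # frozen (associated) parameters are returned untouched;
    # every other parameter receives a mock gradient step
    return {k: v if k in frozen else v - step for k, v in params.items()}

hidden = optimise(hidden, associated)
print(hidden)  # {'assoc_w1': 1.0, 'assoc_w2': 2.0, 'free_w3': 0.5}
```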
4. The method of claim 3, wherein the training the new prediction model based on the second product data to obtain a claim settlement prediction result of the second product specifically comprises:
acquiring second product data and preprocessing the second product data;
carrying out data division on the preprocessed second product data to obtain a second product data training set and a second product data verification set;
training the new prediction model based on the second product data training set to obtain a claim settlement prediction result of the second product;
training the new prediction model based on the second product data training set to obtain a claim settlement prediction result of the second product, which specifically comprises:
sequentially receiving training data in the second product data training set through the input layer, and sequentially transmitting the training data to a hidden layer of the new prediction model;
extracting features of the training data through the hidden layer to obtain feature representations corresponding to the training data;
and carrying out feature transformation on the feature representation corresponding to the training data through the activation function in the output layer to obtain a claim settlement prediction result of the second product.
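The forward pass in claim 4 (input layer receives a record, hidden layer extracts features, output layer applies an activation function) can be sketched as below. The weights, the sigmoid choice and the toy record are assumptions for illustration, not the claimed model:

```python
import math

# Minimal forward-pass sketch matching the layer roles described above.

def hidden_layer(x, w=((1.0, -1.0), (0.5, 0.5))):
    # feature extraction: one dot product per hidden unit
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def output_layer(features, w=(1.0, 1.0)):
    z = sum(wi * fi for wi, fi in zip(w, features))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation -> claim score

sample = [0.25, 0.75]             # one record from the training set
feats = hidden_layer(sample)      # feature representation: [-0.5, 0.5]
print(output_layer(feats))        # 0.5  (z = 0 for this toy input)
```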
5. The method of claim 3, wherein the optimizing the input layer parameters, the non-associated parameters in the hidden layer parameters, and the output layer parameters based on the second product claim prediction error until the model is fitted to obtain the second claim settlement prediction model specifically comprises:
transmitting the second product claim prediction error in the input layer, the hidden layer and the output layer;
respectively acquiring an input layer error, a hidden layer error and an output layer error, and respectively comparing the input layer error, the hidden layer error and the output layer error with a second product error threshold of a preset standard;
when the error of any network layer in the new prediction model is larger than the second product error threshold of the preset standard, continuously adjusting the input layer parameters, the non-associated parameters in the hidden layer parameters, and the output layer parameters in the new prediction model until the errors of all network layers are smaller than or equal to the second product error threshold of the preset standard, so as to obtain the second claim settlement prediction model which completes training.
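The stopping rule in claim 5 — keep adjusting parameters while any layer's error exceeds the preset threshold — can be sketched as below. The error values, the halving update and the `fit` helper are mocked assumptions; a real model would reduce errors through gradient updates rather than a fixed shrink factor:

```python
# Sketch of the per-layer threshold loop: iterate until every layer's
# error is at or below the preset standard threshold.

def fit(layer_errors, threshold=0.1, shrink=0.5, max_rounds=50):
    rounds = 0
    while any(e > threshold for e in layer_errors.values()):
        # a parameter adjustment is assumed to shrink each layer's error
        layer_errors = {k: e * shrink for k, e in layer_errors.items()}
        rounds += 1
        if rounds >= max_rounds:
            break  # safety stop for a non-converging mock run
    return layer_errors, rounds

errors = {"input": 0.4, "hidden": 0.8, "output": 0.2}
final, rounds = fit(errors)
print(rounds)  # 3: the worst layer error halves 0.8 -> 0.4 -> 0.2 -> 0.1
```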
6. The method of any one of claims 1 to 5, wherein the first claim settlement prediction model comprises a data input layer, a feature hiding layer and a prediction output layer, and the acquiring first product data and training a first claim prediction model based on the first product data specifically comprises:
acquiring first product data, preprocessing the first product data, and dividing the preprocessed first product data to obtain a first product data training set and a first product data verification set;
training the first claim prediction model based on the first product data training set;
verifying the trained first claim prediction model based on the first product data verification set, and outputting the verified first claim prediction model;
training the first claim prediction model based on the first product data training set, specifically including:
importing the first product data training set into the first claim prediction model;
sequentially receiving product data in the first product data training set through the data input layer, and sequentially transmitting the product data to the feature hiding layer;
performing feature processing on the product data through the feature hiding layer to obtain feature representation corresponding to the product data;
performing feature conversion on the feature representation corresponding to the product data through the activation function of the prediction output layer to obtain a claim settlement prediction result of the first product;
and iteratively updating the first claim settlement prediction model based on the claim settlement prediction result of the first product and a preset first product standard processing result until the model is fitted to obtain the first claim settlement prediction model which completes training.
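The data division in claim 6 — splitting the preprocessed first product data into a training set and a verification set — can be sketched as below. The 80/20 ratio and the `split` helper are assumptions; the patent does not specify a ratio:

```python
# Illustrative split of preprocessed product records into a training set
# and a verification set (ratio is an assumed 80/20).

def split(records, train_ratio=0.8):
    cut = int(len(records) * train_ratio)
    return records[:cut], records[cut:]

records = [f"rec{i}" for i in range(10)]  # preprocessed first product data
train_set, verify_set = split(records)
print(len(train_set), len(verify_set))  # 8 2
```

The model is then trained on `train_set` and validated on the held-out `verify_set`, as the claim describes.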
7. The method of claim 6, wherein the iteratively updating the first claim settlement prediction model based on the claim settlement prediction result of the first product and a preset first product standard processing result until the model is fitted to obtain the first claim settlement prediction model which completes training specifically comprises:
comparing the claim prediction result of the first product with a preset first product standard processing result to obtain a first product claim prediction error;
transmitting the first product claim prediction error in the data input layer, the feature hiding layer and the prediction output layer;
respectively acquiring a data input layer error, a feature hiding layer error and a prediction output layer error, and respectively comparing the data input layer error, the feature hiding layer error and the prediction output layer error with a first product error threshold of a preset standard;
when the error of any network layer in the first claim prediction model is larger than a first product error threshold value of a preset standard, continuously adjusting model parameters of the first claim prediction model until the error of all network layers in the first claim prediction model is smaller than or equal to the first product error threshold value of the preset standard, and obtaining the first claim prediction model which is trained.
8. A product claim settlement device based on transfer learning, comprising:
the first claim settlement prediction module is used for acquiring first product data and training a first claim settlement prediction model based on the first product data;
the feature extractor construction module is used for acquiring hidden layer parameters of the first claim settlement prediction model and constructing a feature extractor based on the hidden layer parameters;
the hidden layer replacing module is used for replacing a hidden layer of a preset initial prediction model by using the characteristic extractor to obtain a new prediction model;
the second claim settlement prediction module is used for acquiring second product data, training the new prediction model based on the second product data to obtain a second claim settlement prediction model, wherein the data volume of the second product data is smaller than that of the first product data, and freezing hidden layer parameters of the new prediction model when the new prediction model is trained;
and the product claim settlement module is used for receiving the claim settlement instruction, acquiring product data of the product to be claim-settled, importing the product data of the product to be claim-settled into the trained second claim settlement prediction model, and outputting the claim settlement result of the product to be claim-settled.
9. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, implement the steps of the product claim settlement method based on transfer learning of any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, implement the steps of the product claim settlement method based on transfer learning of any one of claims 1 to 7.
CN202311160060.5A 2023-09-08 2023-09-08 Product claim settlement method, device, equipment and storage medium based on transfer learning Pending CN117252712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311160060.5A CN117252712A (en) 2023-09-08 2023-09-08 Product claim settlement method, device, equipment and storage medium based on transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311160060.5A CN117252712A (en) 2023-09-08 2023-09-08 Product claim settlement method, device, equipment and storage medium based on transfer learning

Publications (1)

Publication Number Publication Date
CN117252712A true CN117252712A (en) 2023-12-19

Family

ID=89132316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311160060.5A Pending CN117252712A (en) 2023-09-08 2023-09-08 Product claim settlement method, device, equipment and storage medium based on transfer learning

Country Status (1)

Country Link
CN (1) CN117252712A (en)

Similar Documents

Publication Publication Date Title
WO2021120677A1 (en) Warehousing model training method and device, computer device and storage medium
CN115329876A (en) Equipment fault processing method and device, computer equipment and storage medium
CN116684330A (en) Traffic prediction method, device, equipment and storage medium based on artificial intelligence
CN116843483A (en) Vehicle insurance claim settlement method, device, computer equipment and storage medium
CN116777646A (en) Artificial intelligence-based risk identification method, apparatus, device and storage medium
CN116843395A (en) Alarm classification method, device, equipment and storage medium of service system
CN116578774A (en) Method, device, computer equipment and storage medium for pre-estimated sorting
CN115392361A (en) Intelligent sorting method and device, computer equipment and storage medium
CN117252712A (en) Product claim settlement method, device, equipment and storage medium based on transfer learning
CN113792342B (en) Desensitization data reduction method, device, computer equipment and storage medium
CN116307742B (en) Risk identification method, device and equipment for subdivision guest group and storage medium
CN117172632B (en) Enterprise abnormal behavior detection method, device, equipment and storage medium
CN117252713A (en) Risk identification method, device and equipment for new energy vehicle and storage medium
CN117236707A (en) Asset optimization configuration method and device, computer equipment and storage medium
CN116756147A (en) Data classification method, device, computer equipment and storage medium
CN117611352A (en) Vehicle insurance claim processing method, device, computer equipment and storage medium
CN117034114A (en) Data prediction method, device, equipment and storage medium based on artificial intelligence
CN116775776A (en) Multi-label text classification method, device, computer equipment and storage medium
CN116977095A (en) Dynamic wind control early warning method and device, computer equipment and storage medium
CN115344564A (en) Data verification method and device, computer equipment and storage medium
CN117078406A (en) Customer loss early warning method and device, computer equipment and storage medium
CN116934506A (en) User behavior prediction method and device, computer equipment and storage medium
CN116757771A (en) Scheme recommendation method, device, equipment and storage medium based on artificial intelligence
CN116451125A (en) New energy vehicle owner identification method, device, equipment and storage medium
CN117034875A (en) Text data generation method, device, equipment and storage medium thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination