CN117670070A - Insurance calculation prediction method and device, electronic equipment and storage medium - Google Patents

Insurance calculation prediction method and device, electronic equipment and storage medium

Info

Publication number
CN117670070A
Authority
CN
China
Prior art keywords
data
target
prediction
original
hypothesis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311365871.9A
Other languages
Chinese (zh)
Inventor
王海平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202311365871.9A
Publication of CN117670070A
Legal status: Pending


Abstract

The embodiments of the present application provide an insurance calculation prediction method and device, an electronic device, and a storage medium, belonging to the field of financial technology. The method comprises: obtaining, from a preset storage database, original product attribute data, original hypothesis data, original business data, and original prediction result data, all marked with an original version number; obtaining target product attribute data and target hypothesis data; calculating target prediction result data from the target product attribute data, the target hypothesis data, and the original business data, where the original business data carries a source data version number; and generating a target version number from the source data version number and the run metadata, and marking the target product attribute data, the target hypothesis data, the target business data, and the target prediction result data with the target version number to generate a target data chain. In this way, a single set of source data can be combined with different hypotheses to generate multiple sets of prediction results, improving the usability and comparability of the data.

Description

Insurance calculation prediction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of financial technologies, and in particular, to an insurance calculation prediction method and apparatus, an electronic device, and a storage medium.
Background
In insurance management, actuarial evaluation and prediction are an indispensable process for every insurance company, including the prediction of liability measurement, claim ratio, persistency rate, surrender rate, and the like. Existing prediction systems are heavily hard-coded: the prediction model version is determined by the hypothesis data, so whenever a product attribute changes a corrected prediction model version has to be released, and system iteration is very slow. The prediction model versions of existing systems are also difficult to manage, the hypotheses cannot be changed flexibly, and the prediction results generated by the different model versions are hard to compare side by side. In addition, business staff currently monitor and manage the data largely by manual operation with tools such as Excel; because the prediction process runs on massive data, such manual operation is cumbersome, the system is hard to interface and integrate with, and a large amount of labor cost is incurred.
Disclosure of Invention
The main purpose of the embodiments of the present application is to provide an insurance calculation prediction method and device, an electronic device, and a storage medium, which allow a single set of source data to be combined with different hypotheses to generate multiple sets of prediction results, improving the usability and comparability of the data.
To achieve the above object, a first aspect of an embodiment of the present application provides an insurance calculation prediction method, where the method includes:
in response to an original version number input by a target object, obtaining a historical data chain to be traced from a preset storage database, wherein the historical data chain to be traced comprises original product attribute data, original hypothesis data, original business data, and original prediction result data, all marked with the original version number;
in response to an input configuration operation of the target object, receiving target product attribute data and corresponding target hypothesis data input by the target object, and storing the target product attribute data and the corresponding target hypothesis data in the storage database;
calculating target prediction result data from the target product attribute data, the target hypothesis data, and target business data, wherein the target business data is the original business data and carries a source data version number;
and generating a target version number from the source data version number and the run metadata of the target prediction result data, and marking the target product attribute data, the target hypothesis data, the target business data, and the target prediction result data with the target version number to generate a target data chain.
In some embodiments, the calculating of the target prediction result data from the target product attribute data, the target hypothesis data, and the target business data comprises:
reading the target product attribute data, the target hypothesis data, and the target business data from the storage database;
and computing the target prediction result data from the target product attribute data, the target hypothesis data, and the target business data with a big-data real-time computing engine.
In some embodiments, the computing of the target prediction result data from the target product attribute data, the target hypothesis data, and the target business data with the big-data real-time computing engine comprises:
obtaining, by the big-data real-time computing engine, aggregate insurance base information from the target product attribute data and the target business data;
determining a target prediction model based on the target hypothesis data;
and inputting the aggregate insurance base information into the target prediction model to predict the target prediction result data.
In some embodiments, the determining of the target prediction model based on the target hypothesis data comprises:
determining a refined prediction factor type from the target hypothesis data;
determining a refined prediction element according to the refined prediction factor type, wherein the refined prediction elements include claim ratio, surrender rate, and profit margin;
and determining the corresponding target prediction model based on the refined prediction element.
In some embodiments, the run metadata includes a run time and an operator; after the target data chain is generated, the insurance calculation prediction method further comprises:
generating a monitoring log based on the run time and operator of the target prediction result data, the source data version number, and the target version number;
and sending the monitoring log to the target object.
In some embodiments, the insurance calculation prediction method further comprises:
generating graphical comparison data from the target business data, the target prediction result data, and the original prediction result data based on the ECharts framework;
and displaying the graphical comparison data on a target front-end page to visualize the variation and trend of the different prediction result data generated from the same business data under different hypothesis data.
In some embodiments, the generating of the graphical comparison data from the target business data, the target prediction result data, and the original prediction result data based on the ECharts framework comprises:
generating a tree diagram or a table from the target business data, the target prediction result data, and the original prediction result data based on the ECharts framework;
and the displaying of the graphical comparison data on the target front-end page to visualize the variation and trend of the different prediction result data generated from the same business data under different hypothesis data comprises:
displaying the tree diagram or the table on the target front-end page to visualize the variation and trend of the different prediction result data generated from the same business data under different hypothesis data.
To achieve the above object, a second aspect of the embodiments of the present application provides an insurance calculation prediction device, comprising:
a historical data chain acquisition module, configured to obtain, in response to an original version number input by a target object, a historical data chain to be traced from a preset storage database, wherein the historical data chain to be traced comprises original product attribute data, original hypothesis data, original business data, and original prediction result data, all marked with the original version number;
an adjustment data acquisition module, configured to receive, in response to an input configuration operation of the target object, target product attribute data and corresponding target hypothesis data input by the target object, and to store them in the storage database;
a prediction result calculation module, configured to calculate target prediction result data from the target product attribute data, the target hypothesis data, and target business data, wherein the target business data is the original business data and carries a source data version number;
and a version number marking module, configured to generate a target version number from the source data version number and the run metadata of the target prediction result data, and to mark the target product attribute data, the target hypothesis data, the target business data, and the target prediction result data with the target version number to generate a target data chain.
To achieve the above object, a third aspect of the embodiments of the present application provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program and the processor executes the computer program to implement the insurance calculation prediction method of the first aspect.
To achieve the above object, a fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the insurance calculation prediction method of the first aspect.
According to the insurance calculation prediction method and device, the electronic device, and the storage medium, the original product attribute data, original hypothesis data, original business data, and original prediction result data can be located through the original version number input by the target object. The modified target product attribute data and corresponding target hypothesis data input by the target object are then obtained, and new target prediction result data is recalculated from the original business data together with the modified target product attribute data and corresponding target hypothesis data. A new target version number is generated from the source data version number of the target business data and the run metadata of the target prediction result data, and a new target data chain is generated. In this way, by marking version numbers, a single set of source data can be combined with different hypotheses to generate multiple sets of prediction results, improving the usability and comparability of the data. The method also supports re-running the data and restoring and backing up historical data, which makes the data easy to monitor and manage and improves working efficiency.
Drawings
FIG. 1 is a flowchart of an insurance calculation prediction method provided in an embodiment of the present application;
FIG. 2 is a flowchart of step S300 in FIG. 1;
FIG. 3 is a flowchart of step S320 in FIG. 2;
FIG. 4 is a flowchart of step S322 in FIG. 3;
FIG. 5 is a flowchart of an insurance calculation prediction method provided in another embodiment of the present application;
FIG. 6 is a flowchart of an insurance calculation prediction method provided in another embodiment of the present application;
FIG. 7 is a schematic structural diagram of an insurance calculation prediction device provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present application.
Reference numerals:
a historical data chain acquisition module 710;
an adjustment data acquisition module 720;
a prediction result calculation module 730;
a version number marking module 740.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that although functional modules are divided in the device diagrams and a logical order is shown in the flowcharts, in some cases the steps may be performed with a module division different from that in the device diagrams, or in an order different from that in the flowcharts. The terms "first", "second" and the like in the description, the claims, and the above drawings are used to distinguish similar elements and are not necessarily intended to describe a particular sequence or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
First, several terms used in this application are explained:
artificial intelligence (artificial intelligence, AI): is a new technical science for researching and developing theories, methods, technologies and application systems for simulating, extending and expanding the intelligence of people; artificial intelligence is a branch of computer science that attempts to understand the nature of intelligence and to produce a new intelligent machine that can react in a manner similar to human intelligence, research in this field including robotics, language recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information process of consciousness and thinking of people. Artificial intelligence is also a theory, method, technique, and application system that utilizes a digital computer or digital computer-controlled machine to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Natural language processing (NLP): NLP is a branch of artificial intelligence and an interdisciplinary field of computer science and linguistics, often referred to as computational linguistics; it is concerned with processing, understanding, and applying human languages (e.g., Chinese, English). Natural language processing includes parsing, semantic analysis, discourse understanding, and the like. It is commonly used in machine translation, recognition of handwritten and printed characters, speech recognition and text-to-speech conversion, information intent recognition, information extraction and filtering, text classification and clustering, public opinion analysis and opinion mining, and related fields, and involves data mining, machine learning, knowledge acquisition, knowledge engineering, artificial intelligence research, linguistic research related to computational linguistics, and so on.
Information extraction (Information Extraction): a text processing technique that extracts factual information of specified types, such as entities, relations, and events, from natural language text and outputs it as structured data. Text data is made up of specific units, such as sentences, paragraphs, and chapters, and textual information is made up of smaller specific units, such as words, phrases, sentences, paragraphs, or combinations of these units. Extracting noun phrases, person names, place names, and the like from text data is text information extraction; of course, the information extracted by text information extraction techniques can be of various types.
Image description (image captioning): generating a natural language description for an image and using the generated description to help an application understand the semantics expressed in the image's visual scene. For example, image descriptions can convert an image search into a text search, for classifying images and improving image search results. People usually need to browse an image quickly to describe the details of its visual scene; automatically adding descriptions to images is a comprehensive and difficult computer vision task that requires converting the complex information contained in an image into natural language. In contrast to common computer vision tasks, image captioning not only requires identifying the objects in an image but also requires associating the identified objects with natural semantics and describing them in natural language. Image description therefore requires extracting deep features of the image, correlating them with semantic features, and transforming them to generate the description.
In insurance management, actuarial evaluation and prediction are an indispensable process for every insurance company, including the prediction of liability measurement, claim ratio, persistency rate, surrender rate, and the like. Existing prediction systems are heavily hard-coded: the prediction model version is determined by the hypothesis data, so whenever a product attribute changes a corrected prediction model version has to be released, and system iteration is very slow. The prediction model versions of existing systems are also difficult to manage, the hypotheses cannot be changed flexibly, and the prediction results generated by the different model versions are hard to compare side by side. In addition, business staff currently monitor and manage the data largely by manual operation with tools such as Excel; because the prediction process runs on massive data, such manual operation is cumbersome, the system is hard to interface and integrate with, and a large amount of labor cost is incurred.
On this basis, the embodiments of the present application provide an insurance calculation prediction method and device, an electronic device, and a storage medium, which allow a single set of source data to be combined with different hypotheses to generate multiple sets of prediction results, improving the usability and comparability of the data.
The insurance calculation prediction method and device, electronic device, and storage medium provided by the embodiments of the present application are described in detail through the following embodiments; the insurance calculation prediction method is described first.
The embodiments of the present application can acquire and process the relevant data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer or a digital-computer-controlled machine to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The embodiments of the present application provide an insurance calculation prediction method that relates to the technical fields of artificial intelligence and finance. The insurance calculation prediction method provided by the embodiments can be applied to a terminal, to a server side, or to software running in the terminal or on the server side. In some embodiments, the terminal may be a smartphone, tablet, notebook, desktop computer, or the like; the server side may be configured as an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms; the software may be an application that implements the insurance calculation prediction method, but is not limited to the above forms.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In the embodiments of the present application, when related processing is required according to user information, user behavior data, user history data, user location information, and other data related to user identity or characteristics, permission or consent of the user is obtained first, and the collection, use, processing, and the like of the data comply with related laws and regulations and standards of related countries and regions. In addition, when the embodiment of the application needs to acquire the sensitive personal information of the user, the independent permission or independent consent of the user is acquired through a popup window or a jump to a confirmation page or the like, and after the independent permission or independent consent of the user is explicitly acquired, necessary user related data for enabling the embodiment of the application to normally operate is acquired.
FIG. 1 is an optional flowchart of an insurance calculation prediction method provided in an embodiment of the present application; the method in FIG. 1 may include, but is not limited to, steps S100 to S400.
Step S100, in response to an original version number input by a target object, obtaining a historical data chain to be traced from a preset storage database, wherein the historical data chain to be traced comprises original product attribute data, original hypothesis data, original business data, and original prediction result data, all marked with the original version number;
Step S200, in response to an input configuration operation of the target object, receiving target product attribute data and corresponding target hypothesis data input by the target object and storing them in the storage database;
Step S300, calculating target prediction result data from the target product attribute data, the target hypothesis data, and target business data, wherein the target business data is the original business data and carries a source data version number;
Step S400, generating a target version number from the source data version number and the run metadata of the target prediction result data, and marking the target product attribute data, the target hypothesis data, the target business data, and the target prediction result data with the target version number to generate a target data chain.
Through steps S100 to S400 of the embodiments of the present application, the original product attribute data, original hypothesis data, original business data, and original prediction result data can be located through the original version number input by the target object. The modified target product attribute data and corresponding target hypothesis data input by the target object are then obtained, and new target prediction result data is recalculated from the original business data together with them. A new target version number is generated from the source data version number of the target business data and the run metadata of the target prediction result data, and a new target data chain is generated. In this way, by marking version numbers, a single set of source data can be combined with different hypotheses to generate multiple sets of prediction results, improving the usability and comparability of the data. Re-running the data and restoring and backing up historical data are also supported, which makes the data easy to monitor and manage and improves working efficiency.
In step S100 of some embodiments, the product attribute data includes, but is not limited to, the type of policy product, the place of ownership, and the sales age range in the region where the insurance business operates. The hypothesis data are professional refined prediction factors, such as life tables, claim-ratio factors, and reserve factors; these factors participate in the insurance prediction algorithms, and different insurance prediction algorithms correspond to different prediction models. The business data includes, but is not limited to, policy information (policy year, effective date), customer information (including name, gender, and age), and premium information such as the payment amount of the policy. The prediction result data includes, but is not limited to, liability measurements, claim ratios, persistency rates, surrender rates, and reinsurance rates, and is determined by the actual product attribute data, hypothesis data, and business data.
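As a concrete illustration of the four kinds of records that share one version number, the following Python sketch models them as simple data classes. All field names (for example policy_year or claim_ratio_factor) are hypothetical and chosen only for readability; they are not taken from the patent.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ProductAttributeData:
    product_type: str        # type of policy product
    region: str              # place where the insurance business operates
    sales_age_range: str     # e.g. "18-60"


@dataclass
class HypothesisData:
    # professional refined prediction factors
    life_table: str          # identifier of the life table used
    claim_ratio_factor: float
    reserve_factor: float


@dataclass
class BusinessData:
    policy_year: int
    effective_date: date
    customer_name: str
    customer_gender: str
    customer_age: int
    payment_amount: float
    source_data_version: str  # unique version number of this business data


@dataclass
class PredictionResultData:
    liability_measurement: float
    claim_ratio: float
    persistency_rate: float
    surrender_rate: float


@dataclass
class DataChain:
    # the four records below are all marked with the same version number
    version_number: str
    product_attributes: ProductAttributeData
    hypothesis: HypothesisData
    business: BusinessData
    prediction_result: PredictionResultData
```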
The historical data chain to be traced is stored in the preset storage database so that it can be monitored and managed. In an insurance management scenario, this spares business staff the trouble of manipulating the data manually with tools such as Excel, makes the system easy to interface and integrate with, and reduces labor cost.
The original version number is generated, after the original prediction result data has been computed, from the source data version number of the original business data and the run metadata of the original prediction result data, where the run metadata includes the run time and the operator. Because different prediction result data have different run times and operators, the generated version numbers differ, so a historical data chain can be uniquely marked.
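A minimal sketch of how such a version number could be composed from the source data version number and the run metadata. The patent does not specify an encoding, so the concatenation scheme below is an assumption.

```python
from datetime import datetime


def generate_version_number(source_data_version: str,
                            run_time: datetime,
                            operator: str) -> str:
    """Compose a data-chain version number from the source data version
    number and the run metadata (run time and operator).

    The "<source>-<timestamp>-<operator>" layout is only an assumed
    encoding; any scheme that keeps the three parts recoverable would do.
    """
    return f"{source_data_version}-{run_time:%Y%m%d%H%M%S}-{operator}"


# The same business data run at different times or by different operators
# yields different, unique version numbers.
v1 = generate_version_number("SRC001", datetime(2023, 6, 1, 15, 30), "analyst_a")
v2 = generate_version_number("SRC001", datetime(2023, 6, 2, 9, 0), "analyst_b")
print(v1)  # SRC001-20230601153000-analyst_a
print(v2)  # SRC001-20230602090000-analyst_b
```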
In step S200 of some embodiments, the configuration can be performed by uploading an Excel file on a web page or by entering data directly on the page. Obtaining the target product attribute data and the corresponding target hypothesis data through configuration avoids the situation where a corrected prediction model version has to be released every time a product attribute changes; the prediction models corresponding to the different hypothesis data are stored on the server, so the corresponding prediction calculation can be performed simply by configuring the target product attribute data and the corresponding target hypothesis data, which gives the system high flexibility.
In step S300 of some embodiments, after the target product attribute data, the target hypothesis data, and the target business data have been read from the storage database, the big-data real-time computing engine obtains the aggregate insurance base information from the target product attribute data and the target business data, determines the target prediction model based on the target hypothesis data, and then inputs the aggregate insurance base information into the target prediction model to predict the target prediction result data.
In the insurance management field, the hypothesis data are professional refined prediction factors such as life tables, claim-ratio factors, and reserve factors. These factors participate in the insurance prediction algorithms; different insurance prediction algorithms correspond to different refined prediction elements, which include claim ratio, surrender rate, profit margin, and the like, and different refined prediction elements correspond to different prediction models.
The big-data real-time computing engine uses the Spark Streaming real-time computing engine; computing with big data tooling addresses the timeliness problem of batch computation.
In step S400 of some embodiments, each piece of business data has a unique source data version number, a personalized identifier of that business data, which is used both to distinguish different business data and as base data for generating the version number of a data chain. Because the source data version number, the run time, and the operator together make the generated version number unique, different data chains can be distinguished. After each set of target prediction result data is computed, it must be marked with the target version number and a target data chain generated, so that the data chain can be reused next time to combine the same source data with different hypotheses, generating multiple sets of prediction results and improving the usability and comparability of the data. The method also supports re-running the data and restoring and backing up historical data, which makes the data easy to monitor and manage and improves working efficiency.
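Putting steps S100 to S400 together, the following sketch shows how a re-run under new hypotheses could produce and tag a new data chain. The storage helpers on the db object (load_data_chain, save_configuration, save_data_chain) and the run_prediction stub are hypothetical placeholders for the preset storage database and the big-data computation; this is an illustrative outline, not the patent's implementation.

```python
from datetime import datetime


def run_prediction(product_attributes: dict, hypothesis: dict, business_data: dict) -> dict:
    # Stand-in for the big-data real-time computation of step S300.
    raise NotImplementedError


def rerun_with_new_hypotheses(original_version: str,
                              target_product_attributes: dict,
                              target_hypothesis: dict,
                              operator: str,
                              db) -> str:
    """Re-run a prediction against the original business data with new
    product attributes and hypotheses, then tag a new target data chain."""
    # S100: trace the historical data chain by its original version number.
    chain = db.load_data_chain(original_version)
    business_data = chain["business_data"]            # reused as target business data
    source_version = business_data["source_data_version"]

    # S200: store the newly configured attributes and hypotheses.
    db.save_configuration(target_product_attributes, target_hypothesis)

    # S300: compute new prediction results from the same business data.
    run_time = datetime.now()
    result = run_prediction(target_product_attributes, target_hypothesis, business_data)

    # S400: generate the target version number (same assumed encoding as above)
    # and mark the whole target data chain with it.
    target_version = f"{source_version}-{run_time:%Y%m%d%H%M%S}-{operator}"
    db.save_data_chain(target_version, {
        "product_attributes": target_product_attributes,
        "hypothesis": target_hypothesis,
        "business_data": business_data,
        "prediction_result": result,
    })
    return target_version
```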
To illustrate the advantages of the insurance calculation prediction method of the embodiments more clearly, the version number generation process is described next with a specific embodiment.
Take as an example recalculating the claim ratio with modified product attribute data and corresponding hypothesis data, where the original claim ratio was computed at 15:30 on 1 June 2023 based on a three-layer model. First, the original business data of that 1 June 2023, 15:30 three-layer-model claim-ratio run is obtained from the preset storage database through the original version number input by the target object, and the source data version number of the original business data is determined. A new claim ratio is then calculated from the target product attribute data and target hypothesis data input by the target object together with the original business data. Finally, the source data version number is combined with the run-time information of the new three-layer-model claim-ratio calculation to generate a new target version number, which is marked on the target product attribute data, the target hypothesis data, the target business data, and the target prediction result data.
It should be noted that the target object can work out the original version number from the encoding scheme used to generate version numbers; it only needs to know which business data, which operator, and which run time are to be traced.
Referring to FIG. 2, in some embodiments, step S300 may include, but is not limited to, steps S310 and S320:
Step S310, reading the target product attribute data, the target hypothesis data, and the target business data from the storage database;
Step S320, computing the target prediction result data from the target product attribute data, the target hypothesis data, and the target business data with the big-data real-time computing engine.
In step S310 of some embodiments, the storage database is a repository of business transactions in the insurance domain, and may store various product attribute data, hypothesis data, and business data.
In step S320 of some embodiments, the target product attribute data, the target hypothesis data, and the target business data are first extracted from the storage database into the Hive library via OGG+Kafka, and the Spark Streaming real-time computing engine then computes the target prediction result data from the target product attribute data, the target hypothesis data, and the target business data. Computing with big data tooling addresses the timeliness problem of batch computation.
It should be noted that the Hive library is the centralized store of the data middle platform, and the Spark Streaming real-time computing engine is one of the big-data real-time computing engines; their structures and principles are well known to those skilled in the art and are not described in detail here. OGG+Kafka is likewise known to those skilled in the art and is not described in detail here.
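As an illustration of this extraction-and-computation path, the following PySpark sketch consumes business records that have been replicated into Kafka, joins them with configured hypotheses read from the Hive layer, and writes out results. It uses Spark Structured Streaming's Kafka source rather than the older DStream API, and the topic name, schema, table name, and scoring expression are all hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = (SparkSession.builder
         .appName("insurance-prediction-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Assumed shape of the business records that OGG replicates into Kafka.
schema = StructType([
    StructField("policy_id", StringType()),
    StructField("product_type", StringType()),
    StructField("source_data_version", StringType()),
    StructField("payment_amount", DoubleType()),
])

business_stream = (spark.readStream
                   .format("kafka")
                   .option("kafka.bootstrap.servers", "kafka:9092")   # hypothetical brokers
                   .option("subscribe", "business_data")              # hypothetical topic
                   .load()
                   .select(F.from_json(F.col("value").cast("string"), schema).alias("r"))
                   .select("r.*"))

# Configured target hypotheses previously landed in the Hive layer (table name is hypothetical).
hypotheses = spark.table("ods.target_hypothesis_data")

# Placeholder scoring expression standing in for the selected prediction model.
predictions = (business_stream
               .join(hypotheses, on="product_type", how="left")
               .withColumn("predicted_claim_ratio",
                           F.col("payment_amount") * F.col("claim_ratio_factor")))

query = (predictions.writeStream
         .format("parquet")
         .option("path", "/tmp/prediction_results")        # hypothetical sink
         .option("checkpointLocation", "/tmp/prediction_ckpt")
         .start())
query.awaitTermination()
```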
Referring to FIG. 3, in some embodiments, step S320 may include, but is not limited to, steps S321 to S323:
Step S321, obtaining, by the big-data real-time computing engine, the aggregate insurance base information from the target product attribute data and the target business data;
Step S322, determining a target prediction model based on the target hypothesis data;
Step S323, inputting the aggregate insurance base information into the target prediction model to predict the target prediction result data.
In steps S321 to S323 of some embodiments, in the insurance management field, the hypothesis data are professional refined prediction factors such as life tables, claim-ratio factors, and reserve factors. These factors participate in the insurance prediction algorithms; different insurance prediction algorithms correspond to different refined prediction elements, which include claim ratio, surrender rate, profit margin, and the like, and different refined prediction elements correspond to different prediction models.
The big-data real-time computing engine uses the Spark Streaming real-time computing engine; computing with big data tooling addresses the timeliness problem of batch computation.
It should be noted that the various prediction models in the insurance calculation field are known to those skilled in the art, and their specific structures are not described here.
Referring to FIG. 4, in some embodiments, step S322 may include, but is not limited to, steps S3221 to S3223:
Step S3221, determining the refined prediction factor type from the target hypothesis data;
Step S3222, determining the refined prediction element according to the refined prediction factor type, wherein the refined prediction elements include claim ratio, surrender rate, and profit margin;
Step S3223, determining the corresponding target prediction model based on the refined prediction element.
In steps S3221 to S3223 of some embodiments, in the insurance management field, the refined prediction factors are life tables, claim-ratio factors, reserve factors, and the like. These factors participate in the insurance prediction algorithms; different insurance prediction algorithms correspond to different refined prediction elements, which include claim ratio, surrender rate, profit margin, and the like, and different refined prediction elements correspond to different prediction models.
It should be noted that the above refined prediction factor types and refined prediction elements are only examples and should not be construed as limiting the present application.
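A minimal sketch of how the three steps could be wired as lookup tables: the factor type read from the hypothesis data selects a refined prediction element, which in turn selects a prediction model. The model classes and both mappings are illustrative stand-ins; the real correspondence is defined by the actuarial algorithms and is not spelled out here.

```python
from typing import Dict, Type


class PredictionModel:
    """Base stub standing in for the concrete prediction models."""
    def predict(self, aggregate_base_info: dict) -> dict:
        raise NotImplementedError


class ClaimRatioModel(PredictionModel): ...
class SurrenderRateModel(PredictionModel): ...
class ProfitMarginModel(PredictionModel): ...


# S3222: each refined prediction factor type maps to a refined prediction element.
FACTOR_TYPE_TO_ELEMENT: Dict[str, str] = {
    "claim_ratio_factor": "claim_ratio",
    "surrender_factor": "surrender_rate",     # hypothetical factor type
    "reserve_factor": "profit_margin",
}

# S3223: each refined prediction element maps to its prediction model.
ELEMENT_TO_MODEL: Dict[str, Type[PredictionModel]] = {
    "claim_ratio": ClaimRatioModel,
    "surrender_rate": SurrenderRateModel,
    "profit_margin": ProfitMarginModel,
}


def select_target_model(hypothesis_data: dict) -> PredictionModel:
    factor_type = hypothesis_data["factor_type"]    # S3221: read the factor type
    element = FACTOR_TYPE_TO_ELEMENT[factor_type]   # S3222: factor type -> element
    return ELEMENT_TO_MODEL[element]()              # S3223: element -> model instance
```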
Referring to FIG. 5, in some embodiments, the run metadata includes a run time and an operator; after the target data chain is generated, the insurance calculation prediction method may further include, but is not limited to, steps S510 and S520:
Step S510, generating a monitoring log based on the run time and operator of the target prediction result data, the source data version number, and the target version number;
Step S520, sending the monitoring log to the target object.
In steps S510 and S520 of some embodiments, because the data volume involved in insurance actuarial assessment and measurement is huge, the entire calculation process needs to be monitored in real time. The various pieces of log information in the version management process (namely the run time and operator of the target prediction result data, the source data version number, and the target version number) are captured and then statistically analysed to form a monitoring log, which is sent to the actuarial business staff by mail and SMS, so that any abnormality in the calculation process is discovered immediately. It should be noted that the monitoring log may also be sent to the actuarial business staff in other forms, which should not be construed as limiting the present application.
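A minimal sketch of assembling and e-mailing such a monitoring log. The field names, SMTP host, and addresses are placeholders, and the SMS channel mentioned above is omitted.

```python
import json
import logging
import smtplib
from datetime import datetime
from email.message import EmailMessage

logger = logging.getLogger("insurance_prediction_monitor")


def build_monitoring_log(run_time: datetime, operator: str,
                         source_version: str, target_version: str) -> str:
    """Assemble a monitoring log entry from the run metadata and the
    source / target version numbers (field names are illustrative)."""
    return json.dumps({
        "run_time": run_time.isoformat(),
        "operator": operator,
        "source_data_version": source_version,
        "target_version": target_version,
    })


def send_monitoring_log(entry: str, recipient: str,
                        smtp_host: str = "smtp.example.com") -> None:
    """Record the entry and send it to the business staff by e-mail."""
    logger.info(entry)
    msg = EmailMessage()
    msg["Subject"] = "Insurance prediction monitoring log"
    msg["From"] = "monitor@example.com"
    msg["To"] = recipient
    msg.set_content(entry)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```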
Referring to FIG. 6, in some embodiments, the insurance calculation prediction method further includes, but is not limited to, steps S610 and S620:
Step S610, generating graphical comparison data from the target business data, the target prediction result data, and the original prediction result data based on the ECharts framework;
Step S620, displaying the graphical comparison data on a target front-end page to visualize the variation and trend of the different prediction result data generated from the same business data under different hypothesis data.
In step S610 of some embodiments, the ECharts framework is a data visualization framework used to turn back-end data in JSON format into graphics, so that the target business data, the target prediction result data, and the original prediction result data can be displayed to the actuarial business staff dynamically and graphically in real time. It should be noted that the ECharts framework is known in the art, and its specific principles are not described here.
The foregoing steps mentioned that the target product attribute data, the target hypothesis data, and the target business data are extracted from the storage database via OGG+Kafka, and the target prediction result data is then computed from them by the Spark Streaming real-time computing engine. The Hive library is the centralized store of the data middle platform; because middle-platform data is not convenient to query and use directly, the computed data is synchronized into MongoDB via OGG+Kafka, and MongoDB serves as a preset query database from which the target business data, the target prediction result data, and the original prediction result data can be read to generate the graphical comparison data.
It should be noted that the structure and principles of MongoDB are well known to those skilled in the art and are not described in detail here.
In step S620 of some embodiments, displaying the graphical comparison data on the target front-end page visualizes the variation and trend of the different prediction result data generated from the same business data under different hypothesis data. This makes it convenient for the actuarial business staff to inspect the various prediction results and then adjust the hypothesis data and recalculate, improving the usability and comparability of the data.
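A minimal sketch of the back-end side of this visualization: reading two tagged result sets from MongoDB and building an ECharts option object for the front-end page. The connection string, database, collection, and field names are hypothetical, and a simple line comparison is shown for brevity although the description uses a tree diagram or table.

```python
import json
from pymongo import MongoClient

# Connection string, database, collection, and field names are hypothetical.
client = MongoClient("mongodb://localhost:27017")
results = client["prediction"]["result_data"]

original = list(results.find({"version_number": "V-ORIGINAL"}).sort("policy_year"))
target = list(results.find({"version_number": "V-TARGET"}).sort("policy_year"))

# ECharts option object: two series compare the prediction results produced
# from the same business data under the original and the target hypotheses.
echarts_option = {
    "tooltip": {"trigger": "axis"},
    "legend": {"data": ["original hypotheses", "target hypotheses"]},
    "xAxis": {"type": "category",
              "data": [doc["policy_year"] for doc in original]},
    "yAxis": {"type": "value", "name": "claim ratio"},
    "series": [
        {"name": "original hypotheses", "type": "line",
         "data": [doc["claim_ratio"] for doc in original]},
        {"name": "target hypotheses", "type": "line",
         "data": [doc["claim_ratio"] for doc in target]},
    ],
}

# The front-end page passes this JSON directly to ECharts: chart.setOption(option).
print(json.dumps(echarts_option, ensure_ascii=False))
```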
In some embodiments, step S610 may include, but is not limited to, step S611:
Step S611, generating a tree diagram or a table from the target business data, the target prediction result data, and the original prediction result data based on the ECharts framework;
step S620 may include, but is not limited to, step S621:
Step S621, displaying the tree diagram or the table on the target front-end page to visualize the variation and trend of the different prediction result data generated from the same business data under different hypothesis data.
In steps S611 and S621 of some embodiments, the tree diagram lets the user see at a glance the different prediction result data generated from the same business data under different hypothesis data, so that after comparison the hypothesis data can be adjusted and recalculated. This improves the usability and comparability of the data and also improves the user experience of the actuarial business staff.
The table can place the different prediction results of the same business data under different hypothesis data in the same row, or in the same column; either way, the different prediction result data generated from the same business data under different hypothesis data can be seen intuitively, so the hypothesis data can be adjusted and recalculated after comparison, which likewise improves the usability and comparability of the data and the user experience of the actuarial business staff.
It should be noted that the display form of the graphical comparison data may be chosen according to actual needs, as long as the variation and trend of the different prediction result data generated from the same business data under different hypothesis data can be displayed; this should not be construed as limiting the present application.
In addition, how the business staff in the insurance calculation field adjust the hypothesis data after comparing the different prediction result data is part of their professional work and is neither described in detail nor limited here.
In some embodiments, because the system requires the whole process to run automatically, automated operation and scheduling of the whole process is implemented based on Quartz: a task schedule is built, and an MQ message queue is used to produce and consume the tasks. It should be noted that automated operation and scheduling processes and principles are known to those skilled in the art, and the automated operation and scheduling of the insurance calculation prediction method of the embodiments is not described in detail here.
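Quartz is a Java scheduler, so the following sketch uses APScheduler and an in-process queue as stand-ins to illustrate the same pattern: a cron trigger produces prediction tasks and a worker consumes them from a message queue. The schedule and task contents are hypothetical.

```python
import queue
import threading
import time

from apscheduler.schedulers.background import BackgroundScheduler

task_queue: "queue.Queue[dict]" = queue.Queue()


def produce_prediction_task() -> None:
    # In the real system the task would identify the data chain and hypotheses to run.
    task_queue.put({"original_version": "V-ORIGINAL", "operator": "scheduler"})


def consume_prediction_tasks() -> None:
    while True:
        task = task_queue.get()
        # Stand-in for the full prediction run (steps S100 to S400).
        print(f"running prediction task: {task}")
        task_queue.task_done()


scheduler = BackgroundScheduler()
# Produce a prediction task every day at 01:00 (the schedule is hypothetical).
scheduler.add_job(produce_prediction_task, "cron", hour=1, minute=0)
scheduler.start()

worker = threading.Thread(target=consume_prediction_tasks, daemon=True)
worker.start()

time.sleep(5)  # keep the demo process alive briefly
```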
Referring to FIG. 7, the embodiments of the present application further provide an insurance calculation prediction device that can implement the above insurance calculation prediction method. The device includes a historical data chain acquisition module 710, an adjustment data acquisition module 720, a prediction result calculation module 730, and a version number marking module 740.
The historical data chain acquisition module 710 is configured to obtain, in response to an original version number input by a target object, a historical data chain to be traced from a preset storage database, wherein the historical data chain to be traced comprises original product attribute data, original hypothesis data, original business data, and original prediction result data, all marked with the original version number;
the adjustment data acquisition module 720 is configured to receive, in response to an input configuration operation of the target object, target product attribute data and corresponding target hypothesis data input by the target object, and to store them in the storage database;
the prediction result calculation module 730 is configured to calculate target prediction result data from the target product attribute data, the target hypothesis data, and target business data, wherein the target business data is the original business data and carries a source data version number;
the version number marking module 740 is configured to generate a target version number from the source data version number and the run metadata of the target prediction result data, and to mark the target product attribute data, the target hypothesis data, the target business data, and the target prediction result data with the target version number to generate a target data chain.
In the insurance business field, the product attribute data includes, but is not limited to, the type of policy product, the place of ownership, and the sales age range. The hypothesis data are professional refined prediction factors, such as life tables, claim-ratio factors, and reserve factors; these factors participate in the insurance prediction algorithms, and different insurance prediction algorithms correspond to different prediction models. The business data includes, but is not limited to, policy information (policy year, effective date), customer information (including name, gender, and age), and premium information such as the payment amount of the policy. The prediction result data includes, but is not limited to, liability measurements, claim ratios, persistency rates, surrender rates, and reinsurance rates, and is determined by the actual product attribute data, hypothesis data, and business data.
The historical data chain to be traced is stored in the preset storage database so that it can be monitored and managed. In an insurance management scenario, this spares business staff the trouble of manipulating the data manually with tools such as Excel, makes the system easy to interface and integrate with, and reduces labor cost.
The original version number is generated, after the original prediction result data has been computed, from the source data version number of the original business data and the run metadata of the original prediction result data, where the run metadata includes the run time and the operator. Because different prediction result data have different run times and operators, the generated version numbers differ, so a historical data chain can be uniquely marked.
The configuration can be performed by uploading an Excel file on a web page or by entering data directly on the page. Obtaining the target product attribute data and the corresponding target hypothesis data through configuration avoids the situation where a corrected prediction model version has to be released every time a product attribute changes; the prediction models corresponding to the different hypothesis data are stored on the server, so the corresponding prediction calculation can be performed simply by configuring the target product attribute data and the corresponding target hypothesis data, which gives the system high flexibility.
After the target product attribute data, the target hypothesis data, and the target business data have been read from the storage database, the big-data real-time computing engine obtains the aggregate insurance base information from the target product attribute data and the target business data, determines the target prediction model based on the target hypothesis data, and then inputs the aggregate insurance base information into the target prediction model to predict the target prediction result data.
In the insurance management field, the hypothesis data are professional refined prediction factors such as life tables, claim-ratio factors, and reserve factors. These factors participate in the insurance prediction algorithms; different insurance prediction algorithms correspond to different refined prediction elements, which include claim ratio, surrender rate, profit margin, and the like, and different refined prediction elements correspond to different prediction models.
The big-data real-time computing engine uses the Spark Streaming real-time computing engine; computing with big data tooling addresses the timeliness problem of batch computation.
Each piece of business data has a unique source data version number, a personalized identifier of that business data, which is used both to distinguish different business data and as base data for generating the version number of a data chain. Because the source data version number, the run time, and the operator together make the generated version number unique, different data chains can be distinguished. After each set of target prediction result data is computed, it must be marked with the target version number and a target data chain generated, so that the data chain can be reused next time to combine the same source data with different hypotheses, generating multiple sets of prediction results and improving the usability and comparability of the data. The method also supports re-running the data and restoring and backing up historical data, which makes the data easy to monitor and manage and improves working efficiency.
The insurance calculation prediction device provided by the embodiments of the present application can, by marking version numbers, combine a single set of source data with different hypotheses to generate multiple sets of prediction results, improving the usability and comparability of the data. It also supports re-running the data and restoring and backing up historical data, which makes the data easy to monitor and manage and improves working efficiency.
The embodiments of the present application also provide an electronic device, comprising a memory and a processor, wherein the memory stores a computer program and the processor implements the above insurance calculation prediction method when executing the computer program. The electronic device can be any intelligent terminal, including a tablet computer, a vehicle-mounted computer, and the like.
Referring to FIG. 8, FIG. 8 illustrates the hardware structure of an electronic device according to another embodiment. The electronic device includes:
a processor 801, which may be implemented by a general-purpose CPU (central processing unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute relevant programs to implement the technical solutions provided by the embodiments of the present application;
a memory 802, which may be implemented in the form of read-only memory (ROM), static storage, dynamic storage, or random access memory (RAM). The memory 802 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present disclosure are implemented in software or firmware, the relevant program code is stored in the memory 802 and called by the processor 801 to execute the insurance calculation prediction method of the embodiments of the present disclosure;
An input/output interface 803 for implementing information input and output;
the communication interface 804 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g., USB, network cable, etc.), or may implement communication in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
a bus 805 that transfers information between the various components of the device (e.g., the processor 801, the memory 802, the input/output interface 803, and the communication interface 804);
wherein the processor 801, the memory 802, the input/output interface 803, and the communication interface 804 are communicatively connected to each other inside the device through the bus 805.
The embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above insurance calculation prediction method.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
According to the insurance calculation prediction method and device, the electronic device, and the storage medium provided by the embodiments of the present application, the original product attribute data, original hypothesis data, original business data, and original prediction result data can be located through the original version number input by the target object. The modified target product attribute data and corresponding target hypothesis data input by the target object are then obtained, and new target prediction result data is recalculated from the original business data together with the modified target product attribute data and corresponding target hypothesis data. A new target version number is generated from the source data version number of the target business data and the run metadata of the target prediction result data, and a new target data chain is generated. In this way, by marking version numbers, a single set of source data can be combined with different hypotheses to generate multiple sets of prediction results, improving the usability and comparability of the data. The method also supports re-running the data and restoring and backing up historical data, which makes the data easy to monitor and manage and improves working efficiency.
The embodiments described in the embodiments of the present application are intended to describe the technical solutions of the embodiments of the present application more clearly and do not constitute a limitation on the technical solutions provided by the embodiments of the present application. As those skilled in the art will appreciate, with the evolution of technology and the emergence of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the technical solutions shown in the figures do not constitute limitations of the embodiments of the present application, and may include more or fewer steps than shown, may combine certain steps, or may use different steps.
The above-described apparatus embodiments are merely illustrative. The units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "And/or" is used to describe the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: only A exists, only B exists, and both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of" and similar expressions mean any combination of these items, including any combination of single items or plural items. For example, at least one of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; for instance, the above-described division of units is merely a logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between the components may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other various media capable of storing a program.
Preferred embodiments of the present application are described above with reference to the accompanying drawings, which does not thereby limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (10)

1. An insurance calculation prediction method, characterized in that the method comprises:
in response to an original version number input by a target object, acquiring a historical data chain to be traced from a preset storage database, wherein the historical data chain to be traced comprises original product attribute data, original hypothesis data, original service data and original prediction result data that are marked with the original version number;
in response to an input configuration operation of the target object, receiving target product attribute data and corresponding target hypothesis data input by the target object, and storing the target product attribute data and the corresponding target hypothesis data into the storage database;
calculating target prediction result data according to the target product attribute data, the target hypothesis data and target service data, wherein the target service data is the original service data, and the target service data has a source data version number;
and generating a target version number according to the source data version number and the running basic data of the target prediction result data, and marking the target version number on the target product attribute data, the target hypothesis data, the target service data and the target prediction result data respectively to generate a target data chain.
2. The insurance calculation prediction method according to claim 1, wherein the calculating target prediction result data according to the target product attribute data, the target hypothesis data and the target service data comprises:
reading the target product attribute data, the target hypothesis data and the target service data from the storage database;
and calculating the target prediction result data by using the target product attribute data, the target hypothesis data and the target service data through a big data real-time calculation engine.
3. The insurance calculation prediction method according to claim 2, wherein the calculating, by the big data real-time calculation engine, the target prediction result data using the target product attribute data, the target hypothesis data and the target service data comprises:
acquiring, by the big data real-time calculation engine, security total basic information according to the target product attribute data and the target service data;
determining a target prediction model based on the target hypothesis data;
and inputting the security total basic information into the target prediction model, and predicting to obtain the target prediction result data.
4. The insurance calculation prediction method according to claim 3, characterized in that the determining a target prediction model based on the target hypothesis data comprises:
determining a refined prediction factor type according to the target hypothesis data;
determining a refined prediction element according to the refined prediction factor type, wherein the refined prediction element comprises an odds ratio, a refund rate and a profit margin;
and determining the corresponding target prediction model based on the refined prediction element.
5. The insurance calculation prediction method according to claim 1, wherein the running basic data comprises a running time and running personnel; after the target data chain is generated, the insurance calculation prediction method further comprises:
generating a monitoring log based on the running time and the running personnel of the target prediction result data, as well as the source data version number and the target version number;
and sending the monitoring log to the target object.
6. The insurance calculation prediction method according to claim 1, characterized in that the insurance calculation prediction method further comprises:
generating graph comparison data according to the target service data, the target prediction result data and the original prediction result data based on an echart framework;
and displaying the graph comparison data on a target front-end page so as to visualize the variation and development trend of different prediction result data generated by the same service data corresponding to different hypothesis data.
7. The insurance calculation prediction method according to claim 6, wherein the generating graph comparison data according to the target service data, the target prediction result data and the original prediction result data based on the echart framework comprises:
generating a tree diagram or a table according to the target service data, the target prediction result data and the original prediction result data based on an echart framework;
and the displaying the graph comparison data on the target front-end page to visualize the variation and development trend of different prediction result data generated by the same service data corresponding to different hypothesis data comprises:
displaying the tree diagram or the table on the target front-end page to visualize the variation and development trend of different prediction result data generated by the same service data corresponding to different hypothesis data.
8. An insurance calculation prediction device, characterized in that the device comprises:
the historical data chain acquisition module is used for responding to an original version number input by a target object and acquiring a historical data chain to be traced from a preset storage database, wherein the historical data chain to be traced comprises original product attribute data, original hypothesis data, original service data and original prediction result data which are marked with the original version number;
the adjustment data acquisition module is used for responding to the input configuration operation of the target object, receiving target product attribute data and corresponding target hypothesis data input by the target object and storing the target product attribute data and the corresponding target hypothesis data into the storage database;
the prediction result calculation module is used for calculating target prediction result data according to the target product attribute data, the target hypothesis data and the target service data, wherein the target service data is the original service data, and the target service data has a source data version number;
and the version number marking module is used for generating a target version number according to the source data version number and the running basic data of the target prediction result data, and marking the target version number on the target product attribute data, the target hypothesis data, the target service data and the target prediction result data respectively to generate a target data chain.
9. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the insurance calculation prediction method of any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the insurance calculation prediction method of any one of claims 1 to 7.
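As an informal illustration of the calculation flow described in claims 2 to 4 (not part of the claims themselves), the following Python sketch shows how a target prediction model might be selected from the target hypothesis data and applied to base information assembled from product attribute data and service data. The factor types, the model registry and the placeholder models are assumptions introduced for readability; the application does not prescribe a concrete engine API or model implementation.

```python
from typing import Any, Callable, Dict, List

# Hypothetical mapping from a refined prediction factor type to prediction elements;
# the elements named in claim 4 are an odds ratio, a refund rate and a profit margin.
FACTOR_TO_ELEMENTS: Dict[str, List[str]] = {
    "claims_hypothesis": ["odds_ratio"],
    "lapse_hypothesis": ["refund_rate"],
    "profit_hypothesis": ["profit_margin"],
}

# Hypothetical model registry: one callable per prediction element (placeholder models).
MODEL_REGISTRY: Dict[str, Callable[[Dict[str, Any]], Dict[str, float]]] = {
    "odds_ratio": lambda base_info: {"odds_ratio": 0.0},
    "refund_rate": lambda base_info: {"refund_rate": 0.0},
    "profit_margin": lambda base_info: {"profit_margin": 0.0},
}


def determine_target_models(hypothesis_data: Dict[str, Any]) -> List[Callable[[Dict[str, Any]], Dict[str, float]]]:
    """Determine the prediction factor type from the hypothesis data, map it to
    prediction elements, then select the corresponding target prediction models."""
    factor_type = hypothesis_data["factor_type"]
    elements = FACTOR_TO_ELEMENTS[factor_type]
    return [MODEL_REGISTRY[element] for element in elements]


def predict(product_attrs: Dict[str, Any],
            service_data: Dict[str, Any],
            hypothesis_data: Dict[str, Any]) -> Dict[str, float]:
    """Assemble the base information from product attribute data and service data,
    then feed it to the selected target prediction model(s) to obtain result data."""
    base_info = {**product_attrs, **service_data}  # simplified stand-in for the base information
    results: Dict[str, float] = {}
    for model in determine_target_models(hypothesis_data):
        results.update(model(base_info))
    return results


# Example call with illustrative inputs.
result = predict({"product_code": "P001"}, {"policy_count": 1200},
                 {"factor_type": "lapse_hypothesis"})  # -> {"refund_rate": 0.0}
```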
CN202311365871.9A 2023-10-19 2023-10-19 Insurance calculation prediction method and device, electronic equipment and storage medium Pending CN117670070A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311365871.9A CN117670070A (en) 2023-10-19 2023-10-19 Insurance calculation prediction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311365871.9A CN117670070A (en) 2023-10-19 2023-10-19 Insurance calculation prediction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117670070A true CN117670070A (en) 2024-03-08

Family

ID=90074142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311365871.9A Pending CN117670070A (en) 2023-10-19 2023-10-19 Insurance calculation prediction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117670070A (en)

Similar Documents

Publication Publication Date Title
US10977293B2 (en) Technology incident management platform
Srinivasan et al. Biases in AI systems
Wang et al. Industrial big data analytics: challenges, methodologies, and applications
CN111506723B (en) Question-answer response method, device, equipment and storage medium
CN110008979A (en) Abnormal data prediction technique, device, electronic equipment and computer storage medium
CN113627797B (en) Method, device, computer equipment and storage medium for generating staff member portrait
CN113449046A (en) Model training method, system and related device based on enterprise knowledge graph
CA3170083C (en) Systems and methods for project and program management using artificial intelligence
CN113919336A (en) Article generation method and device based on deep learning and related equipment
Guo et al. A new user implicit requirements process method oriented to product design
CN117522538A (en) Bid information processing method, device, computer equipment and storage medium
CN109146306B (en) Enterprise management system
CN116225848A (en) Log monitoring method, device, equipment and medium
CN113674065B (en) Service contact-based service recommendation method and device, electronic equipment and medium
CN117670070A (en) Insurance calculation prediction method and device, electronic equipment and storage medium
CN114925674A (en) File compliance checking method and device, electronic equipment and storage medium
KR20230059364A (en) Public opinion poll system using language model and method thereof
CN111444170B (en) Automatic machine learning method and equipment based on predictive business scene
CN112561558A (en) Express time portrait generation method, generation device, equipment and storage medium
CN117149991A (en) Demand determining method and electronic equipment
Ma et al. A Prediction Method of Scientific Achievement based on Attention Characteristics
CN114399318A (en) Link processing method and device, computer equipment and storage medium
CN116364223A (en) Feature processing method, device, computer equipment and storage medium
CN115062994A (en) Object evaluation method, object evaluation device, electronic device, and storage medium
CN114549078A (en) Client behavior processing method and device based on time sequence and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination