CN110503507B - Insurance product data pushing method and system based on big data and computer equipment - Google Patents

Insurance product data pushing method and system based on big data and computer equipment

Info

Publication number
CN110503507B
CN110503507B (application CN201910606135.5A)
Authority
CN
China
Prior art keywords
data
client
model
target
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910606135.5A
Other languages
Chinese (zh)
Other versions
CN110503507A (en)
Inventor
杨春春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201910606135.5A priority Critical patent/CN110503507B/en
Publication of CN110503507A publication Critical patent/CN110503507A/en
Application granted granted Critical
Publication of CN110503507B publication Critical patent/CN110503507B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0631 Item recommendations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08 Insurance
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Engineering & Computer Science (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Technology Law (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

A big data based insurance product data pushing method, the method comprising: acquiring client information of an intended client; extracting a plurality of feature item data from the client information; obtaining an anomaly coefficient of the intended client according to the plurality of feature item data; judging whether the anomaly coefficient is larger than a preset value; if the anomaly coefficient is larger than the preset value, controlling the establishment of a session window between the client terminal and an expert seat terminal; if the anomaly coefficient is not larger than the preset value, obtaining a feature combination according to the plurality of feature item data; inputting the feature combination into a product pushing model to obtain the confidence of the intended client for each category label; generating an insurance product data form based on the confidence of each category label; and sending the insurance product data form to the client terminal. The scheme enables intelligent selection between machine service and manual service and effectively improves the accuracy of pushing insurance product data.

Description

Insurance product data pushing method and system based on big data and computer equipment
Technical Field
The present invention relates to the field of big data, and in particular, to a method, a system, a computer device, and a computer storage medium for pushing insurance product data based on big data.
Background
With people's increasing awareness of insurance, commercial insurance has become an important component of the current social security system. At this stage, the mainstream sales modes for commercial insurance are the offline face-to-face sales mode and the online telesales mode.
In the online telesales mode, robots or telesales agents communicate with intended clients and promote transactions on a network platform; to attract clients and close transactions, the telesales agents must accurately capture the real demands of target clients in a short time.
The current telesales mode suffers from at least the following drawbacks: (1) when a transaction is handled by the robot service, the client is not switched to manual service unless the client actively chooses it, and the existing telemarketing platform cannot intelligently select between robot service and manual service; (2) current robot services basically predict and push product data based on keywords entered by the intended client, with low prediction accuracy, resulting in a large number of invalid push events. To solve the above problems, a method for pushing insurance product data based on big data is proposed.
Disclosure of Invention
Accordingly, an objective of the embodiments of the present invention is to provide a method, a system, a computer device and a computer storage medium for pushing insurance product data based on big data, so as to solve the problem of precisely pushing the insurance product data to customers.
In order to achieve the above object, the embodiment of the present invention provides a method for pushing insurance product data based on big data, comprising the following steps:
receiving a client data form of an intended client uploaded by a client terminal, and acquiring client information of the intended client from the client data form;
extracting a plurality of feature item data according to the client information;
inputting the plurality of characteristic item data into a target isolated forest model to obtain an abnormal coefficient of the intention client;
judging whether the abnormal coefficient is larger than a preset value or not;
if the anomaly coefficient is larger than the preset value, controlling to establish a session window between the client terminal and the expert seat terminal;
if the anomaly coefficient is not greater than the preset value, inputting the plurality of feature item data into a target iterative decision tree model, and outputting a feature combination through the target iterative decision tree model;
inputting the feature combination into a product pushing model to obtain the confidence coefficient of the intention client and each category label;
generating an insurance product data form based on the confidence level of each category label; and
And sending the insurance product data form to the client terminal so that the client terminal can display the insurance product data form on a display unit of the client terminal.
Preferably, the step of extracting a plurality of feature item data according to the client information includes:
extracting a plurality of field information corresponding to a plurality of field names from the client information according to the plurality of field names of a plurality of preset fields; and
And storing the field information into a database form.
Preferably, the step of inputting the plurality of feature item data into a target isolated forest model to obtain an anomaly coefficient of the intended client includes:
traversing the plurality of characteristic item data in each isolated tree in the target isolated forest model to obtain the path length of the plurality of characteristic item data in each isolated tree;
according to the path length of the plurality of feature item data in each isolated tree, calculating to obtain an abnormal coefficient of the plurality of feature item data, wherein the abnormal coefficient has the following calculation formula:
s(x,n)=2^(-E(H(x))/c(n))
c(n)=2H(n-1)-(2(n-1)/n)
H(k)=ln(k)+ξ
wherein s(x,n) is the anomaly coefficient of the plurality of feature item data obtained through n isolated trees, E(H(x)) is the average height of the plurality of feature item data over the isolated trees, H(x) is the path length of the plurality of feature item data in each isolated tree, ξ is the Euler constant, and H(k) is the harmonic number.
Preferably, if the anomaly coefficient is greater than the preset value, controlling to establish a session window between the client terminal and the expert agent terminal, including:
defining an abnormal grade identification of the intention client according to the abnormal coefficient;
and distributing corresponding target expert seat terminals for the intended clients according to the abnormal grade identification, and establishing a session window between the client terminals and the target expert seat terminals.
Preferably, the step of inputting the plurality of feature item data into a target iterative decision tree model, and outputting a feature combination through the target iterative decision tree model includes:
and obtaining a feature combination corresponding to the client information of the intention client according to the position distribution of the plurality of feature item data in the leaf nodes of each decision tree in the target iterative decision tree model.
Preferably, the method further comprises the training step of the product pushing model:
presetting a plurality of characteristic items for determining product data pushing;
acquiring a plurality of sample data of a plurality of clients from a client information database according to the plurality of characteristic items;
configuring n sample data sets according to the plurality of sample data;
one or more sample data sets of the n sample data sets are used for training a random forest algorithm to obtain the target isolated forest model, and the target isolated forest model is used for determining whether the feature item data of an intended client are input into the target iterative decision tree model;
using one or more of the n sample data sets for training an iterative decision tree algorithm to obtain the target iterative decision tree model; and
Training a logistic regression algorithm according to the feature combination output by the target iterative decision tree model to obtain a product pushing model, wherein m class labels are preconfigured on the output of the product pushing model.
Preferably, the step of configuring n sample data sets according to the plurality of sample data in the training step of the product push model includes:
extracting M sample data corresponding to M clients from the plurality of sample data of the plurality of clients according to a preset rule, and forming a sample data set from the M sample data; and
The extraction step is repeated to obtain n sample data sets.
In order to achieve the above object, the embodiment of the present invention further provides an insurance product data pushing system based on big data, including:
the product pushing model is used for obtaining the confidence coefficient of the intention client and each category label;
the target isolated forest model is used for obtaining abnormal coefficients of the intention clients;
the acquisition module is used for receiving a client data form of an intention client uploaded by the client terminal and acquiring client information of the intention client from the client data form;
the extraction module is used for extracting a plurality of characteristic item data according to the client information;
the analysis module is used for inputting the plurality of characteristic item data into a target isolated forest model to obtain an abnormal coefficient of the intention client;
the judging module is used for judging whether the abnormal coefficient is larger than a preset value or not;
the session window establishing module is used for controlling to establish a session window between the client terminal and the expert seat terminal if the anomaly coefficient is larger than the preset value;
the feature output module is used for, if the anomaly coefficient is not greater than the preset value, inputting the plurality of feature item data into a target iterative decision tree model and outputting a feature combination through the target iterative decision tree model;
the prediction module is used for inputting the feature combination into a product pushing model to obtain the confidence coefficient of the intention client and each category label;
the generating module is used for generating an insurance product data form based on the confidence of each category label; and
And the sending module is used for sending the insurance product data form to the client terminal so that the client terminal can display the insurance product data form on a display unit of the client terminal.
To achieve the above object, an embodiment of the present invention further provides a computer device, a memory of the computer device, a processor, and a computer program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the insurance product data pushing method based on big data as described above.
To achieve the above object, an embodiment of the present invention also provides a computer-readable storage medium having stored therein a computer program executable by at least one processor to cause the at least one processor to perform the steps of the big data based insurance product data pushing method as described above.
Compared with the prior art, the big data based insurance product data pushing method, system, computer device, and computer-readable storage medium provided by the embodiments of the present invention obtain the client information of the intended client through the client terminal when the intended client wants to purchase insurance, extract a plurality of feature item data from the client information, obtain an anomaly coefficient from the feature item data, and analyze the anomaly coefficient: if the anomaly coefficient is larger than a preset value, a session window is established between the client terminal and an expert seat terminal; if the anomaly coefficient is not larger than the preset value, a feature combination is obtained from the feature item data, the insurance product form required by the intended client is obtained from the feature combination, and the insurance product form is sent to the client terminal for display. The embodiments of the present invention can therefore intelligently select between machine service and manual service and effectively improve the accuracy of pushing insurance product data.
Drawings
Fig. 1 is a flowchart of an embodiment of a method for pushing insurance product data based on big data.
Fig. 2 is a flowchart of step S102 in a first embodiment of the insurance product data pushing method based on big data.
Fig. 3 is a flowchart of step S104 in a first embodiment of the insurance product data pushing method based on big data.
Fig. 4 is a flowchart of a training step S116 of a product pushing model in a first embodiment of a big data based insurance product data pushing method according to the present invention.
Fig. 5 is a schematic diagram of a program module of a second embodiment of the insurance product data pushing system based on big data according to the present invention.
Fig. 6 is a schematic diagram of a hardware structure of a third embodiment of the computer device of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1, a flowchart of steps of a big data based insurance product data pushing method according to an embodiment of the invention is shown. It will be appreciated that the flow charts in the method embodiments are not intended to limit the order in which the steps are performed. The method comprises the following steps:
step S100, receiving a client data form of an intention client uploaded by a client terminal, and acquiring client information of the intention client from the client data form.
Illustratively, the customer information includes, but is not limited to, occupation, age, gender, marital status, working years, education level, annual income, real-estate rating, automobile rating, and the like.
And step S102, extracting a plurality of characteristic item data according to the client information.
Illustratively, the step S102 may further include:
step S102A, extracting a plurality of field information corresponding to a plurality of field names from the client information according to the plurality of field names of a plurality of preset fields; and
And step S102B, saving the field information into a database form.
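As a minimal sketch of steps S102A and S102B, assuming hypothetical preset field names and SQLite as the database (neither is specified in the text), the extraction and storage could look like this in Python:

```python
import sqlite3

# Hypothetical preset field names; the text does not fix a concrete list.
PRESET_FIELDS = ["occupation", "age", "gender", "marital_status",
                 "working_years", "education", "annual_income"]

def extract_feature_items(client_info: dict) -> dict:
    """Step S102A: keep only the field information matching the preset field names."""
    return {name: client_info.get(name) for name in PRESET_FIELDS}

def save_to_database_form(feature_items: dict, db_path: str = "clients.db") -> None:
    """Step S102B: save the extracted field information into a database form (table)."""
    conn = sqlite3.connect(db_path)
    cols = ", ".join(PRESET_FIELDS)
    placeholders = ", ".join("?" for _ in PRESET_FIELDS)
    conn.execute(f"CREATE TABLE IF NOT EXISTS feature_items ({cols})")
    conn.execute(f"INSERT INTO feature_items ({cols}) VALUES ({placeholders})",
                 [feature_items[name] for name in PRESET_FIELDS])
    conn.commit()
    conn.close()

client_info = {"occupation": "teacher", "age": 32, "gender": "F",
               "marital_status": "married", "working_years": 8,
               "education": "bachelor", "annual_income": 150000,
               "phone": "unused-field"}  # fields outside the preset list are dropped
save_to_database_form(extract_feature_items(client_info))
```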
And step S104, inputting the plurality of characteristic item data into a target isolated forest model to obtain the abnormal coefficients of the intention clients.
For example, as shown in fig. 3, the step S104 may further include:
step S104A, traversing the plurality of feature item data in each isolated tree in the target isolated forest model to obtain the path length of the plurality of feature item data in each isolated tree.
Specifically, the plurality of feature item data are traversed in each isolated tree, starting from the root node of the tree. According to the feature item and the feature item segmentation value selected when the isolated tree was constructed, if the value of the corresponding feature item is smaller than the segmentation value, the data traverse to the left subtree; otherwise, they traverse to the right subtree, until a leaf node is reached. The number of edges passed in this process is recorded to obtain the path length in that single isolated tree.
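A minimal sketch of this traversal, assuming each isolated tree is stored as nested nodes holding a split feature, a segmentation value and left/right children (a data layout the text does not prescribe):

```python
class IsoNode:
    """One node of an isolated tree: either a split node or a leaf."""
    def __init__(self, feature=None, split_value=None, left=None, right=None):
        self.feature, self.split_value = feature, split_value
        self.left, self.right = left, right

    def is_leaf(self):
        return self.left is None and self.right is None

def path_length(x: dict, node: IsoNode, depth: int = 0) -> int:
    """Walk x from the root: go left when the feature value is below the
    segmentation value, otherwise right, counting edges until a leaf is reached."""
    if node.is_leaf():
        return depth
    branch = node.left if x[node.feature] < node.split_value else node.right
    return path_length(x, branch, depth + 1)

# Tiny hand-built tree for illustration (splits on age, then annual_income).
tree = IsoNode("age", 30,
               left=IsoNode(),  # leaf
               right=IsoNode("annual_income", 200000, IsoNode(), IsoNode()))
print(path_length({"age": 32, "annual_income": 150000}, tree))  # -> 2 edges
```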
Step S104B, according to the path length of the plurality of feature item data in each isolated tree, calculating to obtain the abnormal coefficients of the plurality of feature item data.
The calculation formula of the anomaly coefficient is as follows:
s(x,n)=2^(-E(H(x))/c(n))
c(n)=2H(n-1)-(2(n-1)/n)
H(k)=ln(k)+ξ
wherein s(x,n) is the anomaly coefficient of the plurality of feature item data obtained through n isolated trees, E(H(x)) is the average height of the plurality of feature item data over the isolated trees, H(x) is the path length of the plurality of feature item data in each isolated tree, ξ is the Euler constant, and H(k) is the harmonic number.
The closer s(x, n) is to 1, the higher the probability that the feature item information is an outlier; if s(x, n) is less than 0.5, the feature item information is normal data.
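For illustration, the scoring rule above can be written out directly; the sketch below assumes the path lengths from the individual isolated trees have already been collected, takes n as the number of isolated trees as defined in the text, and uses 0.5772156649 for the Euler constant ξ:

```python
import math

EULER = 0.5772156649  # Euler constant ξ in the formula above

def c(n: int) -> float:
    """Normalisation term c(n) = 2H(n-1) - 2(n-1)/n with H(k) = ln(k) + ξ (n >= 2)."""
    return 2.0 * (math.log(n - 1) + EULER) - 2.0 * (n - 1) / n

def anomaly_coefficient(path_lengths: list) -> float:
    """s(x, n) = 2^(-E(H(x)) / c(n)); n is taken as the number of isolated trees,
    following the definition given in the text above."""
    n = len(path_lengths)
    e_h = sum(path_lengths) / n   # E(H(x)): average path length over the trees
    return 2.0 ** (-e_h / c(n))

short_paths = [2, 3, 2, 3] * 25      # 100 trees, short average path (2.5)
long_paths = [12, 13, 11, 12] * 25   # 100 trees, long average path (12.0)
print(anomaly_coefficient(short_paths))  # ~0.81, leaning toward an outlier
print(anomaly_coefficient(long_paths))   # ~0.37, below 0.5, i.e. normal data
```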
Step S106, judging whether the abnormal coefficient is larger than a preset value.
And S108, if the anomaly coefficient is larger than the preset value, controlling to establish a session window between the client terminal and the expert seat terminal.
Specifically, according to the anomaly coefficient, defining an anomaly class identifier of the intention client; and distributing corresponding target expert seat terminals for the intended clients according to the abnormal grade identification, and establishing a session window between the client terminals and the target expert seat terminals.
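A rough sketch of this routing step; the grade boundaries and the agent pool below are invented for illustration and are not taken from the text:

```python
# Hypothetical anomaly-grade boundaries; the text does not specify concrete values.
def anomaly_grade(anomaly_coefficient: float) -> str:
    if anomaly_coefficient > 0.8:
        return "high"
    if anomaly_coefficient > 0.6:
        return "medium"
    return "low"

# Assumed mapping from anomaly grade to a pool of expert seat terminals.
EXPERT_SEATS = {"high": ["senior_agent_01"],
                "medium": ["agent_07", "agent_12"],
                "low": ["agent_20", "agent_21", "agent_22"]}

def assign_expert_seat(anomaly_coefficient: float) -> str:
    """Pick a target expert seat terminal for the client; a session window between
    the client terminal and this terminal would then be established."""
    grade = anomaly_grade(anomaly_coefficient)
    return EXPERT_SEATS[grade][0]  # simplest policy: first terminal in the pool

print(assign_expert_seat(0.85))  # -> senior_agent_01
```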
If the anomaly coefficient is not greater than the preset value, step S110 is entered.
Step S110, inputting the feature item data into a target iterative decision tree model, and outputting feature combinations through the target iterative decision tree model.
Specifically, each decision tree in the GBDT model includes a root node, intermediate nodes, and leaf nodes. The root node and each intermediate node have a corresponding risk feature item (e.g., age) and risk feature value (e.g., age 30): if the client age of a sample is greater than 30, the sample is assigned to the right child node of that node; otherwise, the sample is assigned to the left child node, and the lower nodes behave similarly until the client information of the intended client falls into a leaf node. The feature combination corresponding to the client information of the intended client is then obtained according to the position distribution of the client information in the leaf nodes of each decision tree.
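One common way to realise such a leaf-position feature combination is sketched below with scikit-learn, whose use here is an assumption rather than part of the described method: the gradient boosting model's apply method returns the leaf index reached in every tree, and one-hot encoding those indices yields the feature combination.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in for the feature item data of historical clients (9 feature items).
X, y = make_classification(n_samples=500, n_features=9, random_state=0)

gbdt = GradientBoostingClassifier(n_estimators=30, max_depth=3, random_state=0)
gbdt.fit(X, y)

# apply() returns, for every sample, the index of the leaf it falls into in each tree.
leaf_indices = gbdt.apply(X)[:, :, 0]            # shape: (n_samples, n_trees)
encoder = OneHotEncoder(handle_unknown="ignore")
feature_combinations = encoder.fit_transform(leaf_indices)  # sparse 0/1 matrix

# Feature combination for one intended client (a single row of feature item data).
client_features = X[:1]
client_combination = encoder.transform(gbdt.apply(client_features)[:, :, 0])
print(client_combination.shape)
```

In practice the same fitted encoder has to be reused at prediction time so that the one-hot layout matches the layout seen during training.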
And step S112, inputting the feature combination into a product pushing model to obtain the confidence of the intention clients and the labels of all categories.
Each category label may correspond to an insurance item or a combination of insurance items.
Step S114, generating an insurance product data form based on the confidence level of each category label.
Illustratively, the insurance product data corresponding to the category label with the highest confidence are imported into a database, and the insurance product data form is generated.
Alternatively, the category labels may belong to a plurality of major classes; the insurance product data with the highest confidence in each major class are imported into the database to generate the insurance product data form.
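A small sketch of the two form-generation variants just described; the label-to-product mapping and the grouping into major classes are invented for illustration:

```python
# Hypothetical category labels with the product data and major class they map to.
PRODUCTS = {
    "car_basic":    {"major_class": "auto",   "product": "Basic auto policy"},
    "car_full":     {"major_class": "auto",   "product": "Comprehensive auto policy"},
    "health_basic": {"major_class": "health", "product": "Basic health policy"},
}

def form_top_label(confidences: dict) -> list:
    """Variant 1: only the product data of the single highest-confidence label."""
    best = max(confidences, key=confidences.get)
    return [PRODUCTS[best]["product"]]

def form_per_major_class(confidences: dict) -> list:
    """Variant 2: the highest-confidence product within each major class."""
    best_by_class = {}
    for label, conf in confidences.items():
        cls = PRODUCTS[label]["major_class"]
        if cls not in best_by_class or conf > confidences[best_by_class[cls]]:
            best_by_class[cls] = label
    return [PRODUCTS[label]["product"] for label in best_by_class.values()]

confidences = {"car_basic": 0.42, "car_full": 0.75, "health_basic": 0.61}
print(form_top_label(confidences))        # ['Comprehensive auto policy']
print(form_per_major_class(confidences))  # one product per major class
```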
And step S116, the insurance product data form is sent to the client terminal so that the client terminal can display the insurance product data form on a display unit of the client terminal.
If the intended client consults through the network, the insurance product data form generated in the database is sent to the client terminal through the network for the intended client to view; if the intended client consults by telephone, the insurance product data form generated in the database is sent to the client terminal by short message (SMS) for the intended client to view.
Optionally, as shown in fig. 4, the embodiment of the present invention further includes a training step S116 of the product pushing model, where the training step may specifically be as follows:
in step S116A, a plurality of feature items for determining the pushing of the product data are preset.
Illustratively, the plurality of feature items include, but are not limited to, occupation, age, gender, marital status, working years, education level, annual income, real-estate rating, automobile rating, and the like.
Step S116B, obtaining a plurality of sample data of a plurality of clients from the client information database according to the plurality of characteristic items.
Illustratively, the plurality of sample data include occupation information, age information, gender information, marital information, working-years information, education information, annual income information, real-estate information, automobile information, and the like of the corresponding clients. Each item of information in the sample data is loaded into the corresponding field of a data table; for example, occupation information is loaded into the occupation field, and age information is loaded into the age field.
Step S116C configures n sample data sets from the plurality of sample data.
Illustratively, the step S116C further includes:
step S116C01, extracting M sample data corresponding to M clients from the plurality of sample data of the plurality of clients according to a preset rule, and forming a sample data set from the M sample data; and
Step S116C02, step S116C01 is repeatedly performed to obtain n sample data sets.
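A brief sketch of steps S116C01 and S116C02, assuming the preset rule is simple random sampling without replacement (the text does not fix the rule):

```python
import random

def build_sample_sets(sample_data: list, M: int, n: int, seed: int = 0) -> list:
    """S116C01/S116C02: repeat n times: draw M client records and form one sample set."""
    rng = random.Random(seed)
    return [rng.sample(sample_data, M) for _ in range(n)]

# Toy client records standing in for rows of the client information database.
all_clients = [{"client_id": i, "age": 20 + i % 40} for i in range(1000)]
sample_sets = build_sample_sets(all_clients, M=256, n=10)
print(len(sample_sets), len(sample_sets[0]))  # -> 10 256
```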
Step S116D, one or more sample data sets of the n sample data sets are used for training a random forest algorithm to obtain the target isolated forest model, and the target isolated forest model is used for determining whether the feature item data of an intended client are input into the target iterative decision tree model.
Step S116E, using one or more of the n sample data sets to train an iterative decision tree algorithm to obtain the target iterative decision tree model.
The iterative decision tree model may be a GBDT (Gradient Boosting Decision Tree) model, which is based on an iterative decision tree algorithm consisting of a plurality of decision trees. Its structure is as follows: each tree fits the residual of the preceding K trees, and each tree depends on the result of the previous tree, so a certain order must be maintained among the decision trees. In this way, decision classification is carried out on the plurality of sample data sets through the plurality of decision trees in the GBDT model, so that the association relationships among the feature items in the plurality of sample data sets can be found. It is easy to understand that each iterative decision tree model comprises a plurality of trained decision trees.
Each decision tree includes a root node, an intermediate node, and a leaf node. The root node and each intermediate node have a corresponding risk feature term (e.g., age) and risk feature value (e.g., age 30), if the customer age of a sample is greater than 30 years old, the sample is assigned to the right child node of the node, otherwise the sample is assigned to the left child node, and the lower nodes are similarly assigned until the sample falls to a leaf node.
And step S116F, training a logistic regression algorithm according to the feature combination output by the target iterative decision tree model to obtain a product pushing model, wherein m class labels are preconfigured on the output of the product pushing model.
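Putting steps S116E and S116F together, a minimal training sketch for the GBDT-plus-logistic-regression product pushing model could look as follows; scikit-learn, the toy data and the chosen hyperparameters are assumptions, not details from the text:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Toy sample data: 9 feature items, m = 3 preconfigured category labels.
X, y = make_classification(n_samples=1000, n_features=9, n_informative=6,
                           n_classes=3, random_state=0)

# Step S116E: train the iterative decision tree (GBDT) model.
gbdt = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0)
gbdt.fit(X, y)

# Turn the leaf positions of every tree into one-hot feature combinations.
encoder = OneHotEncoder(handle_unknown="ignore")
leaf_features = encoder.fit_transform(gbdt.apply(X).reshape(X.shape[0], -1))

# Step S116F: train the logistic regression product pushing model on the combinations.
push_model = LogisticRegression(max_iter=1000)
push_model.fit(leaf_features, y)

# Confidence of one client for each of the m category labels.
client = X[:1]
client_leaves = encoder.transform(gbdt.apply(client).reshape(1, -1))
print(push_model.predict_proba(client_leaves))  # one confidence per category label
```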
Optionally, step S116 may further include a step S116G for training the target isolated forest model, specifically as follows:
step S116G01, randomly selecting one of the feature items (e.g. age) and one of the feature item values, and performing a recursive algorithm on the risk array X until: the isolated tree reaches the set height limit, so that a single isolated tree is established; an orphan tree is a binary tree with zero or two child nodes per node of the tree.
Step S116G02, repeating the step S116G01 to obtain a plurality of isolated trees, and forming an isolated forest by the isolated trees to obtain a trained target isolated forest model.
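A compact sketch of steps S116G01 and S116G02; the height limit, the pure-Python node layout and the positional (index-based) feature access are assumptions made for illustration:

```python
import random

class IsoNode:
    def __init__(self, feature=None, split_value=None, left=None, right=None):
        self.feature, self.split_value = feature, split_value
        self.left, self.right = left, right

def build_isolated_tree(X, height, height_limit, rng):
    """S116G01: randomly pick a feature item and a split value, recurse until the
    height limit is reached or the node cannot be split further."""
    if height >= height_limit or len(X) <= 1:
        return IsoNode()  # leaf node
    feature = rng.randrange(len(X[0]))
    values = [row[feature] for row in X]
    lo, hi = min(values), max(values)
    if lo == hi:
        return IsoNode()  # cannot split on a constant feature
    split = rng.uniform(lo, hi)
    left = [row for row in X if row[feature] < split]
    right = [row for row in X if row[feature] >= split]
    return IsoNode(feature, split,
                   build_isolated_tree(left, height + 1, height_limit, rng),
                   build_isolated_tree(right, height + 1, height_limit, rng))

def build_isolated_forest(X, n_trees=100, height_limit=8, seed=0):
    """S116G02: repeat tree construction to obtain the target isolated forest model."""
    rng = random.Random(seed)
    return [build_isolated_tree(X, 0, height_limit, rng) for _ in range(n_trees)]

X = [[random.random(), random.random()] for _ in range(200)]
forest = build_isolated_forest(X)
print(len(forest))  # -> 100 isolated trees
```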
Example two
With continued reference to fig. 5, a schematic diagram of the program modules of a second embodiment of the big data based insurance product data pushing system 20 of the present invention is shown. In this embodiment, the big data based insurance product data pushing system 20 may include or be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors, so as to complete the present invention and implement the above big data based insurance product data pushing method. A program module in the embodiments of the present invention refers to a series of computer program instruction segments capable of performing particular functions, and is better suited than the program itself to describe the execution of the big data based insurance product data pushing system 20 in the storage medium. The following description specifically introduces the functions of each program module of this embodiment:
and the acquisition module 200 is used for receiving the client data form of the intention client uploaded by the client terminal and acquiring the client information of the intention client from the client data form.
An extracting module 202 is configured to extract a plurality of feature item data according to the client information. Illustratively, the extraction module 202 is further configured to: extracting a plurality of field information corresponding to a plurality of field names from the client information according to the plurality of field names of a plurality of preset fields; and saving the plurality of field information to a database form.
And the analysis module 204 is used for inputting the plurality of characteristic item data into a target isolated forest model to obtain the anomaly coefficient of the intention client. The analysis module 204 is further configured to: traversing the plurality of characteristic item data in each isolated tree in the target isolated forest model to obtain the path length of the plurality of characteristic item data in each isolated tree; and calculating the abnormal coefficients of the plurality of characteristic item data according to the path lengths of the plurality of characteristic item data in each isolated tree. The calculation formula of the anomaly coefficient is as follows:
s(x,n)=2^(-E(H(x))/c(n))
c(n)=2H(n-1)-(2(n-1)/n)
H(k)=ln(k)+ξ
wherein s(x,n) is the anomaly coefficient of the plurality of feature item data obtained through n isolated trees, E(H(x)) is the average height of the plurality of feature item data over the isolated trees, H(x) is the path length of the plurality of feature item data in each isolated tree, ξ is the Euler constant, and H(k) is the harmonic number.
A judging module 206, configured to judge whether the anomaly coefficient is greater than a preset value.
Illustratively, the determining module 206 is further configured to: and if the anomaly coefficient is larger than a preset value, guiding the intention client to an expert seat.
A session window establishing module 208, configured to control to establish a session window between the client terminal and the expert agent terminal if the anomaly coefficient is greater than the preset value.
Specifically, according to the anomaly coefficient, defining an anomaly class identifier of the intention client; and distributing corresponding target expert seat terminals for the intended clients according to the abnormal grade identification, and establishing a session window between the client terminals and the target expert seat terminals.
And the feature output module 210 is configured to, if the anomaly coefficient is not greater than the preset value, input the plurality of feature item data into a target iterative decision tree model and output a feature combination through the target iterative decision tree model. The feature output module 210 is further configured to: obtain a feature combination corresponding to the client information of the intended client according to the position distribution of the plurality of feature item data in the leaf nodes of each decision tree in the target iterative decision tree model.
And the prediction module 212 is used for inputting the feature combination into a product push model to obtain the confidence level of the intention clients and the labels of all categories.
The generating module 214 is configured to generate an insurance product data form based on the confidence level of each category label.
And a sending module 216, configured to send the insurance product data form to the client terminal, so that the client terminal displays the insurance product data form on a display unit of the client terminal.
Illustratively, the big data based insurance product data pushing system 20 further includes a product pushing model training module 218 for:
presetting a plurality of feature items for determining product data pushing; acquiring a plurality of sample data of a plurality of clients from a client information database according to the plurality of feature items; configuring n sample data sets according to the plurality of sample data; using one or more sample data sets of the n sample data sets for training a random forest algorithm to obtain the target isolated forest model, the target isolated forest model being used for determining whether the feature item data of an intended client are input into the target iterative decision tree model; using one or more of the n sample data sets for training an iterative decision tree algorithm to obtain the target iterative decision tree model; and training a logistic regression algorithm according to the feature combination output by the target iterative decision tree model to obtain the product pushing model, wherein m category labels are preconfigured on the output of the product pushing model.
Wherein configuring n sample data sets from the plurality of sample data further comprises:
extracting M sample data corresponding to M clients from the plurality of sample data of the plurality of clients according to a preset rule, and forming a sample data set from the M sample data; and repeatedly executing the extraction step to obtain n sample data sets.
Example III
Referring to fig. 6, a hardware architecture diagram of a computer device according to a third embodiment of the present invention is shown. In this embodiment, the computer device 2 is a device capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction. The computer device 2 may be a rack server, a blade server, a tower server, or a rack server (including a stand-alone server, or a server cluster made up of multiple servers), or the like. As shown, the computer device 2 includes, but is not limited to, at least a memory 21, a processor 22, a network interface 23, and an insurance product data pushing system 20 that are communicatively connected to each other via a system bus. Wherein:
in this embodiment, the memory 21 includes at least one type of computer-readable storage medium, including flash memory, a hard disk, a multimedia card, a card memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 21 may be an internal storage unit of the computer device 2, such as a hard disk or a memory of the computer device 2. In other embodiments, the memory 21 may also be an external storage device of the computer device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the computer device 2. Of course, the memory 21 may also include both an internal storage unit of the computer device 2 and an external storage device. In this embodiment, the memory 21 is generally used to store the operating system and various application software installed on the computer device 2, such as the program code of the insurance product data pushing system 20 based on big data in the second embodiment. Further, the memory 21 may be used to temporarily store various types of data that have been output or are to be output.
The processor 22 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 22 is typically used to control the overall operation of the computer device 2. In this embodiment, the processor 22 is configured to execute the program code stored in the memory 21 or process data, for example, execute the insurance product data pushing system 20 based on big data, so as to implement the insurance product data pushing method based on big data of the first embodiment.
The network interface 23 may comprise a wireless network interface or a wired network interface, which network interface 23 is typically used for establishing a communication connection between the computer apparatus 2 and other electronic devices. For example, the network interface 23 is used to connect the computer device 2 to an external terminal through a network, establish a data transmission channel and a communication connection between the computer device 2 and the external terminal, and the like. The network may be an Intranet (Intranet), the Internet (Internet), a global system for mobile communications (Global System of Mobile communication, GSM), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), a 4G network, a 5G network, bluetooth (Bluetooth), wi-Fi, or other wireless or wired network.
It is noted that fig. 6 only shows a computer device 2 having components 20-23, but it is understood that not all of the illustrated components are required to be implemented, and that more or fewer components may alternatively be implemented.
In this embodiment, the insurance product data pushing system 20 based on big data stored in the memory 21 can also be divided into one or more program modules, which are stored in the memory 21 and executed by one or more processors (the processor 22 in this embodiment) to complete the present invention.
For example, fig. 6 shows a schematic diagram of the program modules implementing the second embodiment of the big data based insurance product data pushing system 20, where the big data based insurance product data pushing system 20 may be divided into an obtaining module 200, an extracting module 202, an analyzing module 204, a judging module 206, a session window establishing module 208, a feature outputting module 210, a predicting module 212, a generating module 214, a sending module 216, and a product pushing model training module 218. A program module in the present invention is understood to mean a series of computer program instruction segments capable of performing a specific function, and is better suited than the program itself to describe the execution of the big data based insurance product data pushing system 20 in the computer device 2. The specific functions of the program modules 200-218 are described in detail in the second embodiment and are not repeated here.
Example IV
The present embodiment also provides a computer-readable storage medium such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application store, etc., on which a computer program is stored, which when executed by a processor, performs the corresponding functions. The computer readable storage medium of the present embodiment is used for storing the insurance product data pushing system 20 based on big data, and when executed by the processor, implements the insurance product data pushing method based on big data of the first embodiment.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and may of course also be implemented by means of hardware, but in many cases the former is the preferred implementation.
The foregoing description covers only preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structure or equivalent process transformation made using the content of this specification and the drawings, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the invention.

Claims (8)

1. A method for pushing insurance product data based on big data, the method comprising:
receiving a client data form of an intended client uploaded by a client terminal, and acquiring client information of the intended client from the client data form;
extracting a plurality of feature item data according to the client information;
inputting the plurality of characteristic item data into a target isolated forest model to obtain an abnormal coefficient of the intention client;
judging whether the abnormal coefficient is larger than a preset value or not;
if the anomaly coefficient is larger than the preset value, controlling to establish a session window between the client terminal and the expert seat terminal;
if the anomaly coefficient is not greater than the preset value, inputting the plurality of feature item data into a target iterative decision tree model, and outputting a feature combination through the target iterative decision tree model;
inputting the feature combination into a product pushing model to obtain the confidence coefficient of the intention client and each category label;
generating an insurance product data form based on the confidence level of each category label; and
Transmitting the insurance product data form to the client terminal so that the client terminal displays the insurance product data form on a display unit of the client terminal;
the step of inputting the plurality of characteristic item data into a target isolated forest model to obtain the abnormal coefficient of the intention client comprises the following steps:
traversing the plurality of characteristic item data in each isolated tree in the target isolated forest model to obtain the path length of the plurality of characteristic item data in each isolated tree;
according to the path length of the plurality of feature item data in each isolated tree, calculating to obtain an abnormal coefficient of the plurality of feature item data, wherein the abnormal coefficient has the following calculation formula:
s(x,n)=2^(-E(H(x))/c(n))
c(n)=2H(n-1)-(2(n-1)/n)
H(n-1)=ln(n-1)+ξ
wherein s(x,n) is the anomaly coefficient of the plurality of feature item data obtained through n isolated trees, E(H(x)) is the average height of the plurality of feature item data over the isolated trees, H(x) is the path length of the plurality of feature item data in each isolated tree, ξ is the Euler constant, H(n-1) is the harmonic number, and ln(n-1) is the natural logarithm of n-1;
the method further comprises the training step of the product pushing model:
presetting a plurality of characteristic items for determining product data pushing;
acquiring a plurality of sample data of a plurality of clients from a client information database according to the plurality of characteristic items;
configuring n sample data sets according to the plurality of sample data;
one or more sample data sets in the n sample data sets are used for training a random forest algorithm to obtain the target isolated forest model, and the target isolated forest model is used for determining whether characteristic item data of an intention client are input into a target iterative decision tree model or not;
using one or more of the n sample data sets for training an iterative decision tree algorithm to obtain the target iterative decision tree model; and
Training a logistic regression algorithm according to the feature combination output by the target iterative decision tree model to obtain a product pushing model, wherein m class labels are preconfigured on the output of the product pushing model.
2. The big data based insurance product data pushing method according to claim 1, wherein the step of extracting a plurality of feature item data according to the customer information includes:
extracting a plurality of field information corresponding to a plurality of field names from the client information according to the plurality of field names of a plurality of preset fields; and
And storing the field information into a database form.
3. The big data based insurance product data pushing method according to claim 1, wherein said controlling the establishment of a session window between said client terminal and expert agent terminal if said anomaly coefficient is greater than said preset value includes:
defining an abnormal grade identification of the intention client according to the abnormal coefficient;
and distributing corresponding target expert seat terminals for the intended clients according to the abnormal grade identification, and establishing a session window between the client terminals and the target expert seat terminals.
4. The big data based insurance product data pushing method according to claim 1, wherein the step of inputting the plurality of feature item data into a target iterative decision tree model, outputting feature combinations through the target iterative decision tree model includes:
and obtaining a feature combination corresponding to the client information of the intention client according to the position distribution of the plurality of feature item data in the leaf nodes of each decision tree in the target iterative decision tree model.
5. The big data based insurance product data pushing method according to any of claims 1 to 4, wherein the step of configuring n sample data sets according to the plurality of sample data in the training of the product pushing model includes:
extracting M sample data corresponding to M clients from the plurality of sample data of the plurality of clients according to a preset rule, and forming a sample data set from the M sample data; and
The extraction step is repeated to obtain n sample data sets.
6. An insurance product pushing system, said system comprising:
the acquisition module is used for receiving a client data form of an intention client uploaded by the client terminal and acquiring client information of the intention client from the client data form;
the extraction module is used for extracting a plurality of characteristic item data according to the client information;
the analysis module is used for inputting the plurality of characteristic item data into a target isolated forest model to obtain an abnormal coefficient of the intention client;
the judging module is used for judging whether the abnormal coefficient is larger than a preset value or not;
the session window establishing module is used for controlling to establish a session window between the client terminal and the expert seat terminal if the anomaly coefficient is larger than the preset value;
the feature output module is used for inputting the plurality of feature item data into a target iterative decision tree model according to the plurality of feature item data and outputting feature combinations through the target iterative decision tree model if the anomaly coefficient is not larger than the preset value;
the prediction module is used for inputting the feature combination into a product pushing model to obtain the confidence coefficient of the intention client and each category label;
the generation module is used for generating an insurance product data form based on the confidence degree of each category label; and
A transmitting module, configured to transmit the insurance product data form to the client terminal, so that the client terminal displays the insurance product data form on a display unit of the client terminal;
wherein the analysis module is further configured to:
traversing the plurality of characteristic item data in each isolated tree in the target isolated forest model to obtain the path length of the plurality of characteristic item data in each isolated tree;
according to the path length of the plurality of feature item data in each isolated tree, calculating to obtain an abnormal coefficient of the plurality of feature item data, wherein the abnormal coefficient has the following calculation formula:
s(x,n)=2^(-E(H(x))/c(n))
c(n)=2H(n-1)-(2(n-1)/n)
H(n-1)=ln(n-1)+ξ
wherein s(x,n) is the anomaly coefficient of the plurality of feature item data obtained through n isolated trees, E(H(x)) is the average height of the plurality of feature item data over the isolated trees, H(x) is the path length of the plurality of feature item data in each isolated tree, ξ is the Euler constant, H(n-1) is the harmonic number, and ln(n-1) is the natural logarithm of n-1;
the product push model training module is used for:
presetting a plurality of characteristic items for determining product data pushing;
acquiring a plurality of sample data of a plurality of clients from a client information database according to the plurality of characteristic items;
configuring n sample data sets according to the plurality of sample data;
one or more sample data sets in the n sample data sets are used for training a random forest algorithm to obtain the target isolated forest model, and the target isolated forest model is used for determining whether characteristic item data of an intention client are input into a target iterative decision tree model or not;
using one or more of the n sample data sets for training an iterative decision tree algorithm to obtain the target iterative decision tree model; and
Training a logistic regression algorithm according to the feature combination output by the target iterative decision tree model to obtain a product pushing model, wherein m class labels are preconfigured on the output of the product pushing model.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program when executed by the processor implements the steps of the method according to any one of claims 1 to 5.
8. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program executable by at least one processor to cause the at least one processor to perform the steps of the method according to any one of claims 1 to 5.
CN201910606135.5A 2019-07-05 2019-07-05 Insurance product data pushing method and system based on big data and computer equipment Active CN110503507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910606135.5A CN110503507B (en) 2019-07-05 2019-07-05 Insurance product data pushing method and system based on big data and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910606135.5A CN110503507B (en) 2019-07-05 2019-07-05 Insurance product data pushing method and system based on big data and computer equipment

Publications (2)

Publication Number Publication Date
CN110503507A CN110503507A (en) 2019-11-26
CN110503507B true CN110503507B (en) 2023-09-22

Family

ID=68586118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910606135.5A Active CN110503507B (en) 2019-07-05 2019-07-05 Insurance product data pushing method and system based on big data and computer equipment

Country Status (1)

Country Link
CN (1) CN110503507B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992067B (en) * 2019-12-13 2023-08-08 中国平安财产保险股份有限公司 Message pushing method, device, computer equipment and storage medium
CN111160647B (en) * 2019-12-30 2023-08-22 第四范式(北京)技术有限公司 Money laundering behavior prediction method and device
CN112182377A (en) * 2020-09-28 2021-01-05 厦门美柚股份有限公司 Message pushing processing method, device, terminal and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9355155B1 (en) * 2015-07-01 2016-05-31 Klarna Ab Method for using supervised model to identify user
CN107730389A (en) * 2017-09-30 2018-02-23 平安科技(深圳)有限公司 Electronic installation, insurance products recommend method and computer-readable recording medium
WO2019037202A1 (en) * 2017-08-24 2019-02-28 平安科技(深圳)有限公司 Method and apparatus for recognising target customer, electronic device and medium
CN109544284A (en) * 2018-11-08 2019-03-29 平安科技(深圳)有限公司 Product data method for pushing, system and computer equipment based on big data

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9355155B1 (en) * 2015-07-01 2016-05-31 Klarna Ab Method for using supervised model to identify user
WO2019037202A1 (en) * 2017-08-24 2019-02-28 平安科技(深圳)有限公司 Method and apparatus for recognising target customer, electronic device and medium
CN107730389A (en) * 2017-09-30 2018-02-23 平安科技(深圳)有限公司 Electronic installation, insurance products recommend method and computer-readable recording medium
CN109544284A (en) * 2018-11-08 2019-03-29 平安科技(深圳)有限公司 Product data method for pushing, system and computer equipment based on big data

Also Published As

Publication number Publication date
CN110503507A (en) 2019-11-26

Similar Documents

Publication Publication Date Title
CN110349038B (en) Risk assessment model training method and risk assessment method
CN107844634B (en) Modeling method of multivariate general model platform, electronic equipment and computer readable storage medium
CN110659318B (en) Big data-based policy pushing method, system and computer equipment
CN110503507B (en) Insurance product data pushing method and system based on big data and computer equipment
CN112163887B (en) Electric pin system, electric pin list management method, electric pin system device, electric pin list management equipment and storage medium
CN109961080B (en) Terminal identification method and device
CN113627566B (en) Phishing early warning method and device and computer equipment
CN110599354B (en) Online checking method, online checking system, computer device and computer readable storage medium
CN110070382B (en) Method and device for generating information
CN110555451A (en) information identification method and device
CN111797320A (en) Data processing method, device, equipment and storage medium
CN109816534B (en) Financing lease product recommendation method, financing lease product recommendation device, computer equipment and storage medium
CN107644042B (en) Software program click rate pre-estimation sorting method and server
CN111831817A (en) Questionnaire generation and analysis method and device, computer equipment and readable storage medium
CN112163154A (en) Data processing method, device, equipment and storage medium
CN109902196B (en) Trademark category recommendation method and device, computer equipment and storage medium
CN110807466A (en) Method and device for processing order data
CN113743129B (en) Information pushing method, system, equipment and medium based on neural network
CN111723872B (en) Pedestrian attribute identification method and device, storage medium and electronic device
CN114925275A (en) Product recommendation method and device, computer equipment and storage medium
CN114897607A (en) Data processing method and device for product resources, electronic equipment and storage medium
CN114913008A (en) Decision tree-based bond value analysis method, device, equipment and storage medium
CN109885710B (en) User image depicting method based on differential evolution algorithm and server
CN113159877B (en) Data processing method, device, system and computer readable storage medium
CN112163167A (en) Intelligent decision-making method, system, equipment and medium based on big data platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant