CN112966189A - Fund product recommendation system - Google Patents

Fund product recommendation system

Info

Publication number
CN112966189A
Authority
CN
China
Prior art keywords: fund, value, function, representing, comment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110400498.0A
Other languages
Chinese (zh)
Other versions
CN112966189B (en)
Inventor
黄明刚
刘蒙
封晓荣
邢国政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jizhi Technology Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110400498.0A
Publication of CN112966189A
Application granted
Publication of CN112966189B
Legal status: Active
Anticipated expiration

Classifications

    • G06F 16/9535: Information retrieval; search customisation based on user profiles and personalisation
    • G06F 18/24: Pattern recognition; classification techniques
    • G06N 3/044: Neural networks; recurrent networks, e.g. Hopfield networks
    • G06N 3/08: Neural networks; learning methods
    • G06Q 30/0631: Electronic shopping; item recommendations
    • G06Q 40/06: Asset management; financial planning or analysis

Abstract

The invention discloses a fund product recommendation system comprising a fund recommendation platform and a user terminal in communication connection with each other. The fund recommendation platform comprises a primary screening module, a feature extraction module, a secondary screening module, a prediction model and a recommendation module. The platform establishes a fund classification table, generates a primary screening fund table, and fills the identifiers of funds with high quality degrees into a fund recommendation table; the predicted operation strategy is then added to the fund recommendation table as an investment suggestion, and the table is recommended to the user terminal. The system can screen high-value funds to recommend to clients, improve the client's investment experience, and provide operation suggestions with high prediction accuracy, giving investment guidance that is especially suitable for novice users. The recommendation method is intelligent, reasonable and fair, and addresses the problem that existing fund product recommendation systems make unreasonable recommendations that do not benefit investors, so that the vital interests of investors can be better protected.

Description

Fund product recommendation system
[ technical field ]
The invention relates to the technical field of financial risk management and control, in particular to a fund product recommendation system.
[ background of the invention ]
Financial products have developed rapidly in China over recent decades. Taking publicly offered fund products as an example, there are now more than 100 fund companies, more than 6,000 fund products and an asset scale exceeding 8 trillion yuan, covering stock funds, bond funds, money market funds, FOF funds and other categories. Judging from the historical performance of the fund market, most funds have achieved good annual returns, yet most individual investors fail to profit because they lack professional, systematic analysis of fund products and find it difficult to screen high-quality funds from the vast number available.
Existing fund product recommendation systems make unreasonable and non-objective fund recommendations and are often influenced by interested parties, so the vital interests of investors cannot be guaranteed.
[ summary of the invention ]
In view of this, embodiments of the present invention provide a fund product recommendation system.
In a first aspect, an embodiment of the present invention provides a fund product recommendation system, where the system includes a fund recommendation platform and a user terminal that are in communication connection with each other, where the fund recommendation platform includes a prescreening module, a feature extraction module, a secondary screening module, a prediction model, and a recommendation module, where:
the primary screening module establishes a fund classification table based on theme sectors, acquires first parameters of each fund over a preset time period, selects primary screening funds from the fund classification table according to a primary screening function, and fills the fund identifiers corresponding to the primary screening funds into a primary screening fund table;
the feature extraction module determines a data source platform according to a preset data acquisition list, obtains feature data of funds in a primary screening fund table in a preset time period from the data source platform, preprocesses the feature data and classifies the feature data based on a semantic judgment model to generate second parameters;
the secondary screening module calculates the quality degree Q of each fund in the primary screening fund table based on the second parameters, and fills the fund identifiers of the funds whose quality degree Q exceeds a quality degree threshold Q0 into a fund recommendation table;
the prediction model establishes and trains a deep reinforcement learning model based on fund operation strategies, and the trained deep reinforcement learning model is used to predict the operation strategies of the target funds in the fund recommendation table;
and the recommending module adds the predicted operation strategy as an investment suggestion to the fund recommending table and sends the fund recommending table to the user terminal.
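To make the data flow between these modules concrete, the following is a minimal Python sketch of the pipeline; every name in it (recommend_funds, primary_screen, predict_strategy, and so on) is an illustrative assumption rather than an interface defined by the patent.

# Minimal illustrative sketch of the platform's data flow; every name here is
# an assumption for explanation, not an API defined by the patent.
from typing import Callable, List, Dict, Any

Fund = Dict[str, Any]

def recommend_funds(
    all_funds: List[Fund],
    primary_screen: Callable[[List[Fund]], List[Fund]],       # applies F(x) per theme sector
    extract_features: Callable[[List[Fund]], Dict[str, Any]], # comment/like/mention data -> second parameters
    secondary_screen: Callable[[List[Fund], Dict[str, Any]], List[Fund]],  # quality degree Q > Q0
    predict_strategy: Callable[[Fund], str],                  # deep RL model: "buy" / "sell" / "hold"
) -> List[Fund]:
    shortlist = primary_screen(all_funds)                     # primary screening fund table
    second_params = extract_features(shortlist)               # generated via the semantic judgment model
    recommended = secondary_screen(shortlist, second_params)  # fund recommendation table
    for fund in recommended:
        fund["investment_suggestion"] = predict_strategy(fund)
    return recommended                                        # sent to the user terminal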
The above-described aspect and any possible implementation further provide an implementation, where the first parameters include: visit volume, number of holders, and net value.
The above-described aspect and any possible implementation further provide an implementation, where the prescreening function is defined as:
(The preliminary screening function F(x) is given by a formula presented as an image in the original publication; its variables are defined below.)
where F(x) denotes the preliminary screening function; αi denotes the average visit volume of the i-th fund in the x-th theme sector over the preset time period, and α0 the visit-volume threshold set according to actual conditions; βi denotes the average number of holders of the i-th fund in the x-th theme sector over the preset time period, and β0 the holder-count threshold set according to actual conditions; γi denotes the average net value of the i-th fund in the x-th theme sector over the preset time period, and γ0 the net-value threshold set according to actual conditions; nx denotes the number of funds in the x-th theme sector; and w1, w2, w3 are weights satisfying w1, w2, w3 ∈ [0,1] and w1 + w2 + w3 = 1;
The M funds with the largest values of the preliminary screening function F(x) are selected from each theme sector as the preliminary screening funds.
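The exact form of F(x) appears only as an image in this text, so the sketch below uses an assumed weighted, threshold-normalized score over the three first parameters merely to illustrate the per-sector top-M selection; it is not the patented formula.

# Hypothetical scoring: the patent defines F(x) only as an image, so this
# weighted, threshold-normalized sum is an assumed stand-in, not the patented formula.
def preliminary_screen(sector_funds, alpha0, beta0, gamma0,
                       w1=0.4, w2=0.3, w3=0.3, top_m=5):
    """sector_funds: list of dicts with average 'visits', 'holders' and 'net_value'
    over the preset time period for one theme sector."""
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9 and all(0 <= w <= 1 for w in (w1, w2, w3))

    def score(fund):
        return (w1 * fund["visits"] / alpha0 +
                w2 * fund["holders"] / beta0 +
                w3 * fund["net_value"] / gamma0)

    ranked = sorted(sector_funds, key=score, reverse=True)
    return ranked[:top_m]   # the M primary screening funds for this sector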
The above-described aspect and any possible implementation manner further provide an implementation manner, where the feature data of the funds in the preliminary screening fund table includes: fund mention data, fund comment data, fund comment like data, fund manager mention data, fund manager comment data and fund manager comment like data.
The above-described aspect and any possible implementation manner further provide an implementation manner, where the preprocessing the feature data specifically includes:
setting preprocessing priorities for the acquired feature data of the funds, where text comments have first priority, voice comments second priority, and picture comments third priority;
first-priority processing: judging whether a text comment matches the favorable-comment word library; if it matches, judging whether the coincidence degree of the comment IPs corresponding to the text comments exceeds a coincidence threshold; if the threshold is exceeded, discarding all potential favorable-comment marks for that comment IP; if not, applying a potential favorable-comment mark and entering second-priority processing;
second-priority processing: converting the voice comments into text comments and judging whether they match the favorable-comment word library; if the match succeeds, judging whether the coincidence degree of the corresponding comment IPs exceeds the coincidence threshold; if the threshold is exceeded, discarding all potential favorable-comment marks for that comment IP; if not, applying a potential favorable-comment mark and entering third-priority processing;
third-priority processing: judging whether the number of identical pictures among the picture comments exceeds a count threshold; if not, recognizing the text within the picture comments, judging whether it matches the favorable-comment word library, and applying a potential favorable-comment mark if the match succeeds; if the count threshold is exceeded, further judging whether the IP coincidence degree of the picture comments exceeds the coincidence threshold; if it does, discarding all potential favorable-comment marks of the comment IPs exceeding the threshold; if it does not, recognizing the text within the picture comments, judging whether it matches the favorable-comment word library, and applying a potential favorable-comment mark if the match succeeds.
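The three-priority flow above can be condensed into the following Python sketch; the favorable-comment lexicon, the IP-coincidence measure and the thresholds are simplified stand-ins, the voice and picture comments are assumed to have been transcribed or OCR'd into text upstream, and the identical-picture count check is omitted for brevity.

# Simplified sketch of the prioritized comment preprocessing; the lexicon,
# the IP-coincidence measure and the thresholds are illustrative assumptions.
GOOD_WORDS = {"excellent", "great", "steady returns"}   # stand-in for the favorable-comment lexicon

def matches_lexicon(text: str) -> bool:
    return any(w in text.lower() for w in GOOD_WORDS)

def ip_coincidence(comments_from_ip) -> float:
    """Fraction of near-identical comments posted from one IP (toy definition)."""
    texts = [c["text"] for c in comments_from_ip]
    return 1.0 - len(set(texts)) / max(len(texts), 1)

def preprocess(comments, overlap_threshold=0.6):
    marks = {}                                       # comment id -> potential favorable-comment mark
    for priority in ("text", "voice", "picture"):    # first, second, third priority
        for c in (x for x in comments if x["kind"] == priority):
            if not matches_lexicon(c["text"]):       # voice/picture already converted to text upstream
                continue
            same_ip = [x for x in comments if x["ip"] == c["ip"]]
            if ip_coincidence(same_ip) > overlap_threshold:
                # suspected coordinated posting: drop every mark from this IP
                for x in same_ip:
                    marks.pop(x["id"], None)
            else:
                marks[c["id"]] = True                # potential favorable-comment mark
    return marks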
The foregoing aspect and any possible implementation manner further provide an implementation manner, where the generating a second parameter based on semantic judgment model classification specifically includes:
preprocessing the characteristic data of the fund to generate a favorable tendency identifier;
constructing a semantic judgment model, wherein the construction method comprises the following steps:
establishing an optimal reward model:
maxπ E[ Σt λ^t · R(st, π(st)) | s0 ]
where E denotes the expectation, λ the discount factor with λ ∈ [0,1], s0 the initial state, R the reward function, and π(st) the policy that maps states to actions;
defining the Q function:
Qπi(s, a) = R(s, a) + λ · Σs* p(s, a, s*) · Tπi(s*)
where πi denotes the current strategy that determines the Q value according to the equation, R the reward function, λ the discount factor, p(s, a, s*) the probability of transitioning from state s to state s* under action a, and Tπi the reward obtained at iteration step i;
the iterative update of the new strategy is as follows:
π(i+1)(s) = argmaxa Q(s, a). An ε-greedy behavior strategy is defined and adopted to determine the behavior of the current state: with a predefined fixed probability ε the action is selected at random, and otherwise the action with the largest Q value is chosen.
Obtaining a Q value by learning iterative approximation to an optimal strategy;
and performing reinforcement learning on the characteristic data carrying the good comment tendency identification through a semantic judgment model to generate a good comment classification result.
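A minimal tabular version of the Q-learning loop described above, with the ε-greedy behavior strategy, might look like this; the environment interface (reset/step), the action set and the reward design are assumptions, since the patent does not spell out how comments map to states and rewards.

# Generic tabular Q-learning with an epsilon-greedy behavior strategy.
# The environment interface (reset/step) and the reward shaping are assumptions;
# they are not specified by the patent text.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, lam=0.9, epsilon=0.1):
    Q = defaultdict(float)                       # Q[(state, action)]
    actions = env.actions                        # e.g. ("favorable", "other")
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:        # epsilon-greedy: explore
                a = random.choice(actions)
            else:                                # exploit the current estimate
                a = max(actions, key=lambda x: Q[(s, x)])
            s_next, r, done = env.step(a)
            best_next = max(Q[(s_next, x)] for x in actions)
            # one-step update toward r + lambda * max_a' Q(s', a')
            Q[(s, a)] += alpha * (r + lam * best_next - Q[(s, a)])
            s = s_next
    return Q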
The above-described aspect and any possible implementation manner further provide an implementation manner, and the calculation formula of the goodness degree Q is as follows:
(The formula for the quality degree Q is presented as an image in the original publication; its variables are defined below.)
where Q denotes the quality degree; p1 the number of favorable comments on the fund, m1 the number of mentions of the fund, and c1 the number of likes on the fund's favorable comments; p2 the number of favorable comments on the fund manager, m2 the number of mentions of the fund manager, and c2 the number of likes on the fund manager's favorable comments; t the time in days; and k1, k2 adjustment coefficients satisfying k1, k2 ∈ [0,1] and k1 + k2 = 1.
The above-described aspect and any possible implementation manner further provide an implementation manner, where the building and training of the deep reinforcement learning model based on the fund operation strategy specifically includes:
acquiring historical operation strategy data of a plurality of funds, summing and averaging the data as input, and predicting the operation strategy of the funds by establishing a corresponding Markov decision process model, in which an action, denoted a, comprises buying, selling and holding; a state, denoted s, is the fund price information generated by the behavior strategy; and a reward, denoted R, is the change in portfolio value when the state changes;
training on the data and continuously updating the value function Vπ(s, a) until Vπ(s, a) converges, so as to obtain the optimal value function V*(s, a);
the optimal value function V*(s, a) is formulated as follows:
V*(s, a) = R(s, a) + γ · Σ_{s'∈S} P(s'|s, a) · max_{a'∈A} V*(s', a')
where V*(s, a) denotes the optimal value function, s' ∈ S a state instance, a ∈ A an action instance, γ the discount factor, R the reward function specifying the reward, and P the transition function specifying the state transition probability;
based on the above optimal value function V*(s, a), the optimal strategy π*(s) can be obtained:
π*(s) = argmax_{a∈A} γ · Σ_{s'∈S} Psa(s', a) · V*(s', a)
where π*(s) denotes the optimal strategy, Psa(s', a) the transition probability from state s to the next state s' when taking action a, a ∈ A an action instance, and γ the discount factor;
a recurrent neural network is adopted as the Q-value network, with parameter θ;
Ht = f(u · xt + w · Ht-1 + b1),
Qt = f(v · Ht-1 + b2),
L = Qt - yt,
where Ht denotes the hidden state at time t, Ht-1 the hidden state at time t-1, Qt the output of the current layer at time t, L the error, xt the training data input at time t, yt the original output of the training data, f the activation function of the hidden layer, u, w and v the weights shared by the recurrent neural network, and b1 and b2 the thresholds shared by the recurrent neural network;
a loss function L(θ) is defined over the Q values;
the parameters of the recurrent neural network are trained by a batch gradient descent method; as the number of training iterations increases, the action with the largest Q value output by the network is selected, finally converging to the optimal strategy;
during the update period, the historical operation strategy data pre-divided into a test set is used to test the trained model.
The above-described aspects and any possible implementations further provide an implementation in which the loss function L (θ) is formulated as follows:
L(θ) = E[(r + γ · max_{a'} Q(s', a', θ') - Q(s, a, θ))²]
where L(θ) denotes the loss function, r the reward value, θ and θ' the neural network weights, r + γ · max_{a'} Q(s', a', θ') the target Q value, Q(s, a, θ) the predicted Q value, and γ the discount factor.
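The formula above is the standard DQN temporal-difference loss; as an illustration, it can be computed for a mini-batch as follows, assuming q_values and target_q_values are the outputs of the online network (θ) and the target network (θ') for the sampled states.

# Sketch of the DQN-style loss L(theta) = E[(r + gamma * max_a' Q(s', a'; theta')
#                                             - Q(s, a; theta))^2].
# q_values / target_q_values are assumed to be arrays of shape (batch, n_actions)
# produced by the online network (theta) and the target network (theta').
import numpy as np

def dqn_loss(q_values, target_q_values, actions, rewards, gamma=0.99):
    batch = np.arange(len(actions))                        # actions: integer indices per sample
    predicted = q_values[batch, actions]                   # Q(s, a; theta)
    target = rewards + gamma * target_q_values.max(axis=1) # r + gamma * max_a' Q(s', a'; theta')
    return np.mean((target - predicted) ** 2)              # empirical expectation of the squared error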
The above-described aspects and any possible implementation manners further provide an implementation manner that the user terminal is a smart device with a communication function, and the smart device includes a smart phone, a notebook computer, a tablet computer, and a desktop computer.
One of the above technical solutions has the following beneficial effects:
the method of the embodiment of the invention provides a fund product recommendation system, which selects primary screened funds from a fund classification table through a primary screening function, can filter out funds with higher activity and larger growth potential for investors preliminarily, obtains a secondary evaluation standard based on double indexes of funds and fund managers, screens out the funds with high quality Q meeting the requirements, and recommends the funds to users; the operation strategy of the target fund of the fund recommendation table is predicted by the deep reinforcement learning model, and the operation suggestion of the fund is attached, so that the prediction accuracy is high, and investment guidance can be provided for users, especially novice users; the recommendation method is intelligent, reasonable and fair, can screen high-value funds to recommend the funds to the client, improves the investment experience of the client, and can solve the problems that the existing fund product recommendation system is unreasonable in fund recommendation and is not profitable, so that the vital interests of investors can be better guaranteed.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a functional block diagram of a fund product recommendation system according to an embodiment of the present invention;
fig. 2 is a hardware schematic diagram of a fund recommendation platform according to an embodiment of the present invention.
[ detailed description ] embodiments
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail and completely with reference to the following embodiments and accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a functional block diagram of a fund product recommendation system according to an embodiment of the present invention is shown. As shown in fig. 1, the system includes a fund recommendation platform and a user terminal, which are in communication connection with each other, where the fund recommendation platform includes a primary screening module, a feature extraction module, a secondary screening module, a prediction model, and a recommendation module, where:
the primary screening module establishes a fund classification table based on theme sectors, acquires first parameters of each fund over a preset time period, selects primary screening funds from the fund classification table according to a primary screening function, and fills the fund identifiers corresponding to the primary screening funds into a primary screening fund table;
the feature extraction module determines a data source platform according to a preset data acquisition list, obtains feature data of funds in a primary screening fund table in a preset time period from the data source platform, preprocesses the feature data and classifies the feature data based on a semantic judgment model to generate second parameters;
the secondary screening module calculates the quality degree Q of each fund in the primary screening fund table based on the second parameters, and fills the fund identifiers of the funds whose quality degree Q exceeds a quality degree threshold Q0 into a fund recommendation table;
the prediction model establishes and trains a deep reinforcement learning model based on fund operation strategies, and the trained deep reinforcement learning model is used to predict the operation strategies of the target funds in the fund recommendation table;
and the recommending module adds the predicted operation strategy as an investment suggestion to the fund recommending table and sends the fund recommending table to the user terminal.
The user terminal is an intelligent device with a communication function, including a smart phone, a notebook computer, a tablet computer or a desktop computer.
The invention establishes the fund product recommendation system on the basis of theme sectors, which makes it convenient for investors to find funds quickly and helps newcomers to funds get to know them quickly. Primary screening funds are selected from the fund classification table by the primary screening function, preliminarily filtering out funds with higher activity and greater growth potential for investors. On the basis of the primary screening, the quality degree Q of each fund in the primary screening fund table is calculated from the second parameters, and the identifiers of funds whose quality degree Q exceeds the quality degree threshold Q0 are filled into the fund recommendation table; by analyzing the feature data of the funds, a secondary evaluation standard based on the dual indicators of the fund and the fund manager is obtained, and funds whose quality degree Q meets the requirement are screened out and recommended to users. In addition, a deep reinforcement learning model based on fund operation strategies is established and trained, and the trained model predicts the operation strategies of the target funds in the fund recommendation table; the funds are recommended to the user together with the predicted operation suggestions, so that investment guidance can be provided for users, especially novice users. The fund product recommendation system is intelligent, reasonable and fair, can screen high-value funds to recommend to clients, improves the client's investment experience, and solves the problems of unreasonable and interest-driven recommendations in existing fund product recommendation systems, so that the vital interests of investors can be better protected.
The preliminary screening function of the embodiment of the invention is defined as follows:
(The preliminary screening function F(x) is given by a formula presented as an image in the original publication; its variables are defined below.)
where F(x) denotes the preliminary screening function; αi denotes the average visit volume of the i-th fund in the x-th theme sector over the preset time period, and α0 the visit-volume threshold set according to actual conditions; βi denotes the average number of holders of the i-th fund in the x-th theme sector over the preset time period, and β0 the holder-count threshold set according to actual conditions; γi denotes the average net value of the i-th fund in the x-th theme sector over the preset time period, and γ0 the net-value threshold set according to actual conditions; nx denotes the number of funds in the x-th theme sector; and w1, w2, w3 are weights satisfying w1, w2, w3 ∈ [0,1] and w1 + w2 + w3 = 1;
The M funds with the largest values of the preliminary screening function F(x) are selected from each theme sector as the preliminary screening funds.
The preprocessing of the feature data in the embodiment of the invention specifically includes the following steps:
setting preprocessing priorities for the acquired feature data of the funds, where text comments have first priority, voice comments second priority, and picture comments third priority;
first-priority processing: judging whether a text comment matches the favorable-comment word library; if the match fails, entering second-priority processing directly; if the match succeeds, judging whether the coincidence degree of the comment IPs corresponding to the text comments exceeds the coincidence threshold; if the threshold is exceeded, discarding all potential favorable-comment marks for that comment IP; if not, applying a potential favorable-comment mark and entering second-priority processing;
second-priority processing: converting the voice comments into text comments and judging whether they match the favorable-comment word library; if the match fails, entering third-priority processing directly; if the match succeeds, judging whether the coincidence degree of the corresponding comment IPs exceeds the coincidence threshold; if the threshold is exceeded, discarding all potential favorable-comment marks for that comment IP; if not, applying a potential favorable-comment mark and entering third-priority processing;
third-priority processing: judging whether the number of identical pictures among the picture comments exceeds a count threshold; if not, recognizing the text within the picture comments and judging whether it matches the favorable-comment word library; if the match fails, terminating directly, and if it succeeds, applying a potential favorable-comment mark; if the count threshold is exceeded, further judging whether the IP coincidence degree of the picture comments exceeds the coincidence threshold; if it does, discarding all potential favorable-comment marks of the comment IPs exceeding the threshold; if it does not, recognizing the text within the picture comments, judging whether it matches the favorable-comment word library, and applying a potential favorable-comment mark if the match succeeds.
In the embodiment of the invention, the text comments, voice comments and picture comments in the feature data are preprocessed across the three priorities, so that comment data can be obtained comprehensively and the system can evaluate the funds and fund managers more accurately. In addition, by checking the coincidence degree of comment IPs and the number of identical pictures among the picture comments, paid posting accounts ("water army") can be removed efficiently, making the data more authentic and accurate. Marking potential favorable comments in advance reduces the processing load of the semantic judgment model and further improves its classification accuracy.
The generating of the second parameter based on the semantic judgment model classification specifically includes:
preprocessing the characteristic data of the fund to generate a favorable tendency identifier;
constructing a semantic judgment model, wherein the construction method comprises the following steps:
establishing an optimal reward model:
maxπ E[ Σt λ^t · R(st, π(st)) | s0 ]
where E denotes the expectation, λ the discount factor with λ ∈ [0,1], s0 the initial state, R the reward function, and π(st) the policy that maps states to actions;
defining the Q function:
Qπi(s, a) = R(s, a) + λ · Σs* p(s, a, s*) · Tπi(s*)
where πi denotes the current strategy that determines the Q value according to the equation, R the reward function, λ the discount factor, p(s, a, s*) the probability of transitioning from state s to state s* under action a, and Tπi the reward obtained at iteration step i;
the iterative update of the new strategy is as follows:
π(i+1)(s) = argmaxa Q(s, a). An ε-greedy behavior strategy is defined and adopted to determine the behavior of the current state: with a predefined fixed probability ε the action is selected at random, and otherwise the action with the largest Q value is chosen.
Obtaining a Q value by learning iterative approximation to an optimal strategy;
and performing reinforcement learning on the characteristic data carrying the good comment tendency identification through a semantic judgment model to generate a good comment classification result.
The semantic judgment model of the embodiment of the invention enables accurate classification of favorable comments. After the RL-based algorithm is trained, favorable comments and other comments are separated with high confidence, the model is robust, and the second parameters can be extracted from the feature data of the funds efficiently and accurately.
The calculation formula of the high quality Q of the embodiment of the invention is as follows:
(The formula for the quality degree Q is presented as an image in the original publication; its variables are defined below.)
where Q denotes the quality degree; p1 the number of favorable comments on the fund, m1 the number of mentions of the fund, and c1 the number of likes on the fund's favorable comments; p2 the number of favorable comments on the fund manager, m2 the number of mentions of the fund manager, and c2 the number of likes on the fund manager's favorable comments; t the time in days; and k1, k2 adjustment coefficients satisfying k1, k2 ∈ [0,1] and k1 + k2 = 1.
The calculation formula of the quality degree Q establishes a secondary evaluation standard based on the dual indicators of the fund and the fund manager, and a time attenuation model is built on a logarithmic function, so that high-quality funds and fund managers can be selected efficiently and accurately.
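Since the formula for Q appears only as an image here, the following sketch is built on an assumed form: per-mention favorable-comment and like ratios for the fund (weighted k1) and its manager (weighted k2), damped by a logarithmic time-decay factor as the paragraph above suggests. It illustrates the dual-indicator idea only and should not be read as the patented formula.

import math

# Assumed functional form only: the patent gives the quality-degree formula as an
# image, so the ratios and the logarithmic time decay below are illustrative guesses.
def quality_degree(p1, m1, c1, p2, m2, c2, t_days, k1=0.6, k2=0.4):
    assert 0 <= k1 <= 1 and 0 <= k2 <= 1 and abs(k1 + k2 - 1.0) < 1e-9
    fund_score = (p1 + c1) / max(m1, 1)           # favorable comments + likes per mention (fund)
    manager_score = (p2 + c2) / max(m2, 1)        # the same ratio for the fund manager
    time_decay = 1.0 / math.log(t_days + math.e)  # older data contributes less
    return (k1 * fund_score + k2 * manager_score) * time_decay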
The establishing and training of the deep reinforcement learning model based on the fund operation strategy of the embodiment of the invention specifically comprises the following steps:
acquiring historical operation strategy data of a plurality of funds, summing and averaging the data as input, and predicting the operation strategy of the funds by establishing a corresponding Markov decision process model, in which an action, denoted a, comprises buying, selling and holding; a state, denoted s, is the fund price information generated by the behavior strategy; and a reward, denoted R, is the change in portfolio value when the state changes;
training on the data and continuously updating the value function Vπ(s, a) until Vπ(s, a) converges, so as to obtain the optimal value function V*(s, a);
the optimal value function V*(s, a) is formulated as follows:
V*(s, a) = R(s, a) + γ · Σ_{s'∈S} P(s'|s, a) · max_{a'∈A} V*(s', a')
where V*(s, a) denotes the optimal value function, s' ∈ S a state instance, a ∈ A an action instance, γ the discount factor, R the reward function specifying the reward, and P the transition function specifying the state transition probability;
based on the above optimal value function V*(s, a), the optimal strategy π*(s) can be obtained:
π*(s) = argmax_{a∈A} γ · Σ_{s'∈S} Psa(s', a) · V*(s', a)
where π*(s) denotes the optimal strategy, Psa(s', a) the transition probability from state s to the next state s' when taking action a, a ∈ A an action instance, and γ the discount factor;
a recurrent neural network is adopted as the Q-value network, with parameter θ;
Ht = f(u · xt + w · Ht-1 + b1),
Qt = f(v · Ht-1 + b2),
L = Qt - yt,
where Ht denotes the hidden state at time t, Ht-1 the hidden state at time t-1, Qt the output of the current layer at time t, L the error, xt the training data input at time t, yt the original output of the training data, f the activation function of the hidden layer, u, w and v the weights shared by the recurrent neural network, and b1 and b2 the thresholds shared by the recurrent neural network.
The recurrent neural network of the embodiment of the invention has good perception and feature-extraction capability; its key aspects are the representation of actual features, layer-by-layer self-learning, a sparsity constraint that limits the parameter space, and the prevention of overfitting.
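The three equations above translate directly into code; the sketch below performs one time step with NumPy, taking tanh as an assumed choice for the hidden-layer activation f.

import numpy as np

# One step of the recurrent Q-network described by
#   H_t = f(u * x_t + w * H_{t-1} + b1)
#   Q_t = f(v * H_{t-1} + b2)
#   L   = Q_t - y_t
# f is taken as tanh here (an assumption; the patent only calls it "the activation
# function of the hidden layer").
def rnn_q_step(x_t, h_prev, u, w, v, b1, b2, y_t=None):
    h_t = np.tanh(u @ x_t + w @ h_prev + b1)   # hidden state at time t
    q_t = np.tanh(v @ h_prev + b2)             # output of the current layer at time t
                                               # (uses H_{t-1}, exactly as the patent's equation is written)
    loss = None if y_t is None else q_t - y_t  # error against the original output
    return h_t, q_t, loss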
A loss function L(θ) is defined over the Q values;
the parameters of the recurrent neural network are trained by a batch gradient descent method; as the number of training iterations increases, the action with the largest Q value output by the network is selected, finally converging to the optimal strategy;
during the update period, the historical operation strategy data pre-divided into a test set is used to test the trained model.
The loss function L(θ) of the embodiment of the present invention is expressed by the following formula:
L(θ) = E[(r + γ · max_{a'} Q(s', a', θ') - Q(s, a, θ))²]
where L(θ) denotes the loss function, r the reward value, θ and θ' the neural network weights, r + γ · max_{a'} Q(s', a', θ') the target Q value, Q(s, a, θ) the predicted Q value, and γ the discount factor.
The deep reinforcement learning model constructed in the embodiment of the invention uses a recurrent Q network, combining RNN-based feature processing with DQN-style self-exploration and experience replay; its predictions for funds are accurate and the model is robust.
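Experience replay, mentioned above, is conventionally implemented as a bounded buffer of transitions sampled uniformly at random for each gradient step; the capacity and batch size below are assumptions rather than values from the patent.

import random
from collections import deque

# Minimal experience-replay buffer of (s, a, r, s', done) transitions; the
# capacity and uniform sampling are conventional choices, not patent specifics.
class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        return list(zip(*batch))   # tuples of states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)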
FIG. 2 is a hardware diagram of a fund recommendation platform, according to an embodiment of the present invention. Referring to fig. 2, at a hardware level, the fund recommendation platform includes a processor, and optionally an internal bus, a network interface, and a memory. The Memory may include a Memory, such as a Random-Access Memory (RAM), and may further include a non-volatile Memory, such as at least 1 disk Memory. Of course, the fund recommendation platform may also include hardware needed for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 2, but this does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
In a possible implementation manner, the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it; the corresponding computer program may also be obtained from other devices, so as to form the fund recommendation apparatus at the logic level. The processor executes the program stored in the memory so as to implement, through the executed program, the node working method provided by any embodiment of the present invention.
An embodiment of the present invention further provides a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which when executed by a fund recommendation platform including a plurality of application programs, enable the fund recommendation platform to execute the node working method provided in any embodiment of the present invention.
The method performed by the fund recommendation platform according to the embodiment of the present invention may be implemented in or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units or modules by function, respectively. Of course, the functionality of the units or modules may be implemented in the same one or more software and/or hardware when implementing the invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments of the present invention are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (10)

1. A fund product recommendation system, characterized in that the system comprises a fund recommendation platform and a user terminal in communication connection with each other, the fund recommendation platform comprising a primary screening module, a feature extraction module, a secondary screening module, a prediction model and a recommendation module, wherein:
the primary screening module establishes a fund classification table based on theme sectors, acquires first parameters of each fund over a preset time period, selects primary screening funds from the fund classification table according to a primary screening function, and fills the fund identifiers corresponding to the primary screening funds into a primary screening fund table;
the feature extraction module determines a data source platform according to a preset data acquisition list, obtains feature data of funds in a primary screening fund table in a preset time period from the data source platform, preprocesses the feature data and classifies the feature data based on a semantic judgment model to generate second parameters;
the secondary screening module calculates the quality degree Q of each fund in the primary screening fund table based on the second parameters, and fills the fund identifiers of the funds whose quality degree Q exceeds a quality degree threshold Q0 into a fund recommendation table;
the prediction model establishes and trains a deep reinforcement learning model based on fund operation strategies, and the trained deep reinforcement learning model is used to predict the operation strategies of the target funds in the fund recommendation table;
and the recommending module adds the predicted operation strategy as an investment suggestion to the fund recommending table and sends the fund recommending table to the user terminal.
2. The fund product recommendation system according to claim 1, wherein the first parameters comprise: visit volume, number of holders, and net value.
3. The fund product recommendation system according to claim 2, wherein the prescreening function is defined as:
(The preliminary screening function F(x) is given by a formula presented as an image in the original publication; its variables are defined below.)
where F(x) denotes the preliminary screening function; αi denotes the average visit volume of the i-th fund in the x-th theme sector over the preset time period, and α0 the visit-volume threshold set according to actual conditions; βi denotes the average number of holders of the i-th fund in the x-th theme sector over the preset time period, and β0 the holder-count threshold set according to actual conditions; γi denotes the average net value of the i-th fund in the x-th theme sector over the preset time period, and γ0 the net-value threshold set according to actual conditions; nx denotes the number of funds in the x-th theme sector; and w1, w2, w3 are weights satisfying w1, w2, w3 ∈ [0,1] and w1 + w2 + w3 = 1;
The M funds with the largest values of the preliminary screening function F(x) are selected from each theme sector as the preliminary screening funds.
4. The fund product recommendation system according to claim 1, wherein the feature data of the funds in the preliminary screening fund table comprises: fund mention data, fund comment data, fund comment like data, fund manager mention data, fund manager comment data and fund manager comment like data.
5. The fund product recommendation system according to claim 4, wherein the pre-processing the characteristic data specifically comprises:
setting preprocessing priorities for the acquired feature data of the funds, where text comments have first priority, voice comments second priority, and picture comments third priority;
first-priority processing: judging whether a text comment matches the favorable-comment word library; if it matches, judging whether the coincidence degree of the comment IPs corresponding to the text comments exceeds a coincidence threshold; if the threshold is exceeded, discarding all potential favorable-comment marks for that comment IP; if not, applying a potential favorable-comment mark and entering second-priority processing;
second-priority processing: converting the voice comments into text comments and judging whether they match the favorable-comment word library; if the match succeeds, judging whether the coincidence degree of the corresponding comment IPs exceeds the coincidence threshold; if the threshold is exceeded, discarding all potential favorable-comment marks for that comment IP; if not, applying a potential favorable-comment mark and entering third-priority processing;
third-priority processing: judging whether the number of identical pictures among the picture comments exceeds a count threshold; if not, recognizing the text within the picture comments, judging whether it matches the favorable-comment word library, and applying a potential favorable-comment mark if the match succeeds; if the count threshold is exceeded, further judging whether the IP coincidence degree of the picture comments exceeds the coincidence threshold; if it does, discarding all potential favorable-comment marks of the comment IPs exceeding the threshold; if it does not, recognizing the text within the picture comments, judging whether it matches the favorable-comment word library, and applying a potential favorable-comment mark if the match succeeds.
6. The fund product recommendation system according to claim 5, wherein the generating of the second parameter based on the semantic judgment model classification specifically comprises:
preprocessing the characteristic data of the fund to generate a favorable tendency identifier;
constructing a semantic judgment model, wherein the construction method comprises the following steps:
establishing an optimal reward model:
maxπ E[ Σt λ^t · R(st, π(st)) | s0 ]
where E denotes the expectation, λ the discount factor with λ ∈ [0,1], s0 the initial state, R the reward function, and π(st) the policy that maps states to actions;
defining the Q function:
Qπi(s, a) = R(s, a) + λ · Σs* p(s, a, s*) · Tπi(s*)
where πi denotes the current strategy that determines the Q value according to the equation, R the reward function, λ the discount factor, p(s, a, s*) the probability of transitioning from state s to state s* under action a, and Tπi the reward obtained at iteration step i;
the iterative update of the new strategy is as follows:
π(i+1)(s) = argmaxa Q(s, a). An ε-greedy behavior strategy is defined and adopted to determine the behavior of the current state: with a predefined fixed probability ε the action is selected at random, and otherwise the action with the largest Q value is chosen.
Obtaining a Q value by learning iterative approximation to an optimal strategy;
and performing reinforcement learning on the characteristic data carrying the good comment tendency identification through a semantic judgment model to generate a good comment classification result.
7. The fund product recommendation system according to claim 6, wherein the goodness Q is calculated by the formula:
(The formula for the quality degree Q is presented as an image in the original publication; its variables are defined below.)
where Q denotes the quality degree; p1 the number of favorable comments on the fund, m1 the number of mentions of the fund, and c1 the number of likes on the fund's favorable comments; p2 the number of favorable comments on the fund manager, m2 the number of mentions of the fund manager, and c2 the number of likes on the fund manager's favorable comments; t the time in days; and k1, k2 adjustment coefficients satisfying k1, k2 ∈ [0,1] and k1 + k2 = 1.
8. The fund product recommendation system according to claim 1 or 7, wherein the establishing and training of the deep reinforcement learning model based on the fund operation strategy specifically comprises:
acquiring historical operating strategy data of a plurality of funds, summing and averaging the historical operating strategy data as input, predicting the operating strategy of the funds, and establishing a corresponding Markov decision process model, wherein an action is denoted by a and comprises buying, selling and holding, a state is denoted by s and is the fund price information generated by the behavior strategy, and a reward is denoted by R and is the change in the portfolio value when the state changes;
training on the data, continuously updating the value function V^π(s, a) until the value function V^π(s, a) converges, so as to obtain the optimal value function V^*(s, a);
the optimal value function V^*(s, a) is formulated as follows:
V^*(s, a) = R(s, a) + γ Σ_{s' ∈ S} P(s, a, s') max_{a' ∈ A} V^*(s', a'),
wherein V^*(s, a) represents the optimal value function, s' ∈ S represents a state instance, a ∈ A represents an action instance, γ represents the discount factor, R represents the reward function specifying the reward, and P represents the transition function specifying the state transition probability;
based on the above-mentioned optimum value function V*(s, a), optimal strategy π*(s) can be obtained:
π^*(s) = argmax_{a ∈ A} γ Σ_{s' ∈ S} P_{sa}(s', a) V^*(s', a),
wherein π^*(s) represents the optimal strategy, P_{sa}(s', a) represents the transition probability of moving from state s to the next state s' when taking action a, a ∈ A represents an action instance, and γ represents the discount factor;
adopting a recurrent neural network as the Q-value network, with parameters θ:
H_t = f(u × x_t + w × H_{t-1} + b_1),
Q_t = f(v × H_{t-1} + b_2),
L = Q_t − y_t,
wherein H_t represents the hidden state at time t, H_{t-1} represents the hidden state at time t−1, Q_t represents the output of the current layer at time t, L represents the error, x_t represents the training data input at time t, y_t represents the original output of the training data, f represents the activation function of the hidden layer, u, w and v represent the weights shared by the recurrent neural network, and b_1 and b_2 represent the thresholds shared by the recurrent neural network;
defining a loss function L(θ) over the Q values;
training the parameters of the recurrent neural network by a batch gradient descent method; as the number of training iterations increases, selecting the action with the maximum Q value output by the network, and finally converging to the optimal strategy;
and in the updating period, testing the trained model with the historical operating strategy data that was pre-divided into a test set.
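The following sketch mirrors the recurrence and error of this claim (H_t = f(u × x_t + w × H_{t-1} + b_1), Q_t = f(v × H_{t-1} + b_2), L = Q_t − y_t) with scalar shared weights and a tanh activation. The synthetic data and the finite-difference gradients standing in for backpropagation through time are assumptions made only to keep the example short and runnable; they are not prescribed by the claim.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy data: x[t] is a scalar market feature, y[t] the target output.
T = 40
x = rng.normal(size=T)
y = 0.5 * np.tanh(x) + 0.1          # synthetic targets, for illustration only

def forward(params, x):
    """Recurrence from the claim: H_t = f(u*x_t + w*H_{t-1} + b1), Q_t = f(v*H_{t-1} + b2)."""
    u, w, v, b1, b2 = params
    H_prev, Q = 0.0, np.zeros(len(x))
    for t in range(len(x)):
        Q[t] = np.tanh(v * H_prev + b2)
        H_prev = np.tanh(u * x[t] + w * H_prev + b1)
    return Q

def loss(params):
    Q = forward(params, x)
    return 0.5 * np.mean((Q - y) ** 2)   # squared form of the error L = Q_t - y_t

def numerical_grad(params, eps=1e-6):
    """Batch gradient over the whole training set (finite differences for brevity)."""
    grad = np.zeros_like(params)
    for i in range(len(params)):
        p_plus, p_minus = params.copy(), params.copy()
        p_plus[i] += eps
        p_minus[i] -= eps
        grad[i] = (loss(p_plus) - loss(p_minus)) / (2 * eps)
    return grad

theta = rng.normal(scale=0.1, size=5)    # parameters θ = (u, w, v, b1, b2)
lr = 0.2
for epoch in range(1000):
    theta -= lr * numerical_grad(theta)  # batch gradient descent
print("final loss:", loss(theta))
```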
9. The fund product recommendation system according to claim 8, wherein the loss function L (θ) is formulated as follows:
L(θ) = E[(r + γ max_{a'} Q(s', a', θ') − Q(s, a, θ))^2],
wherein L(θ) represents the loss function, r represents the reward value, θ and θ' represent the neural network weights, r + γ max_{a'} Q(s', a', θ') represents the target Q function value, Q(s, a, θ) represents the predicted Q function value, and γ represents the discount factor.
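A small numeric illustration of this loss, with θ and θ' represented as simple state-action weight tables standing in for the recurrent-network weights of claim 8; the table shapes and sample transitions are assumptions for the example.

```python
import numpy as np

def dqn_loss(theta, theta_prime, transitions, gamma=0.99):
    """Mean squared error between the target Q value
    r + γ·max_a' Q(s', a', θ') and the predicted Q value Q(s, a, θ)."""
    errors = []
    for s, a, r, s_next in transitions:
        target = r + gamma * np.max(theta_prime[s_next])  # target Q function value
        predicted = theta[s, a]                           # predicted Q function value
        errors.append((target - predicted) ** 2)
    return float(np.mean(errors))

theta = np.zeros((3, 2))          # predicted-Q weights θ (assumed 3 states x 2 actions)
theta_prime = np.ones((3, 2))     # target-network weights θ'
transitions = [(0, 1, 0.5, 2), (1, 0, -0.2, 0)]   # sample (s, a, r, s') tuples
print(dqn_loss(theta, theta_prime, transitions))
```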
10. The fund product recommendation system according to claim 1, wherein the user terminal is a smart device with a communication function, and comprises a smart phone, a laptop computer, a tablet computer and a desktop computer.
CN202110400498.0A 2021-04-14 2021-04-14 Fund product recommendation system Active CN112966189B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110400498.0A CN112966189B (en) 2021-04-14 2021-04-14 Fund product recommendation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110400498.0A CN112966189B (en) 2021-04-14 2021-04-14 Fund product recommendation system

Publications (2)

Publication Number Publication Date
CN112966189A true CN112966189A (en) 2021-06-15
CN112966189B CN112966189B (en) 2024-01-26

Family

ID=76280375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110400498.0A Active CN112966189B (en) 2021-04-14 2021-04-14 Fund product recommendation system

Country Status (1)

Country Link
CN (1) CN112966189B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020133447A1 (en) * 2001-01-12 2002-09-19 Smartfolios, Inc. Computerized method and system for formulating stock portfolios
CN107808254A (en) * 2017-11-10 2018-03-16 北京云际投资咨询有限公司 A kind of public offering fund evaluation and suggestion for investment method
CN111815447A (en) * 2020-07-06 2020-10-23 上海汇正财经顾问有限公司 Stock intelligent recommendation system and method based on backtesting data and electronic terminal
CN112102095A (en) * 2020-09-17 2020-12-18 中国建设银行股份有限公司 Fund product recommendation method, device and equipment
CN112612942A (en) * 2020-12-29 2021-04-06 河海大学 Social big data-based fund recommendation system and method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256390A (en) * 2021-06-16 2021-08-13 平安科技(深圳)有限公司 Product recommendation method and device, computer equipment and storage medium
CN113393330A (en) * 2021-07-11 2021-09-14 北京天仪百康科贸有限公司 Financial wind control management system based on block chain
CN113393321A (en) * 2021-07-11 2021-09-14 北京天仪百康科贸有限公司 Financial wind control method based on block chain
CN117474686A (en) * 2023-12-11 2024-01-30 万链指数(青岛)信息科技有限公司 Financial data prediction system based on blockchain and big data
CN117474686B (en) * 2023-12-11 2024-03-29 万链指数(青岛)信息科技有限公司 Financial data prediction system based on blockchain and big data

Also Published As

Publication number Publication date
CN112966189B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
US20210166140A1 (en) Method and apparatus for training risk identification model and server
CN112966189A (en) Fund product recommendation system
CN108921569B (en) Method and device for determining complaint type of user
US20090083128A1 (en) Predicted variable analysis based on evaluation variables relating to site selection
CN110147925B (en) Risk decision method, device, equipment and system
JP2003526146A (en) Method and system for reducing risk by obtaining evaluation values
CN108509492B (en) Big data processing and system based on real estate industry
CN111582538A (en) Community value prediction method and system based on graph neural network
Eddy et al. Credit scoring models: Techniques and issues
US20160343051A1 (en) Network computer system to predict contingency outcomes
CN108596765A (en) A kind of Electronic Finance resource recommendation method and device
CN115271976A (en) Advisory recommendation method and device and computer readable storage medium
CN111061948A (en) User label recommendation method and device, computer equipment and storage medium
CN113407854A (en) Application recommendation method, device and equipment and computer readable storage medium
US20210142406A1 (en) Vehicle selection platform
CN114897607A (en) Data processing method and device for product resources, electronic equipment and storage medium
US10402921B2 (en) Network computer system for quantifying conditions of a transaction
CN112288306A (en) Mobile application crowdsourcing test task recommendation method based on xgboost
CN111815204A (en) Risk assessment method, device and system
Reig-Mullor et al. A novel approach to improve the bank ranking process: an empirical study in Spain
CN110689170A (en) Object parameter determination method and device, electronic equipment and storage medium
CN112948700A (en) Fund recommendation method
JP7345032B1 (en) Credit screening device, method and program
US20170052959A1 (en) Filtering Resources Using a Multilevel Classifier
AlHakeem et al. Iraqi Stock Market Prediction Using Hybrid LSTM and CNN Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231219

Address after: Room A135, 1st Floor, Building 3, No. 18 Keyuan Road, Daxing District Economic Development Zone, Beijing 102600

Applicant after: Beijing Jizhi Technology Co.,Ltd.

Address before: 711711 Yihe formation, Sanhe Village, Qicun Township, Fuping County, Weinan City, Shaanxi Province

Applicant before: Liu Meng

GR01 Patent grant