CN111681092B - Resource scheduling method, server, electronic equipment and storage medium - Google Patents

Resource scheduling method, server, electronic equipment and storage medium

Info

Publication number
CN111681092B
Authority
CN
China
Prior art keywords
server
resource
service
user
resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010319617.5A
Other languages
Chinese (zh)
Other versions
CN111681092A
Inventor
杜岳欣
房佳斐
吴来祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qiyue Information Technology Co Ltd
Original Assignee
Shanghai Qiyue Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qiyue Information Technology Co Ltd filed Critical Shanghai Qiyue Information Technology Co Ltd
Priority to CN202010319617.5A
Publication of CN111681092A
Application granted
Publication of CN111681092B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 - Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/03 - Credit; Loans; Processing thereof
    • G06Q 40/06 - Asset management; Financial planning or analysis

Abstract

The invention discloses a resource scheduling method. A first server, in response to a resource scheduling request sent by a client, sends first service protocol page content to the client; in response to an operation instruction from the client indicating confirmation of the first service protocol, it pushes resource scheduling application information to a second server for risk assessment and triggers the second server to apply to a third server for resource scheduling. The first server then sends a resource scheduling result query request to the second server to obtain the resource scheduling result fed back from the third server, and returns the result to the client for display. The first, second and third servers correspond to first, second and third service parties that provide the first, second and third business services to the user, respectively. The method realizes resource scheduling based on multiple service parties and allocates resources reasonably. Correspondingly, the invention also provides a server, an electronic device and a storage medium.

Description

Resource scheduling method, server, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computers, and in particular, to a resource scheduling method, a server, an electronic device, and a computer readable storage medium based on multiple service parties.
Background
With the development of internet technology, networks are widely and deeply affecting people's lives. More and more people choose suitable financial products on a financial institution's official website or mobile APP.
At present, financial institutions cooperate on funding in a loan-assistance mode, a joint-lending mode and a profit-sharing mode. In the profit-sharing mode, a loan-assistance institution mainly cooperates with a licensed consumer finance company (i.e. the profit-sharing institution): the consumer finance company collects all fees from the customers and bears the risk, and shares the profit based on the services provided by the loan-assistance institution, such as recommending qualified customers, customer management and post-loan management.
Under the current profit-sharing mode, the profit-sharing institution needs to: 1) accept customer pricing of up to 36% per year; 2) recognize the asset performance of the loan-assistance institution, have a certain risk-control approval capability, bear the risk itself, and accept non-performing assets in the received list; 3) keep its funding cost controllable and achieve a rate of return higher than the fixed loan interest rate under controllable risk. As a result, the profit-sharing institutions are currently mainly consumer finance companies rather than banks. However, banks hold the most abundant resources compared with consumer finance companies; in practice, banks account for more than 50% of the financial institutions cooperating with loan-assistance institutions, which means that about 70% of the highly concentrated resources are not fully utilized, that is, resources are wasted because the resource allocation mode is unreasonable. Yet simply replacing the consumer finance company with a bank in order to make full use of those resources is difficult to implement, owing to regulatory issues such as pricing and asset performance and to the bearing of risk. Therefore, to allocate resources reasonably, a bank should provide the resources while other institutions cooperate with the loan-assistance institution to handle pricing, asset performance and other regulatory matters and to bear the risk; in other words, a resource scheduling method based on multiple service parties is needed so that resources can be allocated reasonably.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure and therefore may contain information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the above, the present specification is presented in order to provide a multi-service-party resource scheduling method, a server, an electronic device, and a computer-readable storage medium that overcome, or at least partially solve, the above problems.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
In a first aspect, the present invention discloses a resource scheduling method based on multiple service parties, including:
the method comprises the steps that a first service end responds to a resource scheduling request sent by a client, and sends first service protocol page content to the client to be displayed to a user; the first service protocol page content is acquired from a second service end by the first service end; the first service end corresponds to a first service party providing first business service for the user, and the second service end corresponds to a second service party providing second business service for the user;
the first server, in response to an operation instruction sent by the client and indicating confirmation of the first service protocol, pushes the resource scheduling application information pre-entered by the user to the second server for risk assessment, so that intention information indicating acceptance of the service is generated, and triggers the second server to apply to a third server for resource scheduling based on the intention information; the third server corresponds to a third service party providing a third business service to the user;
the first server sends a resource scheduling result query request to the second server to acquire a resource scheduling result fed back from the third server, and feeds the resource scheduling result back to the client for display.
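For illustration only, the three steps of this first aspect can be read as a request/confirmation/query handler on the first server. The Python sketch below is a minimal, synchronous rendering under assumed interfaces; the class, method and field names (SecondServerClient, risk_assess_and_apply, user_id, amount, and so on) are hypothetical and are not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class SecondServerClient:
    """Stand-in for the second server; every method here is an assumed interface."""
    agreement_page: str = "<first service protocol page content>"
    results: Dict[str, Any] = field(default_factory=dict)

    def fetch_agreement_page(self) -> str:
        return self.agreement_page

    def risk_assess_and_apply(self, application: Dict[str, Any]) -> str:
        # Risk assessment -> intention information -> application to the third server.
        self.results[application["user_id"]] = {"status": "disbursed",
                                                "amount": application["amount"]}
        return "intention:accepted"

    def query_scheduling_result(self, user_id: str) -> Dict[str, Any]:
        return self.results.get(user_id, {"status": "pending"})


@dataclass
class FirstServer:
    second: SecondServerClient

    def on_resource_scheduling_request(self, request: Dict[str, Any]) -> str:
        # Step 1: return the first service protocol page content to the client.
        return self.second.fetch_agreement_page()

    def on_agreement_confirmed(self, application: Dict[str, Any]) -> str:
        # Step 2: push the pre-entered application to the second server for risk
        # assessment; the second server then applies to the third server.
        return self.second.risk_assess_and_apply(application)

    def on_result_query(self, user_id: str) -> Dict[str, Any]:
        # Step 3: query the scheduling result relayed back from the third server.
        return self.second.query_scheduling_result(user_id)


if __name__ == "__main__":
    first = FirstServer(SecondServerClient())
    print(first.on_resource_scheduling_request({"user_id": "u1"}))
    print(first.on_agreement_confirmed({"user_id": "u1", "amount": 10_000}))
    print(first.on_result_query("u1"))
```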
In an exemplary embodiment of the present disclosure, before the step of responding to the resource scheduling request, further comprising:
the first server responds to the credit request sent by the client, generates first information page content for the user to input user information, and feeds the first information page content back to the client for display to the user;
the first server side responds to an operation instruction sent by the client side and representing submitting the user information, the user information is sent to the third server side through the second server side to initiate credit evaluation, and a credit evaluation result returned by the third server side through the second server side is received; the credit evaluation result comprises a credit report of the user and/or credit scores obtained by calculating the credit report and the user information by adopting a preset credit evaluation model;
The first service end carries out credit approval based on the credit evaluation result, sends the credit approval result to the third service end for final approval through the second service end, and receives a final approval result returned by the third service end through the second service end;
and the first server generates a corresponding credit approval result based on the final approval result and feeds it back to the client for display, where the credit approval result includes the maximum resource scheduling limit granted to the user.
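In effect, every exchange with the third server in the credit-granting steps above passes through the second server as a relay. A hedged sketch of that routing follows; the stub classes, method names and the 650/50,000 figures are invented for illustration and carry no meaning from the patent.

```python
from typing import Any, Dict


class SecondRelay:
    """The second server only relays messages in this flow (assumed interface)."""
    def forward(self, payload: Dict[str, Any]) -> Dict[str, Any]:
        return payload


class ThirdServerStub:
    """Stand-in for the third server's credit evaluation and final review."""
    def evaluate_credit(self, user_info: Dict[str, Any]) -> Dict[str, Any]:
        return {"credit_report": "good", "credit_score": 712}

    def final_review(self, approval: Dict[str, Any]) -> Dict[str, Any]:
        return {"approved_limit": min(approval["proposed_limit"], 50_000)}


def grant_credit(second: SecondRelay, third: ThirdServerStub,
                 user_info: Dict[str, Any]) -> Dict[str, Any]:
    evaluation = third.evaluate_credit(second.forward(user_info))   # via second server
    proposed = 50_000 if evaluation["credit_score"] >= 650 else 0   # first server approval
    final = third.final_review(second.forward({"proposed_limit": proposed}))
    return {"max_resource_scheduling_limit": final["approved_limit"]}


print(grant_credit(SecondRelay(), ThirdServerStub(), {"name": "user"}))
```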
In an exemplary embodiment of the present disclosure, before the step of pushing the resource scheduling application information to the second server for risk assessment, the method further includes:
identifying the user type of the user; if the user is a decoupled user or a new user, initiating a credit evaluation request to the third server through the second server and performing credit approval again based on the new credit evaluation result;
a decoupled user is a user whose most recent credit inquiry was made more than a preset time ago.
In an exemplary embodiment of the present disclosure, the resource scheduling method further includes:
And the first service end judges whether the current time node is a resource return plan generation time node or not based on the system time, if so, generates a corresponding resource return plan based on the current resource release information, and feeds back the resource return plan to the client end for display.
In an exemplary embodiment of the present disclosure, the step of generating the resource return plan specifically includes:
the first server side obtains the total number of additional resources generated by resource scheduling application and a preset resource return time node from the second server side;
the first server calculates the total number of resources to be returned of the user according to the resource issuing information and the total number of the additional resources, and generates the resource return plan based on the total number of the resources to be returned and the resource return time node;
the total number of the additional resources comprises a first additional resource number calculated by the third server side and a second additional resource number calculated by the second server side.
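As a concrete reading of the calculation above, the total to be returned is the released amount plus the two kinds of additional resources, spread over the preset return time nodes. The sketch below is illustrative only: the even per-period split and all names are assumptions, not requirements of the method.

```python
from dataclasses import dataclass
from datetime import date
from typing import List


@dataclass
class ReturnPlanEntry:
    due_date: date
    amount: float


def build_return_plan(released: float,
                      first_additional: float,   # amount calculated by the third server
                      second_additional: float,  # amount calculated by the second server
                      return_dates: List[date]) -> List[ReturnPlanEntry]:
    """Total to return = released resources + additional resources, split evenly
    across the preset return time nodes (even split is an illustrative choice)."""
    total = released + first_additional + second_additional
    per_period = round(total / len(return_dates), 2)
    return [ReturnPlanEntry(d, per_period) for d in return_dates]


plan = build_return_plan(10_000, 600.0, 150.0,
                         [date(2020, 5, 21), date(2020, 6, 21), date(2020, 7, 21)])
for entry in plan:
    print(entry.due_date, entry.amount)
```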
In an exemplary embodiment of the present disclosure, the resource scheduling method further includes:
and the first server responds to the resource return request sent by the client, and sends a first resource scheduling instruction to a third party resource scheduling server associated with the user based on the service identifier of the second server so as to trigger the third party resource scheduling server to schedule the current resource to be returned of the user to the second server.
In an exemplary embodiment of the present disclosure, the resource scheduling method further includes:
and the first server, in response to an early settlement request sent by the client, sends a second resource scheduling instruction to the third party resource scheduling server associated with the user based on the service identifier of the second server, so as to trigger the third party resource scheduling server to schedule all of the user's resources to be returned to the second server.
In an exemplary embodiment of the present disclosure, the resource scheduling method further includes:
the first server, in response to a third resource scheduling instruction sent by the second server and indicating that the first service party is required to advance the second additional resource, sends a fourth resource scheduling instruction indicating confirmation of the advance to the third party resource scheduling server associated with the first server, so that this third party resource scheduling server schedules a corresponding number of second additional resources to the second server; this triggers the second server to send a fifth resource scheduling instruction indicating payment of the resources to the third party resource scheduling server associated with the second server, so that that third party resource scheduling server schedules a corresponding number of resources to the third server at a preset scheduling time node.
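The advance-payment chain above can be hard to follow in prose, so here is a minimal sketch of the instruction sequence. The two stub classes stand in for the third party resource scheduling servers and the second server; their APIs and the timing handling are assumptions, not the disclosed interfaces.

```python
class PaymentGatewayStub:
    """Stand-in for a third party resource scheduling server (assumed API)."""
    def transfer(self, to: str, amount: float) -> None:
        print(f"transfer {amount} -> {to}")


class SecondServerStub:
    def __init__(self, gateway: PaymentGatewayStub) -> None:
        self.gateway = gateway

    def issue_fifth_instruction(self, amount: float) -> None:
        # 5th instruction: the second server's own gateway forwards the resources
        # to the third server at the preset scheduling time node (timing omitted).
        self.gateway.transfer(to="third_server", amount=amount)


def on_third_instruction(amount_to_advance: float,
                         first_gateway: PaymentGatewayStub,
                         second_server: SecondServerStub) -> None:
    # 4th instruction: the first server confirms the advance; its associated
    # gateway schedules the second additional resources to the second server.
    first_gateway.transfer(to="second_server", amount=amount_to_advance)
    second_server.issue_fifth_instruction(amount_to_advance)


on_third_instruction(150.0, PaymentGatewayStub(),
                     SecondServerStub(PaymentGatewayStub()))
```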
In a second aspect, the present invention also provides another multi-server-based resource scheduling method, which includes:
the second server calculates the number of resources allocated to each resource receiver based on the to-be-allocated resources currently returned by the user and a preset resource allocation rule, and generates corresponding resource allocation details; the resource allocation rule includes the service identifiers of the several resource receivers that receive the to-be-allocated resources and the allocation amount or percentage for each receiver under different return scenarios; the second server corresponds to a second service party providing a second business service to the user;
and the second server side sends the resource allocation details to a third-party resource scheduling server side associated with the second server side so as to trigger the third-party resource scheduling server side to schedule the resources to be allocated.
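To make the two data objects named above more tangible, the sketch below shows one possible shape for the resource allocation rule and for the allocation detail list handed to the third party resource scheduling server. Every service identifier, percentage and field name is a placeholder assumed for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

# Possible shape of the preset resource allocation rule: per return scenario,
# the receivers' service identifiers and their share of the shared resources.
ALLOCATION_RULE: Dict[str, Dict[str, float]] = {
    "normal":           {"first_party_svc_id": 0.70, "second_party_svc_id": 0.30},
    "overdue":          {"first_party_svc_id": 0.70, "second_party_svc_id": 0.30},
    "early_settlement": {"first_party_svc_id": 0.70, "second_party_svc_id": 0.30},
}


@dataclass
class AllocationDetail:
    receiver_service_id: str
    amount: float


def make_allocation_details(per_receiver_amounts: Dict[str, float]) -> List[AllocationDetail]:
    # The detail list is what the second server would send to its associated
    # third party resource scheduling server for execution.
    return [AllocationDetail(sid, round(amt, 2))
            for sid, amt in per_receiver_amounts.items() if amt > 0]


print(make_allocation_details({"third_party_svc_id": 1060.0,
                               "first_party_svc_id": 10.5,
                               "second_party_svc_id": 4.5}))
```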
In an exemplary embodiment of the present disclosure, the plurality of resource receivers includes: a first service party providing a first business service to the user; a third service party providing a third business service to the user; and the second service party.
In an exemplary embodiment of the present disclosure, if the return scenario is normal return, the resources to be allocated include a rated allocation resource and a first additional resource that are directionally allocated to the third service party, and a second additional resource that is shared by the first service party and the second service party; or,
if the return scenario is overdue return, the resources to be allocated further include a rated allocation resource and a first additional resource that are directionally allocated to the third service party, a second additional resource shared by the first service party and the second service party, a fourth additional resource directionally allocated to the first service party, and a fifth additional resource shared by the first service party and the second service party; or,
if the return scenario is early settlement, the resources to be allocated further include a rated allocation resource and a first additional resource that are directionally allocated to the third service party, a second additional resource shared by the first service party and the second service party, and a third additional resource directionally allocated to the first service party.
In an exemplary embodiment of the present disclosure, the step of calculating the number of resources allocated by each resource receiver specifically includes:
identifying the current return scene of the user;
if the current return scenario of the user is normal return, the rated allocation resource and the first additional resource are counted as resources allocated to the third service party, and the numbers of resources allocated to the first service party and the second service party are calculated from the second additional resource using a first preset percentage and a second preset percentage, respectively;
if the current return scenario of the user is overdue return, the rated allocation resource and the first additional resource are counted as resources allocated to the third service party, the fourth additional resource is taken as a resource directionally allocated to the first service party, the numbers of resources allocated to the first service party and the second service party are calculated from the second additional resource using the first preset percentage and the second preset percentage, and the numbers of resources allocated to the first service party and the second service party are further calculated from the fifth additional resource using a third preset percentage;
and if the current return scenario of the user is early settlement, the rated allocation resource and the first additional resource are counted as resources allocated to the third service party, the third additional resource is taken as a resource directionally allocated to the first service party, and the numbers of resources allocated to the first service party and the second service party are calculated from the second additional resource using the first preset percentage and the second preset percentage, respectively.
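The three scenario branches above reduce to a small arithmetic routine. The sketch below is one possible reading: it assumes the first and second preset percentages sum to one, and that the third preset percentage and its complement govern the split of the fifth additional resource; the claims only name the percentages, so these and all numeric defaults are placeholders.

```python
from typing import Dict


def allocate(scenario: str, rated: float, first_add: float, second_add: float,
             third_add: float = 0.0, fourth_add: float = 0.0, fifth_add: float = 0.0,
             p1: float = 0.7, p2: float = 0.3, p3: float = 0.5) -> Dict[str, float]:
    """Illustrative per-scenario split; percentage values are placeholders."""
    out = {
        "third_party": rated + first_add,   # rated allocation + first additional, directed
        "first_party": second_add * p1,     # second additional, split by p1 / p2
        "second_party": second_add * p2,
    }
    if scenario == "overdue":
        out["first_party"] += fourth_add + fifth_add * p3   # fourth additional directed
        out["second_party"] += fifth_add * (1 - p3)         # fifth additional shared
    elif scenario == "early_settlement":
        out["first_party"] += third_add                     # third additional directed
    return out


print(allocate("overdue", rated=1000, first_add=60, second_add=15,
               fourth_add=20, fifth_add=10))
```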
In a third aspect, the present invention provides a server, including:
the first response module is used for responding to the resource scheduling request sent by the client, and sending the first business protocol page content to the client so as to be displayed to the user; the first service protocol page content is obtained from a second service end in advance by the first service end; the first service end corresponds to a first service party providing first business service for the user, and the second service end corresponds to a second service party providing second business service for the user;
The second response module is used for responding to the operation instruction sent by the client and used for indicating and confirming the first service protocol, pushing the resource scheduling application information pre-input by the user to the second server for risk assessment to generate intention information indicating to accept the service, and triggering the second server to apply for resource scheduling to a third server based on the intention information; the third service end corresponds to a third service side for providing third business service for the user;
the first data acquisition module is used for sending a resource scheduling result query request to the second server side so as to acquire a resource scheduling result fed back from the third server side, and feeding the resource scheduling result back to the client side for display.
In an exemplary embodiment of the present disclosure, the server further includes:
the third response module is used for responding to the credit request sent by the client, generating first information page content for the user to enter user information, and feeding the content back to the client for display to the user;
the fourth response module is used for responding to the operation instruction sent by the client and representing submitting the user information, and sending the user information to the third server through the second server to initiate credit evaluation; receiving a credit evaluation result returned by the third server through the second server; the credit evaluation result comprises a credit report of the user and/or credit scores obtained by calculating the credit report and the user information by adopting a preset credit evaluation model;
The credit approval module is used for carrying out credit approval based on the credit evaluation result, sending the credit approval result to the third service end for final approval through the second service end, receiving the final approval result returned by the third service end through the second service end, generating a corresponding credit approval result based on the final approval result, and feeding back to the client for display, wherein the credit approval result comprises the maximum resource scheduling limit acquired by the user.
In an exemplary embodiment of the present disclosure, the server further includes:
the second data acquisition module is used for acquiring the credit evaluation record of the user before the second response module triggers the second server to perform risk evaluation;
the user judging module is used for judging, according to the credit evaluation record, whether the user is a decoupled user and, when the user is judged to be a decoupled user, triggering the fourth response module to initiate a credit evaluation request to the third server through the second server and to carry out credit approval again based on the new credit evaluation result; a decoupled user is a user whose most recent credit inquiry was made more than a preset time ago.
In an exemplary embodiment of the present disclosure, the server further includes:
and the return plan module is used for judging, based on the system time, whether the current time node is a resource return plan generation time node and, if so, generating a corresponding resource return plan based on the current resource release information and the user information and feeding it back to the client.
In an exemplary embodiment of the present disclosure, the resource return plan module specifically includes:
the additional resource information acquisition unit is used for acquiring the total number of additional resources generated by resource scheduling application and a preset resource return time node from the second server;
the computing unit is used for computing the total number of resources to be returned of the user according to the current-day resource release information of the user and the total number of the additional resources;
the plan feedback unit is used for generating the resource return plan based on the total number of the resources to be returned and the resource return time node and feeding back the resource return plan to the client;
the total number of the additional resources comprises a first additional resource number calculated by the third server side and a second additional resource number calculated by the second server side.
In an exemplary embodiment of the present disclosure, the server further includes:
and the fifth response module is used for responding to the resource return request sent by the client, and sending a first resource scheduling instruction to a third party resource scheduling server associated with the user based on the service identifier of the second server so as to trigger the third party resource scheduling server to schedule the resources to be returned of the user in the current period to the second server.
In an exemplary embodiment of the present disclosure, the server further includes:
and the sixth response module is used for responding to the early settlement request sent by the client, and sending a second resource scheduling instruction to the third party resource scheduling server associated with the user based on the service identifier of the second server, so as to trigger the third party resource scheduling server to schedule all of the user's resources to be returned to the second server.
In an exemplary embodiment of the present disclosure, the server further includes:
a seventh response module, configured to respond to a third resource scheduling instruction sent by the second server and indicating that the first service party is required to advance the second additional resource, by sending a fourth resource scheduling instruction indicating confirmation of the advance to the third party resource scheduling server associated with the first server, so that this third party resource scheduling server schedules a corresponding number of second additional resources to the second server; this triggers the second server to send a fifth resource scheduling instruction indicating payment of the resources to the third party resource scheduling server associated with the second server, so that that third party resource scheduling server schedules a corresponding number of resources to the third server at a preset scheduling time node;
the third resource scheduling instruction is generated when the second server judges, based on the system time and the resource return plan, that the user's current overdue time reaches a preset trigger condition, and it includes the number of second additional resources that the first service party is requested to advance.
In a fourth aspect, the present disclosure further provides another service end, which includes:
the first calculation module is used for calculating the number of resources allocated to each resource receiver based on the to-be-allocated resources currently returned by the user and a preset resource allocation rule, and generating corresponding resource allocation details; the resource allocation rule includes the service identifiers of the several resource receivers that receive the to-be-allocated resources and the allocation amount or percentage for each receiver under different return scenarios; the second server corresponds to a second service party providing a second business service to the user;
and the resource allocation module is used for sending the resource allocation details to a third party resource scheduling server associated with a second service side so as to trigger the third party resource scheduling server to allocate the resources to be allocated.
In one exemplary embodiment of the present disclosure, the plurality of resource receivers includes a first service party providing a first business service to the user, the second service party, and a third service party that provides a third business service to the user and receives the rated allocation resource and the first additional resource.
In an exemplary embodiment of the present disclosure, when the return scenario is normal return, the resources to be allocated include a rated allocation resource and a first additional resource that are directionally allocated to the third service party, and a second additional resource that is shared by the first service party and the second service party; or,
when the return scenario is overdue return, the resources to be allocated further include a rated allocation resource and a first additional resource that are directionally allocated to the third service party, a second additional resource shared by the first service party and the second service party, a fourth additional resource directionally allocated to the first service party, and a fifth additional resource shared by the first service party and the second service party; or,
when the return scenario is early settlement, the resources to be allocated further include a rated allocation resource and a first additional resource that are directionally allocated to the third service party, a second additional resource shared by the first service party and the second service party, and a third additional resource directionally allocated to the first service party.
In an exemplary embodiment of the present disclosure, the first calculation module specifically includes:
the return scene recognition unit is used for recognizing the current return scene of the user;
a first calculation unit configured to, when the return scenario identification unit identifies that the current return scenario is normal return, take the rated allocation resource and the first additional resource as resources allocated to the third service party, and calculate the numbers of resources allocated to the first service party and the second service party from the second additional resource using a first preset percentage and a second preset percentage, respectively;
a second calculation unit configured to, when the return scenario identification unit identifies that the current return scenario is overdue return, take the rated allocation resource and the first additional resource as resources allocated to the third service party, calculate the numbers of resources allocated to the first service party and the second service party from the second additional resource using the first preset percentage and the second preset percentage, take the fourth additional resource as a resource directionally allocated to the first service party, and calculate the numbers of resources allocated to the first service party and the second service party from the fifth additional resource using a third preset percentage;
and a third calculation unit configured to, when the return scenario identification unit identifies that the current return scenario is early settlement, take the rated allocation resource and the first additional resource as resources allocated to the third service party, calculate the numbers of resources allocated to the first service party and the second service party from the second additional resource using the first preset percentage and the second preset percentage, respectively, and take the third additional resource as a resource directionally allocated to the first service party.
In a fifth aspect, the present description provides an electronic device comprising a processor and a memory: the memory is used to store a program for any one of the above methods; the processor is configured to execute the program stored in the memory to implement the steps of any one of the above methods.
In a sixth aspect, embodiments of the present description provide a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the steps of any of the methods described above.
The invention has the beneficial effects that:
according to the invention, the first service end pushes the user information and the resource scheduling information of the user to the second service end, then the second service end corresponding to the second service end carries out risk assessment on the user to generate the intention information representing the accepting service, and then the second service end applies for resource scheduling based on the third service end of the intention information, so that the resource scheduling method based on multiple service ends is realized, in the resource scheduling process, the first service end and the second service end price and asset performance and other supervision are carried out, and meanwhile, the third service end only needs to realize resource allocation, so that the problem that a great amount of resources with high concentration are not fully utilized mainly by a cash eliminating company is avoided, resources can be reasonably distributed, in the resource allocation process, the third service end does not need to bear any risk, does not need to carry out supervision such as price or asset performance, and the like, and the system resources and management cost of the third service end are reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart illustrating a multi-server based resource scheduling method according to a first exemplary embodiment;
fig. 2 is a flowchart illustrating a multi-server based resource scheduling method according to a second exemplary embodiment;
fig. 3 is a flowchart illustrating a multi-server based resource scheduling method according to a third exemplary embodiment;
fig. 4 is a flowchart illustrating a multi-server based resource scheduling method according to a fourth exemplary embodiment;
fig. 5 is a flowchart illustrating a multi-server based resource scheduling method according to a fifth exemplary embodiment;
fig. 6 is a flowchart illustrating a multi-server based resource scheduling method according to a sixth exemplary embodiment;
Fig. 7 is a flowchart illustrating a multi-server based resource scheduling method according to a seventh exemplary embodiment;
FIG. 8 is a block diagram of a server shown in accordance with another exemplary embodiment;
FIG. 9 is a block diagram of a server shown in accordance with yet another exemplary embodiment;
fig. 10 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the invention may be understood more clearly and implemented in accordance with the content of the specification, and in order to make the above and other objects, features and advantages of the invention more apparent, specific embodiments of the invention are described below.
However, the exemplary embodiments described below can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another element. Accordingly, a first component discussed below could be termed a second component without departing from the teachings of the concepts of the present disclosure. As used herein, the term "and/or" includes any one of the associated listed items and all combinations of one or more.
Those skilled in the art will appreciate that the drawings are schematic representations of example embodiments and that the modules or flows in the drawings are not necessarily required to practice the present disclosure, and therefore, should not be taken to limit the scope of the present disclosure.
The invention provides a resource scheduling method based on multiple service parties, which is intended to solve the prior-art problem that highly concentrated resources are wasted because of an unreasonable resource allocation/scheduling mode. The general idea is as follows: a first server responds to a resource scheduling request sent by a client and sends first service protocol page content to the client for display to the user, the first service protocol page content being acquired from a second server by the first server, where the first server corresponds to a first service party providing a first business service to the user and the second server corresponds to a second service party providing a second business service to the user; the first server, in response to an operation instruction sent by the client and indicating confirmation of the first service protocol, pushes the resource scheduling application information to the second server for risk assessment so as to generate intention information indicating acceptance of the service, and triggers the second server to apply to the third server for resource scheduling based on the intention information, where the third server corresponds to a third service party providing a third business service to the user; the first server then sends a resource scheduling result query request to the second server to obtain the resource scheduling result fed back from the third server, and feeds the result back to the client for display.
It is first to be noted that in the various embodiments of the present invention, the terms involved are:
the term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
The technical scheme of the invention is described and illustrated in detail below through a few specific embodiments.
Referring to fig. 1, a multi-server-based resource scheduling method in this embodiment includes:
s101, a first service end responds to a resource scheduling request sent by a client end, and sends first service protocol page content to the client end so as to display a first service protocol page to a user.
In this embodiment, a user logs in to the corresponding webpage/APP through a client provided by the first service party corresponding to the first server and then accesses the corresponding resource scheduling application page. Specifically, the client, in response to the user's access operation, obtains the resource scheduling application page content and parses and renders it, so that the user can enter resource scheduling application information on the page. Further, when the user has entered the resource scheduling application information on the page and triggers an operation instruction for applying for resource scheduling to the first service party corresponding to the first server, the client responds to the operation instruction and sends a resource scheduling request to the first server. The resource scheduling request of course includes the user information of the user and the entered resource scheduling application information, such as the installment return mode (e.g. equal principal or equal installment), the scheduling term, the first-period return amount, coupon/preferential information, and the resource receiving channel (e.g. a certain bank card, an Alipay account or a WeChat account).
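Purely as an illustration of what such a request might carry, the sketch below shows one possible payload; every field name and value is hypothetical and is not taken from the patent.

```python
resource_scheduling_request = {
    "user_info": {"user_id": "u123", "name": "example user"},
    "application": {
        "repayment_mode": "equal_installment",   # or "equal_principal"
        "term_months": 12,
        "first_period_amount": 880.0,
        "coupon_id": "c001",                     # preferential information
        "receiving_account": {"type": "bank_card", "number": "placeholder"},
    },
}

print(resource_scheduling_request["application"]["repayment_mode"])
```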
In this embodiment, the first service protocol page content is obtained from the second server by the first server, where the first server corresponds to a first service party providing a first business service to the user and the second server corresponds to a second service party providing a second business service to the user. Specifically, the first service party is a loan-assistance institution providing a loan-assistance service (i.e. the first business service) to the user, and the first server is the server of that loan-assistance institution; accordingly, the user logs in to the official website or APP corresponding to the first service party, accesses the resource scheduling application page, fills in the resource scheduling application information, clicks the submit-application icon/button on the page, and the resource scheduling request is generated and sent to the first server. The second service party is an insurance institution providing a personal credit guarantee insurance service (the second business service) to the user, and the second server is the server of that insurance institution; the first service protocol page content is the insurance agreement page content, that is, the user, as the policyholder, signs a personal credit guarantee insurance agreement with the insurance company, and the third service party, which provides the resource scheduling service (i.e. the third business service) to the user, acts as the insured. Correspondingly, when the first server receives a resource scheduling request sent by the client, it responds to the request by acquiring the first service protocol page content from the second server and feeding it back to the client, so that the user can carefully read the specific content of the first service protocol.
S102, the first server, in response to an operation instruction sent by the client and indicating confirmation of the first service protocol, pushes the resource scheduling application information pre-entered by the user to the second server for risk assessment so that intention information indicating acceptance of the service is generated, and triggers the second server to apply to the third server for resource scheduling based on the intention information.
In this embodiment, the client obtains the first service protocol page content from the first server, parses and renders it, and displays it to the user. When the user has read the first service protocol page and agrees with/confirms the content of the first service protocol, the user can directly click the confirm/agree icon on the first service protocol page; the client correspondingly generates an operation instruction indicating that the user confirms the first service protocol and then sends the operation instruction to the first server.
In this embodiment, the third service end corresponds to a third service party that provides a third business service to the user. Specifically, the third service party is a resource provider that provides resources to the user, such as a bank, etc., that can provide high-concentration resources to the user.
In this embodiment, before executing the step S102, the method further includes:
Identifying the user type of the user: if the user is a decoupled user or a new user, initiating a credit evaluation request to the third server through the second server and performing credit approval based on the new credit evaluation result; if the user is neither a decoupled user nor a new user, directly performing step S102 above.
In this embodiment, a decoupled user is a credit-granted user whose most recent credit inquiry was made more than a preset time ago. Specifically, the preset time is three months and can be adjusted according to actual needs.
In a specific embodiment, each time the third server performs a credit evaluation of the user (e.g. performs a credit inquiry), it generates a corresponding credit evaluation record (including the credit inquiry reason, the credit evaluation time and the credit evaluation result, which includes the credit report) and feeds the record back to the first server through the second server. The first server can therefore determine, from the user's credit evaluation record and the system time, whether the user is a decoupled user, for example by judging whether the user's last credit inquiry was more than three months ago according to the system time of the first server; if so, the user is determined to be a decoupled user. A new user is a user who has not yet been granted credit, so if a new user applies for resource scheduling, the third server naturally needs to perform a credit evaluation of that user, and credit approval is carried out according to the credit evaluation result.
In this embodiment, when the first server identifies that the user is a new user or a decoupled user, it generates a credit evaluation request and sends it to the third server through the second server, that is, it initiates a credit evaluation request to the third server via the second server. Specifically, the credit evaluation request includes the user information of the user, the credit evaluation reason, the default credit evaluation method (for example, only a credit inquiry, or a credit score calculated by combining the credit inquiry with a preset credit evaluation model) and credit evaluation auxiliary data; these are sent to the second server and forwarded by the second server to the third server to initiate the credit evaluation. After the third server has carried out the credit evaluation, it feeds the credit evaluation result back to the first server through the second server; the first server performs credit approval according to the credit evaluation result and sends the approval result to the third server through the second server for final review, and the final review result is returned in the same way, thereby completing the credit approval initiated by the first server. Specifically, the credit evaluation performed by the third server includes a credit inquiry on the user and a credit score calculated from the user's credit report and the user information using the preset credit evaluation model; the credit report and/or the credit score are then fed back to the second server, and the second server feeds the credit evaluation result (including the credit report and/or the credit score) back to the first server.
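The "decoupled or new user" check described above boils down to comparing the last credit inquiry time against a window. A minimal sketch, assuming a 90-day window for the "three months" mentioned in this embodiment and treating a missing record as a new user:

```python
from datetime import datetime, timedelta
from typing import Optional

DECOUPLE_WINDOW = timedelta(days=90)   # "three months" in this embodiment


def needs_new_credit_evaluation(last_inquiry: Optional[datetime],
                                now: Optional[datetime] = None) -> bool:
    """New user (no inquiry record) or decoupled user (last inquiry too old)."""
    now = now or datetime.now()
    if last_inquiry is None:           # new user, never evaluated
        return True
    return now - last_inquiry > DECOUPLE_WINDOW


print(needs_new_credit_evaluation(datetime(2020, 1, 1), datetime(2020, 4, 20)))  # True
```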
In a specific embodiment, the second server performs internal approval (i.e. risk assessment) according to the resource scheduling information it receives; if the approval passes, the second service party generates a corresponding underwriting intention, that is, the intention information indicating acceptance of the service, and the second server then initiates a resource scheduling application to the third server based on the intention information, i.e. the first server triggers the second server to apply to the third server for resource scheduling. Correspondingly, the third service party corresponding to the third server performs internal approval according to the resource scheduling information, and the third server feeds its approval result and the resource scheduling result back to the second server, the resource scheduling result including the approved disbursement amount, the release date of the resources, and so on.
S103, the first server sends a resource scheduling result query request to the second server to acquire a resource scheduling result fed back by the third server, and feeds the resource scheduling result back to the client for display.
Further, the second server feeds a disbursement reconciliation file back to the first server and the third server on day D+1 (i.e. the day after the disbursement date), so that the first service party and the third service party can reconcile the accounts later.
In this embodiment, the first server pushes the resource scheduling application information to the second server; the second service party corresponding to the second server performs a risk assessment on the user to generate intention information indicating acceptance of the service; and the second server, based on that intention, triggers the third server to perform the resource scheduling. In this way the first service party and the second service party recommend the corresponding users and bear the risk, which avoids the problem that a large amount of highly concentrated resources is not fully utilized when consumer finance companies are relied on as the main profit-sharing institutions; at the same time, the third service party does not need to bear the corresponding risk or handle regulation such as pricing or asset performance, which reduces the system resources required of the third server.
Further, since a user generally needs to apply for credit before applying for resource scheduling, referring to fig. 2, before step S101 is executed the multi-party service method of this embodiment further includes:
s201, a first service end responds to a credit request sent by a client end, generates first information page content for a user to input user information, and feeds back the first information page content to the client end to display the first information page to the user.
In this embodiment, the user accesses a first business display page through the client (specifically, when the client responds to the user's access operation, it obtains the content of the first business display page from the first server and displays it) so as to learn about the first business service provided by the first service party; when the user triggers an operation of applying for credit on the first business display page, the client responds to the operation, generates the credit request, and sends it to the first server to obtain the first information page content.
S203, the first server, in response to an operation instruction sent by the client and indicating submission of the entered user information, sends the user information to the third server through the second server to initiate a credit evaluation, and receives the credit evaluation result returned by the third server through the second server.
In this embodiment, the client obtains the corresponding user information entered on the first information page, such as the name, certificate information (e.g. identity card number or passport number), bank information (e.g. bank name and bank card number) and mobile terminal identification number (e.g. mobile phone number). After the user has entered the user information, the client generates an operation instruction indicating submission of the user information based on the user's operation (e.g. clicking the pre-configured submit icon on the first information page) and sends the operation instruction and the user information to the first server. The operation instruction also carries the reason for initiating the credit evaluation, the credit evaluation method to be used and other credit evaluation auxiliary information.
In this embodiment, after the first server receives the user information, it sends the user information, the credit evaluation reason, the credit evaluation method and the credit evaluation auxiliary data to the second server, and the second server forwards them to the third server; that is, the credit evaluation is initiated at the third server through the second server. After the third server has performed the credit evaluation, it feeds the credit evaluation result back to the second server, and the second server feeds the result back to the first server. Specifically, the third server, operated by the third service party, performs the credit evaluation of the user; the evaluation includes a credit inquiry on the user and/or a credit score calculated from the user's credit report and the user information (the credit evaluation auxiliary data held at the third server) using a preset credit evaluation model (provided by the first service party and preconfigured at the third server). The credit report and/or the credit score are then fed back to the second server, and the second server feeds them, i.e. the credit evaluation result, back to the first server. That is, the credit evaluation result includes the user's credit report and/or credit score.
S205, the first service end carries out credit approval based on the credit evaluation result, and sends the credit approval result to the third service end through the second service end for the third service end to carry out final examination, and receives the final examination result returned by the third service end through the second service end.
In this embodiment, the first server, operated by the first service party, carries out credit approval based on the credit evaluation result. If the user's credit evaluation result meets a preset condition, for example the credit report indicates that the user's credit standing is good and/or the credit score is greater than a preset credit score threshold, the user passes the credit approval and the first service party grants the user a certain credit limit. Accordingly, the first server sends the credit approval result to the second server for review, the second server sends it on to the third server for final review (i.e. the user's credit limit is reviewed at the third server), and the third server feeds the final review result back to the first server through the second server; the review result of the second service party is also fed back to the first server.
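The approval condition above ("good credit report and/or score above a threshold") is a policy choice; the tiny sketch below fixes one possible combination and an arbitrary threshold purely for illustration.

```python
def passes_credit_approval(credit_report_ok: bool, credit_score: float,
                           threshold: float = 600.0) -> bool:
    """The embodiment allows either or both signals; requiring both here and the
    600-point threshold are illustrative assumptions, not values from the patent."""
    return credit_report_ok and credit_score > threshold


print(passes_credit_approval(True, 712.0))   # True under these assumptions
```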
S207, the first server generates a corresponding credit approval result based on the final review result and feeds it back to the client for display.
In this embodiment, the credit approval result includes the maximum resource scheduling limit granted to the user. In one embodiment, the maximum resource scheduling limit is the credit limit granted to the user, i.e. the maximum amount that the user can apply for from the third service party.
Further, after the user's resource scheduling application succeeds, in order to prevent the user from forgetting to repay on time and similar situations, a corresponding resource return plan is generated according to the user's resource scheduling information to notify the user. Specifically, referring to fig. 3, the multi-party service method in this embodiment further includes:
S301, the first service end judges, based on the system time, whether the current time node is a preset resource return plan generation time node; if yes, step S303 is executed, otherwise the judgment continues.
In this embodiment, the system time is the system time of the first service end. Specifically, in order to save system energy consumption, twelve o'clock midnight on the day the resources are released is uniformly set as the resource return plan generation time node; that is, when the first service end determines from the system time that it is currently twelve o'clock midnight, it generates a corresponding resource return plan based on the current day's resource release information. Of course, if several users receive resources on the same day, that is, the third service end releases resources to multiple users on that day, the first service end generates, at twelve o'clock midnight of that day, the corresponding resource return plans in batch according to the release information of each user fed back by the third service end through the second service end, and feeds the resource return plans back to the clients.
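The time-node check of S301 can be sketched as follows; the helper names, the polling structure and the record layout are assumptions made only for illustration.

```python
# Sketch of the S301 check: midnight (00:00) system time on the day of resource
# release is treated as the plan-generation node, and plans are then generated
# in batch for every user that received resources that day (S303).
from datetime import datetime

def is_plan_generation_node(now: datetime) -> bool:
    # Assumed preset node: twelve o'clock midnight, first service end's system time.
    return now.hour == 0 and now.minute == 0

def release_records_to_plan(now: datetime, release_records: list) -> list:
    """Select the day's release records for which return plans should be generated."""
    if not is_plan_generation_node(now):
        return []                       # S301: not the node yet, keep checking
    today = now.date()
    return [r for r in release_records if r.get("release_date") == today]
```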
S303, generating a corresponding resource return plan based on the current-day resource release information, and feeding back to the client for display to the user.
In this embodiment, the step of generating the resource return plan in step S303 includes: the first service end obtains, from the second service end, the total number of additional resources generated by the resource scheduling application and a preset resource return time node, calculates the total number of resources to be returned by the user according to the resource release information and the total number of additional resources, and finally generates the corresponding resource return plan based on the calculated total number of resources to be returned and the resource return time node. The total number of additional resources includes a first additional resource amount calculated by the third service end, such as interest, and a second additional resource amount calculated by the second service end, such as a premium; the resource return time node is obtained by the second service end from the third service end. Further, if there is a fourth service party providing a fourth business service (i.e. a guarantee service) to the user, the total number of additional resources correspondingly also includes a guarantee fee sent by the fourth service party to the second service party.
In a specific embodiment, if a staged return mode is selected in the resource scheduling application information filled in when the user applies for resource scheduling, the resource return plan correspondingly includes the amount of resources to be returned in each stage and the return time node of each stage; if the user selects a non-staged return mode when applying for resource scheduling, the resource return plan correspondingly includes the total amount of resources to be returned by the user and the single return time node. Further, the first service end also sends a corresponding return reminder notification to the client before each return time node, prompting the user to return the current amount of resources to be returned or the full amount of resources to be returned.
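As a hedged illustration of the plan-generation step, the sketch below computes the total to be returned from the release information and the total additional resources and splits it over the return time nodes when the user chose the staged mode; all names and the equal-installment split are assumptions, since the embodiment does not fix a concrete formula.

```python
# Sketch of S303: total to return = released resources + additional resources
# (e.g. interest from the third service party, premium from the second service
# party, and a guarantee fee when a fourth service party is involved).
from datetime import date

def generate_return_plan(released_amount: float, additional_total: float,
                         return_nodes: list, staged: bool) -> list:
    total_due = released_amount + additional_total
    if not staged:
        # Non-staged mode: a single entry with the whole amount at the final node.
        return [{"due_date": return_nodes[-1], "amount_due": round(total_due, 2)}]
    per_stage = round(total_due / len(return_nodes), 2)
    plan = [{"due_date": node, "amount_due": per_stage} for node in return_nodes]
    # Keep the stages summing exactly to the total by adjusting the last one.
    plan[-1]["amount_due"] = round(total_due - per_stage * (len(return_nodes) - 1), 2)
    return plan

# Example: 10 000 released, 380 of additional resources, three monthly nodes, staged.
plan = generate_return_plan(10_000, 380,
                            [date(2020, 5, 20), date(2020, 6, 20), date(2020, 7, 20)],
                            staged=True)
```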
Further, after the user receives the above resource return plan, the return operation is usually performed at the return time node or in advance of it. Accordingly, referring to fig. 4, the multi-service-party method of this embodiment further includes:
S401, the first service end, in response to a resource return request sent by the client, sends a first resource scheduling instruction to the third party resource scheduling service end associated with the user based on the service identifier of the second service party, so as to trigger the third party resource scheduling service end associated with the user to schedule the user's resources to be returned in the current period to the second service end.
In this embodiment, the user may trigger a current-period return on a resource return page (the resource return page is obtained by the client, in response to a corresponding access operation of the user, by requesting the resource return page content from the first service end and then parsing and rendering it), for example by clicking a repayment icon/button on the resource return page. This triggers the client to obtain the return information page content from the first service end, so that the user can select a previously added third party resource scheduling mechanism or add a new one (for example, a deduction bank and the corresponding bank card used for repayment). After the user selects or adds the third party resource scheduling mechanism, the client generates a corresponding resource return request and sends it to the first service end. The resource return request includes the service identifier of the second service party, obtained in advance, and the third party resource scheduling mechanism information.
In a specific embodiment, the service identifier refers to the merchant number of the second service party. Accordingly, after the third party resource scheduling service end corresponding to the third party resource scheduling mechanism (i.e. the deduction bank) selected/added by the user receives the first resource scheduling instruction (i.e. the deduction request), it deducts from the account (i.e. the bank card) the user previously opened at the third party resource scheduling mechanism and schedules the corresponding amount of resources to an account pre-designated by the second service party.
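A minimal sketch of how the first service end could assemble the first resource scheduling instruction (the deduction request) from the resource return request follows; every field name and the placeholder payee account are assumptions, not interfaces defined by this embodiment.

```python
# Sketch of building the deduction request of S401 from the resource return request.
def build_deduction_instruction(return_request: dict, amount_due: float) -> dict:
    return {
        # Merchant number, i.e. the service identifier of the second service party.
        "merchant_no": return_request["second_party_service_id"],
        # The account (bank card) the user opened at the chosen deduction bank.
        "payer_account": return_request["third_party_account"],
        # Account pre-designated by the second service party (placeholder value).
        "payee_account": "SECOND_PARTY_DESIGNATED_ACCOUNT",
        # Current period's resources to be returned.
        "amount": amount_due,
    }
```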
In this embodiment, the resources to be returned include the quota-allocated resources (such as principal) and the first additional resources (interest), which are directionally allocated to the third service party, and the second additional resources (premium) jointly allocated to the first service party and the second service party. Of course, if the user has selected the guarantee service, the resources to be returned correspondingly further include the guarantee fee allocated to the guarantee institution.
Further, if the user has not returned the resources before the return time node, the first service end can determine from the system time that the user is overdue and generate a corresponding payment reminder notification. Correspondingly, the resources to be returned by the user at this point include, in addition to the quota-allocated resources, the first additional resources and the second additional resources, a fourth additional resource directionally allocated to the first service party, namely an overdue penalty, and a fifth additional resource jointly allocated to the first service party and the second service party, namely a fine generated by the overdue.
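The composition of the amount to be returned in the normal and overdue cases above can be summarised by the following sketch; the parameter names and the simple additive model are assumptions.

```python
# Sketch of the resources to be returned: quota-allocated resources plus the
# first additional resources (interest) and second additional resources (premium),
# plus an optional guarantee fee, and, once overdue, the fourth additional
# resource (overdue penalty) and fifth additional resource (overdue fine).
def amount_to_return(quota: float, interest: float, premium: float,
                     guarantee_fee: float = 0.0, overdue: bool = False,
                     overdue_penalty: float = 0.0, overdue_fine: float = 0.0) -> float:
    total = quota + interest + premium + guarantee_fee
    if overdue:
        total += overdue_penalty + overdue_fine
    return total
```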
Further, even if the user selected the staged return mode, the user may still apply for early settlement. Accordingly, referring to fig. 5, the multi-service-party method of this embodiment further includes:
S501, the first service end, in response to an early settlement request sent by the client, sends a second resource scheduling instruction to the third party resource scheduling service end associated with the user based on the service identifier of the second service party, so as to trigger the third party resource scheduling service end to schedule all of the user's resources to be returned to the second service end.
In this embodiment, the user may select early settlement on the resource return page, for example by clicking an early settlement icon/button, which triggers the client to obtain the corresponding return information page content from the first service end; the user can then select a previously added third party resource scheduling mechanism or add a new one on the return information page. When the user selects or adds the third party resource scheduling mechanism, the client generates a corresponding early settlement request and sends it to the first service end. The early settlement request includes the service identifier of the second service party, obtained in advance, and the third party resource scheduling mechanism information, such as the information of the third party resource scheduling service end corresponding to that mechanism.
In one embodiment, after the third party resource scheduling service end corresponding to the third party resource scheduling mechanism (i.e. the deduction bank) selected/added by the user receives the second resource scheduling instruction (i.e. the deduction request), it transfers the corresponding amount of resources from the account (i.e. the bank card) the user previously opened at the third party resource scheduling mechanism to the account pre-designated by the second service party.
In this embodiment, the resources to be returned include the quota-allocated resources and the first additional resources directionally allocated to the third service party, the second additional resources jointly allocated to the first service party and the second service party, and a third additional resource directionally allocated to the first service party, i.e. an early settlement penalty.
Further, the user sometimes returns resources overdue, in which case the first service party is required to pad (advance) the second additional resources for the user. Accordingly, referring to fig. 6, the multi-service-party method of this embodiment further includes:
S603, the first service end, in response to a third resource scheduling instruction sent by the second service end indicating that the first service party is requested to pad the second additional resources, sends a fourth resource scheduling instruction indicating confirmation of the pad payment to the third party resource scheduling service end associated with the first service party, so that the third party resource scheduling service end associated with the first service party schedules a corresponding amount of second additional resources to the second service end; this triggers the second service end to send a fifth resource scheduling instruction, indicating payment of resources, to the third party resource scheduling service end associated with the second service party, so that the third party resource scheduling service end associated with the second service party schedules the corresponding amount of resources to the third service end at a preset scheduling time node.
In this embodiment, the second service end first determines, according to the system time and the resource return plan, whether the user's current overdue time reaches a preset trigger condition, that is, whether the time for which the user has failed to return the resources on schedule reaches a preset overdue time threshold; if so, the second service end sends the third resource scheduling instruction to the first service end, indicating that the first service party is requested to pad the second additional resources. Specifically, the preset overdue time threshold is 38 days, that is, when the user's overdue time reaches 38 days, the first service party is requested, through the second service end, to pad the premium for the user. Of course, the overdue time threshold may be adjusted according to the actual needs of the second service party.
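The trigger check described above can be sketched as follows; the 38-day threshold comes from this embodiment, while the function and parameter names are assumptions.

```python
# Sketch of the second service end's pad-trigger check: when the unreturned
# resources are overdue by at least the preset threshold, request the first
# service party to pad the second additional resources (the third resource
# scheduling instruction).
from datetime import date

OVERDUE_THRESHOLD_DAYS = 38   # preset overdue time threshold of this embodiment

def should_request_pad(today: date, due_date: date, returned: bool) -> bool:
    if returned:
        return False
    return (today - due_date).days >= OVERDUE_THRESHOLD_DAYS

# Example: due on 2020-03-01 and still unreturned on 2020-04-08 (38 days overdue)
# triggers the pad request.
trigger = should_request_pad(date(2020, 4, 8), date(2020, 3, 1), returned=False)
```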
In this embodiment, after the first service end receives the third resource scheduling instruction, it sends a fourth resource scheduling instruction, indicating that the second additional resources will be padded, to the third party resource scheduling service end associated with the first service party, so that the bank pre-designated by the first service party deducts the corresponding amount of resources from the corresponding bank account to the second service end on the 39th day of overdue. Accordingly, once the second additional resources are received (the second additional resources here include the second additional resources accrued before the return time node and 15% of the second additional resources accrued between the return time node and the day of the claim; 85% of the second additional resources accrued before the return time node are scheduled to the first service end corresponding to the first service party, of which 30% is settled within two weeks, 55% does not participate in claim settlement, and the first three months are settled monthly), the second service end sends a fifth resource scheduling instruction, indicating that the corresponding amount of resources is to be paid to the third service end, to the third party resource scheduling service end associated with the second service party (i.e. the bank service end corresponding to the second service party's bank account), and that third party resource scheduling service end schedules the corresponding amount of resources to the third service end at the preset scheduling time node. In one embodiment, the preset scheduling time node is the second day after the first service end pads the second additional resources.
Based on the same inventive concept as the multi-service-party-based resource scheduling method in the foregoing embodiment, the present invention also provides another multi-service-party-based resource scheduling method, which is used to solve the problem in the prior art that highly concentrated resources are wasted because of an unreasonable resource allocation/scheduling manner. To solve the above problem, the general idea of the present invention is as follows: the second service end calculates the amount of resources allocated to each resource receiver based on the resources to be allocated currently returned by the user and a preset resource allocation rule, and generates corresponding resource allocation details; the resource allocation rule includes the service identifiers of the multiple resource receivers that receive the resources to be allocated and the allocation amount or percentage of each resource receiver under different return scenarios; the second service end corresponds to the second service party providing the second business service to the user; and the second service end sends the resource allocation details to the third party resource scheduling service end associated with the second service party, so as to trigger the third party resource scheduling service end to schedule the resources to be allocated. The technical solution of the present invention is described and illustrated in detail below through specific embodiments.
In this embodiment, the multiple service parties involved in the resource scheduling method include the first service party, the second service party and the third service party of the above embodiment, and the functions of the corresponding service ends are the same. The difference is that the above embodiment describes, from the perspective of the first service end, the process of scheduling resources from the third service party to the user and the process of the user returning the resources, whereas this embodiment describes, from the perspective of the second service end, the resource scheduling process among the service parties after the user has returned the resources.
Referring to fig. 7, another multi-service-party-based resource scheduling method of the present invention includes:
S701, the second service end calculates the amount of resources allocated to each resource receiver based on the resources to be allocated currently returned by the user and a preset resource allocation rule, and generates corresponding resource allocation details.
In this embodiment, the resource allocation rule includes service identifiers of multiple resource receivers that receive the resource to be allocated, and the number/percentage of resource allocations of each resource receiver in different return scenarios. Specifically, the plurality of resource recipients includes a first service party providing a first service to the user, a second service party providing a second service to the user, and a third service party providing a third service to the user.
In this embodiment, if the return scenario is a normal return, that is, the user returns at or before the return time node in the above resource return plan, the resources to be allocated include the quota-allocated resources and the first additional resources directionally allocated to the third service party, and the second additional resources jointly allocated to the first service party and the second service party. Specifically, the first preset percentage is 30% and the second preset percentage is 15%, i.e. 30% of the second additional resources are allocated to the first service party, 15% are allocated to the second service party, and 55% are reserved for pad (claim) payments; when there is still a remainder after the pad payments, the first service party and the second service party each receive 50% of the remaining portion.
In this embodiment, if the return scenario is an overdue return, that is, the user returns after the return time node in the resource return plan, the resources to be allocated include, in addition to the above quota-allocated resources, first additional resources and second additional resources, a fourth additional resource directionally allocated to the first service party and a fifth additional resource jointly allocated to the first service party and the second service party. Specifically, the third preset percentage is 50%, i.e. the first service party and the second service party each receive 50% of the fifth additional resource.
In this embodiment, if the return scenario is an early settlement, that is, the user settles everything before the return time node in the resource return plan, the resources to be allocated include, in addition to the above quota-allocated resources, first additional resources and second additional resources, a third additional resource directionally allocated to the first service party.
In this embodiment, the step of calculating the amount of resources allocated to each resource receiver in step S701 specifically includes the following (an illustrative calculation sketch is given after these steps):
identifying a current return scenario of the user;
if the current return scenario of the user is identified as a normal return, taking the quota-allocated resources and the first additional resources as the resources allocated to the third service party, and calculating the amounts of resources allocated to the first service party and the second service party respectively based on the second additional resources, the first preset percentage and the second preset percentage;
if the current return scenario of the user is identified as an overdue return, taking the quota-allocated resources and the first additional resources as the resources allocated to the third service party, taking the fourth additional resource as the resource directionally allocated to the first service party, calculating the amounts of resources allocated to the first service party and the second service party respectively based on the second additional resources, the first preset percentage and the second preset percentage, and calculating the amounts of resources allocated to the first service party and the second service party respectively based on the fifth additional resource and the third preset percentage;
if the current return scenario of the user is identified as an early settlement, taking the quota-allocated resources and the first additional resources as the resources allocated to the third service party, taking the third additional resource as the resource directionally allocated to the first service party, and calculating the amounts of resources allocated to the first service party and the second service party respectively based on the second additional resources, the first preset percentage and the second preset percentage.
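The scenario-dependent calculation above is sketched below in Python; the 30%/15%/55% split of the second additional resources and the 50%/50% split of the fifth additional resource follow the percentages given in this embodiment, while the function signature, the field names and the flat dictionary of allocation details are assumptions.

```python
# Sketch of S701: compute the allocation details per return scenario.
def allocate(scenario: str, quota: float, interest: float, premium: float,
             early_penalty: float = 0.0, overdue_penalty: float = 0.0,
             overdue_fine: float = 0.0) -> dict:
    details = {
        "third_party": quota + interest,   # quota resources + first additional resources
        "first_party": premium * 0.30,     # first preset percentage of the second additional resources
        "second_party": premium * 0.15,    # second preset percentage of the second additional resources
        "pad_reserve": premium * 0.55,     # remainder reserved for pad (claim) payments
    }
    if scenario == "overdue":
        details["first_party"] += overdue_penalty        # fourth additional resource, directed to the first party
        details["first_party"] += overdue_fine * 0.50    # third preset percentage of the fifth additional resource
        details["second_party"] += overdue_fine * 0.50
    elif scenario == "early_settlement":
        details["first_party"] += early_penalty          # third additional resource, directed to the first party
    return details

# Example: a normal return of 10 000 principal, 300 interest and 80 premium.
details = allocate("normal", quota=10_000, interest=300, premium=80)
```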
S702, the second server sends the resource allocation details to a third party resource scheduling server associated with the second server so as to trigger the third party resource scheduling server to allocate the resources to be allocated.
In a specific embodiment, the third party resource scheduling service end associated with the second service party is the bank service end of the bank where the second service party has opened a bank account. Correspondingly, when the third party resource scheduling service end receives the resource allocation details sent by the second service end, it allocates the resources to be allocated that have been received in the second service party's bank account according to the resource allocation details, that is, it transfers the corresponding amounts of resources to the first service party and the third service party respectively.
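From the bank (third party resource scheduling) service end's perspective, S702 amounts to executing the allocation details as transfers out of the second service party's account; the sketch below uses a placeholder transfer callback because no real banking interface is specified by the embodiment.

```python
# Sketch of S702 on the third party resource scheduling (bank) service end:
# transfer the amounts in the allocation details out of the second service
# party's account; the second party's share and the pad reserve stay put.
from typing import Callable

def execute_allocation(details: dict, transfer: Callable[[str, float], None]) -> None:
    for recipient, amount in details.items():
        if recipient in ("second_party", "pad_reserve") or amount <= 0:
            continue                      # these portions remain with the second service party
        transfer(recipient, amount)       # e.g. transfer("third_party", 10300.0)

# Example usage with a placeholder transfer function that just logs the transfer.
execute_allocation({"third_party": 10300.0, "first_party": 24.0,
                    "second_party": 12.0, "pad_reserve": 44.0},
                   lambda to, amount: print(f"transfer {amount} to {to}"))
```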
Of course, if the user also uses the fourth business service, i.e. the guarantee service, the resources to be allocated correspondingly also include the guarantee fee allocated to the fourth service party.
Based on the same inventive concept as the multi-service-party-based resource scheduling method in the foregoing embodiments, the present invention further provides a service end on which a computer program is stored; when the program is executed by a processor, the steps of any one of the foregoing multi-service-party methods are implemented.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the methods of the present invention. For details not disclosed in the device embodiments of the present embodiment, please refer to the method embodiments of the present disclosure.
As shown in fig. 8, the present embodiment provides a service end, where the service end corresponds to a first service party that provides a first service to a user, and specifically includes:
a first response module 81, configured to send, to the client, the first service protocol page content to be displayed to the user in response to a resource scheduling request sent by the client; the first service protocol page content is obtained in advance from the second service end by the first response module 81; the second service end corresponds to the second service party providing the second business service to the user;
the second response module 82 is configured to, in response to an operation instruction sent by the client and indicating that the first service protocol is confirmed, push resource scheduling application information pre-recorded by the user to the second server for risk assessment, generate intention information indicating that the service is accepted, and trigger the second server to apply for resource scheduling to the third server based on the intention information; the third service end corresponds to a third service side for providing a third service for the user;
The first data obtaining module 83 is configured to send a resource scheduling result query request to the second server, so as to obtain a resource scheduling result fed back from the third server, and feed back the resource scheduling result to the client for display.
Further, as is well known, before the user applies for resource scheduling, the user needs to apply for credit approval, and correspondingly, the server side in this embodiment further includes:
a third response module 84, configured to generate, in response to the trust request sent by the client, a first information page content for the user to enter user information, and feed back to the client to display to the user;
a fourth response module 85, configured to send, by means of the second server, the user information to the third server to initiate credit assessment in response to an operation instruction sent by the client and indicating to submit the user information; receiving a credit evaluation result returned by the third server through the second server; the credit evaluation result comprises credit score obtained by calculating a credit report of the user and the user information by adopting a preset credit evaluation model;
the credit approval module 86 is configured to perform credit approval based on the credit evaluation result, send the credit approval result to the third server through the second server for final approval, receive the final approval result returned by the third server through the second server, generate a corresponding credit approval result based on the final approval result, and feed the credit approval result back to the client for display, where the credit approval result includes the maximum resource scheduling limit acquired by the user.
Furthermore, if the user is a decoupling user, credit evaluation needs to be performed again before the user applies for resource scheduling; of course, if the user is not a decoupling user, credit evaluation does not need to be repeated. Therefore, in this embodiment, before the fourth response module performs credit evaluation on the user, it is necessary to identify whether the user is a decoupling user. Accordingly, the service end in this embodiment further includes:
the second data acquisition module is used for acquiring the credit evaluation record of the user before the second response module triggers the second server to perform risk evaluation;
the user judging module is used for judging whether the user is a decoupling user according to the credit evaluation record, and, when the user is judged to be a decoupling user, triggering the fourth response module to initiate credit evaluation to the third server through the second server and to perform credit approval again based on the credit evaluation result; the decoupling user is a user whose most recent credit inquiry was more than three months ago.
Further, after the user applies for successful resource scheduling, information of the scheduled resources, such as a return time node, the number of return resources, and the like, needs to be returned to the user, that is, a resource return plan needs to be returned to the user, and accordingly, the server in this embodiment further includes:
The return plan module is configured to judge, based on the system time, whether the current time node is the resource return plan generation time node, and if so, generate a corresponding resource return plan based on the resource scheduling information applied for by the user that day and the user information of the user, and feed the plan back to the client. Specifically, the return plan module includes: an additional resource obtaining unit, configured to obtain, from the second service end, the total number of additional resources generated by the resource scheduling application and the preset resource return time node; a computing unit, configured to calculate the total number of resources to be returned by the user according to the resource scheduling information applied for by the user and the obtained total number of additional resources; and a plan feedback unit, configured to generate the resource return plan based on the total number of resources to be returned and the resource return time node and feed it back to the client; the total number of additional resources includes the first additional resource amount calculated by the third service end and the second additional resource amount calculated by the second service end.
Further, in general, the user returns the corresponding amount of resources according to the resource return plan, specifically, the user accesses a resource return page through the client, and then triggers a corresponding resource return request on the resource return page, and accordingly, the server in this embodiment further includes:
And the fifth response module is used for responding to the resource return request sent by the client, and sending a first resource scheduling instruction to a third party resource scheduling server associated with the user based on the service identifier of the second server so as to trigger the third party resource scheduling server to schedule the resources to be returned of the user in the current period to the second server.
Further, sometimes, the user does not return the resource according to the resource return plan, but applies for the advanced settlement, specifically, the user accesses the resource return page through the client first, and then triggers a second resource scheduling request indicating the advanced settlement on the resource return page, and correspondingly, the server side of the embodiment further includes:
and the sixth response module is used for responding to the resource advance clearing request sent by the client, and sending a second resource scheduling instruction to the third party resource scheduling server associated with the user based on the service identifier of the second server, so as to trigger the third party resource scheduling server to schedule all the resources to be returned of the user to the second server.
Further, the server side of this embodiment further includes:
a seventh response module, configured to respond to a third resource scheduling instruction sent by the second service end indicating that the first service party is requested to pad the second additional resources, and send a fourth resource scheduling instruction indicating confirmation of the pad payment to the third party resource scheduling service end associated with the first service party, so that the third party resource scheduling service end associated with the first service party schedules a corresponding amount of second additional resources to the second service end, thereby triggering the second service end to send a fifth resource scheduling instruction to the third party resource scheduling service end associated with the second service party, so that the third party resource scheduling service end associated with the second service party schedules the corresponding amount of resources to the third service end at the preset scheduling time node; the third resource scheduling instruction is sent after the second service end determines, based on the system time and the resource return plan, that the current overdue time reaches the preset trigger condition, and the pad request includes the amount of second additional resources that the first service party is requested to pad.
Based on the same inventive concept as the other multi-service-party-based resource scheduling method in the foregoing embodiment, the present invention further provides another service end on which a computer program is stored; when the program is executed by a processor, the steps of any one of the other multi-service-party methods described above are implemented.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the methods of the present invention. For details not disclosed in the device embodiments of the present embodiment, please refer to the method embodiments of the present disclosure.
Referring to fig. 9, the service end in this embodiment is the second service end corresponding to the second service party that provides the second business service to the user, and specifically includes:
the first calculation module is used for calculating the amount of resources allocated to each resource receiver based on the resources to be allocated currently returned by the user and a preset resource allocation rule, and generating corresponding resource allocation details;
and the resource allocation module is used for sending the resource allocation details to a third party resource scheduling server associated with a second service side so as to trigger the third party resource scheduling server to allocate the resources to be allocated.
In this embodiment, the above resource allocation rule includes the service identifiers of the multiple resource receivers that receive the resources to be allocated and the allocation amount or percentage of each resource receiver under different return scenarios; the multiple resource receivers include the first service party providing the first business service to the user, the second service party, and the third service party that receives the quota-allocated resources and the first additional resources and provides the third business service to the user.
In this embodiment, the first calculation module specifically includes: a return scenario identification unit, configured to identify the current return scenario of the user; specifically, the return scenario identification unit can identify the scenario based on the corresponding request initiated by the user: if the user returns normally, the resources to be allocated currently returned by the user include the quota-allocated resources (such as principal) and the first additional resources (such as interest) directionally allocated to the third service party, and the second additional resources (such as a premium) jointly allocated to the first service party and the second service party; if the user settles early, the resources to be allocated currently returned by the user include, in addition to the quota-allocated resources and the first and second additional resources, a third additional resource (such as an early settlement penalty) directionally allocated to the first service party; if the user returns after becoming overdue, the resources to be allocated currently returned by the user include, in addition to the quota-allocated resources, the first additional resources and the second additional resources, a fourth additional resource (such as an overdue penalty) directionally allocated to the first service party and a fifth additional resource (such as a fine) jointly allocated to the first service party and the second service party; a first calculation unit, configured to, when the return scenario identification unit identifies the current return as a normal return, calculate the amounts of resources allocated to the first service party and the second service party respectively based on the second additional resources, the first preset percentage and the second preset percentage; a second calculation unit, configured to, when the return scenario identification unit identifies the current return as an overdue return, take the quota-allocated resources and the first additional resources as the resources allocated to the third service party, calculate the amounts of resources allocated to the first service party and the second service party respectively based on the second additional resources, the first preset percentage and the second preset percentage, take the fourth additional resource as the resource directionally allocated to the first service party, and calculate the amounts of resources allocated to the first service party and the second service party respectively based on the fifth additional resource and the third preset percentage; and a third calculation unit, configured to, when the return scenario identification unit identifies the current return as an early settlement, calculate the amounts of resources allocated to the first service party and the second service party respectively based on the second additional resources, the first preset percentage and the second preset percentage, and take the third additional resource as the resource directionally allocated to the first service party.
The third embodiment of the present specification also provides an electronic device comprising a memory 1002, a processor 1001 and a computer program stored on the memory 1002 and executable on the processor 1001, the processor 1001 implementing the steps of the method described above when executing the program. For convenience of description, only those parts related to the embodiments of the present specification are shown; for specific technical details that are not disclosed, please refer to the method parts of the embodiments of the present specification. The server may be a server device formed by various electronic devices, such as a PC, a network cloud server, or even a server function provided on any electronic device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer, or a desktop computer.
In particular, in the server component block diagram shown in FIG. 10, which relates to the solutions provided by the embodiments of the present disclosure, the bus 1000 may comprise any number of interconnected buses and bridges linking together various circuits, including one or more processors represented by the processor 1001 and memory represented by the memory 1002. The bus 1000 may also link together various other circuits, such as peripheral devices, voltage regulators and power management circuits, which are well known in the art and therefore will not be described further herein. The communication interface 1003 provides an interface between the bus 1000 and a receiver and/or transmitter 1004; the receiver and transmitter may be separate stand-alone components or the same element, such as a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 1001 is responsible for managing the bus 1000 and general processing, while the memory 1002 may be used to store data used by the processor 1001 in performing operations.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a computer readable storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, or a network device, etc.) to perform the above-described method according to the embodiments of the present disclosure.
The computer readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable storage medium may also be any readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The computer-readable medium carries one or more programs which, when executed by one of the devices, cause the device to implement the following functions:
the method comprises the steps that a first service end responds to a resource scheduling request sent by a client, and sends first service protocol page content to the client to be displayed to a user; the first service protocol page content is obtained from a second service end by the first service end; the first service end corresponds to a first service party providing first business service for the user, and the second service end corresponds to a second service party providing second business service for the user;
The first server side responds to an operation instruction which is sent by the client side and used for indicating and confirming a first service protocol, user information of the user and pre-input resource scheduling application information are pushed to the second server side to carry out risk assessment, so that intention information which indicates to accept the service is generated, and the second server side is triggered to apply for resource scheduling to a third server side based on the intention information; the third service end corresponds to a third service side for providing third business service for the user;
the first server side sends a resource scheduling result query request to the second server side so as to acquire a resource scheduling result fed back from the third server side, and feeds the resource scheduling result back to the client side for display; alternatively, the following functions are realized:
the second service end calculates the quantity of the resources allocated by each resource receiver based on the resources to be allocated and preset resource allocation rules returned currently by the user, and generates corresponding resource allocation details; the resource allocation rule comprises service identifiers of a plurality of resource receivers for receiving the resources to be allocated and allocation quantity or percentage of each resource receiver under different return scenes; the second service end corresponds to a second service side for providing second business service for the user;
And the second server side sends the resource allocation details to a third-party resource scheduling server side associated with the second server side so as to trigger the third-party resource scheduling server side to schedule the resources to be allocated.
Those skilled in the art will appreciate that the modules may be distributed throughout several devices as described in the embodiments, and that corresponding variations may be implemented in one or more devices that are unique to the embodiments. The modules of the above embodiments may be combined into one module, or may be further split into a plurality of sub-modules.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in combination with the necessary hardware. Thus, the technical solutions according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and include several instructions to cause a computing device (may be a personal computer, a server, a mobile terminal, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
While preferred embodiments of the present description have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the disclosure.
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that this disclosure is not limited to the particular arrangements, instrumentalities and methods of implementation described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. In addition, the structures, proportions, sizes, etc. shown in the drawings in the specification are used for the understanding and reading of the disclosure, and are not intended to limit the applicable limitations of the disclosure, so that any structural modification, change in proportion, or adjustment of size is not technically significant, and yet falls within the scope of the disclosure without affecting the technical effects and the objects that can be achieved by the disclosure. Meanwhile, the terms such as "upper", "first", "second", and "a" and the like recited in the present specification are also for convenience of description only, and are not intended to limit the scope of the disclosure, in which the relative relationship changes or modifications thereof are not limited to essential changes in technical content, but are also regarded as the scope of the disclosure.

Claims (20)

1. A multi-service-party-based resource scheduling method, comprising:
the method comprises the steps that a first service end responds to a credit request sent by a client end, generates first information page content for a user to input user information, and feeds back the first information page content to the client end to display the user information;
the client receives the input user information and generates an operation instruction for submitting the user information, and the user information and the operation instruction are sent to a first service end, wherein the operation instruction carries a credit evaluation reason, a credit evaluation method and credit evaluation auxiliary data;
the first server sends the user information, the credit evaluation reason, the credit evaluation method and the credit evaluation auxiliary data to a third server through a second server to initiate credit evaluation, and receives a credit evaluation result returned by the third server through the second server; the credit evaluation method comprises performing a credit inquiry and/or calculating a credit score from the credit report and the user information by adopting a preset credit evaluation model, and the credit evaluation result comprises the credit report of the user and/or the credit score obtained by calculating the credit report and the user information by adopting the preset credit evaluation model;
The first service end carries out credit approval based on the credit evaluation result; the first server side sends the credit approval result to a second server side for review; the second server side sends the review result to the third server side for final review, and receives a final review result returned by the third server side; the second service end sends the final review result to the first service end, the first service end generates a corresponding credit approval result based on the final review result, and feeds the credit approval result back to the client for display, wherein the credit approval result comprises the maximum resource scheduling limit acquired by the user;
the first service end responds to a resource scheduling request sent by a client end, and sends first service protocol page content to the client end so as to be displayed to a user; the first service protocol page content is acquired from the second service end by the first service end; the first service end corresponds to a first service party providing first business service for the user, and the second service end corresponds to a second service party providing second business service for the user;
The first server responds to an operation instruction sent by the client and used for indicating and confirming a first service protocol, the resource scheduling application information pre-input by the user is pushed to the second server for risk assessment, if the risk assessment passes, the second server generates intention information indicating that the service is accepted, and the second server is triggered to apply for resource scheduling to a third server based on the intention information; the third service end corresponds to a third service side for providing third business service for the user;
the third service party corresponding to the third server performs internal approval according to the resource scheduling information, and the third server feeds back the approval result and the resource scheduling result of the third service party to the second server;
the first server sends a resource scheduling result query request to the second server so as to acquire, from the second server, the approval result and the resource scheduling result fed back by the third server, and feeds back the resource scheduling result to the client for display.
2. The method for scheduling resources according to claim 1, wherein before the step of pushing the resource scheduling application information to the second server for risk assessment, the method further comprises:
Identifying the user type of the user; if the user is a decoupling user or a new user, a credit evaluation request is initiated to the third server through the second server, and credit approval is performed based on the new credit evaluation result;
the decoupling user is a user whose latest credit inquiry time exceeds a preset time.
3. The resource scheduling method of claim 2, further comprising:
and the first service end judges whether the current time node is a resource return plan generation time node or not based on the system time, if so, generates a corresponding resource return plan based on the current resource release information, and feeds back the resource return plan to the client end for display.
4. A method of scheduling resources according to claim 3, wherein the step of generating the resource return plan comprises:
the first server side obtains the total number of additional resources generated by resource scheduling application and a preset resource return time node from the second server side;
the first server calculates the total number of resources to be returned of the user according to the resource issuing information and the total number of the additional resources, and generates the resource return plan based on the total number of the resources to be returned and the resource return time node;
The total number of the additional resources comprises a first additional resource number calculated by the third server side and a second additional resource number calculated by the second server side.
5. The resource scheduling method of claim 4, further comprising:
and the first server responds to the resource return request sent by the client, and sends a first resource scheduling instruction to a third party resource scheduling server associated with the user based on the service identifier of the second server so as to trigger the third party resource scheduling server to schedule the current resource to be returned of the user to the second server.
6. The resource scheduling method of claim 5, further comprising:
and the first server responds to the advanced clearing request sent by the client, and sends a second resource scheduling instruction to a third party resource scheduling server associated with the user based on the service identifier of the second server so as to trigger the third party resource scheduling server to schedule all resources to be returned of the user to the second server.
7. The resource scheduling method of claim 6, further comprising:
The first server, in response to the third resource scheduling instruction sent by the second server indicating that the first service party is required to pad the second additional resource, sends a fourth resource scheduling instruction indicating confirmation of the pad payment to a third party resource scheduling server associated with the first server, so that the third party resource scheduling server associated with the first server schedules a corresponding number of second additional resources to the second server, and the second server is triggered to send a fifth resource scheduling instruction indicating payment of resources to the third party resource scheduling server associated with the second server, so that the third party resource scheduling server associated with the second server schedules a corresponding number of resources to the third server at a preset scheduling time node.
8. A multi-service-party-based resource scheduling method, comprising:
the second service end, based on the resources to be allocated currently returned by the user and a preset resource allocation rule, identifies the current return scenario of the user, wherein the resource allocation rule comprises service identifiers of a plurality of resource receivers receiving the resources to be allocated and the allocation amount or percentage of each resource receiver under different return scenarios;
if the current return scenario of the user is a normal return, taking the quota allocation resources and the first additional resources as the resources allocated to the third service party, and calculating the amounts of resources allocated to the first service party and the second service party respectively based on the second additional resources, a first preset percentage and a second preset percentage;
if the current return scenario of the user is an overdue return, taking the quota allocation resources and the first additional resources as the resources allocated to the third service party, taking the fourth additional resource as the resource directionally allocated to the first service party, calculating the amounts of resources allocated to the first service party and the second service party based on the second additional resources, the first preset percentage and the second preset percentage, and calculating the amounts of resources allocated to the first service party and the second service party based on the fifth additional resource and a third preset percentage;
if the current return scenario of the user is an early settlement, taking the quota allocation resources and the first additional resources as the resources allocated to the third service party, taking the third additional resource as the resource directionally allocated to the first service party, respectively calculating the amounts of resources allocated to the first service party and the second service party based on the second additional resources, the first preset percentage and the second preset percentage, and generating corresponding resource allocation details, wherein the second service end corresponds to the second service party; the resource receivers comprise the first service party providing a first business service to the user, the second service party providing a second business service to the user, and the third service party providing a third business service to the user, and if the user uses a fourth business service, the resource receivers further comprise a fourth service party providing the fourth business service to the user;
The second server sends the resource allocation details to a third party resource scheduling server associated with the second server so as to trigger the third party resource scheduling server to schedule the resources to be allocated received by the second server to the first server and the third server or trigger the third party resource scheduling server to schedule the resources to be allocated received by the second server to the first server, the third server and the fourth server.
9. The method according to claim 8, wherein if the return scenario is a normal return, the resources to be allocated comprise the quota allocation resources and the first additional resources directionally allocated to the third service party, and the second additional resources jointly allocated to the first service party and the second service party; or
if the return scenario is an overdue return, the resources to be allocated comprise the quota allocation resources and the first additional resources directionally allocated to the third service party, the second additional resources jointly allocated to the first service party and the second service party, the fourth additional resource directionally allocated to the first service party, and the fifth additional resource jointly allocated to the first service party and the second service party; or
if the return scenario is an early settlement, the resources to be allocated comprise the quota allocation resources and the first additional resources directionally allocated to the third service party, the second additional resources jointly allocated to the first service party and the second service party, and the third additional resource directionally allocated to the first service party.
10. A server, comprising:
the third response module is used for responding to the credit request sent by the client, generating first information page content for the user to enter user information, and feeding it back to the client for display to the user;
the fourth response module is used for invoking the client to receive the entered user information, generate an operation instruction for submitting the user information, and send the user information and the operation instruction to the first server, wherein the operation instruction carries a credit evaluation reason, a credit evaluation method and credit evaluation auxiliary data; and for invoking the first server to send the user information, the credit evaluation reason, the credit evaluation method and the credit evaluation auxiliary data to the third server through the second server so as to initiate credit evaluation, and receiving, through the second server, a credit evaluation result returned by the third server; the credit evaluation method comprises performing a credit inquiry and/or computing a credit score from the credit report and the user information using a preset credit evaluation model, and the credit evaluation result comprises the credit report of the user and/or the credit score computed from the credit report and the user information using the preset credit evaluation model;
the credit approval module is used for invoking the first server to perform credit approval based on the credit evaluation result; the first server sends the credit approval result to the second server for review; the second server sends the review result to the third server for further review and receives the review result returned by the third server; the second server sends the final result to the first server, and the first server generates a corresponding credit approval result based on the final result and feeds the credit approval result back to the client for display, wherein the credit approval result comprises the maximum resource scheduling limit granted to the user;
the first response module is used for responding to the resource scheduling request sent by the client by sending the first service protocol page content to the client for display to the user, wherein the first service protocol page content is obtained in advance by the first server from the second server, the first server corresponds to a first service party providing a first business service for the user, and the second server corresponds to a second service party providing a second business service for the user;
the second response module is used for, in response to an operation instruction sent by the client and representing confirmation of the first service protocol, pushing the resource scheduling application information pre-entered by the user to the second server for risk assessment; if the risk assessment passes, the second server generates intention information representing acceptance of the service and is triggered to apply for resource scheduling to the third server based on the intention information; the third server corresponds to a third service party providing a third business service for the user;
the first data acquisition module is used for sending a resource scheduling result query request to the second server after the third server performs internal approval according to the resource scheduling information and feeds back its approval result and resource scheduling result to the second server, so as to acquire the resource scheduling result fed back from the third server, and for feeding the resource scheduling result back to the client for display.
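As an illustration of how the first data acquisition module of claim 10 might query the second server for the scheduling result relayed from the third server, here is a minimal Python sketch; the endpoint path, field names and polling strategy are assumptions and not part of the patent.

# Hypothetical polling of the second server for a resource scheduling
# result (claim 10, first data acquisition module). URL, field names and
# retry policy are illustrative assumptions.
import time
import requests

def query_scheduling_result(second_server_url, application_id,
                            retries=10, interval_s=2.0):
    """Poll the second server until the third server's result is relayed."""
    for _ in range(retries):
        resp = requests.get(
            f"{second_server_url}/resource-scheduling/result",
            params={"application_id": application_id},
            timeout=5,
        )
        resp.raise_for_status()
        body = resp.json()
        # 'pending' means the third server has not yet fed back its
        # internal approval and scheduling result.
        if body.get("status") != "pending":
            return body          # forwarded to the client for display
        time.sleep(interval_s)
    raise TimeoutError("resource scheduling result not available yet")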
11. The server according to claim 10, further comprising:
the second data acquisition module is used for acquiring the credit evaluation record of the user before the second response module triggers the second server to perform risk assessment;
the user judging module is used for judging, according to the credit evaluation record, whether the user is a decoupling user, and, when the user is judged to be a decoupling user, triggering the fourth response module to initiate a credit evaluation request to the third server through the second server and to perform credit approval again based on the credit evaluation result; a decoupling user is a user whose most recent credit inquiry time exceeds a preset duration.
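A minimal sketch of the "decoupling user" test in claim 11, assuming the credit evaluation record exposes the timestamp of the latest credit inquiry; the record structure and the 90-day threshold are assumptions.

# Illustrative check for a "decoupling user" (claim 11): a user whose
# latest credit inquiry is older than a preset duration. Field names and
# the 90-day threshold are assumptions.
from datetime import datetime, timedelta

def is_decoupling_user(credit_record, now=None,
                       max_age=timedelta(days=90)):
    now = now or datetime.utcnow()
    last_inquiry = credit_record.get("latest_credit_inquiry_time")
    if last_inquiry is None:
        return True                  # never evaluated: re-evaluate
    return (now - last_inquiry) > max_age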
12. The server according to claim 11, further comprising:
and the return plan module is used for judging, based on the system time, whether the current time node is a time node at which a resource return plan should be generated, and if so, generating a corresponding resource return plan based on the current-day resource release information and the user information and feeding it back to the client.
13. The server according to claim 12, wherein the return plan module specifically comprises:
the additional resource information acquisition unit is used for acquiring the total number of additional resources generated by resource scheduling application and a preset resource return time node from the second server;
the computing unit is used for computing the total number of resources to be returned by the user according to the current-day resource release information of the user and the total number of the additional resources;
The plan feedback unit is used for generating the resource return plan based on the total number of the resources to be returned and the resource return time node and feeding back the resource return plan to the client;
the total number of the additional resources comprises a first additional resource quantity calculated by the third server and a second additional resource quantity calculated by the second server.
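The following sketch illustrates how the units of claim 13 might combine the day's resource release information with the additional resources to produce a return plan; the even split across return time nodes and the data shapes are assumptions made only for this sketch.

# Illustrative construction of a resource return plan (claim 13). The
# equal split across return time nodes and the field names are assumptions.
def build_return_plan(released_today, first_additional, second_additional,
                      return_time_nodes):
    """released_today: resources released to the user on the current day;
    first/second_additional: quantities computed by the third and second
    servers respectively; return_time_nodes: list of due dates."""
    total_additional = first_additional + second_additional
    total_to_return = released_today + total_additional
    per_node = total_to_return / len(return_time_nodes)
    return [{"due": node, "amount": per_node} for node in return_time_nodes]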
14. The server according to claim 13, further comprising:
and the fifth response module is used for responding to the resource return request sent by the client by sending, based on the service identifier of the second server, a first resource scheduling instruction to a third party resource scheduling server associated with the user, so as to trigger the third party resource scheduling server to schedule the user's resources to be returned in the current period to the second server.
15. The server according to claim 14, further comprising:
and the sixth response module is used for responding to the resource advance clearing request sent by the client by sending, based on the service identifier of the second server, a second resource scheduling instruction to the third party resource scheduling server associated with the user, so as to trigger the third party resource scheduling server to schedule all of the user's resources to be returned to the second server.
16. The server according to claim 15, further comprising:
a seventh response module, configured to, in response to the second server sending a third resource scheduling instruction indicating that the first service party is required to advance the second additional resource, send a fourth resource scheduling instruction representing confirmation of the advance payment to a third party resource scheduling server associated with the first server, so that the third party resource scheduling server associated with the first server schedules a corresponding quantity of the second additional resource to the second server, thereby triggering the second server to send, to the third party resource scheduling server associated with the second server, a fifth resource scheduling instruction indicating payment of the first additional resource, so that the third party resource scheduling server associated with the second server schedules a corresponding quantity of resources to the third server at a preset scheduling time node;
wherein the third resource scheduling instruction is generated when the second server judges, based on the system time and the resource return plan, that the user's current overdue time reaches a preset trigger condition, and the advance-payment request comprises the quantity of the second additional resource that the first service party is requested to advance.
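To clarify the instruction chain of claims 14 to 16, here is a schematic Python sketch of the scheduling instructions sent to the third party resource scheduling servers; the instruction names mirror the claims, while the payload fields, scheduler identifiers and constructor functions are illustrative assumptions.

# Schematic constructors for the scheduling instructions of claims 14-16.
# Payload fields and scheduler identifiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SchedulingInstruction:
    kind: str              # which of the first..fifth instructions this is
    target_scheduler: str  # third party resource scheduling server to notify
    amount: float          # quantity of resources to move
    beneficiary: str       # server that finally receives the resources

def first_instruction(current_period_due):
    # Claim 14: move the user's current-period resources to the second server.
    return SchedulingInstruction("return_current_period",
                                 "scheduler_of_user",
                                 current_period_due, "second_server")

def second_instruction(total_outstanding):
    # Claim 15: advance clearing, move all outstanding resources at once.
    return SchedulingInstruction("clear_all_outstanding",
                                 "scheduler_of_user",
                                 total_outstanding, "second_server")

def fourth_instruction(second_additional_amount):
    # Claim 16: confirm that the first service party advances the second
    # additional resource to the second server after an overdue trigger.
    return SchedulingInstruction("confirm_advance_payment",
                                 "scheduler_of_first_server",
                                 second_additional_amount, "second_server")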
17. A server, comprising:
the first calculation module is used for receiving service identifiers of a plurality of resource receivers of the resources to be allocated and the allocation quantity or percentage of each resource receiver under different returning scenes based on the resources to be allocated and a preset resource allocation rule returned by the user; identifying the current return scene of the user; if the current return scene of the user is normal return, the rated allocation resources and the first additional resources are counted as resources allocated to the third service side, and the number of resources allocated to the first service side and the second service side is calculated based on the second additional resources, a first preset percentage and a second preset percentage respectively; if the current return scene of the user is overdue return, the rated allocation resource and the first additional resource are counted as resources allocated to the third server, the fourth additional resource is used as resources directionally allocated to the first server, the quantity of the resources allocated to the first server and the second server is calculated based on the second additional resource, a first preset percentage and a second preset percentage, and the quantity of the resources allocated to the first server and the second server is calculated based on the fifth additional resource and the third preset percentage; if the current return scene of the user is a forward balance, taking the quota allocation resource and the first additional resource as resources allocated to the third service party, taking the third additional resource as resources directionally allocated to the first service party, respectively calculating the quantity of the resources allocated to the first service party and the second service party based on the second additional resource, a first preset percentage and a second preset percentage, and generating corresponding resource allocation details, wherein the second service terminal corresponds to the second service party; the resource receiver comprises a first service party for providing a first business service for the user, a second service party for providing a second business service for the user, and a third service party for providing a third business service for the user, and if the user uses a fourth business service, the resource receiver also comprises a fourth service party for providing a fourth business service for the user;
and the resource allocation module is used for sending the resource allocation details to a third party resource scheduling server associated with the second server, so as to trigger the third party resource scheduling server to schedule the resources to be allocated received by the second server to the first service party and the third service party, or to the first service party, the third service party and the fourth service party.
18. The server of claim 17, wherein, when the return scenario is normal return, the resources to be allocated comprise a rated allocation resource and a first additional resource which are directionally allocated to the third service party, and a second additional resource which is commonly allocated between the first service party and the second service party; or,
when the return scenario is overdue return, the resources to be allocated comprise a rated allocation resource and a first additional resource which are directionally allocated to the third service party, a second additional resource which is commonly allocated between the first service party and the second service party, a fourth additional resource which is directionally allocated to the first service party, and a fifth additional resource which is commonly allocated between the first service party and the second service party; or,
when the return scenario is advance clearing, the resources to be allocated comprise a rated allocation resource and a first additional resource which are directionally allocated to the third service party, a second additional resource which is commonly allocated between the first service party and the second service party, and a third additional resource which is directionally allocated to the first service party.
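As a complement to the allocation sketch after claim 9, the following minimal Python sketch shows how a resource allocation module such as the one in claim 17 might hand the computed details to a third party resource scheduling server; the endpoint path and payload shape are assumptions, not part of the patent.

# Hypothetical dispatch of resource allocation details to a third party
# resource scheduling server (claim 17, resource allocation module).
# Endpoint path and payload fields are assumptions.
import requests

def dispatch_allocation(scheduler_url, allocation_details):
    """allocation_details: dict mapping service party -> amount,
    e.g. the output of build_allocation_details() in the earlier sketch."""
    payload = {
        "transfers": [
            {"receiver": party, "amount": amount}
            for party, amount in allocation_details.items()
            if amount > 0
        ]
    }
    resp = requests.post(f"{scheduler_url}/allocations", json=payload,
                         timeout=5)
    resp.raise_for_status()
    return resp.json()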
19. An electronic device comprising at least one processor, at least one memory, a communication interface, and a bus; wherein:
the processor, the memory and the communication interface communicate with each other through the bus;
the memory is used for storing a program for executing the method of any one of claims 1 to 7 or claim 8 or 9;
the processor is configured to execute a program stored in the memory.
20. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7 or of claims 8 or 9.
CN202010319617.5A 2020-04-22 2020-04-22 Resource scheduling method, server, electronic equipment and storage medium Active CN111681092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010319617.5A CN111681092B (en) 2020-04-22 2020-04-22 Resource scheduling method, server, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111681092A CN111681092A (en) 2020-09-18
CN111681092B true CN111681092B (en) 2023-10-31

Family

ID=72451653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010319617.5A Active CN111681092B (en) 2020-04-22 2020-04-22 Resource scheduling method, server, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111681092B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112131003A (en) * 2020-09-25 2020-12-25 中国建设银行股份有限公司 Resource allocation method, device and equipment
CN112561402A (en) * 2020-12-29 2021-03-26 平安银行股份有限公司 Resource security allocation method, computer device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109104471A (en) * 2018-07-26 2018-12-28 新疆玖富万卡信息技术有限公司 A kind of method of recommendation service, management server and recommendation server
CN109146659A (en) * 2017-06-16 2019-01-04 阿里巴巴集团控股有限公司 Resource allocation methods and device, system
CN110363666A (en) * 2018-04-11 2019-10-22 腾讯科技(深圳)有限公司 Information processing method, calculates equipment and storage medium at device
CN110912712A (en) * 2019-12-18 2020-03-24 东莞市大易产业链服务有限公司 Service operation risk authentication method and system based on block chain

Also Published As

Publication number Publication date
CN111681092A (en) 2020-09-18

Similar Documents

Publication Publication Date Title
US7848978B2 (en) Enhanced transaction resolution techniques
US8510184B2 (en) System and method for resolving transactions using weighted scoring techniques
US8825544B2 (en) Method for resolving transactions
US20090248574A1 (en) Peer-to-peer currency exchange and associated systems and methods
US8175971B1 (en) Lifetime guaranteed income rider
US20110178934A1 (en) System and method for resolving transactions with selective use of user submission parameters
US20110178860A1 (en) System and method for resolving transactions employing goal seeking attributes
CN111681092B (en) Resource scheduling method, server, electronic equipment and storage medium
CN111833179A (en) Resource allocation platform, resource allocation method and device
JP4831555B2 (en) Method and apparatus for counting securities brokerage services
KR20090103109A (en) System and Method for providing online loan brokerage service
WO2001067321A1 (en) Stock selling/purchasing system and stock selling/purchasing method
KR101899217B1 (en) Method for finance technology service for deposit money loan of stock allocated and apparatus thereof
JP2019212231A (en) Information processing device, information processing method and program
US20110178859A1 (en) System and method for resolving transactions employing optional benefit offers
US8346579B1 (en) Systems and methods for supporting extended pay date options on an insurance policy
US20180082363A1 (en) Online auction platform for invoice purchasing
US7809588B1 (en) Systems and methods for supporting extended pay date options on an insurance policy
CN112950358A (en) Traceable financial market transaction guarantee management method, device, equipment and medium
JP2002329074A (en) Derivative dealing processing method and its system
CN112163846A (en) Payment scheme determination method, device and system based on block chain
US7809589B1 (en) Systems and methods for supporting extended pay date options on an insurance policy
CN110909294A (en) Data processing method and device
KR102248319B1 (en) Method for finance technology service for stock subscription money loan and apparatus thereof
US11551175B1 (en) Facilitating shareholder voting and associated proxy rights

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant