CN114240511A - User point processing method, device, equipment, medium and program product - Google Patents

User point processing method, device, equipment, medium and program product

Info

Publication number
CN114240511A
CN114240511A (application CN202111585671.5A)
Authority
CN
China
Prior art keywords
user
points
data
credit
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111585671.5A
Other languages
Chinese (zh)
Inventor
丁欢
邱晓海
陈磊
丁明翼
王勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp filed Critical China Construction Bank Corp
Priority to CN202111585671.5A priority Critical patent/CN114240511A/en
Publication of CN114240511A publication Critical patent/CN114240511A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0207Discounts or incentives, e.g. coupons or rebates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Software Systems (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure provides a user point processing method. The method includes the following steps: calculating a first point result based on static data, where the static data includes first user data generated according to a first behavior of a user, and the first behavior earns user points indirectly; calculating a second point result based on real-time data, where the real-time data includes second user data generated according to a second behavior of the user, and the second behavior earns user points directly; processing the first point result and the second point result respectively to obtain N point messages, and writing the N point messages into a message queue; and consuming the N point messages from the message queue to calculate the user points. The present disclosure also provides a user point processing apparatus, a device, a storage medium, and a program product.

Description

User point processing method, device, equipment, medium and program product
Technical Field
The present disclosure relates to the field of data processing, and more particularly, to a user point processing method, apparatus, device, medium, and program product.
Background
Some companies accumulate points according to user behavior, so that users can redeem benefits using the accumulated points as an exchange medium, thereby rewarding users and enhancing user stickiness.
In calculating user points, user behavior scenarios are complex, and points are derived from different user behaviors. The point processing end is therefore often located at the back end of the company's data chain: the point calculation process is handed to the front end of the data chain, and user points are updated only according to the calculated point results. The point processing end may also receive point results from multiple channels, in a variety of data formats.
Therefore, in the related art, because the front end and the back end of the data chain are tightly coupled, data formats are varied, job tasks are interdependent, and the cost of common logic in point processing is high. How to improve flexibility, extensibility, and processing speed in the point processing process is a problem to be solved.
Disclosure of Invention
In view of the above, the present disclosure provides a user point processing method, apparatus, device, medium, and program product that can improve flexibility, extensibility, and processing speed in the point processing process.
In one aspect of the embodiments of the present disclosure, a user point processing method is provided, including: calculating a first point result based on static data, where the static data includes first user data generated according to a first behavior of a user, and the first behavior earns user points indirectly; calculating a second point result based on real-time data, where the real-time data includes second user data generated according to a second behavior of the user, and the second behavior earns user points directly; processing the first point result and the second point result respectively to obtain N point messages, and writing the N point messages into a message queue, where N is an integer greater than or equal to 2; and consuming the N point messages from the message queue to calculate the user points.
According to an embodiment of the present disclosure, the static data includes point files from S channels, and calculating the first point result based on the static data includes: acquiring the point files of the S channels, where the S channels include channels that provide services to users in response to the first behavior and/or the second behavior, the point files include the first user data, and S is an integer greater than or equal to 1; and preprocessing the point file of each channel based on the preprocessing rule associated with that channel.
According to an embodiment of the present disclosure, after the point file of each channel is preprocessed, a target table of each channel is obtained, where the target table includes M transaction flow records, and the method further includes: matching each of the M flow records with a corresponding point calculation rule, where the M flow records include the first user data, and M is an integer greater than or equal to 1; and calculating the first user data in each flow record based on the point calculation rule to obtain the first point result.
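The per-record matching step above can be illustrated with a minimal sketch: each flow record is tried against a list of point calculation rules, and the matched rule computes that record's contribution to the first point result. All record fields, rule names, and point rates here are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the rule-matching step: each transaction flow record
# is matched against a point-calculation rule, and the matched rule computes
# the points. Record fields, rule names, and rates are illustrative only.

RULES = [
    {"name": "credit_card_spend",
     "match": lambda r: r["type"] == "credit_card",
     "points": lambda r: int(r["amount"] // 10)},       # e.g. 1 point per 10 spent
    {"name": "referral_target_met",
     "match": lambda r: r["type"] == "referral" and r["count"] >= 10,
     "points": lambda r: 500},                          # flat bonus when target is met
]

def calc_first_point_result(flow_records):
    """Match each flow record to a rule and total the points per user."""
    results = {}
    for record in flow_records:
        for rule in RULES:
            if rule["match"](record):
                user = record["user_id"]
                results[user] = results.get(user, 0) + rule["points"](record)
                break  # first matching rule wins in this sketch
    return results

records = [
    {"user_id": "u1", "type": "credit_card", "amount": 250.0},
    {"user_id": "u1", "type": "referral", "count": 5},   # target not met: no points yet
    {"user_id": "u2", "type": "referral", "count": 10},  # target met: bonus awarded
]
print(calc_first_point_result(records))  # {'u1': 25, 'u2': 500}
```

Note how the unmet referral record earns nothing, matching the "indirect" behavior described earlier: the points only materialize once the rule's condition is satisfied.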
According to an embodiment of the present disclosure, the real-time data includes real-time request messages from the S channels, and calculating the second point result based on the real-time data includes: invoking the corresponding first online service according to the channel to which each real-time request message belongs, where the real-time request message includes the second user data; and performing a point calculation on the second user data based on the first online service to obtain the second point result.
According to an embodiment of the present disclosure, consuming the N point messages from the message queue to calculate the user points includes: invoking a second online service based on each of the N point messages; and updating the user points of a user point account with the second online service according to the point message.
According to an embodiment of the present disclosure, invoking the second online service based on each of the N point messages includes: converting each point message into a corresponding online request; and invoking the second online service based on the online request, so that the second online service parses the online request to update the user point account.
According to an embodiment of the present disclosure, the method further includes: writing a processing record into an index repository, where the processing record includes a record of consuming point messages from the message queue; and, before converting each point message into a corresponding online request, querying the processing record of each point message through the index repository.
According to an embodiment of the present disclosure, the processing record includes a processing status, and where the second online service parses the online request to update the user point account, the method further includes: modifying, in the index repository, the processing status of the point message corresponding to the online request, the modifying including: modifying the processing status to an abnormal state.
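The processing-record mechanism described in the two paragraphs above can be sketched as an idempotency check: before a consumed point message is applied, its record in the index repository is consulted so the same message is not applied twice, and the status is updated after each attempt. The in-memory dictionary below stands in for the index repository, and all names and the exact status values are assumptions for illustration.

```python
# Hedged sketch of the processing-record check: an in-memory dict stands in
# for the index repository; the message id keys a status that prevents a
# point message from being applied twice and records failed attempts.
# Status names ("consumed"/"processed"/"abnormal") are illustrative.

index_store = {}  # message_id -> {"status": ...}

def consume_point_message(msg, online_service):
    index_store.setdefault(msg["id"], {"status": "consumed"})
    if index_store[msg["id"]]["status"] == "processed":
        return "skipped"                      # already applied: idempotent skip
    try:
        online_service(msg)                   # stand-in for the second online service
        index_store[msg["id"]]["status"] = "processed"
        return "processed"
    except Exception:
        index_store[msg["id"]]["status"] = "abnormal"
        return "abnormal"

accounts = {"u1": 0}
def ok_service(msg): accounts[msg["user"]] += msg["points"]
def bad_service(msg): raise RuntimeError("parse failure")

print(consume_point_message({"id": "m1", "user": "u1", "points": 10}, ok_service))  # processed
print(consume_point_message({"id": "m1", "user": "u1", "points": 10}, ok_service))  # skipped
print(consume_point_message({"id": "m2", "user": "u1", "points": 5}, bad_service))  # abnormal
print(accounts)  # {'u1': 10}
```

Replaying the same message leaves the account unchanged, which is the practical value of querying the processing record before conversion to an online request.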
Another aspect of the embodiments of the present disclosure provides a user point processing apparatus, including a static data calculation module, a real-time data calculation module, a point result conversion module, and a point message consumption module. The static data calculation module is configured to calculate a first point result based on static data, where the static data includes first user data generated according to a first behavior of a user, and the first behavior earns user points indirectly. The real-time data calculation module is configured to calculate a second point result based on real-time data, where the real-time data includes second user data generated according to a second behavior of the user, and the second behavior earns user points directly. The point result conversion module is configured to process the first point result and the second point result respectively to obtain N point messages and write the N point messages into a message queue, where N is an integer greater than or equal to 2. The point message consumption module is configured to consume the N point messages from the message queue to calculate the user points.
Another aspect of the disclosed embodiments provides an electronic device, including: one or more processors; a storage device to store one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method as described above.
Yet another aspect of the embodiments of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions, which when executed by a processor, cause the processor to perform the method as described above.
Yet another aspect of the disclosed embodiments provides a computer program product comprising a computer program that, when executed by a processor, implements the method described above.
One or more of the above embodiments have the following advantageous effects: compared with the prior art, the point processing flow is implemented by directly calculating static data and real-time data respectively and introducing a message queue; the first point result and the second point result can be converted into point messages, and user point processing is realized by consuming the point messages. Static data and real-time data can therefore be processed flexibly and in a targeted manner, the front end and the back end of the data chain are decoupled, the cost of common logic is reduced, and point processing speed is increased.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following description of embodiments of the disclosure, which proceeds with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an architecture diagram of a user point processing system according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a user point processing method according to an embodiment of the present disclosure;
fig. 3 schematically shows a flowchart of obtaining a first point result in operation S210 according to an embodiment of the present disclosure;
FIG. 4 schematically shows an interaction diagram of a static data source, a first cluster and a second cluster, according to an embodiment of the disclosure;
FIG. 5 schematically shows an interaction diagram of a second cluster, a third cluster and a rules engine according to an embodiment of the disclosure;
FIG. 6 schematically shows a flow chart for obtaining a second point result according to an embodiment of the disclosure;
FIG. 7 schematically shows an interaction diagram of a real-time data source, a fourth cluster, and message middleware, according to an embodiment of the disclosure;
FIG. 8 schematically shows a flow chart for obtaining user points according to an embodiment of the present disclosure;
FIG. 9 schematically illustrates a flow diagram for invoking a second online service in accordance with an embodiment of the present disclosure;
FIG. 10 schematically shows an interaction diagram among message middleware, a fifth cluster, and a sixth cluster according to an embodiment of the disclosure;
fig. 11 schematically shows a block diagram of a structure of a user point processing apparatus according to an embodiment of the present disclosure;
fig. 12 schematically shows a block diagram of an electronic device adapted to implement a user point processing method according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
In the technical solution of the present disclosure, the acquisition, collection, storage, use, processing, transmission, provision, disclosure, application, and other handling of data all comply with the relevant laws and regulations; necessary security measures are taken, and public order and good customs are not violated.
Taking as an example a company that provides services over the internet, a user can log in to the company's client to perform operations such as consumption, check-in, and lottery draws. The client can record the data generated by the user's operations and transmit it to the client server. The point processing system can obtain data from the client server and process the user's points. The link along which data generated by user operations passes from the client to the back-end server, and then from the back-end server to the point processing system, may be called a data chain. Since the client directly faces the user, the client sits at the front end of the data chain, and the point processing system sits at the back end.
In the related art, the client server processes the data from user operations to obtain a point result and then pushes that result to the point processing system. Because the front end and the back end of the data chain are tightly coupled, point processing must handle a variety of data formats (different client servers are interfaced), job tasks depend on one another (for example, dependencies among jobs within a client server, or between the client server and the point processing system), and the cost of common logic is high. Moreover, client servers push batch point results to the point processing system on a schedule, the push times of different client servers are inconsistent, and the point processing system cannot update user points in time.
The embodiments of the disclosure provide a user point processing method, apparatus, device, medium, and program product. The method includes the following steps. A first point result is calculated based on static data, where the static data includes first user data generated according to a first behavior of a user, and the first behavior earns user points indirectly. A second point result is calculated based on real-time data, where the real-time data includes second user data generated according to a second behavior of the user, and the second behavior earns user points directly. The first point result and the second point result are processed respectively to obtain N point messages, and the N point messages are written into a message queue, where N is an integer greater than or equal to 2. The N point messages are consumed from the message queue to calculate the user points.
Compared with the related-art process in which the front end and back end of the data chain are linked for point processing, the embodiments of the present disclosure can acquire source data directly from the front end of the data chain, calculate static data and real-time data directly and respectively, and introduce a message queue to process static and real-time data compatibly: the first point result and the second point result are converted into point messages, and user points are processed by consuming the point messages. Static data and real-time data can therefore be processed flexibly and in a targeted manner, the front end and the back end of the data chain are decoupled, the cost of common logic is reduced, and point processing speed is increased.
Fig. 1 schematically shows an architecture diagram of a user point processing system according to an embodiment of the present disclosure.
As shown in fig. 1, the user point processing system 100 according to this embodiment may include a data source 110, a first cluster 120, a second cluster 130, a third cluster 140, a rule engine 150, a fourth cluster 160, message middleware 170, a fifth cluster 180, and a sixth cluster 190. The data source 110 may include a static data source 111 and a real-time data source 112, among others. The first cluster 120, the second cluster 130, the third cluster 140, the fourth cluster 160, the fifth cluster 180, and the sixth cluster 190 may be distributed clusters, respectively. Message middleware 170 may be a message cluster to provide message queue services.
The data source 110 is configured to receive static data, such as first user data generated based on a first behavior of a user, and may also be configured to receive real-time data, such as second user data generated based on a second behavior.
The first cluster 120 may include a plurality of server clusters, where each server cluster may include several servers. The first cluster 120 may be used to pre-process static data.
The second cluster 130 may comprise a Hadoop cluster for receiving the preprocessed data from the first cluster 120 and performing further calculation. A Hadoop cluster is a platform suitable for distributed storage and distributed computing over massive data, and may include a distributed storage (HDFS) component, a distributed computing (MapReduce) component, and a resource scheduling (YARN) component.
The third cluster 140 may include a Spark cluster, which reads the files processed by the second cluster 130 for concurrent processing. Specifically, rule matching may be performed record by record, and point calculation performed according to the matched rule. The Spark cluster can serve as the caller for resource allocation, and the Spark applications running in the cluster run independently and are isolated from one another to a certain extent.
The rules engine 150 may be configured to hold one or more point rules and, in response to invocation by the third cluster 140, filter the point rules and return the matching point rules.
The fourth cluster 160 may include a plurality of online application clusters for providing the first online service.
Message middleware 170 may include a Kafka message cluster to provide message queue services. The Kafka cluster classifies messages by topic when storing them; each topic can be divided into several partitions, and each partition is an ordered queue. Each message in a partition is assigned an ordered id (its offset).
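The topic/partition/offset layout just described can be modeled in a few lines without a real broker. The sketch below is an in-memory analogue, not the Kafka client API: keying messages by user id sends a given user's point messages to the same partition, where they receive strictly increasing ordered ids, so one user's events are consumed in order.

```python
# Illustrative in-memory model of the Kafka layout described above: messages
# grouped into a topic, the topic split into partitions, each message in a
# partition assigned an ordered id (offset). Not the real Kafka API.

class MiniTopic:
    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def send(self, key, value):
        p = hash(key) % len(self.partitions)   # same key -> same partition
        offset = len(self.partitions[p])       # ordered id within the partition
        self.partitions[p].append({"offset": offset, "key": key, "value": value})
        return p, offset

topic = MiniTopic(num_partitions=3)
for i in range(4):
    topic.send("user-1", f"points-event-{i}")

# All of user-1's messages land in one partition, with increasing offsets.
part = next(p for p in topic.partitions if p)
print([m["offset"] for m in part])  # [0, 1, 2, 3]
```

Ordering is guaranteed only within a partition, which is why keying by a per-user identifier is a natural choice when per-user point updates must apply in sequence.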
The fifth cluster 180 may comprise a Storm stream-processing cluster for listening to messages in the Kafka message cluster and processing them as streams.
The sixth cluster 190 may include a plurality of online application clusters for providing second online services.
The user point processing system 100 may be used to implement the user point processing method according to the embodiment of the present disclosure, and may also be used to deploy the user point processing apparatus according to the embodiment of the present disclosure.
The user point processing method according to the embodiments of the present disclosure will be described in detail below with reference to figs. 2 to 10, taking as an example a financial institution performing user point processing based on the user point processing system described in fig. 1.
Fig. 2 schematically shows a flow chart of a user point processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the user point processing method of this embodiment includes operations S210 to S240.
In operation S210, a first point result is calculated based on static data, where the static data includes first user data generated according to a first behavior of a user, and the first behavior earns user points indirectly.
"Indirect acquisition" here means, for example, that the first behavior does not directly earn user points, and the points to be earned cannot be directly determined from the first user data. Taking a banking institution as the financial institution, the first behavior may include, but is not limited to, consuming with a credit card, consuming with a debit card, meeting the qualification standard of an activity, and the like. The first user data generated when the user consumes with a credit card may comprise a credit-card transaction flow, and the first user data generated when consuming with a debit card may comprise a debit-card transaction flow. The first user data generated when the user participates in, for example, a referral activity may include a referral compliance record. Taking the referral activity as an example: if points are awarded only after 10 people have been referred, the referral behavior when the user refers the 5th person does not directly earn points.
Since the points cannot be determined directly from the first user data, the first user data of one or more users over a period of time can be collected for batch processing; such data can therefore be called static data. The first point result is, for example, the number of points the user can earn, determined by calculating over the first user data.
In operation S220, a second point result is calculated based on real-time data, where the real-time data includes second user data generated according to a second behavior of the user, and the second behavior earns user points directly.
"Direct acquisition" here means, for example, that the second behavior is directly associated with user points, so the points earned can be determined directly. Thus the second behavior is a behavior that generates points directly in response to a user operation, while the first behavior requires further rule matching to generate points. For example, the second behavior includes, but is not limited to, earning points by checking in at the bank client, earning points by binding a card, and earning points through threshold-free consumption. The second user data generated when the user checks in may include a check-in record, the data generated when binding a card may include a card-binding record, and the data generated when consuming may include a consumption record.
Since the second behavior earns user points directly, the second user data can be processed in real time; that is, the points earned can be determined as soon as the second user data is generated, so such data can be called real-time data. The second point result is, for example, the number of points the user can earn, determined by calculating over the second user data.
In operation S230, the first point result and the second point result are processed respectively to obtain N point messages, and the N point messages are written into the message queue, where N is an integer greater than or equal to 2.
In operation S240, the N point messages are consumed from the message queue to calculate the user points.
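Operations S230 and S240 can be sketched end to end with Python's standard `queue.Queue` standing in for the message-queue middleware: two point results from the batch and real-time paths are converted into individual point messages, written to one queue, and consumed to compute the user point totals. The message fields and result values are illustrative assumptions.

```python
# Minimal sketch of operations S230-S240: first and second point results are
# converted into N point messages, written to a queue (queue.Queue stands in
# for the message middleware), then consumed to compute user points.

import queue

def to_messages(result, source):
    # Convert a {user: points} result into individual point messages.
    return [{"user": u, "points": p, "source": source} for u, p in result.items()]

mq = queue.Queue()
first_result = {"u1": 25}            # from static (batch) data
second_result = {"u1": 5, "u2": 10}  # from real-time data

for msg in to_messages(first_result, "batch") + to_messages(second_result, "realtime"):
    mq.put(msg)                      # write the N point messages to the queue

user_points = {}
while not mq.empty():                # consume to calculate the user points
    msg = mq.get()
    user_points[msg["user"]] = user_points.get(msg["user"], 0) + msg["points"]

print(user_points)  # {'u1': 30, 'u2': 10}
```

Because both paths feed the same message format into the same queue, the consumer is indifferent to whether a message originated from batch or real-time processing, which is the decoupling the disclosure emphasizes.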
Referring to fig. 1, point calculations can be performed on the source files from which user data is directly obtained. On the one hand, the point calculation flow is not handed to the front end of the data chain, so job tasks no longer depend on one another, and the front end can simply push the generated source files. On the other hand, the user point processing system can compute over source files of different formats and different sources (such as different clients), so point processing no longer depends on the front end's push time, which improves processing speed.
Compared with the prior art, the point processing flow is implemented by directly calculating static data and real-time data respectively and introducing a message queue; the first point result and the second point result can be converted into point messages, and user point processing is realized by consuming the point messages. Static data and real-time data can therefore be processed flexibly and in a targeted manner, the front end and the back end of the data chain are decoupled, the cost of common logic is reduced, and point processing speed is increased.
Fig. 3 schematically shows a flowchart of obtaining the first point result in operation S210 according to an embodiment of the present disclosure. FIG. 4 schematically shows an interaction diagram of a static data source, the first cluster and the second cluster according to an embodiment of the disclosure. FIG. 5 schematically shows an interaction diagram of the second cluster, the third cluster and the rule engine according to an embodiment of the disclosure.
As shown in fig. 3, the user point processing method of this embodiment includes operations S310 to S340.
In operation S310, point files of S channels are acquired, where the S channels include channels that provide services to users in response to the first behavior and/or the second behavior, the point files include the first user data, and S is an integer greater than or equal to 1.
Channels include, for example, but are not limited to, a credit card channel, a debit card channel, a qualifying-activity channel, and the like. For example, the credit card channel may provide credit card consumption services to a user in response to the user's credit card consumption behavior. The debit card channel may provide debit card consumption services in response to the user's debit card consumption behavior. The qualifying-activity channel may respond to the user's participation in an activity and provide services such as activity entries and activity descriptions.
A point file may comprise the first user data of one or more users over a period of time. For example, a credit card point file may include the credit card transaction flow of multiple users over a period of time; a debit card point file may include their debit card transaction flow; and a compliance-record point file may include the compliance records of multiple users over a period of time.
In operation S320, the point file of each channel is preprocessed based on the preprocessing rule associated with that channel among the S channels.
According to the embodiment of the present disclosure, the preprocessing rules can be implemented as configuration files in an abstracted, centralized, configurable manner. First, the steps or objects to be processed are abstracted according to the characteristics of each of the S channels, forming a preprocessing rule adapted to each channel. Then the configuration files of the S channels are managed centrally, each channel corresponding to at least one configuration file, so that the preprocessing rules adapted to the S channels are realized as configuration. In this way, the processing logic can be changed dynamically and extended simply by operating on the configuration files. For the point files of the S channels, the preprocessing rules in the corresponding configuration files are executed to process the user data in the files, and finally the user points are obtained from the processing result.
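The configuration-driven idea above can be shown with a small sketch: each channel's preprocessing rule is a configuration entry rather than code, so supporting a new channel or a changed file layout means editing configuration only. The channel names, delimiters, and field lists are illustrative assumptions.

```python
# Hedged sketch of configurable preprocessing: each channel's rule is a
# config entry (delimiter + ordered field names), so channels can be added
# or changed without touching the parsing code. All names are illustrative.

CHANNEL_CONFIG = {
    "credit_card": {"delimiter": "|", "fields": ["user_id", "amount", "date"]},
    "debit_card":  {"delimiter": ",", "fields": ["user_id", "date", "amount"]},
}

def preprocess(channel, raw_line):
    cfg = CHANNEL_CONFIG[channel]                     # per-channel rule lookup
    values = raw_line.strip().split(cfg["delimiter"])
    return dict(zip(cfg["fields"], values))           # normalized record

print(preprocess("credit_card", "u1|99.50|2021-12-01"))
# {'user_id': 'u1', 'amount': '99.50', 'date': '2021-12-01'}
```

Both channels produce records with the same keys despite different source layouts, which is exactly the "user data in a preset format" the preprocessing step is meant to yield.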
Referring to fig. 4, the first cluster 120 may run batch application 1, batch application 2, batch application 3, … batch application N. Each batch application may execute a preprocessing task to process batch data; for example, batch applications 1 to N may execute preprocessing tasks 1 to N respectively.
The first cluster 120 may determine a corresponding parsing configuration file based on the channel identifier of each channel, and execute the parsing logic in the parsing configuration file to parse the points file of that channel, where the parsing logic converts the user data into user data in a predetermined format.
The parsing configuration file may include X parsing units, which operate as follows: X first target fields are determined from the user data, where X is an integer greater than or equal to 1; and the values of the X first target fields are mapped one-to-one onto X columns of a first database table, where the mapped X columns contain the user data in the predetermined format, and the X parsing units include X parsing logics corresponding to the X columns.
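The field-to-column mapping performed by the parsing units can be sketched as follows. This is a minimal illustration, not the patent's implementation; the field names and the target schema are assumptions.

```python
# Hypothetical sketch: each parsing unit maps one first target field of the
# raw user data onto one column of the first database table, producing a row
# in the predetermined format. Unmapped fields are dropped.

def parse_record(raw: dict, field_map: dict) -> dict:
    """Apply X parsing units: map each first target field to its table column."""
    return {column: raw.get(field) for field, column in field_map.items()}

# Example: three parsing units (X = 3) for a credit card points file.
FIELD_MAP = {"cust_id": "user_id", "txn_amt": "amount", "txn_date": "occurred_at"}

row = parse_record(
    {"cust_id": "U001", "txn_amt": "199.00", "txn_date": "2021-12-01", "extra": "ignored"},
    FIELD_MAP,
)
```

The mapping is purely declarative, which matches the configuration-file design: changing a channel's field layout only requires editing its `FIELD_MAP`-style configuration, not the parsing code.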
In some embodiments, a channel type number may be agreed upon with the S channels in advance as the channel identifier. When points files are pushed into the first cluster 120 in batches, the file names follow a unified naming specification based on the channel number. The points files generated by the various channels may differ in format, for example in file format or in the meaning of fields in the files. In an optional implementation, each channel is bound to a fixed batch application; executable scripts are stored in the batch applications, and the points file is parsed in a targeted manner by running the scripts, where the batch application to receive a pushed file can be determined from the file name of the points file. In another alternative, channels are not bound to fixed batch applications, and after receiving a points file the first cluster 120 may assign it to a batch application based on a load balancing algorithm. For example, after receiving a points file, batch application 1 calls the associated executable script from a script library based on the file name to parse the points file in a targeted manner. Running the script to parse a file is the execution of a preprocessing task, and the executable script embodies the preprocessing rule.
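The file-name convention above can be sketched as a channel-number prefix that keys into a script library. The `CH`-prefix naming format and the script names are illustrative assumptions, not the agreed specification.

```python
# Hypothetical sketch of dispatching a pushed points file by its file name:
# the agreed channel type number is embedded as a prefix, and the receiving
# batch application looks up the matching parsing script.

def channel_of(filename: str) -> str:
    """Extract the agreed channel number from a uniformly named points file."""
    # e.g. "CH02_20211201_points.dat" -> "CH02"
    return filename.split("_", 1)[0]

SCRIPT_LIBRARY = {  # hypothetical script library keyed by channel number
    "CH01": "parse_credit_card.sh",
    "CH02": "parse_debit_card.sh",
    "CH03": "parse_qualifying_activity.sh",
}

script = SCRIPT_LIBRARY[channel_of("CH02_20211201_points.dat")]
```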
Different channels may adopt different data processing procedures, so the points files received from the channels differ in content and format, and point calculation cannot be performed under a single uniform calculation rule. Therefore, each points file can be preprocessed in a targeted manner using the preprocessing rule associated with its channel, so that files in a uniform format are obtained and point calculation can be performed uniformly.
After the points file of each channel is preprocessed in operation S320, a target table of each channel is obtained, the target table including M flow records.
In operation S330, a corresponding point calculation rule is matched for each of the M flow records, where M is an integer greater than or equal to 1 and the M flow records include the first user data.
In operation S340, the first user data in each flow record is calculated based on the matched point calculation rule to obtain the first point result.
Referring to fig. 4 and 5, in fig. 5 a Hadoop cluster serves as the second cluster 130 and a Spark cluster serves as the third cluster 140. After the first cluster 120 writes the target table into the database, the target table may be exported as large files, such as large files A, B, C and D. The large files can be pushed to an HDFS path through a scheduling task for the distributed computing frameworks to process.
First, file splitting is performed using the Hadoop cluster. Using the distributed file processing capacity of the Hadoop cluster, each pushed large file is split into files no larger than 50MB and stored in a distributed manner. Each Block in Hadoop can store massive data, satisfying large-data-volume scenarios, and fast access can be achieved through the NameNode. When the large file data is split, MapReduce rewrites the format and compensates default values, which also avoids repeated loading of duplicate data. Finally, the split files are output to a result directory according to a certain naming rule through task scheduling.
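The size-bounded splitting step can be illustrated in plain Python (this is not Hadoop/MapReduce code; it only shows the grouping rule). The 50 MB bound comes from the text; the record format is an assumption.

```python
# Illustrative sketch of the splitting rule: records of a pushed large file
# are grouped into chunks whose total byte size never exceeds the limit, and
# each chunk would then be stored as one split file.

CHUNK_LIMIT = 50 * 1024 * 1024  # 50 MB upper bound per split file

def split_lines(lines, limit=CHUNK_LIMIT):
    """Group records into chunks whose byte size never exceeds `limit`."""
    chunks, current, size = [], [], 0
    for line in lines:
        n = len(line.encode("utf-8"))
        if current and size + n > limit:  # current chunk would overflow
            chunks.append(current)
            current, size = [], 0
        current.append(line)
        size += n
    if current:
        chunks.append(current)
    return chunks

# Tiny limit used here purely to make the grouping visible.
chunks = split_lines(["aaaa", "bbbb", "cc"], limit=8)
```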
Second, data filtering is performed using the Spark cluster. Spark reads the split files from the HDFS path and processes them concurrently, one by one. Data cleaning is performed according to preset data filtering rules, such as merchant filtering, blacklist filtering and amount filtering. Merchant filtering means, for example, that only merchants within predetermined industries may earn points. Blacklist filtering means, for example, that blacklisted merchants in a certain industry cannot earn points, or that blacklisted users cannot earn points. Amount filtering means, for example, that amounts below a predetermined amount earn no points. Spark calls the point-rule Jar package layer by layer, completes online acquisition and authentication of the user information, and finally writes the rule-conforming data into fragment files. After the Spark cluster client submits a Spark command (e.g., a command for processing the flow records), a Driver thread is started. In fig. 5, DriveA, DriveB and DriveC may represent different threads processing merchant filtering, blacklist filtering and amount filtering respectively, where DriveA1, DriveA2, DriveB1, DriveB2, DriveC1 and DriveC2 are sub-processes under the corresponding process, each performing data filtering after being matched to at least one filtering rule. It should be noted that the filtering rules that can be matched in the Spark cluster are not limited to the merchant filtering, blacklist filtering and amount filtering shown in fig. 5; they can be selected flexibly according to the actual situation.
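The three filtering rules can be sketched as plain predicates (in the actual design these run concurrently as threads over the split files; this sketch ignores the concurrency). The industries, blacklists and threshold are illustrative assumptions.

```python
# Plain-Python sketch of the filtering rules shown in fig. 5: merchant
# filtering, blacklist filtering (merchants and users), and amount filtering.

ELIGIBLE_INDUSTRIES = {"retail", "dining"}   # only these industries earn points
BLACKLISTED_MERCHANTS = {"M999"}             # merchants that cannot earn points
BLACKLISTED_USERS = {"U666"}                 # users that cannot earn points
MIN_AMOUNT = 1.0                             # amounts below this earn nothing

def passes_filters(rec: dict) -> bool:
    if rec["industry"] not in ELIGIBLE_INDUSTRIES:      # merchant filtering
        return False
    if rec["merchant_id"] in BLACKLISTED_MERCHANTS:     # blacklist filtering
        return False
    if rec["user_id"] in BLACKLISTED_USERS:
        return False
    return rec["amount"] >= MIN_AMOUNT                  # amount filtering

records = [
    {"user_id": "U001", "merchant_id": "M001", "industry": "retail", "amount": 100.0},
    {"user_id": "U666", "merchant_id": "M001", "industry": "retail", "amount": 100.0},
    {"user_id": "U001", "merchant_id": "M001", "industry": "retail", "amount": 0.5},
]
kept = [r for r in records if passes_filters(r)]  # only the rule-conforming data
```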
The Spark cluster may determine a corresponding process configuration file based on the channel identifier of each channel, and execute the process logic in the process configuration file to process the parsed points file of that channel.
The process configuration file comprises Y flow-unit sets, such as a merchant-filtering flow-unit set, a blacklist-filtering flow-unit set and an amount-filtering flow-unit set. Executing the process logic in the process configuration file to process the parsed points file of each channel includes: executing each of the Y flow-unit sets in turn according to a predetermined order of the Y flow-unit sets, where each flow-unit set corresponds to one processing flow for obtaining the first point result; each flow-unit set comprises Z flow units, each of the Z flow units corresponds to one sub-flow of the processing flow, each flow unit comprises at least one piece of flow logic, and Y and Z are each an integer greater than or equal to 1.
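The ordered execution of flow-unit sets can be sketched as a two-level pipeline. This is a minimal illustration under the assumption that a flow unit may either transform a record or drop it; the concrete units are invented for the example.

```python
# Sketch: Y flow-unit sets run in a predetermined order; each set contains
# Z flow units applied in sequence. A unit returning None drops the record.

def run_pipeline(record, flow_unit_sets):
    """Run each flow-unit set in order; stop early if a unit drops the record."""
    for unit_set in flow_unit_sets:      # one of the Y flow-unit sets
        for unit in unit_set:            # one of its Z flow units
            record = unit(record)
            if record is None:           # filtered out by this sub-flow
                return None
    return record

# Two illustrative flow units (Y = 2, Z = 1 each).
strip_whitespace = lambda r: {k: v.strip() if isinstance(v, str) else v
                              for k, v in r.items()}
drop_small = lambda r: r if r["amount"] >= 1.0 else None

result = run_pipeline({"user": " U001 ", "amount": 5.0},
                      [[strip_whitespace], [drop_small]])
```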
Finally, point calculation is performed using the Spark cluster. Spark reads the qualified data again and calculates a point value for each flow record according to the rule matched in the rule engine; for example, if the number of people recommended by the user in a referral activity reaches 10, the point value in the qualifying record is calculated according to the configured rule. A single flow record may match multiple point calculation rules, in which case multiple accumulation point records (i.e., first point results) are generated and written into the Kafka message cluster.
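The multi-rule matching step can be sketched as follows; the two rules are illustrative stand-ins for the Jar-packaged rule engine, and the threshold of 10 referrals follows the example in the text.

```python
# Sketch: one flow record may match several point calculation rules, and each
# match yields its own accumulation point record (a first point result) to be
# written to the Kafka message cluster.

RULES = [
    ("base_points",     lambda r: int(r["amount"]) // 10),               # 1 point per 10 spent (assumption)
    ("referral_reward", lambda r: 50 if r.get("referrals", 0) >= 10 else 0),
]

def calculate(record: dict) -> list:
    """Return one accumulation point record per matched rule with a non-zero value."""
    results = []
    for rule_name, rule in RULES:
        value = rule(record)
        if value > 0:
            results.append({"user_id": record["user_id"],
                            "rule": rule_name, "points": value})
    return results

# A single flow record matching both rules -> two accumulation records.
messages = calculate({"user_id": "U001", "amount": 120, "referrals": 10})
```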
According to embodiments of the present disclosure, a first behavior of a user may trigger different point calculation rules or involve one or more activities. Matching the flow records to point rules through the rule engine improves the accuracy and efficiency of point calculation.
Fig. 6 schematically shows a flowchart for obtaining the second point result in operation S220 according to an embodiment of the present disclosure. Fig. 7 schematically shows an interaction diagram of a real-time data source, a fourth cluster and message middleware according to an embodiment of the disclosure.
As shown in fig. 6, obtaining the second point result based on real-time data calculation in operation S220 may include operations S610 to S620.
In operation S610, a corresponding first online service is called according to the channel to which the real-time request message belongs, where the real-time request message includes the second user data.
In operation S620, the second point result is obtained by performing point calculation on the second user data based on the first online service.
As shown in fig. 7, the fourth cluster 160 may receive real-time request messages from the S channels. The fourth cluster 160 may include a router or gateway, A online application 1, A online application 2, A online application 3 … A online application N, and a database. The A online applications 1 to N can provide the first online service, and the first online service can respond to a received real-time request message to post points quickly, thereby realizing an instant response.
Referring to fig. 1 and 7, first, the real-time data source may comprise data transmitted in real time via an API interface call after the client receives an operation of a user. The real-time data may be transmitted to the fourth cluster 160 as an HTTP message (i.e., a real-time request message), for example through an HTTP request.
Second, the fourth cluster 160 may distribute different HTTP messages to different A online applications using the router or gateway. In an alternative embodiment, each channel corresponds to a fixed number of A online applications, and the A online applications correspond to the first online service. For example, the router may determine the channel to which a message belongs based on the channel number in the HTTP message, and then forward it to the corresponding A online application. In another alternative embodiment, channels do not correspond to a fixed number of A online applications but to a fixed first online service. The router may distribute the HTTP message to some A online application based on a load balancing algorithm; that A online application determines the channel from the channel number in the HTTP message and calls the corresponding first online service.
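The two dispatch modes can be sketched as follows. The message fields, channel numbers and application names are assumptions for illustration; real routing would of course sit in front of actual HTTP servers.

```python
# Sketch of the two routing modes: with fixed binding, the router reads the
# channel number from the HTTP message and forwards to that channel's A online
# application; otherwise a simple round-robin load balancer picks one.
import itertools

FIXED_BINDING = {"CH01": "A-online-app-1", "CH02": "A-online-app-2"}
_round_robin = itertools.cycle(["A-online-app-1", "A-online-app-2", "A-online-app-3"])

def route(message: dict, fixed: bool = True) -> str:
    if fixed:
        return FIXED_BINDING[message["channel"]]  # channel -> dedicated app
    return next(_round_robin)                     # load-balanced dispatch

target = route({"channel": "CH02", "body": {"user_id": "U001"}})
```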
Finally, the HTTP message is parsed using the first online service to obtain the second user data, and point calculation is performed. Take as an example a user participating in a lottery activity in a bank client: after the user performs the draw operation, the lottery result is obtained. The bank client may transmit the lottery result to the fourth cluster 160 in real time in the form of an HTTP message. The router in the fourth cluster 160 forwards the HTTP message to A online application 1. A online application 1 calls the first online service according to the lottery channel number in the message so that the first online service obtains the lottery result. For example, if the lottery result is a third prize, the first online service may obtain the points awarded for the third prize as the second point result.
In some embodiments, referring to fig. 7, the second point result may be written to the database. Through a Quartz scheduled-task mechanism, the second point result in the database is converted asynchronously into a point message, and the point message is written into the message middleware 170.
Fig. 8 schematically shows a flowchart of obtaining user points in operation S230 according to an embodiment of the present disclosure. Fig. 10 schematically shows an interaction diagram among message middleware, a fifth cluster and a sixth cluster according to an embodiment of the present disclosure.
As shown in fig. 8, consuming the N point messages from the message queue to calculate the user points in operation S230 may include operations S810 to S820.
In operation S810, a second online service is invoked based on each of the N point messages.
As shown in FIG. 10, the Storm cluster may serve as the fifth cluster 180. The sixth cluster 190 may receive requests from the Storm cluster. The sixth cluster 190 may include a router or gateway, B online application 1, B online application 2, B online application 3 … B online application N, and a database. The B online applications 1 to N may provide the second online service, and the second online service may update the user points in units of user point accounts according to requests from the Storm cluster.
Referring to fig. 1 and 10, after the third cluster and the fourth cluster finish processing, the first point result and the second point result are converted into a point-message data stream, and data processing is performed by bridging the message middleware (such as a Kafka cluster) to a stream computing framework (the Storm cluster). In other words, the Storm cluster may act as the consumer, consuming point messages from a message queue in the message middleware 170 in order to invoke the second online service.
In operation S820, the user points of the user point account are updated using the second online service according to the point message.
Before performing operation S820, a target parameter identifier may be determined based on at least one parameter identifier in a third database table; at least one scene field corresponding to the target parameter identifier is determined from a fourth database table; a second target field is matched against the at least one scene field to obtain a matching result; and a corresponding configuration condition is determined based on the matching result.
The third database table can serve as a parameter-identifier definition table, defining the parameter identifiers (such as name, number, etc.) and their types. The fourth database table can serve as a parameter-identifier value table, defining the enumerated values of each parameter identifier, i.e., the scene fields. A parameter identifier here refers to configuring a point calculation rule in a parameterized manner: each point calculation rule may be assigned a corresponding parameter identifier. A scene field refers to the detailed scene information concretely related to the user data, such as fields corresponding to point activities, point star levels, consumption amounts, merchant names, and the like. These point calculation rules differ from the rule configuration invoked by the rule engine, both in form and in the user data they process.
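The interplay of the two parameter tables can be sketched as dictionaries. All table contents, identifier names and condition names below are invented for illustration; only the lookup-then-match shape comes from the text.

```python
# Sketch: the third database table defines parameter identifiers; the fourth
# enumerates their scene fields. The second target field is matched against
# the scene fields to pick the applicable configuration condition.

DEFINITION_TABLE = {"P01": "activity", "P02": "star_level"}  # third database table
VALUE_TABLE = {                                              # fourth database table
    "P01": {"spring_festival", "anniversary"},
    "P02": {"gold", "platinum"},
}
CONFIG_CONDITIONS = {("P01", "anniversary"): "double_points"}  # hypothetical

def match_condition(target_param: str, second_target_field: str):
    scene_fields = VALUE_TABLE.get(target_param, set())
    if second_target_field in scene_fields:  # matching result: hit
        return CONFIG_CONDITIONS.get((target_param, second_target_field))
    return None                              # matching result: miss

condition = match_condition("P01", "anniversary")
```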
Referring to fig. 10, the router or gateway may load-balance the received requests, for example sending a request corresponding to a point message to B online application 1. The point message may include user information, an accumulated point value, etc. B online application 1 may call the second online service to obtain the user point account and query the database for the point balance (i.e., the user points) in that account. In an alternative embodiment, the second online service may add the accumulated point value to the point balance and determine whether the maximum allowed point value of the user point account is exceeded; if it is, the point balance is updated to the maximum value, and if not, the balance is updated directly to the accumulated result. Finally, the updated user point account and point balance are written into the database. In some embodiments, the point message may instead include user information, a deducted point value, and the like.
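The capped accumulation just described can be sketched in a few lines. The cap value is an illustrative assumption, as is the choice to clamp a deduction at zero (the text only specifies the upper cap).

```python
# Sketch: add the accumulated (or deducted) point value to the point balance,
# never exceeding the account's maximum allowed point value.

MAX_POINTS = 1_000_000  # hypothetical maximum allowed point value per account

def update_balance(balance: int, delta: int, cap: int = MAX_POINTS) -> int:
    """Apply a point change; clamp to [0, cap]."""
    return max(0, min(balance + delta, cap))

new_balance = update_balance(999_990, 25)  # would exceed the cap -> clamped
```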
On the one hand, after the first point result is obtained based on the static data, the message corresponding to the first point result is parsed by the second online service to update the points into the user point account, which prevents the update logic from being scattered across the online and batch paths and increasing subsequent maintenance complexity. On the other hand, in obtaining the second point result based on the real-time data, the first online service is called, and the second online service is called for the update; through the combination of the first and second online services, the real-time data is processed immediately while real-time and static data are handled compatibly. In addition, scheduling control logic for updating the point account is added to the second online service, so that special business processing such as exception retry and unified management of large point values can be completed.
Fig. 9 schematically shows a flowchart of invoking the second online service in operation S810 according to an embodiment of the present disclosure.
As shown in fig. 9, invoking the second online service based on each of the N point messages in operation S810 may include operations S910 to S920.
In operation S910, each point message is converted into a corresponding online request.
The conversion may be a data format conversion, or valid data may be extracted from each point message and reprocessed to obtain the online request.
In operation S920, the second online service is invoked based on the online request, so that the second online service parses the online request to update the user point account.
Referring to fig. 10, the spout process of the Storm cluster monitors message changes in the consumption queue; the consumed point messages flow to the bolt thread group for dynamically balanced consumption; the bolt processing logic converts the point messages into HTTP online requests and calls the online service to complete the adjustment of the point account. In particular, point messages may be consumed by topic in the Storm cluster, such as consuming topic1, consuming topic2, and so forth, corresponding to the topic classification in the message middleware, where consuming topic1 processes point messages from the credit card channel and consuming topic2 processes point messages from the debit card channel. It should be noted that in the user point processing method according to the embodiment of the present disclosure, a plurality of channels may be set as needed, and correspondingly a plurality of topics may be set in the Storm cluster.
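The topic-per-channel layout and the bolt conversion step can be sketched as follows. This is not Storm or Kafka code; the topic names follow the text, while the request path and body shape are assumptions.

```python
# Sketch: each channel's point messages land on their own topic, and a
# bolt-style handler turns each consumed point message into an HTTP online
# request for the second online service.

TOPIC_BY_CHANNEL = {"credit_card": "topic1", "debit_card": "topic2"}

def to_online_request(point_message: dict) -> dict:
    """Bolt logic: convert a consumed point message into an HTTP online request."""
    return {
        "method": "POST",
        "path": "/points/update",  # hypothetical endpoint of the second online service
        "body": {"user_id": point_message["user_id"],
                 "points": point_message["points"]},
    }

request = to_online_request({"user_id": "U001", "points": 12,
                             "channel": "credit_card"})
```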
According to embodiments of the present disclosure, a processing record may be written into an index repository, the processing record including a record of consuming a point message from the message queue. Before performing operation S910 to convert each point message into a corresponding online request, the method further includes: querying the index repository for the processing record of each point message.
As shown in FIG. 10, an index repository 1010 may be provided to interface with the Storm cluster. The index repository 1010 may store records of the point messages historically processed by the Storm cluster. Before a point message flows to the bolt thread group for dynamically balanced consumption, the index repository 1010 may be queried to determine whether a processing record of the point message exists; if so, the point message is not processed again. This prevents any link upstream of consumption from re-sending already-processed data, thereby avoiding the waste of computing resources caused by processing duplicate data, and even the repeated accumulation of user points.
According to an embodiment of the present disclosure, the processing record includes a processing state, and in the course of the second online service parsing the online request and updating the user point account, the method further includes: modifying, in the index repository, the processing state of the point message corresponding to the online request, where the modifying includes modifying the processing state to an exception state.
Referring to fig. 10, after the sixth cluster 190 receives the online request, parsing may fail; in that case the point message corresponding to the online request may be returned to the Storm cluster, and the processing state of that point message in the index repository may be modified to the exception state. Updating of the user points may also fail; the point message corresponding to the online request may likewise be returned to the Storm cluster and its state modified. In some embodiments, an exception database may be configured for the sixth cluster 190 to store online requests in which exceptions occurred; data may then be synchronized periodically between the exception database and the index repository to update the state of the corresponding point messages.
Modifying the processing state of a point message provides both anti-reprocessing and exception-data compensation. For example, before performing operation S910, the index repository may be queried for a processing record of the current point message: if there is no record, processing continues; if there is a record and the processing state is the success state, processing does not continue; if there is a record and the processing state is the exception state, processing can continue. In this way, after the second online service produces exception data, a compensation mechanism runs to improve the correctness of user point processing.
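The three-way decision just described can be sketched directly. The state names are illustrative; only the decision logic comes from the text.

```python
# Sketch of the anti-reprocessing and compensation check: process a point
# message when it has no record in the index repository or when its recorded
# state is the exception state; skip it once recorded as successful.

index_repository = {"msg-1": "SUCCESS", "msg-2": "EXCEPTION"}  # hypothetical contents

def should_process(message_id: str) -> bool:
    state = index_repository.get(message_id)
    if state is None:            # never seen: process normally
        return True
    if state == "EXCEPTION":     # failed earlier: retry (compensation)
        return True
    return False                 # already succeeded: skip (anti-reprocessing)

decisions = [should_process(m) for m in ("msg-1", "msg-2", "msg-3")]
```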
Based on the user point processing method, the disclosure also provides a user point processing device. The apparatus will be described in detail below with reference to fig. 11.
Fig. 11 schematically shows a block diagram of a user point processing apparatus 1100 according to an embodiment of the present disclosure.
As shown in fig. 11, the user point processing apparatus 1100 of this embodiment includes a static data calculation module 1110, a real-time data calculation module 1120, a point result conversion module 1130, and a point message consumption module 1140.
The static data calculation module 1110 may perform operation S210, for example, for obtaining the first point result based on static data calculation, where the static data includes first user data generated according to a first behavior of the user, and the first behavior is used for obtaining user points indirectly.
In some embodiments, the static data calculation module 1110 may further perform operations S310 to S320, for example, which are not described herein.
The real-time data calculation module 1120 may perform operation S220, for example, to obtain the second point result based on real-time data calculation, where the real-time data includes second user data generated according to a second behavior of the user, and the second behavior is used for obtaining user points directly.
In some embodiments, the real-time data calculation module 1120 may further perform operations S610 to S620, which are not described herein.
The point result conversion module 1130 may perform operation S230, for example, to process the first point result and the second point result to obtain N point messages and write the N point messages into the message queue, where N is an integer greater than or equal to 2.
In some embodiments, the point result conversion module 1130 may further perform operations S810 to S820 and operations S910 to S920, for example, which are not described again here.
The point message consumption module 1140 may, for example, perform operation S240 for consuming the N point messages from the message queue to calculate the user points.
According to an embodiment of the present disclosure, any plurality of the static data calculation module 1110, the real-time data calculation module 1120, the point result conversion module 1130, and the point message consumption module 1140 may be combined into one module for implementation, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the static data calculation module 1110, the real-time data calculation module 1120, the point result conversion module 1130, and the point message consumption module 1140 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on chip, a system on substrate, a system on package, or an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of the three implementations of software, hardware, and firmware, or in any suitable combination of them. Alternatively, at least one of these modules may be implemented at least in part as a computer program module that, when executed, performs the corresponding functions.
Fig. 12 schematically shows a block diagram of an electronic device adapted to implement a user credit processing method according to an embodiment of the disclosure.
As shown in fig. 12, an electronic apparatus 1200 according to an embodiment of the present disclosure includes a processor 1201, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)1202 or a program loaded from a storage section 1208 into a Random Access Memory (RAM) 1203. The processor 1201 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 1201 may also include on-board memory for caching purposes. The processor 1201 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 1203, various programs and data necessary for the operation of the electronic apparatus 1200 are stored. The processor 1201, the ROM 1202, and the RAM 1203 are connected to each other by a bus 1204. The processor 1201 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 1202 and/or the RAM 1203. Note that the programs may also be stored in one or more memories other than the ROM 1202 and the RAM 1203. The processor 1201 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the disclosure, the electronic device 1200 may also include an input/output (I/O) interface 1205, which is likewise connected to the bus 1204. The electronic device 1200 may also include one or more of the following components connected to the I/O interface 1205: an input section 1206 including a keyboard, a mouse, and the like; an output section 1207 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 1208 including a hard disk and the like; and a communication section 1209 including a network interface card such as a LAN card or a modem. The communication section 1209 performs communication processing via a network such as the Internet. A drive 1210 is also connected to the I/O interface 1205 as needed. A removable medium 1211, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1210 as needed, so that a computer program read out therefrom is installed into the storage section 1208 as necessary.
The present disclosure also provides a computer-readable storage medium, which may be contained in the device/apparatus/system described in the above embodiments, or may exist separately without being assembled into that device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to the embodiments of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 1202 and/or the RAM 1203 and/or one or more memories other than the ROM 1202 and the RAM 1203 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method illustrated in the flowcharts. When the computer program product runs in a computer system, the program code causes the computer system to realize the user point processing method provided by the embodiments of the disclosure.
The computer program performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure when executed by the processor 1201. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted or distributed in the form of a signal over a network medium, downloaded and installed through the communication section 1209, and/or installed from the removable medium 1211. The computer program containing the program code may be transmitted using any suitable network medium, including but not limited to wireless, wired, and the like, or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, the program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure may be combined and/or integrated in various ways, even if such combinations or integrations are not expressly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined and/or integrated without departing from the spirit or teaching of the present disclosure. All such combinations and/or integrations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (12)

1. A user point processing method, comprising:
obtaining a first points result by calculation based on static data, wherein the static data comprises first user data generated according to a first behavior of a user, and the first behavior is used for indirectly obtaining user points;
obtaining a second points result by calculation based on real-time data, wherein the real-time data comprises second user data generated according to a second behavior of the user, and the second behavior is used for directly obtaining user points;
respectively processing the first points result and the second points result to obtain N points messages, and writing the N points messages into a message queue, wherein N is an integer greater than or equal to 2;
and consuming the N points messages from the message queue to calculate and obtain the user points.
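For illustration only (not part of the claims), the two-path flow of claim 1 can be sketched in Python. The in-memory queue, the record layouts, and the points rules are all hypothetical; a production system would use a durable message queue and the channel-specific rules described in the dependent claims.

```python
from queue import Queue

# Illustrative sketch of claim 1: a static (batch) calculation path and a
# real-time calculation path both emit points messages into one queue, and a
# consumer drains the queue to update user points. All names are hypothetical.

def first_points_result(static_records):
    # First behavior earns points indirectly (e.g. points derived from spend).
    return [{"user": r["user"], "points": r["amount"] // 10} for r in static_records]

def second_points_result(realtime_events):
    # Second behavior earns points directly (e.g. a fixed check-in bonus).
    return [{"user": e["user"], "points": e["bonus"]} for e in realtime_events]

def to_messages(*results):
    # Convert each points result into points messages for the queue.
    for result in results:
        yield from result

queue = Queue()
static = [{"user": "u1", "amount": 120}]
realtime = [{"user": "u1", "bonus": 5}]
for msg in to_messages(first_points_result(static), second_points_result(realtime)):
    queue.put(msg)

balances = {}
while not queue.empty():
    msg = queue.get()
    balances[msg["user"]] = balances.get(msg["user"], 0) + msg["points"]

print(balances)  # {'u1': 17}
```

Decoupling the two producers from the consumer through a queue is what lets the batch and real-time paths run at different rates while posting to the same points account.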
2. The method of claim 1, wherein the static data comprises points files from S channels, and the obtaining of the first points result by calculation based on the static data comprises:
acquiring the points files of the S channels, wherein the S channels comprise channels which provide services for users in response to the first behavior and/or the second behavior, the points files comprise the first user data, and S is an integer greater than or equal to 1;
and preprocessing the points file of each channel based on a preprocessing rule associated with each of the S channels.
3. The method of claim 2, wherein after the preprocessing of the points file of each channel, a target table of each channel is obtained, the target table comprising M flow records, and the method further comprises:
matching each of the M flow records with a corresponding points calculation rule, wherein the M flow records comprise the first user data, and M is an integer greater than or equal to 1;
and calculating the first user data in each flow record based on the points calculation rule to obtain the first points result.
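For illustration only, claims 2 and 3 can be sketched as a per-channel preprocessing step that turns a raw points file into a target table of flow records, followed by rule matching per record. The file formats, behaviors, and calculation rules below are hypothetical examples, not taken from the patent.

```python
# Illustrative sketch of claims 2-3. Each channel has its own preprocessing
# rule for its points file, and each flow record is matched to a points
# calculation rule by its behavior type. All names and rules are hypothetical.

PREPROCESS_RULES = {
    "app": lambda line: line.strip().split(","),   # app channel: CSV lines
    "web": lambda line: line.strip().split("|"),   # web channel: pipe-delimited
}

CALC_RULES = {
    "purchase": lambda amount: amount // 10,  # indirect: spend-derived points
    "review":   lambda amount: 2,             # flat points per review
}

def preprocess(channel, raw_lines):
    """Turn a channel's raw points file into a target table of flow records."""
    rule = PREPROCESS_RULES[channel]
    table = []
    for line in raw_lines:
        user, behavior, amount = rule(line)
        table.append({"user": user, "behavior": behavior, "amount": int(amount)})
    return table

def first_points(table):
    """Match each flow record to its calculation rule and compute points."""
    return [{"user": r["user"], "points": CALC_RULES[r["behavior"]](r["amount"])}
            for r in table]

table = preprocess("app", ["u1,purchase,120", "u2,review,0"])
print(first_points(table))
# [{'user': 'u1', 'points': 12}, {'user': 'u2', 'points': 2}]
```

Keying both the preprocessing and the calculation on per-channel/per-behavior rule tables keeps new channels a configuration change rather than a code change, which matches the per-channel rule association the claim describes.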
4. The method of claim 1, wherein the real-time data comprises real-time request messages from S channels, and the obtaining of the second points result by calculation based on the real-time data comprises:
invoking a corresponding first online service according to the channel to which the real-time request message belongs, wherein the real-time request message comprises the second user data;
and performing a points calculation on the second user data based on the first online service to obtain the second points result.
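For illustration only, the dispatch in claim 4 can be sketched as a lookup from channel to its online service. The service names and bonus rules are hypothetical.

```python
# Illustrative sketch of claim 4: a real-time request message is routed to the
# first online service of its channel, which performs the points calculation.
# All services and rules here are hypothetical.

def app_points_service(data):
    return {"user": data["user"], "points": data["bonus"]}

def web_points_service(data):
    return {"user": data["user"], "points": data["bonus"] * 2}

FIRST_ONLINE_SERVICES = {"app": app_points_service, "web": web_points_service}

def second_points_result(request_message):
    # Pick the service by the channel the request message belongs to.
    service = FIRST_ONLINE_SERVICES[request_message["channel"]]
    return service(request_message["data"])

result = second_points_result({"channel": "web", "data": {"user": "u2", "bonus": 3}})
print(result)  # {'user': 'u2', 'points': 6}
```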
5. The method of claim 4, wherein the consuming of the N points messages from the message queue to calculate and obtain the user points comprises:
invoking a second online service based on each of the N points messages;
and updating the user points of a user points account with the second online service in accordance with the points message.
6. The method of claim 5, wherein the invoking of the second online service based on each of the N points messages comprises:
converting each points message into a corresponding online request;
and invoking the second online service based on the online request, so that the second online service parses the online request to update the user points account.
7. The method of claim 6, wherein the method further comprises:
writing a processing record into an index repository, wherein the processing record comprises a record of consuming points messages from the message queue;
wherein before the converting of each points message into a corresponding online request, the method further comprises:
querying the processing record of each points message through the index repository.
8. The method of claim 7, wherein the processing record comprises a processing state, and in the event that the second online service parses the online request to update the user points account, the method further comprises:
modifying, in the index repository, the processing state of the points message corresponding to the online request, wherein the modifying comprises: modifying the processing state to an abnormal state.
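For illustration only, claims 5 through 8 together describe an idempotent consumer: before converting a points message into an online request, its processing record is looked up in the index repository so that a redelivered message is not posted twice. The sketch below models the index repository as a dict and uses illustrative state names ("processed", "abnormal"); a real deployment would back this with a database or search index.

```python
# Illustrative sketch of claims 5-8: consuming points messages idempotently
# via processing records in an index repository. All names are hypothetical.

index_repo = {}  # message id -> processing state

def consume(message, update_account):
    # Claim 7: query the processing record before converting the message.
    if index_repo.get(message["id"]) == "processed":
        return False  # duplicate delivery; skip it
    # Claim 6: convert the points message into an online request.
    online_request = {"account": message["user"], "delta": message["points"]}
    try:
        update_account(online_request)          # the "second online service"
        index_repo[message["id"]] = "processed"
    except Exception:
        # Claim 8: record an abnormal processing state on failure.
        index_repo[message["id"]] = "abnormal"
        raise
    return True

balances = {}
def update_account(req):
    balances[req["account"]] = balances.get(req["account"], 0) + req["delta"]

msg = {"id": "m1", "user": "u1", "points": 12}
consume(msg, update_account)
consume(msg, update_account)  # redelivery is detected and ignored
print(balances)  # {'u1': 12}
```

Because message queues typically guarantee at-least-once rather than exactly-once delivery, this kind of per-message processing record is what keeps a redelivered points message from crediting the account a second time.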
9. A user point processing apparatus comprising:
the static data calculation module, configured to obtain a first points result by calculation based on static data, wherein the static data comprises first user data generated according to a first behavior of a user, and the first behavior is used for indirectly obtaining user points;
the real-time data calculation module, configured to obtain a second points result by calculation based on real-time data, wherein the real-time data comprises second user data generated according to a second behavior of the user, and the second behavior is used for directly obtaining user points;
the points result conversion module, configured to respectively process the first points result and the second points result to obtain N points messages, and write the N points messages into a message queue, wherein N is an integer greater than or equal to 2;
and the points message consumption module, configured to calculate and obtain the user points by consuming the N points messages from the message queue.
10. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-8.
11. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 8.
12. A computer program product comprising a computer program which, when executed by a processor, implements a method according to any one of claims 1 to 8.
CN202111585671.5A 2021-12-22 2021-12-22 User point processing method, device, equipment, medium and program product Pending CN114240511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111585671.5A CN114240511A (en) 2021-12-22 2021-12-22 User point processing method, device, equipment, medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111585671.5A CN114240511A (en) 2021-12-22 2021-12-22 User point processing method, device, equipment, medium and program product

Publications (1)

Publication Number Publication Date
CN114240511A (en) 2022-03-25

Family

ID=80761717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111585671.5A Pending CN114240511A (en) 2021-12-22 2021-12-22 User point processing method, device, equipment, medium and program product

Country Status (1)

Country Link
CN (1) CN114240511A (en)

Similar Documents

Publication Publication Date Title
CN112771500B (en) Functional instant service gateway
US11240289B2 (en) Apparatus and method for low-latency message request/response processing
US9053231B2 (en) Systems and methods for analyzing operations in a multi-tenant database system environment
CN111737270B (en) Data processing method and system, computer system and computer readable medium
WO2020258290A1 (en) Log data collection method, log data collection apparatus, storage medium and log data collection system
US20200210481A1 (en) Parallel graph events processing
US9489445B2 (en) System and method for distributed categorization
US20230334017A1 (en) Api for implementing scoring functions
US11755461B2 (en) Asynchronous consumer-driven contract testing in micro service architecture
CN114172966B (en) Service calling method, service processing method and device under unitized architecture
CN114282011B (en) Knowledge graph construction method and device, and graph calculation method and device
US11676063B2 (en) Exposing payload data from non-integrated machine learning systems
US11182144B2 (en) Preventing database package updates to fail customer requests and cause data corruptions
CN112990991A (en) Method and device for merging invoices
CN114240511A (en) User point processing method, device, equipment, medium and program product
US11614981B2 (en) Handling of metadata for microservices processing
CN114490136A (en) Service calling and providing method, device, electronic equipment, medium and program product
CN114780361A (en) Log generation method, device, computer system and readable storage medium
CN114780807A (en) Service detection method, device, computer system and readable storage medium
CN114428723A (en) Test system, system test method, related device and storage medium
CN114237762A (en) Point file processing method, device, equipment, medium and program product
US20240098036A1 (en) Staggered payload relayer for pipelining digital payloads across network services
US11989661B1 (en) Dynamic rules for rules engines
CN116166558A (en) Transaction testing method, device, equipment and storage medium
CN116257375A (en) Kafka data automatic stream processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination