CN112434241A - Service processing method, computer device and storage medium - Google Patents

Service processing method, computer device and storage medium Download PDF

Info

Publication number
CN112434241A
Authority
CN
China
Prior art keywords
service
current
service processing
self
downstream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910789849.4A
Other languages
Chinese (zh)
Inventor
赵海林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN201910789849.4A priority Critical patent/CN112434241A/en
Publication of CN112434241A publication Critical patent/CN112434241A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/958Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Some embodiments of the present invention provide a service processing method, a computer device, and a storage medium. The method includes: receiving a service request; executing corresponding service processing according to the service request to obtain a current service processing result, the current service processing result at least comprising current request feedback; and sending the current service processing result to each downstream service system respectively, so that each downstream service system executes corresponding service processing according to the current service processing result. In this way, the successful processing of each downstream service system does not depend on the successful processing of any other downstream service system, service decoupling and parallel processing of the downstream service systems are realized, and both the service processing efficiency and the service processing success rate can be greatly improved.

Description

Service processing method, computer device and storage medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a service processing method, a computer device, and a storage medium.
Background
In some application scenarios, after the current service system completes its service processing, downstream service systems perform corresponding service processing based on the service processing information of the current service system. For example, during a live broadcast, an audience can send a prop presentation service request to the server through the audience client. After receiving the request, the server completes the prop presentation service processing through the prop system according to the request, and then all downstream systems of the prop system, such as the medal intimacy system, the user experience system, the bullet screen system, the property lottery system and the ranking list system, execute corresponding service processing according to the prop presentation service processing information.
In the prior art, the downstream service systems perform service processing in a serial flow, that is, the operation sequence of all the downstream service systems is predetermined, and after the current service system completes its service processing, each downstream service system sequentially executes the corresponding service processing in that predetermined order. As a result, successful processing of a subsequent downstream service system depends on successful processing of the preceding downstream service system: not only is service processing efficiency low, but a single execution failure or timeout in any one system affects the entire flow, and in severe cases causes data exceptions, data errors and data inconsistency and reduces the service processing success rate.
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, it is an object of the present invention to provide an improved service processing method to solve the problem that successful processing of a subsequent downstream service system depends on successful processing of the preceding downstream service system.
In order to achieve the above object, some embodiments of the present invention provide a service processing method, including:
receiving a service request;
executing corresponding service processing according to the service request and obtaining a current service processing result; the current service processing result at least comprises current request feedback;
and respectively sending the current service processing result to each downstream service system so that each downstream service system executes corresponding service processing according to the current service processing result.
In some embodiments of the present invention, the current service processing result further includes identification information; the executing the corresponding service processing according to the service request and obtaining the current service processing result includes:
executing a preset first service process according to the service request to obtain the current request feedback;
executing a preset second service process according to the current request feedback to obtain corresponding identification information;
the sending of the current service processing result to each downstream service system, so that each downstream service system executes corresponding service processing according to the current service processing result, includes:
and respectively sending the current service processing result to each downstream service system, so that when each downstream service system detects that the identification information is not stored, corresponding service processing is executed according to the current request feedback.
In some embodiments of the present invention, the sending the current service processing result to each downstream service system includes:
and packaging the current service processing result into a message and sending the message to a preset message queue so as to respectively send the message to each downstream service system through the message queue.
In some embodiments of the present invention, the second service processing includes:
calling a preset self-increment counter;
and when the self-increment identification code corresponding to the current request feedback is acquired through the self-increment counter, generating the identification information according to the self-increment identification code.
In some embodiments of the present invention, the generating the identification information according to the self-increment identification code includes:
taking the self-increment identification code as the identification information;
or, acquiring a current timestamp through a preset system time unit, and generating the identification information according to the current timestamp and the self-increment identification code;
or, when the service request carries a plurality of pieces of service information, performing a modulo operation on at least one piece of service information to obtain at least one modulo value, and generating the identification information according to the at least one modulo value and the self-increment identification code;
or generating the identification information according to the at least one modulo value, the current timestamp and the self-increment identification code.
In some embodiments of the present invention, the service processing method further includes: and when the self-increment identification code fails to be acquired, randomly generating the self-increment identification code within a preset numerical range.
In some embodiments of the present invention, before sending the current service processing result to each downstream service system, the method further includes: and processing the identification information by adopting a preset distributed lock to obtain the locked identification information.
In some embodiments of the present invention, the receiving a service request includes:
receiving a service request for presenting a prop, input by an audience client; the service request for presenting a prop at least comprises audience identification, anchor identification, prop identification and consumption type identification.
In some embodiments of the present invention, the executing the corresponding service processing according to the service request and obtaining the current service processing result includes:
reading the audience identification, the anchor identification, the prop identification and the consumption type identification in the service request;
when the consumption type identifier is a free identifier, directly establishing presentation associated information among the audience identifier, the anchor identifier and the prop identifier, and taking the presentation associated information as the current request feedback;
and when the consumption type identifier is a payment identifier, deducting the cost corresponding to the prop identifier from the account corresponding to the audience identifier, then establishing presentation associated information among the audience identifier, the anchor identifier and the prop identifier, and using the presentation associated information as the current request feedback.
In some embodiments of the present invention, the downstream service system includes at least one of the following systems: medal intimacy system, user experience system, bullet screen system, property lottery system and ranking list system.
In order to achieve the above object, some embodiments of the present invention further provide a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the foregoing method when executing the computer program.
In order to achieve the above object, some embodiments of the present invention further provide a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the aforementioned method.
By adopting the above steps, some embodiments of the present invention achieve the following beneficial effects:
After receiving a service request, some embodiments of the present invention execute corresponding service processing according to the service request and obtain a current service processing result, where the current service processing result at least comprises current request feedback; the current service processing result is then sent to each downstream service system respectively, so that each downstream service system executes corresponding service processing according to the current service processing result. Therefore, the successful processing of each downstream service system does not depend on the successful processing of other downstream service systems, service decoupling and parallel processing of the downstream service systems are realized, and, compared with the conventional serial processing flow, both the service processing efficiency and the service processing success rate can be greatly improved.
Drawings
Fig. 1 is a system architecture diagram of a live broadcast system;
Fig. 2 is a schematic diagram of a service processing method in the prior art;
Fig. 3 is a schematic diagram of an embodiment of a service processing method according to the present invention;
Fig. 4 is a flow chart of an embodiment of a service processing method of the present invention;
Fig. 5 is a flow chart of sending a message to a message queue in the present invention;
Fig. 6 is a block diagram of a service processing apparatus according to an embodiment of the present invention;
Fig. 7 is a hardware architecture diagram of an embodiment of the computer device of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Online live broadcasting refers to live broadcasting performed over the internet using streaming media technology. Referring to fig. 1, an anchor client A may establish an online live broadcast room on a website through the internet and broadcast live to the audience clients B, C and D that access the live broadcast room. During online live broadcasting, the anchor client A obtains the video and voice information input by the anchor through external devices such as a camera and a microphone, merges the audio and video information, and sends the live broadcast content to the server W; the server W then pushes the live broadcast content to the audience clients B, C and D that have accessed the live broadcast room of the anchor client. The server W is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions. The server W may be a computer, a single network server, a server group composed of a plurality of network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a form of distributed computing: a super virtual computer composed of a group of loosely coupled computers. The anchor client and the audience clients are not limited to the mobile devices shown in the figure; any intelligent terminal capable of pushing and pulling streams is applicable.
While watching the live broadcast, the audience can send a prop presentation instruction to the server through the audience client, for example to present virtual flowers, virtual coins, virtual shoe-shaped gold ingots and the like. The server receives the prop presentation instruction through the prop system and executes the corresponding prop presentation service processing according to the instruction to obtain prop presentation service processing information. In order to encourage users to present props, the server is also provided with a plurality of downstream service systems located downstream of the prop system, for example: a medal intimacy system, which updates the intimacy between an audience and the corresponding anchor according to the prop presentation service processing information obtained by the prop system; a user experience system, which updates the user grade points of the audience according to the prop presentation service processing information obtained by the prop system; a bullet screen system, which broadcasts the prop presentation service processing information obtained by the prop system through a bullet screen; a property lottery system, which judges whether to trigger a lottery draw according to the prop presentation service processing information obtained by the prop system; and a ranking list system, which updates the gift list according to the prop presentation service processing information obtained by the prop system.
In the prior art, each of the downstream service systems performs service processing in a serial flow as shown in fig. 2, that is, the operation sequence of all the downstream service systems is predetermined, and after the current service system completes its service processing, each downstream service system sequentially performs the corresponding service processing in that predetermined order. In fig. 2, the medal intimacy system, the user experience system, the bullet screen system, the property lottery system and the ranking list system perform service processing one after another from front to back.
Example one
The present embodiment provides a service processing method, as shown in fig. 3 and 4, the method includes the following steps:
s1, the current service system receives a service request, where the service request may carry a plurality of service information.
Taking the current service system as the aforementioned prop system as an example, this step may specifically include: receiving, through the prop system, a prop presentation service request input by an audience client, where the service information carried by the prop presentation service request mainly includes an audience identifier, an anchor identifier, a prop identifier and a consumption type identifier; in addition, the service information may further include a service type identifier and/or a client type identifier. Specifically, the consumption type identifier may be 0 or 1, where 0 represents the free identifier and 1 represents the payment identifier; the service type identifier may include, but is not limited to, 0, 1 and 2, where 0 represents a prop presentation identifier, 1 represents a guardian purchase identifier, and 2 represents a VIP purchase identifier, and it can be understood that when the received service request is a prop presentation service request, the carried service type identifier should be the prop presentation identifier; the client type identifier may be 0, 1 or 2, where 0 represents a web client identifier, 1 represents an Android client identifier, and 2 represents an iOS client identifier.
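For readability, the request described above can be pictured as a simple data structure. The following Python sketch is only an illustration: the field names, the dataclass form and the default values are assumptions and are not prescribed by this embodiment.

```python
# Illustrative model of the prop presentation service request; field names,
# enum constants and defaults are assumptions, not part of this embodiment.
from dataclasses import dataclass

FREE, PAID = 0, 1              # consumption type identifier values
GIFT, GUARDIAN, VIP = 0, 1, 2  # service type identifier values
WEB, ANDROID, IOS = 0, 1, 2    # client type identifier values

@dataclass
class GiftRequest:
    audience_id: int           # audience identifier
    anchor_id: int             # anchor identifier
    prop_id: int               # prop identifier
    consumption_type: int      # FREE or PAID
    service_type: int = GIFT   # optional service type identifier
    client_type: int = WEB     # optional client type identifier
```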
In this embodiment, when the current service system is the aforementioned prop system, the downstream service system includes, but is not limited to, at least one of the following systems: medal intimacy system, user experience system, bullet screen system, property lottery system and ranking list system.
S2, the current service system executes corresponding service processing according to the received service request and obtains the current service processing result, and the current service processing result at least comprises the current request feedback.
In this step, the executing the corresponding service processing according to the service request at least includes: and executing a preset first service process according to the service request to obtain the current request feedback. In addition, the service processing may further include: and executing preset second service processing according to the current request feedback to obtain corresponding identification information.
Taking the current service system as the aforementioned prop system as an example, and assuming that the prop system receives a prop presentation service request input by an audience client, the first service processing specifically includes: reading the audience identifier, the anchor identifier, the prop identifier and the consumption type identifier in the service request; when the consumption type identifier is the free identifier, since no fee deduction is needed, directly establishing the presentation associated information among the audience identifier, the anchor identifier and the prop identifier, and using the established presentation associated information as the current request feedback; when the consumption type identifier is the payment identifier, first deducting the cost corresponding to the prop identifier from the account corresponding to the audience identifier, then establishing the presentation associated information among the audience identifier, the anchor identifier and the prop identifier, and using the established presentation associated information as the current request feedback. In this embodiment, the creation of the presentation associated information indicates that the prop presentation service processing is completed. For example, if the presentation associated information among the audience identifier X, the anchor identifier Y and the prop identifier M is established, this indicates that the operation of the audience X presenting the prop M to the anchor Y is completed.
It is understood that the prop system can complete the aforementioned deduction operation by calling a preset wallet system, which is a system for charging or deducting the fee from the user account according to the instruction.
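As a rough sketch of the first service processing described above, under the assumption of dictionary-shaped requests and feedback and a hypothetical wallet object with a deduct method (none of these names come from this embodiment), the flow might look like this:

```python
PAID = 1  # assumed consumption type identifier value for paid props

def first_service_processing(request: dict, wallet) -> dict:
    """Execute the first service processing and return the current request feedback."""
    audience_id = request["audience_id"]
    anchor_id = request["anchor_id"]
    prop_id = request["prop_id"]
    if request["consumption_type"] == PAID:
        # paid prop: deduct the cost of the prop from the audience's account first
        wallet.deduct(account_id=audience_id, prop_id=prop_id)
    # establish the presentation associated information, which serves as the
    # current request feedback (its creation marks the prop presentation as done)
    return {"audience_id": audience_id, "anchor_id": anchor_id, "prop_id": prop_id}
```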
In this embodiment, the performing a preset second service process according to the current request feedback to obtain corresponding identification information includes:
First, a preset self-increment counter is called to generate a self-increment identification code corresponding to the current request feedback. In this embodiment, the self-increment counter may obtain the self-increment identification code through the Redis incr command; for example, each time the current service system completes one round of processing and obtains one current request feedback, it calls the self-increment counter once, and the value output by the self-increment counter is incremented by 1.
Then, when the self-increment identification code corresponding to the current request feedback is successfully acquired through the self-increment counter, the identification information is generated according to the acquired self-increment identification code. If acquisition of the self-increment identification code fails, the current service system randomly generates a self-increment identification code within a preset numerical range and generates the identification information according to the randomly generated code. It is understood that the preset numerical range should be set sufficiently high, that is, the minimum value of the range should exceed the largest self-increment identification code acquired so far by a certain margin.
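A minimal sketch of this counter, assuming the redis-py client and an arbitrarily chosen key name and fallback range (both assumptions), could be:

```python
import random

import redis

r = redis.Redis()  # assumes a reachable Redis instance on localhost:6379

def get_self_increment_code(fallback_range=(10**9, 10**10 - 1)) -> int:
    """Return the self-increment identification code for one current request feedback."""
    try:
        # INCR atomically increments the counter and returns the new value
        return r.incr("gift:feedback:counter")
    except redis.RedisError:
        # acquisition failed: fall back to a random code taken from a range whose
        # minimum sits well above any counter value issued so far
        return random.randint(*fallback_range)
```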
In this embodiment, the identification information may be generated in any one of at least the following four modes (a combined code sketch is given after the fourth mode):
in the first mode, the self-increment identification code is directly used as identification information. For example, assuming that the obtained self-increment identification code is 03258, the generated identification information may be 03258.
In the second mode, a preset system time unit is called to obtain a current timestamp, and the identification information is generated according to the current timestamp and the self-increment identification code. Generally, both the server and the client are provided with a system time unit in advance, and the system time unit obtains the current timestamp through a system time function, so the current service system can obtain the current timestamp by calling the system time unit of the local server or client. For example, assuming that the obtained current timestamp is 0611083026 (indicating that the current time is June 11, 08:30:26) and the obtained self-increment identification code is 03258, the identification information 061108302603258 may be generated by concatenating the current timestamp and the self-increment identification code; of course, in addition to concatenation, the identification information may also be generated by performing operations such as addition on the current timestamp and the self-increment identification code.
In the third mode, a modulo operation (for example, a modulo-10 operation) is performed on at least one piece of service information to obtain at least one modulo value, and the identification information is generated according to the at least one modulo value and the self-increment identification code. For example, when the received prop presentation service request carries service information such as an audience identifier, an anchor identifier, a prop identifier, a consumption type identifier, a service type identifier and a client type identifier, the identification information may be generated according to the respective modulo values of the service type identifier, the client type identifier, the consumption type identifier and/or the audience identifier, together with the self-increment identification code. In this case, assuming that the modulo value of the service type identifier is 0, the modulo value of the client type identifier is 1, the modulo value of the consumption type identifier is 1, the modulo value of the audience identifier is 7, and the obtained self-increment identification code is 03258, the identification information 011703258 may be generated by splicing; of course, in addition to splicing, the identification information may also be generated by performing operations such as addition on each modulo value and the self-increment identification code.
In the fourth mode, the identification information is generated according to the current timestamp, the at least one modulo value and the self-increment identification code. For example, assuming that the modulo value of the service type identifier is 0, the modulo value of the client type identifier is 1, the modulo value of the consumption type identifier is 1, the modulo value of the audience identifier is 7, the obtained current timestamp is 0611083026, and the obtained self-increment identification code is 03258, the 19-digit identification information 0611083026011703258 may be generated by splicing; of course, in addition to splicing, the identification information may also be generated by performing operations such as addition on each modulo value, the current timestamp and the self-increment identification code.
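The four modes can be pictured with one small helper that simply zero-pads and concatenates the parts, matching the numeric examples above; the field widths and the concatenation-only strategy are assumptions (the embodiment also allows addition instead of splicing).

```python
import time

def make_identification(code: int, use_timestamp: bool = False, mod_values=None) -> str:
    """Concatenate (timestamp | modulo values | self-increment code) into one string."""
    parts = []
    if use_timestamp:
        parts.append(time.strftime("%m%d%H%M%S"))      # e.g. "0611083026"
    if mod_values:
        parts.extend(str(v % 10) for v in mod_values)  # modulo-10 value of each field
    parts.append(f"{code:05d}")                        # self-increment identification code
    return "".join(parts)

# Mode 1: make_identification(3258)                          -> "03258"
# Mode 2: make_identification(3258, use_timestamp=True)      -> timestamp + "03258"
# Mode 3: make_identification(3258, mod_values=[0, 1, 1, 7]) -> "011703258"
# Mode 4: make_identification(3258, True, [0, 1, 1, 7])      -> timestamp + "011703258"
```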
In addition, after the identification information is generated, the identification information may be processed with a preset distributed lock to obtain the locked identification information. The distributed lock ensures that a given piece of identification information can be locked only once, thereby guaranteeing the uniqueness of the identification information. In this embodiment, the preset distributed lock may implement the locking of the identification information through the Redis setnx command.
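A sketch of such a lock with the redis-py client is shown below; it uses SET with the NX flag (the atomic form of setnx), and the key prefix and expiry are assumptions.

```python
import redis

def lock_identification(r: redis.Redis, identification: str, ttl_seconds: int = 3600) -> bool:
    """Lock a piece of identification information; only the first caller succeeds."""
    # NX: set the key only if it does not exist yet, so a given identification
    # can be locked exactly once; EX gives the lock a finite lifetime
    return bool(r.set(f"gift:id:lock:{identification}", 1, nx=True, ex=ttl_seconds))
```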
And S3, the current service system sends the current service processing result to each downstream service system respectively, so that each downstream service system executes corresponding service processing according to the current service processing result. Therefore, the successful processing of each downstream service system does not depend on the successful processing of other downstream service systems, the service decoupling and parallel processing of each downstream service system are realized, and the service processing efficiency and the service processing success rate can be greatly improved compared with the conventional serial processing flow.
In addition, if step S2 includes executing a preset second service process according to the current request feedback to obtain corresponding identification information, the following operations are specifically executed in this step: and respectively sending the current service processing result to each downstream service system, so that when each downstream service system detects that the identification information is not stored, corresponding service processing is executed according to the current request feedback. Wherein, the step of sending the current service processing result to each downstream service system respectively may include: and packaging the current service processing result into a message, sending the message to a preset message queue, and respectively sending the message to each downstream service system through the message queue.
Taking the current service system as the aforementioned prop system as an example, after receiving a prop presentation service request, the prop system performs the corresponding first service processing, that is, the prop presentation service processing, to obtain the current request feedback (such as the presentation associated information among the audience identifier X, the anchor identifier Y and the prop identifier M), further generates the identification information (such as 0611083026011703258) corresponding to the current request feedback through the second service processing, and then encapsulates the current request feedback and the corresponding identification information into a message and sends the message to a preset message queue. After receiving the message, the message queue forwards it to downstream service systems such as the medal intimacy system, the user experience system, the bullet screen system, the property lottery system and the ranking list system, and the message is used for instructing each downstream service system to execute corresponding service processing according to the message.
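One possible realization of this fan-out, sketched with RabbitMQ through the pika client (the broker choice, exchange name and message layout are all assumptions, not requirements of this embodiment), is:

```python
import json

import pika

def publish_result(identification: str, feedback: dict) -> None:
    """Encapsulate the current service processing result into a message and publish it."""
    message = json.dumps({"id": identification, "feedback": feedback})
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    # a fanout exchange delivers a copy of the message to every bound queue, so
    # each downstream service system consumes its own copy in parallel
    channel.exchange_declare(exchange="gift.results", exchange_type="fanout")
    channel.basic_publish(exchange="gift.results", routing_key="", body=message)
    connection.close()
```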
After each downstream service system receives the message, it preferably detects whether the predetermined database already stores the identification information carried in the message. If so, the message has already been processed and does not need to be processed again, and the flow ends; otherwise, the message has not been processed, so the identification information in the message is stored and the corresponding service processing is executed according to the current request feedback in the message. For example, the medal intimacy system updates the intimacy between the audience X and the corresponding anchor Y according to the presentation associated information among the audience identifier X, the anchor identifier Y and the prop identifier M in the message; the user experience system updates the user grade points of the audience X according to the presentation associated information in the message; the bullet screen system broadcasts, through a bullet screen, the information that the audience X has presented the prop M to the anchor Y according to the presentation associated information in the message; the property lottery system judges whether to trigger a lottery draw for the audience X according to the presentation associated information in the message; and the ranking list system updates the gift list corresponding to the anchor Y according to the presentation associated information in the message. Because the identification information is set, each downstream service system is prevented from repeatedly processing the same current request feedback.
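The deduplication check on the downstream side might look like the following sketch. The embodiment describes "detect, then store"; here the two steps are collapsed into one atomic Redis SET NX call, which is one possible way to realize them, and the key layout and names are assumptions.

```python
import redis

def handle_message(r: redis.Redis, system_name: str, message: dict, process) -> None:
    """Process a queued message at most once per downstream service system."""
    key = f"{system_name}:processed:{message['id']}"
    if not r.set(key, 1, nx=True):
        # the identification is already stored: the message was processed, end here
        return
    # the identification was not stored: it has just been recorded, so execute the
    # corresponding service processing according to the current request feedback
    process(message["feedback"])
```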
In addition, this embodiment may further set different types of message queues, so that different current service processing results are sent to the downstream service systems through different message queues. For example, in practical application scenarios, the props presented by audiences are mainly classified into paid props and free props, and most of them are free props. In order to ensure that paid props can be processed in time by each downstream service system after being presented, this application preferably presets a paid sub-message queue and a free sub-message queue. In this case, the process of sending a message to the preset message queue, shown in fig. 5, includes: detecting whether the consumption type identifier in the current service processing result carried by the message is the payment identifier or the free identifier; if it is the payment identifier, sending the corresponding message to the paid sub-message queue, and if it is the free identifier, sending the corresponding message to the free sub-message queue. The messages in the paid sub-message queue and the free sub-message queue can be sent to the downstream service systems in parallel, so that even when there is a large amount of presentation information for free props, the presentation information for paid props can still be sent to the downstream service systems for processing in a timely manner.
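Routing into the two sub-queues could be sketched as follows, again with a pika channel and the default exchange; the queue names and the assumption that the consumption type identifier travels inside the result are illustrative only.

```python
import json

PAID = 1  # assumed value of the payment identifier

def route_message(channel, result: dict) -> None:
    """Send the message to the paid or free sub-message queue by consumption type."""
    queue = "gift.paid" if result.get("consumption_type") == PAID else "gift.free"
    channel.queue_declare(queue=queue, durable=True)
    # publish through the default exchange straight to the chosen sub-queue; the
    # two sub-queues are then consumed in parallel by the downstream systems
    channel.basic_publish(exchange="", routing_key=queue, body=json.dumps(result))
```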
It should be noted that, for the sake of simplicity, the present embodiment is described as a series of acts, but those skilled in the art should understand that the present invention is not limited by the described order of acts, because some steps can be performed in other orders or simultaneously according to the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Example two
The present embodiment provides a service processing apparatus 10, as shown in fig. 6, the apparatus includes:
a request receiving module 11, configured to receive a service request;
the service processing module 12 is configured to execute corresponding service processing according to the service request and obtain a current service processing result; the current service processing result at least comprises current request feedback;
a result sending module 13, configured to send the current service processing result to each downstream service system, so that each downstream service system executes corresponding service processing according to the current service processing result.
In this embodiment, the service processing module includes:
the first service processing unit is used for executing preset first service processing according to the service request to obtain the current request feedback;
the second service processing unit is used for executing the preset second service processing according to the current request feedback to obtain corresponding identification information;
the result sending module is specifically configured to:
and respectively sending the current service processing result to each downstream service system, so that when each downstream service system detects that the identification information is not stored, corresponding service processing is executed according to the current request feedback.
In this embodiment, the result sending module is configured to encapsulate the current service processing result into a message, and send the message to a preset message queue, so as to send the message to each downstream service system through the message queue.
In this embodiment, the second service processing unit includes:
the self-increment identification code acquisition subunit is used for calling a preset self-increment counter to acquire a self-increment identification code corresponding to the current request feedback;
and the identification information generating subunit is used for generating the identification information according to the self-increment identification code.
In this embodiment, the identification information generating subunit is specifically configured to:
taking the self-increment identification code as the identification information;
or, acquiring a current timestamp through a preset system time unit, and generating the identification information according to the current timestamp and the self-increment identification code;
or, when the service request carries a plurality of pieces of service information, performing a modulo operation on at least one piece of service information to obtain at least one modulo value, and generating the identification information according to the at least one modulo value and the self-increment identification code;
or generating the identification information according to the at least one modulo value, the current timestamp and the self-increment identification code.
In this embodiment, the second service processing unit further includes: and the self-increment identification code generating unit is used for randomly generating the self-increment identification code within a preset numerical range when the self-increment identification code fails to be acquired.
In this embodiment, the service processing apparatus further includes: and the locking module is used for processing the identification information by adopting a preset distributed lock before the result sending module sends the current service processing result to each downstream service system respectively to obtain the locked identification information.
In this embodiment, the request receiving module is specifically configured to:
receiving a service request for presenting a prop, input by an audience client; the service request for presenting a prop at least comprises audience identification, anchor identification, prop identification and consumption type identification.
In this embodiment, the service processing module is specifically configured to:
reading the audience identification, the anchor identification, the prop identification and the consumption type identification in the service request;
when the consumption type identifier is a free identifier, directly establishing presentation associated information among the audience identifier, the anchor identifier and the prop identifier, and taking the presentation associated information as the current request feedback;
and when the consumption type identifier is a payment identifier, deducting the cost corresponding to the prop identifier from the account corresponding to the audience identifier, then establishing presentation associated information among the audience identifier, the anchor identifier and the prop identifier, and using the presentation associated information as the current request feedback.
In this embodiment, the downstream service system includes at least one of the following systems: medal intimacy system, user experience system, bullet screen system, property lottery system and ranking list system.
It should also be understood by those skilled in the art that the embodiments described in the specification are preferred embodiments and that the modules referred to are not necessarily essential to the invention.
Example three
The present embodiment provides a computer device, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server or a cabinet server (including an independent server or a server cluster composed of multiple servers) capable of executing programs. The computer device 20 of the present embodiment includes at least, but is not limited to, a memory 21 and a processor 22, which may be communicatively coupled to each other via a system bus, as shown in FIG. 7. It is noted that fig. 7 only shows the computer device 20 with components 21-22, but it is to be understood that not all of the shown components are required to be implemented, and more or fewer components may be implemented instead.
In the present embodiment, the memory 21 (i.e., a readable storage medium) includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 21 may be an internal storage unit of the computer device 20, such as a hard disk or a memory of the computer device 20. In other embodiments, the memory 21 may also be an external storage device of the computer device 20, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the computer device 20. Of course, the memory 21 may also include both internal and external storage devices of the computer device 20. In this embodiment, the memory 21 is generally used for storing an operating system installed in the computer device 20 and various application software, such as the program codes of the service processing apparatus 10 in the second embodiment. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 22 is typically used to control the overall operation of the computer device 20. In this embodiment, the processor 22 is configured to execute the program code stored in the memory 21 or process data, for example, execute the service processing apparatus 10, so as to implement the service processing method according to the first embodiment.
Example four
The present embodiment provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application store, etc., on which a computer program is stored, which when executed by a processor implements a corresponding function. The computer-readable storage medium of this embodiment is used for storing the service processing apparatus 10, and when being executed by a processor, the service processing apparatus implements the service processing method of the first embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for processing a service, the method comprising:
receiving a service request;
executing corresponding service processing according to the service request and obtaining a current service processing result; the current service processing result at least comprises current request feedback;
and respectively sending the current service processing result to each downstream service system so that each downstream service system executes corresponding service processing according to the current service processing result.
2. The service processing method according to claim 1, wherein the current service processing result further includes identification information; the executing the corresponding service processing according to the service request and obtaining the current service processing result includes:
executing a preset first service process according to the service request to obtain the current request feedback;
executing a preset second service process according to the current request feedback to obtain corresponding identification information;
the sending of the current service processing result to each downstream service system, so that each downstream service system executes corresponding service processing according to the current service processing result, includes:
and respectively sending the current service processing result to each downstream service system, so that when each downstream service system detects that the identification information is not stored, corresponding service processing is executed according to the current request feedback.
3. The service processing method according to claim 1, wherein said sending the current service processing result to each downstream service system respectively comprises:
and packaging the current service processing result into a message and sending the message to a preset message queue so as to respectively send the message to each downstream service system through the message queue.
4. The traffic processing method according to claim 2, wherein the second traffic processing includes:
calling a preset self-increment counter;
and when the self-increment identification code corresponding to the current request feedback is acquired through the self-increment counter, generating the identification information according to the self-increment identification code.
5. The service processing method according to claim 4, wherein the generating the identification information according to the self-increment identification code comprises:
taking the self-increment identification code as the identification information;
or, acquiring a current timestamp through a preset system time unit, and generating the identification information according to the current timestamp and the self-increment identification code;
or, when the service request carries a plurality of pieces of service information, performing a modulo operation on at least one piece of service information to obtain at least one modulo value, and generating the identification information according to the at least one modulo value and the self-increment identification code;
or generating the identification information according to the at least one modulo value, the current timestamp and the self-increment identification code.
6. The traffic processing method of claim 4, wherein the second traffic processing further comprises: and when the self-increment identification code fails to be acquired, randomly generating the self-increment identification code within a preset numerical range.
7. The service processing method according to claim 2, wherein before sending the current service processing result to each downstream service system, the method further comprises: and processing the identification information by adopting a preset distributed lock to obtain the locked identification information.
8. The service processing method according to claim 1, wherein said receiving a service request comprises:
receiving a service request for presenting a prop, input by an audience client; the service request for presenting a prop comprises audience identification, anchor identification, prop identification and consumption type identification.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 8 are implemented by the processor when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN201910789849.4A 2019-08-26 2019-08-26 Service processing method, computer device and storage medium Pending CN112434241A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910789849.4A CN112434241A (en) 2019-08-26 2019-08-26 Service processing method, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910789849.4A CN112434241A (en) 2019-08-26 2019-08-26 Service processing method, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN112434241A true CN112434241A (en) 2021-03-02

Family

ID=74689543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910789849.4A Pending CN112434241A (en) 2019-08-26 2019-08-26 Service processing method, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN112434241A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130139164A1 (en) * 2011-11-28 2013-05-30 Sap Ag Business Process Optimization
CN106708879A (en) * 2015-11-16 2017-05-24 阿里巴巴集团控股有限公司 Business data processing method and apparatus
CN107229555A (en) * 2017-05-04 2017-10-03 北京小度信息科技有限公司 Mark generating method and device
CN109636514A (en) * 2018-11-29 2019-04-16 腾讯科技(深圳)有限公司 Business data processing method, calculates equipment and storage medium at device
CN109634597A (en) * 2018-12-11 2019-04-16 武汉瓯越网视有限公司 Data processing method, device, electronic equipment and storage medium
CN109800063A (en) * 2019-01-25 2019-05-24 深圳乐信软件技术有限公司 Business method for parallel processing, device, server, storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327818A (en) * 2021-12-23 2022-04-12 广州钛动科技有限公司 Algorithm scheduling method, device and equipment and readable storage medium
CN114327818B (en) * 2021-12-23 2024-03-26 广州钛动科技有限公司 Algorithm scheduling method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination