CN116521400B - Article information processing method and device, storage medium and electronic equipment

Article information processing method and device, storage medium and electronic equipment

Info

Publication number
CN116521400B
Authority
CN
China
Prior art keywords
node
article information
flow node
article
execution chain
Prior art date
Legal status
Active
Application number
CN202310815598.9A
Other languages
Chinese (zh)
Other versions
CN116521400A (en)
Inventor
辛培灵
Current Assignee
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202310815598.9A priority Critical patent/CN116521400B/en
Publication of CN116521400A publication Critical patent/CN116521400A/en
Application granted granted Critical
Publication of CN116521400B publication Critical patent/CN116521400B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure relates to the field of data processing, and in particular to an article information processing method and apparatus, a storage medium, and an electronic device. The article information processing method comprises the following steps: in response to a publication request for article information, determining a current flow node of a target execution chain based on an execution chain pool and executing the current flow node; after the current flow node is executed successfully, determining the next flow node of the target execution chain based on the execution chain pool and generating an asynchronous message based on the next flow node, so as to publish the asynchronous message to a message queue; updating the next flow node to be the current flow node according to the asynchronous message in the message queue, and repeating the above steps until all flow nodes in the target execution chain have been executed; and generating a processing-end message after all flow nodes in the target execution chain have been executed, and publishing the processing-end message to the message queue. The article information processing method can improve the robustness and extensibility of article information processing.

Description

Article information processing method and device, storage medium and electronic equipment
Technical Field
The disclosure relates to the field of data processing, and in particular relates to an article information processing method, an article information processing device, a storage medium and electronic equipment.
Background
In the prior art, the front-end system transmits the HTML (HyperText Markup Language) content of the article rich-text editing box, together with other article-related information, to the server-side system as one large object. The server-side system performs a series of uninterrupted, sequential logic processing steps on the article data, stores the processed article data in a database, and responds to the front-end system only after execution is complete.
However, the more processing logic this approach contains, the longer the processing "line" becomes; complex logic makes processing time-consuming, responses slow, and the user experience poor. In addition, the line has no break points: when a logic segment is interrupted by an exception or a timeout, the front end receives an error response, and even if the abnormal logic is fixed in time, the article data cannot be quickly reprocessed from the position where the exception occurred. When a logic segment is upgraded or iterated, historical data has to be cleaned all over again. The approach is also unsuitable for high-concurrency scenarios: when a large number of users publish articles or import large amounts of article data, the limitations of its technical design easily cause memory overflow, and the number of concurrent requests is limited.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure aims to provide an article information processing method and apparatus, a storage medium, and an electronic device, so as to improve the robustness and extensibility of article information processing.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to an aspect of the embodiments of the present disclosure, there is provided an article information processing method, including: responding to a release request of article information, determining a current flow node of a target execution chain based on an execution chain pool, and executing the current flow node; after the current flow node is successfully executed, determining a next flow node of the target execution chain based on the execution chain pool, and generating an asynchronous message based on the next flow node so as to issue the asynchronous message to a message queue; updating the next flow node to the current flow node according to the asynchronous message in the message queue, and repeating the steps until all flow nodes in the target execution chain are executed; and generating a processing end message after all flow nodes in the target execution chain are executed, and issuing the processing end message to the message queue.
According to some embodiments of the disclosure, based on the foregoing solution, in response to the publishing request of the article information, the method further includes: the article information is disassembled into a plurality of pieces of article data according to the data type; and configuring a main key identifier for each article data.
According to some embodiments of the disclosure, based on the foregoing scheme, when the current flow node is a first flow node in the target execution chain, the generating an asynchronous message based on the next flow node includes: and placing each article data, each primary key identifier and the link identifier of the next process node in an asynchronous message body to generate the asynchronous message.
According to some embodiments of the disclosure, based on the foregoing scheme, when the current flow node is a flow node other than the first flow node in the target execution chain, the generating an asynchronous message based on the next flow node includes: and placing each primary key identifier and the link identifier of the next flow node in an asynchronous message body to generate the asynchronous message.
According to some embodiments of the disclosure, based on the foregoing scheme, the method further comprises: abstracting the logic fragments for processing the article information into flow nodes; configuring the link identification of the flow nodes, and configuring the sequence among a plurality of the flow nodes to obtain an execution chain so as to form the execution chain pool.
According to some embodiments of the disclosure, based on the foregoing solution, the flow nodes of one execution chain in the execution chain pool are, in sequence, a pseudo-write node, a persistence node, a content parsing and evaluation node, a static page node, a security audit node, and a state processing node.
According to some embodiments of the disclosure, based on the foregoing solution, when the current flow node is the pseudo-write node, the executing the current flow node includes: storing each piece of article data and the corresponding primary key identifier in a distributed cache and a search server.
According to some embodiments of the disclosure, based on the foregoing solution, when the current flow node is the persistent node, the executing the current flow node includes: acquiring article data based on the primary key identifiers; each article data is stored in a database.
According to some embodiments of the disclosure, based on the foregoing solution, when the current flow node is the content parsing and evaluating node, the executing the current flow node includes: acquiring article data based on the primary key identifiers; performing content processing on each article data to obtain an execution result; wherein the content processing comprises one or more of content evaluation, content judgment and content analysis; and updating the database, the distributed cache and the search server according to the execution result.
According to some embodiments of the disclosure, based on the foregoing scheme, when the current flow node is the pseudo-write node, after generating an asynchronous message based on the next flow node, the method further includes: returning article to-be-audited prompt information, and displaying a popup prompt according to the article to-be-audited prompt information.
According to some embodiments of the disclosure, based on the foregoing, after publishing the processing end message into the message queue, the method further comprises: and responding to the check request of the article information, and returning the article information obtained after all the flow nodes are executed for display.
According to a second aspect of the embodiments of the present disclosure, there is provided an article information processing apparatus, characterized by comprising: the execution module is used for responding to the release request of the article information, determining the current flow node of the target execution chain based on the execution chain pool and executing the current flow node; the iteration module is used for determining the next flow node of the target execution chain based on the execution chain pool after the current flow node is successfully executed, and generating an asynchronous message based on the next flow node so as to issue the asynchronous message to a message queue; the circulation module is used for updating the next flow node into the current flow node according to the asynchronous message in the message queue, and repeating the steps until all the flow nodes in the target execution chain are executed; and the issuing module is used for generating a processing end message after all the flow nodes in the target execution chain are executed, and issuing the processing end message to the message queue.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the article information processing method as in the above embodiments.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic device, including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the article information processing method as in the above-described embodiments.
Exemplary embodiments of the present disclosure may have some or all of the following advantages:
In the technical solutions provided by some embodiments of the present disclosure, article information processing is abstracted into an execution chain formed by flow nodes, so that when a publication request for article information is received, the article information can be processed by combining the execution chain with a message queue. On the one hand, any flow node in the processing flow can be processed independently and the data processing flow can be restarted from a designated flow node, which solves the problem of article publication failures caused by abnormal scenarios and improves the robustness of the system; on the other hand, the flow nodes can be combined and executed as required, so that a new execution link can easily be opened up to quickly solve the data-cleaning problem caused by changes to a flow node, which improves the extensibility of the system.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
fig. 1 schematically illustrates a flowchart of an article information processing method in an exemplary embodiment of the present disclosure.
Fig. 2 schematically illustrates a flowchart of an article information processing in an exemplary embodiment of the present disclosure.
Fig. 3 schematically shows a flow timing diagram of an article information process in an exemplary embodiment of the present disclosure.
Fig. 4 schematically illustrates a composition diagram of an article information processing apparatus in an exemplary embodiment of the present disclosure.
Fig. 5 schematically illustrates a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the present disclosure.
Fig. 6 schematically illustrates a structural diagram of a computer system of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
The conventional solution for processing article data executes a "line" scheme on the article data: logic execution cannot be suspended from the beginning to the end of data processing. The front-end system transmits the HTML content of the article rich-text editing box, together with other article-related information, to the server-side system as one large object; the server-side system performs a series of uninterrupted, sequential logic processing steps on the article data, stores the article data in a database, and responds to the front-end system after execution is complete, at which point processing is finished.
This approach has the following disadvantages:
1. The article data follows a line scheme: the more processing logic there is, the longer the "line" becomes. Complex logic makes processing time-consuming and responses slow, so the user experience is poor.
2. The article data follows a line scheme, and the "line" has no break points. When a logic segment is interrupted by an exception or a timeout, the front end receives an error response. Even if the abnormal logic is fixed in time, processing of the article data cannot be quickly restarted from the position where the exception occurred.
3. The article data follows a line scheme, and the "line" must run from beginning to end. When a logic segment is upgraded or iterated, historical data needs to be cleaned again. The current scheme offers two options: (1) write a new method to clean the article data, composed only of the logic segments required by the current need; the disadvantage is that every logic change requires redeveloping a method to rerun the data. (2) rerun the original "line" logic; the disadvantage is that the whole line has to be walked just to rerun a single logic segment.
4. The article data follows a line scheme that is not suitable for high-concurrency scenarios. When a large number of users publish articles or import large amounts of article data, the limitations of its technical design easily cause memory overflow, and the number of concurrent requests is limited.
To address these defects in the prior art, the disclosure provides an article information processing method that abstracts the key logic segments of article processing into individual flow nodes and maintains their execution order in an execution chain pool, thereby forming a plurality of detachable, diversified article data execution chains. An asynchronous message queue is then combined with the article data execution chains to build a multi-segment execution chain framework that is more robust, extensible, and efficient than the plain article data execution line.
Implementation details of the technical solutions of the embodiments of the present disclosure are set forth in detail below.
Fig. 1 schematically illustrates a flowchart of an article information processing method in an exemplary embodiment of the present disclosure. As shown in fig. 1, the article information processing method includes steps S101 to S104:
step S101, responding to a release request of article information, determining a current flow node of a target execution chain based on an execution chain pool, and executing the current flow node;
step S102, after the current flow node is successfully executed, determining a next flow node of the target execution chain based on the execution chain pool, and generating an asynchronous message based on the next flow node so as to issue the asynchronous message to a message queue;
step S103, updating the next flow node to the current flow node according to the asynchronous message in the message queue, and repeating the steps until all flow nodes in the target execution chain are executed;
step S104, after all flow nodes in the target execution chain are executed, a processing end message is generated, and the processing end message is issued to the message queue.
In the technical solutions provided by some embodiments of the present disclosure, article information processing is abstracted into an execution chain formed by flow nodes, so that when a publication request for article information is received, the article information can be processed by combining the execution chain with a message queue. On the one hand, any flow node in the processing flow can be processed independently and the data processing flow can be restarted from a designated flow node, which solves the problem of article publication failures caused by abnormal scenarios and improves the robustness of the system; on the other hand, the flow nodes can be combined and executed as required, so that a new execution link can easily be opened up to quickly solve the data-cleaning problem caused by changes to a flow node, which improves the extensibility of the system.
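To make the interplay of steps S101 to S104 concrete, the following minimal Java sketch chains them together in a single driver. It is only an illustration, with an in-process BlockingQueue standing in for the message queue middleware; the names FlowNode, AsyncMessage and ArticlePublishDriver are assumptions rather than terms from the disclosure.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ArticlePublishDriver {

    /** One logic segment of the article-processing pipeline (name illustrative). */
    interface FlowNode {
        String linkId();
        void execute(String articleKey);
    }

    /** The asynchronous message carries identifiers only, not the article body. */
    record AsyncMessage(String articleKey, String nextLinkId) {}

    /** Runs steps S101-S104 over one target execution chain. */
    public static void process(List<FlowNode> targetChain, String articleKey) throws InterruptedException {
        // An in-process queue stands in for the real message queue middleware.
        BlockingQueue<AsyncMessage> messageQueue = new LinkedBlockingQueue<>();

        // Step S101: determine and execute the current (first) flow node of the target chain.
        int current = 0;
        targetChain.get(current).execute(articleKey);

        while (current + 1 < targetChain.size()) {
            // Step S102: after success, publish an asynchronous message naming the next node.
            messageQueue.put(new AsyncMessage(articleKey, targetChain.get(current + 1).linkId()));

            // Step S103: consuming the message makes the next node the current node.
            AsyncMessage msg = messageQueue.take();
            current++;
            targetChain.get(current).execute(msg.articleKey());
        }

        // Step S104: all nodes executed, publish a processing-end message.
        messageQueue.put(new AsyncMessage(articleKey, "PROCESSING_END"));
    }
}
```

In a real deployment each flow node would run as an independent consumer of the queue rather than inside one loop; the sketch only shows the ordering of the four steps.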
Next, each step of the article information processing method in the present exemplary embodiment will be described in more detail with reference to the drawings and examples.
In step S101, in response to a release request of article information, a current flow node of a target execution chain is determined based on an execution chain pool, and the current flow node is executed.
Specifically, the user can perform the publishing operation of the article information through the publishing platform, and after the user finishes inputting the article information, the publishing platform can send a publishing request of the article information to the article system. At this time, the article system receives the issue request of the article information, and further executes a series of operations to process the article information.
The execution chain pool comprises a plurality of preset execution chains for processing the article information, and each execution chain comprises each flow node. Thus, in one embodiment of the disclosure, the method further comprises: abstracting the logic fragments for processing the article information into flow nodes; configuring the link identification of the flow nodes, and configuring the sequence among a plurality of the flow nodes to obtain an execution chain so as to form the execution chain pool.
Specifically, the key logic segments in the article information processing process can be planned into different flow nodes, and globally unique link identifiers are configured for the flow nodes. Meanwhile, the process nodes are connected in series according to the sequence of the article information processing process to obtain a plurality of execution chains so as to obtain an execution chain pool.
For example, the flow nodes of one execution chain in the execution chain pool are, in sequence, a pseudo-write node, a persistence node, a content parsing and evaluation node, a static page node, a security audit node, and a state processing node.
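As a rough illustration of how such a pool could be configured, the sketch below (class and method names are hypothetical) registers the ordered flow nodes of a chain under a chain identifier and looks up the successor of a node by its link identifier.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ExecutionChainPool {

    /** One abstracted logic segment with a globally unique link identifier. */
    interface FlowNode {
        String linkId();
        void execute(String articleKey);
    }

    // Chain identifier -> ordered flow nodes of that execution chain.
    private final Map<String, List<FlowNode>> chains = new LinkedHashMap<>();

    /** Registers an execution chain, e.g. pseudo-write -> persistence -> ... -> state processing. */
    public void registerChain(String chainId, List<FlowNode> orderedNodes) {
        chains.put(chainId, List.copyOf(orderedNodes));
    }

    /** Returns the flow node that follows the given link identifier, or null if it is the last node. */
    public FlowNode nextNode(String chainId, String currentLinkId) {
        List<FlowNode> chain = chains.get(chainId);
        if (chain == null) {
            return null;
        }
        for (int i = 0; i < chain.size() - 1; i++) {
            if (chain.get(i).linkId().equals(currentLinkId)) {
                return chain.get(i + 1);
            }
        }
        return null;
    }
}
```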
Thus, when the article system responds to the issue request of the article information, a corresponding target execution chain is determined from the execution chain pool, and the current flow node on the target execution chain is then executed.
In one embodiment of the present disclosure, before determining the current flow node of the target execution chain based on the execution chain pool, the method further comprises: the article information is disassembled into a plurality of pieces of article data according to the data type; and configuring a main key identifier for each article data.
Specifically, the article information published by a user may contain multiple data types, such as pictures, text, links, and the like; when the article information is processed, it can be disassembled into multiple pieces of article data, each with a single data type.
In this way, the bulky data is broken down into small pieces of data, so that it can be processed across multiple flows and remains lightweight and robust; this also avoids the problems of a large object living for a long time, CPU (Central Processing Unit) usage rising because of slow garbage collection, and subsequent users being unable to publish articles normally because of blocking in high-concurrency scenarios.
In order to distinguish the pieces of article data from one another, a shared ID generator may be used to obtain an id (identification) to assign to each piece of article data, so that each piece of article data has a corresponding primary key identifier (primary key id).
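A minimal sketch of this disassembly step follows; an AtomicLong stands in for the shared ID generator, and the data types and class names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class ArticleDisassembler {

    enum DataType { TEXT, IMAGE, LINK }

    /** One single-typed piece of article data with its primary key identifier. */
    record ArticlePiece(long primaryKeyId, DataType type, String payload) {}

    // Stand-in for the shared ID generator; a real system would call a distributed ID service.
    private static final AtomicLong ID_GENERATOR = new AtomicLong(1);

    /** Splits raw article content into single-typed pieces and assigns each a primary key id. */
    public static List<ArticlePiece> disassemble(String text, List<String> imageUrls, List<String> links) {
        List<ArticlePiece> pieces = new ArrayList<>();
        pieces.add(new ArticlePiece(ID_GENERATOR.getAndIncrement(), DataType.TEXT, text));
        for (String url : imageUrls) {
            pieces.add(new ArticlePiece(ID_GENERATOR.getAndIncrement(), DataType.IMAGE, url));
        }
        for (String link : links) {
            pieces.add(new ArticlePiece(ID_GENERATOR.getAndIncrement(), DataType.LINK, link));
        }
        return pieces;
    }
}
```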
It should be noted that, before determining the current flow node of the target execution chain based on the execution chain pool, the article information may be checked, and after the check is successful, the disassembly is started and the execution is performed according to each flow node in the execution chain.
Wherein, the contents executed by different flow nodes are different.
In one embodiment of the disclosure, when the current flow node is the pseudo-write node, the executing the current flow node includes: storing each piece of article data and the corresponding primary key identifier in a distributed cache and a search server.
Specifically, the article data and the corresponding primary key identifiers are stored in r2m (a distributed cache) and jes (a search server); this completes the pseudo-write of the article information. The article information is first stored in the cache and the search server rather than in a database so that the data can be stored and queried more quickly, improving response efficiency.
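The idea of the pseudo-write node can be sketched as follows, with plain in-memory maps standing in for the r2m distributed cache and the jes search server; this is not the actual client code of either system.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PseudoWriteNode {

    record ArticlePiece(long primaryKeyId, String payload) {}

    // In-memory stand-ins for the r2m distributed cache and the jes search server.
    private final Map<Long, ArticlePiece> distributedCache = new ConcurrentHashMap<>();
    private final Map<Long, ArticlePiece> searchIndex = new ConcurrentHashMap<>();

    /** Pseudo-write: store every piece under its primary key id in the cache and search index only. */
    public void execute(List<ArticlePiece> pieces) {
        for (ArticlePiece piece : pieces) {
            distributedCache.put(piece.primaryKeyId(), piece);
            searchIndex.put(piece.primaryKeyId(), piece);
        }
        // The database write is deferred to the persistence node; this keeps the response fast.
    }
}
```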
In one embodiment of the disclosure, when the current flow node is the persistence node, the executing the current flow node includes: acquiring article data based on the primary key identifiers; each article data is stored in a database.
Specifically, in order to persist the article information, it needs to be stored in a database. The primary key ids of the article data can be obtained from the asynchronous message, the corresponding pieces of article data can be fetched by primary key id to reassemble the complete article information, and the complete article information can then be stored in a database, for example cds (a sub-database and sub-table, i.e. sharded, database).
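A simplified sketch of the persistence node is shown below; the cache is represented by a map keyed on primary key id, and ArticleRepository is a hypothetical stand-in for the cds database access layer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PersistenceNode {

    record ArticlePiece(long primaryKeyId, String payload) {}

    /** Stand-in for a database access layer; a real system would write to the cds sharded database. */
    interface ArticleRepository {
        void saveAll(List<ArticlePiece> pieces);
    }

    private final Map<Long, ArticlePiece> distributedCache;
    private final ArticleRepository repository;

    public PersistenceNode(Map<Long, ArticlePiece> distributedCache, ArticleRepository repository) {
        this.distributedCache = distributedCache;
        this.repository = repository;
    }

    /** Fetches each piece by primary key id from the cache and persists the assembled article. */
    public void execute(List<Long> primaryKeyIds) {
        List<ArticlePiece> pieces = new ArrayList<>();
        for (Long id : primaryKeyIds) {
            pieces.add(distributedCache.get(id));
        }
        repository.saveAll(pieces);
    }
}
```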
In one embodiment of the disclosure, when the current flow node is the content parsing and evaluating node, the executing the current flow node includes: acquiring article data based on the primary key identifiers; performing content processing on each article data to obtain an execution result; wherein the content processing comprises one or more of content evaluation, content judgment and content analysis; and updating the database, the distributed cache and the search server according to the execution result.
Specifically, the content parsing and evaluating node performs content processing on the article data, such as content evaluation, content judgment, and content analysis; these tasks can run concurrently during execution, and their results are assembled afterwards into a final execution result. The execution result is then written in one pass to the database, the distributed cache, and the search server, i.e. every location that stores the article information is updated so that it holds the latest data.
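A rough sketch of this concurrent processing is given below, using CompletableFuture to fan out the three content tasks and then assemble their results; the placeholder task bodies and the ContentResult record are illustrative assumptions, and the write-back to the database, cache and search server is omitted.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContentParseEvaluateNode {

    record ContentResult(String evaluation, String judgment, String analysis) {}

    /** Runs the three content tasks concurrently and assembles a single execution result. */
    public static ContentResult execute(String articleContent) {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            CompletableFuture<String> evaluation =
                    CompletableFuture.supplyAsync(() -> "score for " + articleContent.length() + " chars", pool);
            CompletableFuture<String> judgment =
                    CompletableFuture.supplyAsync(() -> articleContent.isBlank() ? "reject" : "accept", pool);
            CompletableFuture<String> analysis =
                    CompletableFuture.supplyAsync(() -> "token count: " + articleContent.split("\\s+").length, pool);

            // Assemble the concurrent results into one execution result; updating the
            // database, distributed cache and search server with it is omitted here.
            return new ContentResult(evaluation.join(), judgment.join(), analysis.join());
        } finally {
            pool.shutdown();
        }
    }
}
```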
It should be noted that, the process node execution process in the present disclosure is only an exemplary description, and the present disclosure does not specifically limit other process nodes and corresponding execution processes, which are all within the protection scope of the present disclosure.
Step S102, after the current flow node is executed successfully, determining a next flow node of the target execution chain based on the execution chain pool, and generating an asynchronous message based on the next flow node, so as to issue the asynchronous message to a message queue.
Specifically, after the current flow node is executed successfully, the next flow node of the target execution chain, that is, the message topic of the next flow node, needs to be determined, and an asynchronous message is then published to the message queue so that the next flow node can perform the corresponding operation after receiving the asynchronous message.
In one embodiment of the disclosure, when the current flow node is the first flow node in the target execution chain, the generating an asynchronous message based on the next flow node includes: and placing each article data, each primary key identifier and the link identifier of the next process node in an asynchronous message body to generate the asynchronous message.
In one embodiment of the disclosure, when the current flow node is another flow node except for the first flow node in the target execution chain, the generating an asynchronous message based on the next flow node includes: and placing each primary key identifier and the link identifier of the next flow node in an asynchronous message body to generate the asynchronous message.
Specifically, when the first flow node in the execution chain is executed, the primary key identifiers, the article data, and the link identifier of the next flow node must all be placed in the asynchronous message body and published to the message queue; when any other flow node is executed, only the primary key identifiers and the link identifier need to be placed in the asynchronous message body, because the article data can be fetched using the primary key identifiers and the complete article data does not need to travel in the asynchronous message. In this way, large objects such as the article data are not transferred, which makes article data processing more efficient.
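The difference between the two message bodies can be sketched as follows; the field and class names are assumptions, and in a real system the body would be serialized before being published to the message queue middleware.

```java
import java.util.List;
import java.util.Map;

public class AsyncMessageFactory {

    /** The message body published to the queue; articleData is null after the first node. */
    record AsyncMessageBody(List<Long> primaryKeyIds, String nextLinkId, Map<Long, String> articleData) {}

    /** First flow node: article data, primary key ids and the next link id all go into the body. */
    public static AsyncMessageBody forFirstNode(Map<Long, String> articleData, String nextLinkId) {
        return new AsyncMessageBody(List.copyOf(articleData.keySet()), nextLinkId, Map.copyOf(articleData));
    }

    /** Any later flow node: only ids travel in the message; data is fetched by primary key id. */
    public static AsyncMessageBody forLaterNode(List<Long> primaryKeyIds, String nextLinkId) {
        return new AsyncMessageBody(List.copyOf(primaryKeyIds), nextLinkId, null);
    }
}
```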
In step S103, updating the next flow node to the current flow node according to the asynchronous message in the message queue, and repeating the above steps until all flow nodes in the target execution chain are executed;
Specifically, based on the message queue, the next flow node subscribes to the topic of the previous flow node, so that upon receiving the asynchronous message published by the previous flow node, the next flow node becomes the current flow node.
The process of executing a flow node and publishing an asynchronous message is repeated until all flow nodes in the target execution chain have been executed, at which point the processing of the article information is finished.
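The topic-based hand-off between consecutive flow nodes can be sketched as follows, again with an in-process stand-in for the message queue middleware; publish and subscribeAndTake are illustrative names rather than an actual broker API.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class ChainMessageBroker {

    record AsyncMessageBody(long primaryKeyId, String nextLinkId) {}

    // Topic (link id of the publishing node) -> queue consumed by the subscribing node.
    private final Map<String, BlockingQueue<AsyncMessageBody>> topics = new ConcurrentHashMap<>();

    /** The finishing node publishes under its own link id as the topic. */
    public void publish(String topic, AsyncMessageBody body) {
        topics.computeIfAbsent(topic, t -> new LinkedBlockingQueue<>()).add(body);
    }

    /** The next node subscribes to its predecessor's topic; receiving a message makes it current. */
    public AsyncMessageBody subscribeAndTake(String predecessorTopic) throws InterruptedException {
        return topics.computeIfAbsent(predecessorTopic, t -> new LinkedBlockingQueue<>()).take();
    }
}
```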
Step S104, after all flow nodes in the target execution chain are executed, a processing end message is generated, and the processing end message is issued to the message queue.
Specifically, after all the flow nodes in the target execution chain have finished executing, the article information processing is complete. At this point a processing-end message, which is also an asynchronous message, is broadcast to the message queue, and downstream systems that subscribe to the message queue can receive notice that the article information processing has ended and perform further operations.
In one embodiment of the present disclosure, when the current flow node is the pseudo-write node, after generating an asynchronous message based on the next flow node, the method further comprises: returning article to-be-audited prompt information, and displaying a popup prompt according to the article to-be-audited prompt information.
Specifically, after the article system pseudo-writes the article information successfully, the article to-be-audited prompt information can be returned to the publishing platform, and the publishing platform can generate a popup prompt according to it, for example a popup displaying the words "published successfully, under review", reminding the user that the published article information is in the review state; the article can be displayed as being under review.
In this way, the pseudo-write flow combined with the asynchronous message queue enables a quick response to the user's publication request while still supporting display of the article.
In one embodiment of the present disclosure, after publishing the process end message to the message queue, the method further comprises: and responding to the check request of the article information, and returning the article information obtained after all the flow nodes are executed for display.
Specifically, when the user refreshes the interface of the publishing platform, or the publishing platform refreshes the interface periodically, the publishing platform generates a viewing request for the article information; if the processing of the article information has finished, the processed article information is returned for display.
It should be noted that, because the execution chain pool is configured, a flow visualization interface can be provided so that an operator can see in real time, in the content back-office list, how each step of the article information processing flows, which enhances visibility and makes problems easier to locate.
In this method, the logic segments of article information processing are abstracted into individual flow nodes and their execution order is maintained by the execution chain pool, forming a plurality of detachable, diversified article data execution chains; a multi-segment execution chain framework is then built by combining the asynchronous message queue with the article data execution chains. As a result, on the one hand, the execution chain of article information processing is decoupled from the user's act of publishing the article, and the flow is split into segments and throttled, which keeps the system stable when a large amount of data comes in; on the other hand, the execution chains can control the flow rate of each chain through the message queue, which strengthens control over the processing; in a further aspect, the flow nodes are independent of one another and can be freely inserted and logically assembled, so that flow nodes can be restarted, improving the fault tolerance and robustness of the system.
The process is described in detail below, taking as an example an execution chain whose flow nodes are, in order, a pseudo-write node, a persistence node, a content parsing and evaluation node, a static page node, a security audit node, and a state processing node.
Fig. 2 schematically illustrates a flowchart of an article information processing in an exemplary embodiment of the present disclosure. Referring to fig. 2, the specific procedure of the article information processing is as follows:
Step S201, performing data verification on the article information; if the verification succeeds, step S202 is performed, and if it fails, the process jumps directly to the end.
Step S202, executing the pseudo-write node; after the execution is completed, step S209 is performed, returning a response to the publishing platform, namely returning the article to-be-audited prompt information.
Step S203, executing the persistence node;
step S204, executing the content parsing and evaluation node;
step S205, executing a static page node;
step S206, executing a security audit node;
step S207, executing a state processing node;
step S208, distributing to other systems;
the first flow node, executed in step S202, executes synchronously, while the flow nodes in steps S203 to S208 execute asynchronously.
Fig. 3 schematically shows a flow timing diagram of an article information process in an exemplary embodiment of the present disclosure. Referring to fig. 3, the main composition structure includes a user, a publishing platform, an article system and a message queue, and the specific process is as follows:
step S301, a user inputs article information in a release platform and clicks a release button;
step S302, the publishing platform sends a publishing request of the article information to the article system;
Step S303, the article system responds to the release request to carry out data verification on the article information;
step S304, when the data verification passes, executing [pseudo-write]: the article information is first disassembled into a plurality of pieces of article data, a shared ID generator is then used to assign each piece a unique id, namely its primary key id, and the article data is stored in advance in the r2m distributed cache and the jes search server so that the data can be stored and queried more quickly to improve response efficiency;
step S305, after [pseudo-write] succeeds, the message topic of the next flow node is found in the execution chain pool, and the article data with its assigned primary key ids and the link identifier of the next flow node are placed in an asynchronous message body and published to the message queue;
step S306, returning the article to-be-audited prompt information to the publishing platform;
step S307, the publishing platform displays a popup prompting the user that publication succeeded and review is in progress;
step S308, [persistence] subscribes to the topic of [pseudo-write] and receives the asynchronous message published by the [pseudo-write] node;
step S309, executing [persistence], which stores the disassembled pieces of article data in the cds sub-database and sub-table database;
step S310, after [persistence] is executed successfully, the message topic of the next flow node is found in the execution chain pool, and only the primary key ids and the link identifier are placed in an asynchronous message body and published to the message queue, avoiding the transfer of large objects.
Step S311, [content parsing and evaluation] subscribes to the topic of [persistence] and receives the asynchronous message published by the [persistence] node;
step S312, executing [content parsing and evaluation]: the cache is queried by the article primary key ids to obtain the complete article data, and content evaluation, content judgment and content analysis are executed concurrently; the results of the concurrent tasks are then assembled and written once to the database, the distributed cache and the search server;
step S313, after [content parsing and evaluation] is executed successfully, the message topic of the next flow node is found in the execution chain pool, and only the primary key ids and the link identifier are placed in an asynchronous message body and published to the message queue;
the following steps S314-S316, S317-S319 and S320-S322 are similar to steps S311-S313: an asynchronous message is subscribed to, the node is executed, and a new asynchronous message is published; the only difference is that the node processed in steps S314-S316 is [static page], the node processed in steps S317-S319 is [security audit], and the node processed in steps S320-S322 is [state processing], so redundant description is omitted. At this point the processing of the article information is complete;
Since the node in steps S320-S322 ([state processing]) is the last flow node of the execution chain, the asynchronous message published in step S322 is the processing-end message.
Step S323, the user refreshes the release interface;
step S324, the publishing platform generates a viewing request of the article information;
step S325, the article system queries the article information after the processing is finished;
step S326, after the fully processed article information is found, the article audit-passed prompt information is returned to the publishing platform;
in step S327, the publishing platform displays the article information.
This method mainly has the following technical effects:
On the one hand, it solves the user-experience problem: user requests are responded to quickly, exceptions are repaired afterwards without the user noticing, and the user experience is enhanced.
On the other hand, it solves problems of service use: (1) any flow node in the article data processing flow can be processed independently, and the data processing flow can be restarted from a designated flow node, which solves the problem of article publication failures caused by abnormal scenarios; (2) any flow nodes in the article data processing flow can be combined and executed as required, and a new article execution link can be opened up to quickly solve the data-cleaning problem caused by flow node changes; (3) it solves the problem of CPU usage rising because of slow garbage collection when a flow runs for a long time and a large object stays alive; (4) it solves the problem of subsequent users being unable to publish articles normally because of blocking in high-concurrency scenarios.
Fig. 4 schematically illustrates a composition diagram of an article information processing apparatus in an exemplary embodiment of the present disclosure, and as illustrated in fig. 4, the article information processing apparatus 400 may include an execution module 401, an iteration module 402, a loop module 403, and a publication module 404. Wherein:
the execution module is used for responding to the release request of the article information, determining the current flow node of the target execution chain based on the execution chain pool and executing the current flow node;
the iteration module is used for determining the next flow node of the target execution chain based on the execution chain pool after the current flow node is successfully executed, and generating an asynchronous message based on the next flow node so as to issue the asynchronous message to a message queue;
the circulation module is used for updating the next flow node into the current flow node according to the asynchronous message in the message queue, and repeating the steps until all the flow nodes in the target execution chain are executed;
and the issuing module is used for generating a processing end message after all the flow nodes in the target execution chain are executed, and issuing the processing end message to the message queue.
According to an exemplary embodiment of the present disclosure, the execution module is further configured to, when responding to a publication request for the article information, disassemble the article information into a plurality of pieces of article data according to data type, and configure a primary key identifier for each piece of article data.
According to an exemplary embodiment of the present disclosure, the iteration module is further configured to, when the current flow node is a first flow node in the target execution chain, place each article data, each primary key identifier, and a link identifier of the next flow node in an asynchronous message body to generate the asynchronous message.
According to an exemplary embodiment of the present disclosure, the iteration module is further configured to, when the current flow node is another flow node except for a first flow node in the target execution chain, place each of the primary key identifier and the link identifier of the next flow node in an asynchronous message body to generate the asynchronous message.
According to an exemplary embodiment of the present disclosure, the article information processing apparatus 400 further includes a configuration module for abstracting a logical segment for processing article information into a flow node; configuring the link identification of the flow nodes, and configuring the sequence among a plurality of the flow nodes to obtain an execution chain so as to form the execution chain pool.
According to an exemplary embodiment of the present disclosure, the flow nodes of one execution chain in the execution chain pool are, in sequence, a pseudo-write node, a persistence node, a content parsing and evaluation node, a static page node, a security audit node, and a state processing node.
According to an exemplary embodiment of the present disclosure, the execution module is further configured to store each piece of article data and its corresponding primary key identifier in a distributed cache and a search server when the current flow node is the pseudo-write node.
According to an exemplary embodiment of the present disclosure, when the current flow node is the persistence node, acquiring each article data based on each primary key identifier; each article data is stored in a database.
According to an exemplary embodiment of the present disclosure, when the current flow node is the content parsing and evaluating node, acquiring article data based on the primary key identifiers; performing content processing on each article data to obtain an execution result; wherein the content processing comprises one or more of content evaluation, content judgment and content analysis; and updating the database, the distributed cache and the search server according to the execution result.
According to an exemplary embodiment of the disclosure, when the current flow node is the pseudo-write node, the iteration module is further configured to return article to-be-audited prompt information, so that a popup prompt is displayed according to the article to-be-audited prompt information.
According to an exemplary embodiment of the present disclosure, the article information processing apparatus 400 is further configured to, after the processing end message is issued to the message queue, return, in response to a request for viewing the article information, the article information obtained after all the process nodes have completed execution, for display.
The specific details of each module in the above article information processing apparatus 400 are described in detail in the corresponding article information processing method, and thus are not described herein.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In an exemplary embodiment of the present disclosure, a storage medium capable of implementing the above method is also provided. Fig. 5 schematically illustrates a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the present disclosure, as shown in fig. 5, describing a program product 500 for implementing the above-described method according to an embodiment of the present disclosure, which may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a cell phone. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided. Fig. 6 schematically illustrates a structural diagram of a computer system of an electronic device in an exemplary embodiment of the present disclosure.
It should be noted that, the computer system 600 of the electronic device shown in fig. 6 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
As shown in fig. 6, the computer system 600 includes a central processing unit (Central Processing Unit, CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 602 or a program loaded from a storage section 608 into a random access Memory (Random Access Memory, RAM) 603. In the RAM 603, various programs and data required for system operation are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An Input/Output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, mouse, etc.; an output portion 607 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and a speaker, etc.; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The drive 610 is also connected to the I/O interface 605 as needed. Removable media 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on drive 610 so that a computer program read therefrom is installed as needed into storage section 608.
In particular, according to embodiments of the present disclosure, the processes described below with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When executed by a Central Processing Unit (CPU) 601, performs the various functions defined in the system of the present disclosure.
It should be noted that, the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
As another aspect, the present disclosure also provides a computer-readable medium that may be contained in the electronic device described in the above embodiments; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (for example, a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (for example, a personal computer, a server, a touch terminal, or a network device) to perform the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. An article information processing method, comprising:
abstracting logic segments for processing article information into flow nodes, configuring link identifiers of the flow nodes, and configuring the order among a plurality of the flow nodes to obtain execution chains, so as to form an execution chain pool;
responding to a publishing request for the article information, determining a current flow node of a target execution chain based on the execution chain pool, and executing the current flow node;
after the current flow node is successfully executed, determining a next flow node of the target execution chain based on the execution chain pool, and generating an asynchronous message based on the next flow node so as to issue the asynchronous message to a message queue;
updating the next flow node to be the current flow node according to the asynchronous message in the message queue, and repeating the above steps until all flow nodes in the target execution chain are executed;
and generating a processing end message after all flow nodes in the target execution chain are executed, and issuing the processing end message to the message queue.
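Purely as an illustration of the orchestration described in claim 1 (and not the patent's own implementation), the following Python sketch models flow nodes with link identifiers, an execution chain pool, and an in-process queue.Queue standing in for the message queue. All class and function names here (FlowNode, AsyncMessage, ExecutionChainPool, publish_article) are hypothetical.

```python
from __future__ import annotations

import queue
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class FlowNode:
    """A logic segment for processing article information, abstracted into a node."""
    link_id: str                      # link identifier of the flow node
    handler: Callable[[dict], None]   # the logic segment itself

    def execute(self, payload: dict) -> None:
        self.handler(payload)


@dataclass
class AsyncMessage:
    """Asynchronous message telling the consumer which flow node to run next."""
    next_link_id: Optional[str]       # link identifier of the next flow node (None = chain done)
    payload: dict = field(default_factory=dict)


class ExecutionChainPool:
    """Holds ordered execution chains keyed by chain name."""

    def __init__(self) -> None:
        self._chains: dict[str, list[FlowNode]] = {}

    def register(self, chain_name: str, nodes: list[FlowNode]) -> None:
        self._chains[chain_name] = nodes

    def node(self, chain_name: str, link_id: str) -> FlowNode:
        return next(n for n in self._chains[chain_name] if n.link_id == link_id)

    def next_link_id(self, chain_name: str, current_link_id: Optional[str]) -> Optional[str]:
        nodes = self._chains[chain_name]
        if current_link_id is None:
            return nodes[0].link_id                     # start of the target execution chain
        idx = [n.link_id for n in nodes].index(current_link_id) + 1
        return nodes[idx].link_id if idx < len(nodes) else None


def publish_article(pool: ExecutionChainPool, chain_name: str, article: dict) -> None:
    """Drive a target execution chain node by node via asynchronous messages."""
    mq: queue.Queue = queue.Queue()                     # stands in for the message queue

    # Determine and execute the current (first) flow node in response to the publish request.
    current_id = pool.next_link_id(chain_name, None)
    pool.node(chain_name, current_id).execute(article)
    mq.put(AsyncMessage(pool.next_link_id(chain_name, current_id), article))

    # Consume asynchronous messages until all nodes in the chain have been executed.
    while True:
        msg = mq.get()
        if msg.next_link_id is None:                    # all nodes done -> processing end message
            print("processing end message issued")
            break
        node = pool.node(chain_name, msg.next_link_id)  # the next node becomes the current node
        node.execute(msg.payload)
        mq.put(AsyncMessage(pool.next_link_id(chain_name, node.link_id), msg.payload))
```

In a real deployment the consumer loop would run in separate workers listening on message-queue middleware (e.g., Kafka) rather than an in-process queue.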
2. The article information processing method according to claim 1, wherein, in response to the publishing request for the article information, the method further comprises:
disassembling the article information into a plurality of pieces of article data according to data type;
and configuring a primary key identifier for each piece of article data.
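A minimal sketch of the pre-processing in claim 2, assuming the article information arrives as a single dictionary and that UUIDs are an acceptable stand-in for the primary key identifiers; the data types ("body", "metadata", "attachments") and field names are illustrative only.

```python
import uuid


def disassemble_article(article_info: dict) -> dict:
    """Split the article information into pieces of article data by data type,
    and configure a primary key identifier for each piece."""
    pieces = {
        "body": {"data": article_info.get("content", "")},
        "metadata": {"data": {"title": article_info.get("title"),
                              "author": article_info.get("author")}},
        "attachments": {"data": article_info.get("attachments", [])},
    }
    for piece in pieces.values():
        piece["primary_key"] = uuid.uuid4().hex   # primary key identifier for this piece
    return pieces
```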
3. The article information processing method according to claim 2, wherein when the current flow node is a first flow node in the target execution chain, the generating an asynchronous message based on the next flow node includes:
and placing each piece of article data, each primary key identifier, and the link identifier of the next flow node in an asynchronous message body to generate the asynchronous message.
4. The article information processing method according to claim 2, wherein when the current flow node is a flow node other than a first flow node in the target execution chain, the generating an asynchronous message based on the next flow node includes:
and placing each primary key identifier and the link identifier of the next flow node in an asynchronous message body to generate the asynchronous message.
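Claims 3 and 4 differ only in whether the article data itself rides along in the message body. A combined sketch, with illustrative field names:

```python
def build_async_message(article_pieces: dict, next_link_id: str,
                        current_is_first_node: bool) -> dict:
    """Assemble the asynchronous message body for the next flow node."""
    body = {
        "next_link_id": next_link_id,   # link identifier of the next flow node
        "primary_keys": [p["primary_key"] for p in article_pieces.values()],
    }
    if current_is_first_node:
        # Only the message produced by the first flow node carries the article
        # data itself (claim 3); later messages carry just the primary key
        # identifiers (claim 4), and downstream nodes re-load the data by key.
        body["article_data"] = {name: p["data"] for name, p in article_pieces.items()}
    return body
```

Shipping the full article data only once keeps subsequent queue payloads small; once the data has been staged, later nodes re-read it by primary key.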
5. The article information processing method according to claim 1, wherein the flow nodes of one execution chain in the execution chain pool are, in order, a false write node, a persistence node, a content parsing and evaluation node, a static page node, a security audit node, and a state processing node.
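Reusing the hypothetical FlowNode and ExecutionChainPool classes from the sketch after claim 1, the six-node chain of claim 5 could be registered in order as follows; the handlers are trivial placeholders, not the patent's node logic.

```python
def make_handler(step_name: str):
    """Placeholder logic segment that just reports which step ran."""
    def run(payload: dict) -> None:
        print(f"{step_name}: processing article payload {list(payload)}")
    return run


pool = ExecutionChainPool()
pool.register("article_publish_chain", [
    FlowNode("false_write", make_handler("false write")),            # stage data in cache / search
    FlowNode("persistence", make_handler("persistence")),            # durable storage in the database
    FlowNode("content_parse_eval", make_handler("content parsing and evaluation")),
    FlowNode("static_page", make_handler("static page")),            # render the static article page
    FlowNode("security_audit", make_handler("security audit")),      # security / compliance review
    FlowNode("state_processing", make_handler("state processing")),  # final article state update
])

publish_article(pool, "article_publish_chain", {"title": "hello", "content": "example body"})
```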
6. The article information processing method according to claim 5, wherein when the current flow node is the false write node, the executing the current flow node includes:
and storing each piece of article data and the corresponding primary key identifier in a distributed cache and a search server.
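One way the false write node could stage the data, assuming a Redis-like cache client exposing set() and a search client with a hypothetical index() method; neither client choice is specified by the patent.

```python
import json


def false_write_node(article_pieces: dict, cache, search_client) -> None:
    """False write node (claim 6): stage each piece of article data and its
    primary key identifier in the distributed cache and the search server."""
    for data_type, piece in article_pieces.items():
        key = f"article:{piece['primary_key']}"
        doc = {"type": data_type, "data": piece["data"]}
        cache.set(key, json.dumps(doc))                   # distributed cache (Redis-like .set)
        search_client.index(doc_id=piece["primary_key"],  # hypothetical search-server API
                            document=doc)
```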
7. The article information processing method according to claim 5, wherein when the current flow node is the persistence node, the executing the current flow node includes:
acquiring article data based on the primary key identifiers;
and storing each piece of article data in a database.
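A sketch of the persistence node, with sqlite3 standing in for whatever database the real system uses; re-loading from the cache by primary key mirrors the "acquiring article data based on the primary key identifiers" step.

```python
import sqlite3


def persistence_node(primary_keys: list, cache, db_path: str = "articles.db") -> None:
    """Persistence node (claim 7): acquire each piece of article data by its
    primary key identifier and store it durably in a database."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS article_data (pk TEXT PRIMARY KEY, payload TEXT)"
    )
    for pk in primary_keys:
        payload = cache.get(f"article:{pk}")              # acquire article data by primary key
        conn.execute(
            "INSERT OR REPLACE INTO article_data (pk, payload) VALUES (?, ?)",
            (pk, payload),
        )
    conn.commit()
    conn.close()
```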
8. The article information processing method according to claim 5, wherein when the current flow node is the content parsing and evaluation node, the executing the current flow node includes:
acquiring article data based on the primary key identifiers;
performing content processing on each piece of article data to obtain an execution result; wherein the content processing comprises one or more of content evaluation, content judgment and content analysis;
and updating the database, the distributed cache and the search server according to the execution result.
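A sketch of the content parsing and evaluation node; the toy scoring below only illustrates the shape of "content processing" and result propagation, not the patent's evaluation rules, and the db, cache and search clients are the same hypothetical interfaces as in the earlier sketches.

```python
import json


def content_parse_eval_node(primary_keys: list, cache, db, search_client) -> None:
    """Content parsing and evaluation node (claim 8): load article data by primary
    key, run content processing, and push the execution result to the database,
    the distributed cache and the search server."""
    for pk in primary_keys:
        raw = cache.get(f"article:{pk}") or "{}"
        doc = json.loads(raw)
        text = str(doc.get("data", ""))
        result = {                                         # toy "execution result"
            "length": len(text),
            "flagged": any(word in text for word in ("spam", "scam")),
        }
        db.update(pk, result)                              # hypothetical database interface
        cache.set(f"article:{pk}:analysis", json.dumps(result))
        search_client.index(doc_id=pk, document=result)    # hypothetical search-server API
```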
9. The article information processing method according to claim 5, wherein when the current flow node is the false write node, after generating an asynchronous message based on the next flow node, the method further comprises:
and returning prompt information indicating that the article is to be audited, and displaying a popup prompt according to the prompt information.
10. The article information processing method according to claim 1, wherein after the processing end message is issued to the message queue, the method further comprises:
and responding to a viewing request for the article information, and returning, for display, the article information obtained after all the flow nodes are executed.
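The read path of claim 10 could be as simple as returning whatever the chain has produced for each primary key; reading from the cache here is an assumption, since the claim only requires returning the processed article information for display.

```python
def view_article(primary_keys: list, cache) -> dict:
    """Viewing path (claim 10): return the article information obtained after
    all flow nodes have been executed, keyed by primary key identifier."""
    return {pk: cache.get(f"article:{pk}") for pk in primary_keys}
```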
11. An article information processing apparatus, comprising:
the configuration module is used for abstracting logic segments for processing the article information into flow nodes, configuring link identifiers of the flow nodes, and configuring the order among a plurality of the flow nodes to obtain an execution chain, so as to form an execution chain pool;
the execution module is used for responding to the publishing request for the article information, determining the current flow node of the target execution chain based on the execution chain pool, and executing the current flow node;
the iteration module is used for determining the next flow node of the target execution chain based on the execution chain pool after the current flow node is successfully executed, and generating an asynchronous message based on the next flow node so as to issue the asynchronous message to a message queue;
the circulation module is used for updating the next flow node to be the current flow node according to the asynchronous message in the message queue, and repeating the above steps until all the flow nodes in the target execution chain are executed;
and the issuing module is used for generating a processing end message after all the flow nodes in the target execution chain are executed, and issuing the processing end message to the message queue.
12. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the article information processing method according to any one of claims 1 to 10.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the article information processing method of any one of claims 1 to 10.
CN202310815598.9A 2023-07-04 2023-07-04 Article information processing method and device, storage medium and electronic equipment Active CN116521400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310815598.9A CN116521400B (en) 2023-07-04 2023-07-04 Article information processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN116521400A (en) 2023-08-01
CN116521400B (en) 2023-11-03

Family

ID=87406792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310815598.9A Active CN116521400B (en) 2023-07-04 2023-07-04 Article information processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116521400B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9231858B1 (en) * 2006-08-11 2016-01-05 Dynatrace Software Gmbh Completeness detection of monitored globally distributed synchronous and asynchronous transactions
CN109799981A (en) * 2018-12-19 2019-05-24 成都多用科技有限公司 A kind of integrated system and method based on execution chain
CN110297927A (en) * 2019-05-17 2019-10-01 百度在线网络技术(北京)有限公司 Article dissemination method, device, equipment and storage medium
CN113157461A (en) * 2020-01-22 2021-07-23 北京京东振世信息技术有限公司 Method and device for transmitting message in process of executing task list
CN114238703A (en) * 2021-12-31 2022-03-25 城云科技(中国)有限公司 Event flow arrangement method, device and application
CN114610413A (en) * 2022-03-22 2022-06-10 平安普惠企业管理有限公司 Method, device, equipment and storage medium for executing synchronous and asynchronous tasks based on Java
CN115061796A (en) * 2022-06-17 2022-09-16 特赞(上海)信息科技有限公司 Execution method and system for calling between subtasks and electronic equipment
CN115098255A (en) * 2022-06-17 2022-09-23 特赞(上海)信息科技有限公司 Design method and system of distributed file asynchronous processing service and electronic equipment
CN115098254A (en) * 2022-06-17 2022-09-23 特赞(上海)信息科技有限公司 Method and system for triggering execution of subtask sequence and electronic equipment
CN115775170A (en) * 2021-09-06 2023-03-10 北京橙心无限科技发展有限公司 Method and device for acquiring article attribute information, electronic equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220382761A1 (en) * 2021-06-01 2022-12-01 Tableau Software, LLC Metadata inheritance for data assets

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Radar Simulation System Based on Message Center Communication; Chen Jie; Wang Lei; Cao Jianshu; Chen Mingyan; Zhang Ke; Journal of System Simulation (01); full text *

Also Published As

Publication number Publication date
CN116521400A (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN105608086A (en) Transaction processing method and device of distributed database system
JP5652228B2 (en) Database server device, database update method, and database update program
CN104423960A (en) Continuous project integration method and continuous project integration system
CN103699548B (en) A kind of method and apparatus being recovered database data by usage log
WO2021057252A1 (en) Service processing flow configuration method and apparatus, and service request processing method and apparatus
CN103440285A (en) Large-scale mobile phone game system and database updating method of large-scale mobile phone game system
Nicol et al. Parallel simulation of timed petri-nets
CN103020304A (en) Data processing method and equipment
CN114675987A (en) Cache data processing method and device, computer equipment and storage medium
CN109800226A (en) A kind of data administer in task management method and device
CN111404755A (en) Network configuration method, device and storage medium
CN114997414A (en) Data processing method and device, electronic equipment and storage medium
CN116521400B (en) Article information processing method and device, storage medium and electronic equipment
CN110377827A (en) Course Training scene method for pushing, device, medium and electronic equipment
CN113378007B (en) Data backtracking method and device, computer readable storage medium and electronic device
CN113111066A (en) Automatic online method, device and system for database operation work order and computer equipment
Chaves Formal methods at AT&T-an industrial usage report
Popovic et al. Formal verification of distributed transaction management in a SOA based control system
CN104881455B (en) A kind of architectural difference processing method and system based on MYSQL
Hachemi et al. Reusing process patterns in software process models modification
CN113127036A (en) Software development system, method, apparatus and medium for continuous integration of code
CN112084768A (en) Multi-round interaction method and device and storage medium
CN109522098A (en) Transaction methods, device, system and storage medium in distributed data base
CN109814991A (en) A kind of data administer in task management method and device
Beohar et al. Hierarchical states in the compositional interchange format

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant