CN112306695A - Data processing method and device, electronic equipment and computer storage medium - Google Patents

Data processing method and device, electronic equipment and computer storage medium

Info

Publication number
CN112306695A
CN112306695A (application CN202011305365.7A)
Authority
CN
China
Prior art keywords
thread
context
asynchronous processing
asynchronous
memory block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011305365.7A
Other languages
Chinese (zh)
Inventor
韦强
杨国胜
崔华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Travelsky Technology Co Ltd
China Travelsky Holding Co
Original Assignee
China Travelsky Holding Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Travelsky Holding Co filed Critical China Travelsky Holding Co
Priority to CN202011305365.7A priority Critical patent/CN112306695A/en
Publication of CN112306695A publication Critical patent/CN112306695A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F9/5022 Mechanisms to release resources
    • G06F9/5016 Allocation of resources to service a request, the resource being the memory
    • G06F9/54 Interprogram communication
    • G06F9/544 Buffers; Shared memory; Pipes
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5018 Thread allocation

Abstract

The application provides a data processing method, a data processing device, an electronic device, and a computer storage medium. The method comprises the following steps: first, generating an asynchronous processing request, which is used for requesting a called service to execute an asynchronous operation; then, extracting the target thread context, i.e. the thread context of the thread in which the asynchronous processing request is located, and releasing the thread resources of that thread; when the called service finishes executing, creating the thread context and the response processing context logic of the asynchronous processing response thread by using the target thread context; finally, calling and executing the response processing context logic and establishing the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread. In this way, services that require asynchronous processing can be handled quickly and accurately.

Description

Data processing method and device, electronic equipment and computer storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and apparatus, an electronic device, and a computer storage medium.
Background
As enterprise informatization deepens and business scenarios grow more complex, technology has shifted from centralized to distributed architectures, and services have evolved from centralized, cohesive applications toward microservices. Relationships between services have therefore become complicated: implementing a complex business process is no longer work that a single service, or a few services, can complete, and may require dozens or even hundreds of microservices coordinated through complex call logic.
At present, when multiple services must be coordinated to complete business data processing, the context required for the processing is carried along with each call; after the called party finishes processing, it returns the context unchanged to the caller, and the caller continues business processing with that data. However, this approach increases the amount of data transferred, making processing very time-consuming, and the correctness of the processed business depends on the called service.
Disclosure of Invention
In view of the above, the present application provides a data processing method, an apparatus, an electronic device, and a computer storage medium, which are used to quickly and accurately process services that require asynchronous processing.
A first aspect of the present application provides a data processing method, including:
generating an asynchronous processing request; wherein the asynchronous processing request is used for requesting a call service to execute an asynchronous operation;
extracting a target thread context and releasing thread resources of the thread where the asynchronous processing request is located; wherein the target thread context refers to a thread context of a thread in which the asynchronous processing request is located;
when the called service finishes executing, creating a thread context and response processing context logic of the asynchronous processing response thread by using the target thread context;
and calling and executing the response processing context logic, and establishing the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread.
Optionally, after extracting the target thread context and releasing the thread resource of the thread in which the asynchronous processing request is located, the method further includes:
creating a unique identifier of the asynchronous processing request;
and establishing a binding relation between the unique identifier of the asynchronous processing request and the target thread context.
Optionally, after the establishing the binding relationship between the unique identifier of the asynchronous processing request and the target thread context, the method further includes:
if the called service needs cross-process calling, storing the unique identifier of the asynchronous processing request and the target thread context to a shared memory;
when the called service is finished executing, using the target thread context to create a thread context and a response processing context logic of an asynchronous processing response thread, comprising:
when the called service is executed, extracting the target thread context bound by the unique identifier of the asynchronous processing request from the shared memory by using the unique identifier of the asynchronous processing request;
thread context and response processing context logic for asynchronously processing the response thread are created using the target thread context.
Optionally, the storing the unique identifier of the asynchronous processing request and the target thread context to a shared memory includes:
calculating the number of memory blocks required to be used according to the size of the context of the target thread;
judging whether the number of the memory blocks in the memory block idle table is greater than or equal to the number of the memory blocks needing to be used;
and if the number of the memory blocks in the memory block idle table is judged to be larger than or equal to the number of the memory blocks needing to be used, storing the target thread context and the unique identifier to a shared memory.
Optionally, after the storing the target thread context and the unique identifier to the shared memory, the method further includes:
creating a used memory block index table; the used memory block index table is used for recording information of the memory block used by the target thread context;
establishing a corresponding relation between the target thread context and the used memory block index table in a context memory block index table; the context memory block index table is used for recording a corresponding relation between the target thread context and the used memory block index table;
deleting the index of the memory block which is used by the target thread context in the memory block idle table; the memory block idle table is used for storing an index of an idle memory block.
Optionally, after the calling and executing of the response processing context logic, the establishing of the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread, and the releasing of the thread resources of the thread in which the asynchronous processing request is located, the method further includes:
when the service processing to be processed is completed, deleting the corresponding relation between the target thread context and the used memory block index table in the context memory block index table;
adding the index of the memory block in the used memory block index table to the memory block idle table;
and deleting the index table of the used memory block.
A second aspect of the present application provides a data processing apparatus, including:
a generating unit configured to generate an asynchronous processing request; wherein the asynchronous processing request is used for requesting a call service to execute an asynchronous operation;
the first extraction unit is used for extracting a target thread context and releasing thread resources of a thread where the asynchronous processing request is located; wherein the target thread context refers to a thread context of a thread in which the asynchronous processing request is located;
the creating unit is used for creating a thread context and a response processing context logic of the asynchronous processing response thread by using the target thread context when the execution of the called service is finished;
and the calling unit is used for calling and executing the response processing context logic and establishing the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread.
Optionally, the data processing apparatus further includes:
the identification creating unit is used for creating a unique identification of the asynchronous processing request;
and the first establishing unit is used for establishing the binding relationship between the unique identifier of the asynchronous processing request and the target thread context.
Optionally, the establishing unit includes:
the storage unit is used for storing the unique identifier of the asynchronous processing request and the target thread context to a shared memory if the called service needs cross-process calling;
wherein the creating unit includes:
a second extracting unit, configured to, when execution of a called service is completed, extract, from the shared memory, a target thread context bound to the unique identifier of the asynchronous processing request by using the unique identifier of the asynchronous processing request;
and the creating subunit is used for creating a thread context and response processing context logic of the asynchronous processing response thread by utilizing the target thread context.
Optionally, the saving unit includes:
the computing unit is used for computing the number of the memory blocks required to be used according to the size of the context of the target thread;
a determining unit, configured to determine whether the number of memory blocks in the memory block idle table is greater than or equal to the number of memory blocks that need to be used;
and a storage subunit, configured to, if the determining unit determines that the number of the memory blocks in the memory block idle table is greater than or equal to the number of the memory blocks that need to be used, store the target thread context and the unique identifier in the shared memory.
Optionally, the data processing apparatus further includes:
a used memory block index table creating unit configured to create a used memory block index table; the used memory block index table is used for recording information of the memory block used by the target thread context;
a second establishing unit, configured to establish a correspondence between the target thread context and the used memory block index table in a context memory block index table; the context memory block index table is used for recording a corresponding relation between the target thread context and the used memory block index table;
a first deleting unit, configured to delete an index of a memory block that has been used by the target thread context in a memory block idle table; the memory block idle table is used for storing an index of an idle memory block.
Optionally, the data processing apparatus further includes:
a second deleting unit, configured to delete, when the to-be-processed service processing is completed, a correspondence between the target thread context and the used memory block index table in the context memory block index table;
an adding unit, configured to add an index of a memory block in the used memory block index table to the memory block idle table;
a third deleting unit, configured to delete the used memory block index table.
A third aspect of the present application provides an electronic device comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the first aspects.
A fourth aspect of the present application provides a computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of the first aspect.
As can be seen from the above aspects, in the data processing method, apparatus, electronic device, and computer storage medium provided by the present application, the method includes: first, generating an asynchronous processing request, which is used for requesting a called service to execute an asynchronous operation; then, extracting the target thread context, i.e. the thread context of the thread in which the asynchronous processing request is located, and releasing the thread resources of that thread; when the called service finishes executing, creating the thread context and the response processing context logic of the asynchronous processing response thread by using the target thread context; finally, calling and executing the response processing context logic and establishing the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread. In this way, services that require asynchronous processing can be handled quickly and accurately.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a detailed flowchart of a data processing method according to an embodiment of the present application;
fig. 2 is a detailed flowchart of a data processing method according to another embodiment of the present application;
fig. 3 is a detailed flowchart of a data processing method according to another embodiment of the present application;
fig. 4 is a detailed flowchart of a data processing method according to another embodiment of the present application;
fig. 5 is a detailed flowchart of a data processing method according to another embodiment of the present application;
fig. 6 is a detailed flowchart of a data processing method according to another embodiment of the present application;
fig. 7 is a schematic flowchart of a data processing method according to another embodiment of the present application;
fig. 8 is a schematic diagram of a data processing apparatus according to another embodiment of the present application;
fig. 9 is a schematic diagram of a data processing apparatus according to another embodiment of the present application;
fig. 10 is a schematic diagram of an electronic device executing a data processing method according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
An embodiment of the present application provides a data processing method, as shown in fig. 1, which specifically includes the following steps:
and S101, generating an asynchronous processing request.
The asynchronous processing request is used for requesting the calling service to execute asynchronous operation.
It should be noted that the asynchronous processing request is generated at the service caller; the called service may be provided by a service provider, and the service provider may or may not belong to the service caller, which is not limited herein.
It should be noted that either a single asynchronous processing request or multiple asynchronous processing requests may be generated at the same time, which is not limited herein.
Specifically, the service caller generates an asynchronous processing request for requesting the calling service to perform an asynchronous operation.
S102, extracting the context of the target thread and releasing the thread resource of the thread where the asynchronous processing request is located.
Wherein, the target thread context refers to the thread context of the thread where the asynchronous processing request is located.
Specifically, context information is extracted from the thread in which the asynchronous processing request is located as a target thread context.
Optionally, in another embodiment of the present application, an implementation after step S102, as shown in fig. 2, includes:
S201, creating a unique identifier of the asynchronous processing request.
Specifically, an identifier of the asynchronous processing request is created and used as its unique identifier.
S202, establishing a binding relation between the unique identifier of the asynchronous processing request and the target thread context.
And establishing the one-to-one correspondence between the unique identifier of the asynchronous processing request and the target thread context, so that the target thread context can be obtained by subsequent inquiry according to the unique identifier.
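Steps S201 and S202 can be sketched in Python as follows. This is a minimal illustration, not the patented implementation: the `ContextRegistry` helper and its method names are hypothetical, and a UUID is assumed as the unique identifier since the patent does not specify an identifier scheme.

```python
import uuid

class ContextRegistry:
    """Hypothetical helper: binds each asynchronous request's unique
    identifier to the extracted target thread context so that the
    context can be recovered later by identifier."""

    def __init__(self):
        self._bindings = {}

    def bind(self, thread_context):
        # S201: create a unique identifier for the asynchronous processing request.
        request_id = str(uuid.uuid4())
        # S202: establish a one-to-one binding between the unique
        # identifier and the target thread context.
        self._bindings[request_id] = thread_context
        return request_id

    def lookup(self, request_id):
        # A later query by the unique identifier recovers the context.
        return self._bindings[request_id]

registry = ContextRegistry()
rid = registry.bind({"caller": "service-A", "trace": "t-001"})
ctx = registry.lookup(rid)
```

Because the binding is one-to-one, the caller only needs to transmit `rid` and can reconstruct the full context on response.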
S103, when the called service finishes executing, the thread context and response processing context logic of the asynchronous processing response thread are created by using the target thread context.
It should be noted that if, after the target thread context (the thread context of the thread in which the asynchronous processing request is located) has been extracted, a unique identifier of the asynchronous processing request is created and bound to the target thread context, then the unique identifier is sent to the service provider. The service provider executes the called service and returns the unique identifier to the service caller. The service caller can then query the target thread context bound to that unique identifier and use it to create the thread context and response processing context logic of the asynchronous processing response thread.
S104, calling and executing the response processing context logic, and establishing the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread.
According to the above scheme, the data processing method provided by the application comprises: first, generating an asynchronous processing request, which is used for requesting a called service to execute an asynchronous operation; then, extracting the target thread context, i.e. the thread context of the thread in which the asynchronous processing request is located, and releasing the thread resources of that thread; when the called service finishes executing, creating the thread context and response processing context logic of the asynchronous processing response thread by using the target thread context; finally, calling and executing the response processing context logic and establishing the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread. In this way, services that require asynchronous processing can be handled quickly and accurately.
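The S101 to S104 flow can be sketched in Python. This is a minimal illustration under assumed names (`caller`, `handle_response` are hypothetical), using a plain thread to stand in for the asynchronous processing response thread; it is not the patented implementation.

```python
import threading

results = []

def handle_response(target_ctx):
    # S103/S104: build the response thread's context from the target
    # thread context, then execute the response processing logic in
    # the asynchronous processing response thread.
    response_ctx = dict(target_ctx)  # thread context of the response thread
    response_ctx["thread"] = threading.current_thread().name
    results.append(response_ctx)

def caller():
    # S101: generate the asynchronous processing request.
    # S102: extract the target thread context from the requesting thread.
    target_ctx = {"request": "async-op", "trace": "t-1"}
    # The requesting thread returns here, so its resources are freed;
    # when the called service completes, the response thread runs with
    # a context recreated from target_ctx.
    t = threading.Thread(target=handle_response, args=(target_ctx,),
                         name="async-response")
    t.start()
    t.join()

caller()
```

The key point the sketch shows is that the requesting thread does not block for the service: only the extracted context survives, and the response thread is rebuilt from it.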
Optionally, in another embodiment of the present application, an implementation manner of the data processing method, as shown in fig. 3, includes:
s301, generating an asynchronous processing request.
The asynchronous processing request is used for requesting the calling service to execute asynchronous operation.
It should be noted that the specific implementation process of step S301 is the same as the specific implementation process of step S101, and reference may be made to this.
S302, extracting the context of the target thread and releasing the thread resource of the thread where the asynchronous processing request is located.
Wherein, the target thread context refers to the thread context of the thread where the asynchronous processing request is located.
It should be noted that the specific implementation process of step S302 is the same as the specific implementation process of step S102, and reference may be made to this process.
S303, creating a unique identifier of the asynchronous processing request.
It should be noted that the specific implementation process of step S303 is the same as the specific implementation process of step S201, and reference may be made to this.
S304, establishing a binding relation between the unique identifier of the asynchronous processing request and the target thread context.
It should be noted that the specific implementation process of step S304 is the same as the specific implementation process of step S202, and reference may be made to this.
S305, if the called service needs cross-process calling, storing the unique identifier of the asynchronous processing request and the target thread context to a shared memory.
It should be noted that if the called service must be invoked across processes, that is, the service provider does not belong to the service caller, then in order to reduce the amount of data transmitted, the unique identifier of the asynchronous processing request and the target thread context can be stored together in shared memory; during data transmission, only the unique identifier needs to be passed to the service provider.
Optionally, in another embodiment of the present application, an implementation manner of step S305, as shown in fig. 4, includes:
S401, calculating the number of memory blocks required according to the size of the target thread context.
The shared memory may preset N memory blocks according to the context size of the services processed by the threads and the service throughput; the initial size of each memory block may be estimated from the minimum context usage of the services processed by the threads, which is not limited herein.
It should be noted that the size of each memory block can be dynamically expanded, and if the number of services increases, the size of each memory block can be increased.
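The calculation in S401 is a ceiling division of the context size by the block size. A small Python sketch follows; the parameter values (`BLOCK_SIZE`, the number of preset blocks) are illustrative assumptions, not figures from the patent.

```python
def blocks_needed(context_size, block_size):
    # S401: ceil(context_size / block_size) memory blocks are required.
    return -(-context_size // block_size)

# Illustrative parameters (the patent does not fix these values):
BLOCK_SIZE = 4096              # initial per-block size, estimated from the
                               # minimum context usage of business threads
free_table = list(range(16))   # indexes of the N preset free memory blocks

# A 10 000-byte target thread context needs 3 blocks of 4096 bytes.
need = blocks_needed(10_000, BLOCK_SIZE)
```

S402 then simply compares `need` against `len(free_table)` before storing the context.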
S402, judging whether the number of the memory blocks in the memory block free list is larger than or equal to the number of the memory blocks needing to be used.
The memory block free table is used for storing unused, i.e. free, memory block indexes.
Specifically, if it is determined that the number of memory blocks in the memory block free table is greater than or equal to the number of memory blocks that need to be used, step S403 is executed.
S403, storing the target thread context and the unique identifier to the shared memory.
The corresponding relationship between the target thread context and the unique identifier may be stored in the shared memory in a hash value and linked list manner.
Specifically, a sequential storage-space array may be adopted to store the unique identifiers, that is, the identifiers are stored in a contiguous segment of storage space. A hash value of each unique identifier is computed according to a preset algorithm, and unique identifiers with the same hash value are mapped to the same position in the storage-space array. Because the storage space is contiguous, the time complexity of locating the array position corresponding to a unique identifier from its hash value, for both insertion and lookup, does not change with the number of stored contexts. The collision rate of the hash values is related to the size of the storage-space array, so an appropriate array size can be chosen according to the number of contexts to balance the array size against the linked-list length, thereby optimizing insertion, deletion, and lookup performance.
Target thread contexts with the same hash value are stored in linked-list form. Insertion and deletion on the linked list only need to adjust references between nodes, with time complexity O(1), whereas lookup must traverse the list and compare node by node, with complexity O(n); the size of the storage-space array therefore affects linked-list lookup efficiency.
Therefore, for asynchronous processing with high throughput, the context operations inevitably involve a large number of synchronized accesses. To improve the speed of insertion and lookup, a fine-grained concurrent lock can be set for each storage-space array position, and concurrent operation on contexts mapped to different positions improves system performance.
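The storage scheme described above, i.e. a storage-space array of hash buckets, collision chains kept as lists, and one fine-grained lock per array position, can be sketched as follows. Class and method names are illustrative, and Python lists stand in for the shared-memory linked lists.

```python
import threading

class ShardedContextStore:
    """Sketch of the hash-plus-chain layout: identifiers hash to an
    array position; colliding contexts are chained in that bucket;
    each bucket has its own lock so operations on identifiers in
    different buckets can run concurrently."""

    def __init__(self, num_buckets=64):
        self._buckets = [[] for _ in range(num_buckets)]
        self._locks = [threading.Lock() for _ in range(num_buckets)]

    def _index(self, request_id):
        # Map the unique identifier's hash value to an array position.
        return hash(request_id) % len(self._buckets)

    def put(self, request_id, context):
        i = self._index(request_id)
        with self._locks[i]:   # lock only this bucket; O(1) chain insert
            self._buckets[i].append((request_id, context))

    def get(self, request_id):
        i = self._index(request_id)
        with self._locks[i]:   # O(n) scan of a single chain only
            for rid, ctx in self._buckets[i]:
                if rid == request_id:
                    return ctx
        return None

store = ShardedContextStore()
store.put("req-1", {"trace": "t-1"})
```

A larger `num_buckets` shortens the chains at the cost of array space, which is the size-versus-chain-length trade-off the description mentions.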
Optionally, in another embodiment of the present application, an implementation after step S403, as shown in fig. 5, includes:
S501, creating a used memory block index table.
The used memory block index table is used for recording information of the memory blocks used by the target thread context.
S502, establishing a corresponding relation between the target thread context and the used memory block index table in the context memory block index table.
The context memory block index table is used for recording the corresponding relation between the target thread context and the used memory block index table.
S503, deleting the index of the memory block used by the target thread context in the memory block idle table.
The memory block idle table is used for storing an index of an idle memory block.
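Steps S501 to S503 amount to bookkeeping across three tables: the free table, the used memory block index table, and the context memory block index table. A minimal sketch with hypothetical names (`allocate`, `context_block_index`) follows; it is an illustration, not the patented data layout.

```python
free_table = [0, 1, 2, 3, 4, 5]   # memory block free table: idle block indexes
context_block_index = {}          # context memory block index table:
                                  # context id -> used memory block index table

def allocate(context_id, num_blocks):
    # S501: create a used memory block index table recording the blocks
    # this target thread context will occupy.
    used = free_table[:num_blocks]
    # S502: record the correspondence between the context and its used
    # memory block index table in the context memory block index table.
    context_block_index[context_id] = used
    # S503: delete the now-used block indexes from the free table.
    del free_table[:num_blocks]
    return used

allocate("ctx-A", 2)
```

After the call, blocks 0 and 1 belong to `ctx-A` and only blocks 2 through 5 remain free.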
S306, when the called service finishes executing, extracting from the shared memory the target thread context bound to the unique identifier of the asynchronous processing request, by using that unique identifier.
Specifically, when the called service has finished executing, the unique identifier of the asynchronous processing request may be matched against the unique identifiers stored in the shared memory; after the matching succeeds, the target thread context corresponding to the matched unique identifier, that is, the target thread context bound to the unique identifier of the asynchronous processing request, is found in the shared memory.
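The matching-and-extraction step can be sketched as below, modeling the shared memory as a dict keyed by unique identifier (an assumption; in practice this would be the hash-indexed shared-memory region). Extraction both returns the bound context and removes it from the shared region.

```python
def extract_bound_context(shared_memory, request_uid):
    """Match `request_uid` against stored identifiers and take out
    the target thread context bound to it (None if no match)."""
    return shared_memory.pop(request_uid, None)
```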
S307, using the target thread context, creating a thread context and response processing context logic of the asynchronous processing response thread.
It should be noted that the specific implementation process of step S307 is the same as the specific implementation process of step S103, and can be referred to each other.
S308, calling and executing the response processing context logic, and establishing the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread.
It should be noted that the specific implementation process of step S308 is the same as the specific implementation process of step S104, and reference may be made to this process.
Optionally, in another embodiment of the present application, an implementation after step S308, as shown in fig. 6, includes:
S601, when the pending service processing is completed, deleting the correspondence between the target thread context and the used memory block index table from the context memory block index table.
S602, adding the index of the memory block in the used memory block index table to the memory block idle table.
S603, deleting the used memory block index table.
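Steps S601 to S603 reverse the allocation bookkeeping: the correspondence is dropped, the block indices return to the idle table, and the used-block table itself is discarded. A sketch under the same illustrative data model as before (names and dict/list modeling are assumptions):

```python
def release_blocks(idle_table, context_block_index, ctx_id):
    """Return the memory blocks used by `ctx_id` to the idle table."""
    # S601: delete the context -> used-table correspondence.
    used_table = context_block_index.pop(ctx_id, None)
    if used_table is None:
        return                    # nothing was allocated for this context
    # S602: add the block indices back to the memory block idle table.
    idle_table.extend(used_table)
    # S603: the used memory block index table itself is discarded
    # (here, by letting `used_table` go out of scope).
```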
An exemplary description of the embodiments of the present application is now provided with reference to fig. 7, which includes the overall business logic, the thread on which the asynchronous processing request is located, the asynchronous processing response thread, the thread context component, the asynchronous processing component, the shared memory, the thread pre-processing component, and the service provider.
The asynchronous processing component is used for providing an interface for asynchronous operations to the service and, when the service requests asynchronous processing, storing the target thread context held in the thread context component into the shared memory. The thread pre-processing component is used for allocating a thread for the subsequent processing logic when the asynchronous operation is finished and the subsequent processing logic needs to be triggered, taking the previously stored target thread context out of the shared memory before calling the service logic, and moving it to the thread context component. The thread context component is used for providing thread-level, isolated object storage. The shared memory is used for storing the target thread context and the unique identifier during data interaction.
Specifically, if a service call is initiated during normal processing of the business logic, that is, an asynchronous processing request is generated, the thread context of the thread in which the asynchronous processing request is generated is taken as the target thread context and moved to the thread context component through the asynchronous processing component. A unique identifier for the asynchronous request is generated, a binding relationship between the unique identifier of the asynchronous processing request and the target thread context is established, and both are stored in the shared memory; only the unique identifier is sent to the service provider. After the service provider finishes the asynchronous processing of the called service, the thread pre-processing component receives the processed unique identifier, obtains the target thread context by matching or querying the unique identifier in the shared memory, and allocates an asynchronous processing response thread for the subsequent processing logic. The obtained target thread context is then used to create the thread context and response processing context logic of the asynchronous processing response thread, a binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread is established, and the subsequent business logic proceeds on the asynchronous processing response thread, thereby releasing the thread resources of the thread in which the asynchronous processing request was located.
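The request and response sides of this flow can be sketched end to end. This is an illustrative outline only, not the patent's implementation: the function names are invented, the shared memory is modeled as a dict, and the unique identifier is generated with `uuid` (the patent does not specify a generation scheme).

```python
import uuid

def start_async_call(thread_context, shared_memory, invoke_service):
    """Request side: bind a unique identifier to the target thread context,
    store both in shared memory, and send only the identifier onward."""
    request_uid = str(uuid.uuid4())              # unique identifier of the request
    shared_memory[request_uid] = thread_context  # bind identifier -> target context
    invoke_service(request_uid)                  # only the identifier crosses processes
    return request_uid

def on_async_response(request_uid, shared_memory, run_followup):
    """Response side: recover the target context by its identifier and run
    the follow-up logic with the response thread's re-created context."""
    target_ctx = shared_memory.pop(request_uid)  # match/query in shared memory
    response_ctx = dict(target_ctx)              # thread context of the response thread
    return run_followup(response_ctx)
```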
According to the scheme, the data processing method provided by the application comprises the following steps: firstly, generating an asynchronous processing request; the asynchronous processing request is used for requesting a calling service to execute asynchronous operation; then, extracting a target thread context, and releasing the thread resource of the thread where the asynchronous processing request is located, wherein the target thread context refers to the thread context of the thread where the asynchronous processing request is located; creating a unique identifier of the asynchronous processing request; establishing a binding relation between the unique identifier of the asynchronous processing request and the context of the target thread; if the called service needs cross-process calling, storing the unique identifier of the asynchronous processing request and the target thread context to a shared memory; when the called service is executed, extracting the target thread context bound by the unique identifier of the asynchronous processing request from the shared memory by using the unique identifier of the asynchronous processing request; creating a thread context and response processing context logic for asynchronously processing the response thread using the target thread context; finally, the response processing context logic is called and executed, and the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread is established. Therefore, the aim of rapidly and accurately processing the service needing asynchronous processing is fulfilled.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Computer program code for carrying out operations for the present disclosure may be written in one or more programming languages, including but not limited to object oriented programming languages such as Python, Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Another embodiment of the present application provides a data processing apparatus, as shown in fig. 8, specifically including:
a generating unit 801 is configured to generate an asynchronous processing request.
The asynchronous processing request is used for requesting the calling service to execute asynchronous operation.
The first fetching unit 802 is configured to fetch a target thread context and release thread resources of a thread in which the asynchronous processing request is located.
Wherein, the target thread context refers to the thread context of the thread where the asynchronous processing request is located.
A creating unit 803, configured to create a thread context and response processing context logic for asynchronously processing the response thread by using the target thread context when the called service is completely executed.
The calling unit 804 is configured to call and execute the response processing context logic, establish the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread, and release the thread resources of the thread in which the asynchronous processing request is located.
For a specific working process of the unit disclosed in the above embodiment of the present application, reference may be made to the content of the corresponding method embodiment, as shown in fig. 1, which is not described herein again.
Optionally, in another embodiment of the present application, an implementation manner of the data processing apparatus further includes:
and the identification creating unit is used for creating the unique identification of the asynchronous processing request.
And the first establishing unit is used for establishing the binding relationship between the unique identifier of the asynchronous processing request and the target thread context.
For a specific working process of the unit disclosed in the above embodiment of the present application, reference may be made to the content of the corresponding method embodiment, as shown in fig. 2, which is not described herein again.
As can be seen from the above, the present application provides a data processing apparatus: first, the generation unit 801 generates an asynchronous processing request; the asynchronous processing request is used for requesting a calling service to execute asynchronous operation; then, the first extraction unit 802 extracts a target thread context and releases the thread resource of the thread where the asynchronous processing request is located, where the target thread context refers to the thread context of the thread where the asynchronous processing request is located; when the called service is executed, the creating unit 803 creates a thread context and response processing context logic of the asynchronous processing response thread by using the target thread context; finally, the calling unit 804 calls and executes the response processing context logic, and establishes the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread. Therefore, the aim of rapidly and accurately processing the service needing asynchronous processing is fulfilled.
Another embodiment of the present application provides a data processing apparatus, as shown in fig. 9, specifically including:
a generating unit 901, configured to generate an asynchronous processing request.
The asynchronous processing request is used for requesting the calling service to execute asynchronous operation.
The first fetching unit 902 is configured to fetch a target thread context and release thread resources of a thread in which the asynchronous processing request is located.
Wherein, the target thread context refers to the thread context of the thread where the asynchronous processing request is located.
An identifier creating unit 903, configured to create a unique identifier of the asynchronous processing request.
A first establishing unit 904, configured to establish a binding relationship between the unique identifier of the asynchronous processing request and the target thread context.
A saving unit 905, configured to, if the called service needs to be invoked across processes, save the unique identifier of the asynchronous processing request and the target thread context to the shared memory.
Optionally, in another embodiment of the present application, an implementation manner of the saving unit 905 includes:
and the calculating unit is used for calculating the number of the memory blocks required to be used according to the size of the target thread context.
And the judging unit is used for judging whether the number of the memory blocks in the memory block idle table is greater than or equal to the number of the memory blocks needing to be used.
And the storage subunit is configured to, if the judging unit judges that the number of the memory blocks in the memory block idle table is greater than or equal to the number of the memory blocks that need to be used, store the target thread context and the unique identifier in the shared memory.
For a specific working process of the unit disclosed in the above embodiment of the present application, reference may be made to the content of the corresponding method embodiment, as shown in fig. 4, which is not described herein again.
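The calculating and judging units above reduce to a ceiling division against a fixed block size followed by a comparison with the idle count. A sketch (function names and the `block_size` parameter are assumptions; the patent does not fix a block size):

```python
def blocks_needed(context_size, block_size):
    # Ceiling division: number of fixed-size memory blocks required
    # to hold a serialized context of `context_size` bytes.
    return -(-context_size // block_size)

def can_save(context_size, block_size, idle_count):
    # Save only if the idle table holds at least as many blocks as needed.
    return idle_count >= blocks_needed(context_size, block_size)
```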
Optionally, in another embodiment of the present application, an implementation manner of the data processing apparatus further includes:
and the used memory block index table creating unit is used for creating the used memory block index table.
The used memory block index table is used for recording information of the memory blocks used by the target thread context.
And a second establishing unit, configured to establish a correspondence between the target thread context and the used memory block index table in the context memory block index table.
The context memory block index table is used for recording the corresponding relation between the target thread context and the used memory block index table.
A first deleting unit, configured to delete an index of a memory chunk that is already used by the target thread context in the memory chunk idle table.
The memory block idle table is used for storing an index of an idle memory block.
For a specific working process of the unit disclosed in the above embodiment of the present application, reference may be made to the content of the corresponding method embodiment, as shown in fig. 5, which is not described herein again.
A second extracting unit 906, configured to, when the called service has finished executing, extract from the shared memory the target thread context bound to the unique identifier of the asynchronous processing request by using that unique identifier.
A create subunit 907 is used to create a thread context and response processing context logic for asynchronously processing the response thread using the target thread context.
The calling unit 908 is configured to call and execute the response processing context logic, and establish a binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread.
For a specific working process of the unit disclosed in the above embodiment of the present application, reference may be made to the content of the corresponding method embodiment, as shown in fig. 3, which is not described herein again.
Optionally, in another embodiment of the present application, an implementation manner of the data processing apparatus includes:
and the second deleting unit is used for deleting the corresponding relation between the target thread context and the used memory block index table in the context memory block index table when the service processing to be processed is finished.
And the adding unit is used for adding the index of the memory block in the used memory block index table into the memory block idle table.
And a third deleting unit, configured to delete the used memory block index table.
For a specific working process of the unit disclosed in the above embodiment of the present application, reference may be made to the content of the corresponding method embodiment, as shown in fig. 6, which is not described herein again.
As can be seen from the above, the present application provides a data processing apparatus: first, the generation unit 901 generates an asynchronous processing request; the asynchronous processing request is used for requesting a calling service to execute asynchronous operation; then, the first extracting unit 902 extracts a target thread context and releases the thread resource of the thread where the asynchronous processing request is located, where the target thread context refers to the thread context of the thread where the asynchronous processing request is located; the identifier creating unit 903 creates a unique identifier of the asynchronous processing request; the first establishing unit 904 establishes a binding relationship between the unique identifier of the asynchronous processing request and the target thread context; if the called service needs cross-process calling, the storage unit 905 stores the unique identifier of the asynchronous processing request and the target thread context to the shared memory; when the called service is executed, the second extracting unit 906 extracts the target thread context bound by the unique identifier of the asynchronous processing request from the shared memory by using the unique identifier of the asynchronous processing request; the creating subunit 907 creates a thread context and response processing context logic of the asynchronous processing response thread using the target thread context; finally, the call unit 908 calls and executes the response processing context logic, establishing a binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread. Therefore, the aim of rapidly and accurately processing the service needing asynchronous processing is fulfilled.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
Another embodiment of the present application provides an electronic device, as shown in fig. 10, including:
one or more processors 1001.
Storage 1002 on which one or more programs are stored.
The one or more programs, when executed by the one or more processors 1001, cause the one or more processors 1001 to implement the methods as in any of the above embodiments.
Another embodiment of the present application provides a computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method as described in any of the above embodiments.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Another embodiment of the present application provides a computer program product for performing the method of processing data according to any one of the above when the computer program product is executed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means, or installed from a storage means, or installed from a ROM. The computer program, when executed by a processing device, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
While several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
The foregoing description is only illustrative of the preferred embodiments of the disclosure and the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.

Claims (10)

1. A method for processing data, comprising:
generating an asynchronous processing request; wherein the asynchronous processing request is used for requesting a call service to execute an asynchronous operation;
extracting a target thread context and releasing thread resources of the thread where the asynchronous processing request is located; wherein the target thread context refers to a thread context of a thread in which the asynchronous processing request is located;
when the called service is executed, creating a thread context and a response processing context logic of the asynchronous processing response thread by using the target thread context;
and calling and executing the response processing context logic, and establishing the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread.
2. The processing method according to claim 1, wherein after extracting the target thread context and releasing the thread resource of the thread in which the asynchronous processing request is located, further comprising:
creating a unique identifier of the asynchronous processing request;
and establishing a binding relation between the unique identifier of the asynchronous processing request and the target thread context.
3. The processing method according to claim 2, wherein after establishing the binding relationship between the unique identifier of the asynchronous processing request and the target thread context, further comprising:
if the called service needs cross-process calling, storing the unique identifier of the asynchronous processing request and the target thread context to a shared memory;
when the called service is finished executing, using the target thread context to create a thread context and a response processing context logic of an asynchronous processing response thread, comprising:
when the called service is executed, extracting the target thread context bound by the unique identifier of the asynchronous processing request from the shared memory by using the unique identifier of the asynchronous processing request;
thread context and response processing context logic for asynchronously processing the response thread are created using the target thread context.
4. The processing method according to claim 3, wherein said saving the unique identifier of the asynchronous processing request and the target thread context to a shared memory comprises:
calculating the number of memory blocks required to be used according to the size of the context of the target thread;
judging whether the number of the memory blocks in the memory block idle table is greater than or equal to the number of the memory blocks needing to be used;
and if the number of the memory blocks in the memory block idle table is judged to be larger than or equal to the number of the memory blocks needing to be used, storing the target thread context and the unique identifier to a shared memory.
5. The processing method as claimed in claim 4, wherein after saving said target thread context and said unique identifier to a shared memory, further comprising:
creating a used memory block index table; the used memory block index table is used for recording information of the memory block used by the target thread context;
establishing a corresponding relation between the target thread context and the used memory block index table in a context memory block index table; the context memory block index table is used for recording a corresponding relation between the target thread context and the used memory block index table;
deleting the index of the memory block which is used by the target thread context in the memory block idle table; the memory block idle table is used for storing an index of an idle memory block.
6. The processing method according to claim 5, wherein after said calling and executing said response processing context logic, establishing a binding relationship between a thread context of said asynchronous processing response thread and said asynchronous processing response thread, and releasing a thread resource of the thread in which said asynchronous processing request is located, the method further comprises:
when the service processing to be processed is completed, deleting the corresponding relation between the target thread context and the used memory block index table in the context memory block index table;
adding the index of the memory block in the used memory block index table to the memory block idle table;
and deleting the index table of the used memory block.
7. An apparatus for processing data, comprising:
a generating unit configured to generate an asynchronous processing request; wherein the asynchronous processing request is used for requesting a call service to execute an asynchronous operation;
the first extraction unit is used for extracting a target thread context and releasing thread resources of a thread where the asynchronous processing request is located; wherein the target thread context refers to a thread context of a thread in which the asynchronous processing request is located;
the creating unit is used for creating a thread context and a response processing context logic of the asynchronous processing response thread by using the target thread context when the execution of the called service is finished;
and the calling unit is used for calling and executing the response processing context logic and establishing the binding relationship between the thread context of the asynchronous processing response thread and the asynchronous processing response thread.
8. The processing apparatus as in claim 7, further comprising:
the identification creating unit is used for creating a unique identification of the asynchronous processing request;
and the first establishing unit is used for establishing the binding relationship between the unique identifier of the asynchronous processing request and the target thread context.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 6.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 6.
CN202011305365.7A 2020-11-19 2020-11-19 Data processing method and device, electronic equipment and computer storage medium Pending CN112306695A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011305365.7A CN112306695A (en) 2020-11-19 2020-11-19 Data processing method and device, electronic equipment and computer storage medium


Publications (1)

Publication Number Publication Date
CN112306695A true CN112306695A (en) 2021-02-02

Family

ID=74335023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011305365.7A Pending CN112306695A (en) 2020-11-19 2020-11-19 Data processing method and device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112306695A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060161921A1 (en) * 2003-08-28 2006-07-20 Mips Technologies, Inc. Preemptive multitasking employing software emulation of directed exceptions in a multithreading processor
US20060195683A1 (en) * 2003-08-28 2006-08-31 Mips Technologies, Inc. Symmetric multiprocessor operating system for execution on non-independent lightweight thread contexts
CN101140531A * 2007-10-10 2008-03-12 ZTE Corporation Fast memory allocation method
CN105516086A * 2015-11-25 2016-04-20 Guangzhou Huaduo Network Technology Co., Ltd. Service processing method and apparatus
CN106681829A * 2016-12-09 2017-05-17 Shanghai Phicomm Data Communication Technology Co., Ltd. Memory management method and system
CN109298922A * 2018-08-30 2019-02-01 Baidu Online Network Technology (Beijing) Co., Ltd. Parallel task processing method, coroutine framework, device, medium and unmanned vehicle
CN109992465A * 2017-12-29 2019-07-09 China Telecom Corporation Limited Service tracing method, apparatus and computer-readable storage medium
CN110287044A * 2019-07-02 2019-09-27 Guangzhou Huya Technology Co., Ltd. Lock-free shared memory processing method and apparatus, electronic device and readable storage medium
CN110764930A * 2019-10-21 2020-02-07 TravelSky Technology Limited Message-mode-based request or response processing method and apparatus
CN110825441A * 2019-09-23 2020-02-21 Wanda Information Co., Ltd. Method for implementing an asynchronous system, computer device and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113037875A (en) * 2021-05-24 2021-06-25 武汉众邦银行股份有限公司 Method for realizing asynchronous gateway in distributed real-time service system
CN113037875B (en) * 2021-05-24 2021-07-27 武汉众邦银行股份有限公司 Method for realizing asynchronous gateway in distributed real-time service system

Similar Documents

Publication Publication Date Title
CN107688500B (en) Distributed task processing method, device, system and equipment
CN110008045B (en) Method, device and equipment for aggregating microservices and storage medium
CN108023908B (en) Data updating method, device and system
CN111277639B (en) Method and device for maintaining data consistency
CN111478781B (en) Message broadcasting method and device
CN110928912A (en) Method and device for generating unique identifier
CN112000734A (en) Big data processing method and device
CN111290842A (en) Task execution method and device
WO2021238259A1 (en) Data transmission method, apparatus and device, and computer-readable storage medium
CN112306695A (en) Data processing method and device, electronic equipment and computer storage medium
CN113282589A (en) Data acquisition method and device
CN110321252B (en) Skill service resource scheduling method and device
CN112948138A (en) Method and device for processing message
CN109284177B (en) Data updating method and device
CN112711485A (en) Message processing method and device
CN112052152A (en) Simulation test method and device
CN115525411A (en) Method, device, electronic equipment and computer readable medium for processing service request
CN114374657A (en) Data processing method and device
CN113760487A (en) Service processing method and device
CN113779122A (en) Method and apparatus for exporting data
CN113556370A (en) Service calling method and device
CN113626176A (en) Service request processing method and device
CN113541987A (en) Method and device for updating configuration data
CN111382953A (en) Dynamic process generation method and device
CN116991562B (en) Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination