CN109471731A - Data processing and memory management method, apparatus, device and medium - Google Patents

Data processing and memory management method, apparatus, device and medium

Info

Publication number
CN109471731A
CN109471731A
Authority
CN
China
Prior art keywords
data
queue
processed
data processing
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811393931.7A
Other languages
Chinese (zh)
Inventor
李亚龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201811393931.7A priority Critical patent/CN109471731A/en
Publication of CN109471731A publication Critical patent/CN109471731A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5011: Allocation of resources, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016: Allocation of resources, the resource being the memory
    • G06F 9/5027: Allocation of resources, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/505: Allocation of resources, the resource being a machine, considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

Embodiments of this specification disclose a data processing method, a memory management method, and corresponding apparatuses, devices, and media. The data processing method includes: forming a data processing queue, where positions in the data processing queue correspond to data to be processed; determining target data according to the data processing queue; and processing the target data.

Description

Data processing and memory management method, device, equipment and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for data processing and memory management.
Background
In the prior art, data is usually either processed immediately as it arrives, or all existing data to be processed is processed in a single batch each time. The former approach requires frequent calls to the processing resources or processing capabilities of the system or machine. In the latter approach, because the amount of data to be processed generally fluctuates greatly and can grow rapidly in a short time, each processing round may face a very large amount of data, occupying a large share of system or machine resources and impairing the availability of the system or machine; conversely, when the amount of data to be processed is small, a large share of system or machine resources sits idle. Data processing in the prior art is therefore inefficient and unstable.
In view of the above, there is a need for a more efficient and reliable data processing scheme.
Disclosure of Invention
Embodiments of the present specification provide a method, an apparatus, a device, and a medium for data processing and memory management, so as to solve a technical problem of how to perform data processing and memory management more efficiently and reliably.
In order to solve the above technical problem, the embodiments of the present specification are implemented as follows:
an embodiment of the present specification provides a data processing method, including:
forming a data processing queue, wherein positions in the data processing queue correspond to data to be processed;
determining target data according to the data processing queue;
and processing the target data.
An embodiment of the present specification provides a memory management method, including:
forming a memory management queue, wherein positions in the memory management queue correspond to memory-occupying objects;
determining a target object according to the memory management queue;
and processing the target object.
An embodiment of the present specification provides a data processing apparatus, including:
the queue module is used for forming a data processing queue, wherein positions in the data processing queue correspond to data to be processed;
the target module is used for determining target data according to the data processing queue;
and the processing module is used for processing the target data.
An embodiment of the present specification provides a memory management device, including:
the queue module is used for forming a memory management queue, wherein positions in the memory management queue correspond to memory-occupying objects;
the target module is used for determining a target object according to the memory management queue;
and the processing module is used for processing the target object.
An embodiment of the present specification provides a data processing apparatus, including:
at least one processor;
and,
a memory communicatively coupled to the at least one processor;
wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
forming a data processing queue, wherein positions in the data processing queue correspond to data to be processed;
determining target data according to the data processing queue;
and processing the target data.
An embodiment of the present specification provides a memory management device, including:
at least one processor;
and,
a memory communicatively coupled to the at least one processor;
wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
forming a memory management queue, wherein positions in the memory management queue correspond to memory-occupying objects;
determining a target object according to the memory management queue;
and processing the target object.
Embodiments of the present specification provide a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of:
forming a data processing queue, wherein positions in the data processing queue correspond to data to be processed;
determining target data according to the data processing queue;
and processing the target data.
Embodiments of the present specification provide a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of:
forming a memory management queue, wherein positions in the memory management queue correspond to memory-occupying objects;
determining a target object according to the memory management queue;
and processing the target object.
At least one of the technical solutions adopted in the embodiments of this specification can achieve the following beneficial effects:
by constructing a data processing queue, managing the data to be processed according to the rules of the queue in a controllable way, and selecting the target data actually processed in the same rule-based, controllable way, frequent calls to processing resources or processing capacity are avoided and the amount of target data processed in each round remains stable. This prevents situations in which a large share of system or machine resources is occupied or the system or machine load spikes, improving data processing efficiency and reliability.
Drawings
To illustrate the embodiments of this specification or the technical solutions in the prior art more clearly, the drawings needed in their description are briefly introduced below. The drawings described below are only some of the embodiments in this specification; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a diagram illustrating a data processing load in the prior art.
FIG. 2 is a schematic diagram of the operation of a data processing system according to a first embodiment of the present description.
Fig. 3 is a schematic diagram of a data processing method in a second embodiment of the present specification.
Fig. 4 is a schematic diagram of a data processing procedure in a second embodiment of the present specification.
Fig. 5 is a schematic diagram illustrating the operation of the memory management system according to the third embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a memory management method in a fourth embodiment of the present disclosure.
Fig. 7 is a schematic diagram of a memory management process in a fourth embodiment of the present disclosure.
Fig. 8 is a schematic configuration diagram of a data processing apparatus in a fifth embodiment of the present specification.
Fig. 9 is a schematic structural diagram of a memory management device in a sixth embodiment of the present specification.
Detailed Description
To help those skilled in the art better understand the technical solutions in this specification, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are only a part, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort shall fall within the scope of protection of the present application.
As shown in fig. 1, in the prior art the amount of data to be processed generally fluctuates greatly, and each processing round may face a very large amount of data, so that a large share of system or machine resources is occupied and the availability of the system or machine is impaired; conversely, when the amount of data to be processed is small, a large share of system or machine resources sits idle. The sharp peaks in fig. 1 are system or machine load spikes.
As shown in fig. 2, in a first embodiment of this specification, a data processing system forms a data processing queue, where positions in the queue correspond to data to be processed; the data processing system then determines target data according to the queue and processes the target data.
In this embodiment, by constructing the data processing queue, managing the data to be processed according to the rules of the queue in a controllable way, and selecting the target data to be actually processed in the same way, frequent calls to processing resources or capacity are avoided, the amount of target data processed in each round remains stable, and situations in which a large share of system or machine resources is occupied or the load spikes are prevented, improving data processing efficiency and reliability.
From the program perspective, the execution subject of the above-mentioned flow may be a computer or a server or a corresponding data processing system, etc. In addition, the execution subject may also be assisted by a third-party application client to execute the above-mentioned flow.
Fig. 3 shows a schematic flow chart of a data processing method in a second embodiment of this specification, and fig. 4 shows the corresponding data processing procedure. With reference to figs. 3 and 4, the data processing method in this embodiment includes:
s101: and forming a data processing queue, wherein the position in the data processing queue is used for corresponding to the data to be processed.
In this embodiment, the data processing queue has one or more positions; the number of positions may be fixed or may change dynamically. Positions in the data processing queue (hereinafter "queue positions") correspond to data to be processed. Specifically, "forming a data processing queue whose positions correspond to data to be processed" may be done in the following manners (among others):
1. the data processing queues are formed by fixed locations.
A first number is determined, and a data processing queue having a first number of positions is generated.
A first number is determined, and a data processing queue with that number of positions is generated; the positions are ordered (first position, second position, and so on). Data to be processed (whether it existed before the queue was generated or appears afterwards) is mapped to positions in the queue; for example, data A corresponds to the first position, data B to the second position, and so on.
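Manner 1 can be sketched as follows. This is a hypothetical Python sketch, not part of the specification; the class and method names are illustrative:

```python
class FixedPositionQueue:
    """Data processing queue with a fixed "first number" of positions.

    Positions are ordinal slots (1, 2, ...); each slot may correspond to one
    item of pending data. Only the correspondence is recorded: the data
    itself is not moved or copied into the queue.
    """

    def __init__(self, first_number):
        self.capacity = first_number      # the "first number" of positions
        self.slots = {}                   # position index -> pending data id

    def assign(self, data_id):
        """Map pending data to the first free position; False if all positions correspond."""
        for pos in range(1, self.capacity + 1):
            if pos not in self.slots:
                self.slots[pos] = data_id
                return True
        return False

q = FixedPositionQueue(first_number=3)
assert q.assign("A") and q.assign("B") and q.assign("C")
assert not q.assign("D")                  # all three positions already correspond
assert q.slots == {1: "A", 2: "B", 3: "C"}
```

Once every position corresponds to data, no further data can be mapped until a position is released (see S103 below for how positions are freed after processing).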
2. The data processing queues are formed by incremental positions.
(1) After a certain data to be processed appears, the corresponding position of the data to be processed is determined, and a data processing queue with the corresponding position is formed.
The "certain data to be processed" may be designated or randomly selected. After it appears, its corresponding position (generally the first position) is determined, and a data processing queue with that single position is formed.
(2) Incremental positions are determined for data to be processed that existed before the "certain data to be processed" appeared and/or for data to be processed that newly appears afterwards.
After the "certain data to be processed" appears and a one-position queue is formed, incremental positions can be determined for the data to be processed that existed beforehand. An "incremental position" is a position added to the existing queue: if the queue has a first and a second position, the incremental position is the third, which is appended to the queue (likewise below). For example, if data A, data B, and data C existed before the "certain data to be processed" appeared, their incremental positions can be determined: data A corresponds to the second position, data B to the third, and data C to the fourth. It is also possible for one of the pre-existing items to take the first position, in which case the "certain data to be processed" is re-mapped to another position.
and/or,
after the "certain data to be processed" appears and a one-position queue is formed, an incremental position can be determined for each item of data to be processed that newly appears afterwards. Again, a newly appearing item may instead take the first position, in which case the "certain data to be processed" is re-mapped to another position.
It can be seen that the incremental position can come from two sources.
(3) For any incremental position, the incremental position is added to the data processing queue that existed before it appeared, forming an updated data processing queue.
Each time an incremental position appears, it is added to the data processing queue that existed before it, forming an updated queue; in other words, each incremental position updates the existing queue once.
The number of positions in the data processing queue thus grows dynamically, from the single position created when the "certain data to be processed" appeared to successive incremental positions.
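Manner 2 can be sketched in the same hypothetical style; here the queue starts empty and adds one incremental position per item of pending data:

```python
class IncrementalQueue:
    """Data processing queue that grows by one incremental position per pending item."""

    def __init__(self):
        self.slots = []                    # index i holds the data at position i+1

    def assign(self, data_id):
        """Add an incremental position for data_id; returns its 1-based position."""
        self.slots.append(data_id)
        return len(self.slots)

q = IncrementalQueue()
assert q.assign("first") == 1              # queue formed with a single position
assert q.assign("A") == 2                  # incremental positions follow
assert q.assign("B") == 3
```

The sketch appends new positions at the tail; re-mapping an item to the first position (as the description allows) would simply be an insert at index 0 instead.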
3. The data processing queue is formed by fixed and incremental positions.
(1) A data processing queue having a second number of locations is formed, the locations in the data processing queue for corresponding data to be processed.
This step is the same as manner 1: after a second number is determined, a data processing queue with that many positions is formed; the "second number" plays the same role here as the "first number" in manner 1. Both numbers may be modified, but once determined they are fixed until changed, and either may be 1.
(2) When all of the second number of positions correspond to data, incremental positions are determined for the data to be processed that has no corresponding position.
When all of the second number of positions correspond to data to be processed, incremental positions are determined for the data that has no corresponding queue position. Such data generally comes from two sources: data that already existed when the queue with the second number of positions was formed but was not assigned a position, and data that newly appeared after that queue was formed.
(3) For any incremental position, the incremental position is added to the data processing queue that existed before it appeared, forming an updated data processing queue.
The same as in (3) of 2.
Manner 3 thus combines manners 1 and 2: a queue with a relatively fixed number of positions is formed, and the number of positions can then grow dynamically.
In the manners with dynamic growth, a limit may also be set: when the number of positions grows dynamically to this limit, no more positions are added; the limit itself may be changed. Moreover, regardless of how the queue is formed, the correspondence between data to be processed and queue positions can change dynamically: the queue position of an item may change, and the item at a given position may change. For example, data A may first correspond to the second position and later be re-mapped to another position when other data is assigned the second position; several items of data may also share the same queue position.
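Manner 3, together with the optional growth limit just described, can be sketched as follows (hypothetical names, a sketch rather than the specification's implementation):

```python
class HybridQueue:
    """Queue with a fixed "second number" of positions that grows
    incrementally, up to a changeable limit, once all positions correspond."""

    def __init__(self, second_number, limit):
        self.size = second_number
        self.limit = limit
        self.slots = {}                    # position index -> pending data id

    def assign(self, data_id):
        # Prefer a free position among the existing ones.
        for pos in range(1, self.size + 1):
            if pos not in self.slots:
                self.slots[pos] = data_id
                return pos
        # All positions correspond: add an incremental position, up to the limit.
        if self.size < self.limit:
            self.size += 1
            self.slots[self.size] = data_id
            return self.size
        return None                        # limit reached; no position added

q = HybridQueue(second_number=2, limit=3)
assert q.assign("A") == 1 and q.assign("B") == 2
assert q.assign("C") == 3                  # incremental position added
assert q.assign("D") is None               # limit prevents further growth
```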
In this embodiment, the queue position corresponding to an item of data to be processed may be determined according to the item's time characteristic and/or priority characteristic, where the time characteristic includes the generation time and/or the predicted processing time of the data. For example, the earlier the generation time, the closer to the front (or back) of the queue the position; the longer the predicted processing time, the closer to the front (or back); the higher the priority, the closer to the front (or back).
Note that throughout this specification, positions in the data processing queue correspond to the data to be processed; the data itself is not necessarily moved or copied into the queue.
S102: and determining target data according to the data processing queue.
In this embodiment, after the data processing queue is formed, target data may be determined from it; more specifically, target data may be determined when a processing condition is triggered. Processing-condition triggers may include: the number of actually corresponding positions in the data processing queue reaching a first limit;
and/or,
reaching a predetermined processing time;
and/or,
the next processing time determined by the actual corresponding position number in the data processing queue triggered by the last processing condition is reached.
The above respective process conditions are further explained below:
(1) the actual number of corresponding positions of the data processing queue reaches a first limit.
For a data processing queue (whether fixed-position and/or dynamically incremented as described above), the processing condition is triggered when the number of positions actually mapped to data reaches a first limit. For a queue whose positions are growing dynamically, the number of actually mapped positions is the current number of positions in the queue. For the fixed-position queue of manner 1, the first limit may be the first number.
(2) A predetermined processing time is reached.
In this embodiment, reaching the predetermined processing time may mean that one or more set time intervals have elapsed since the last processing-condition trigger. If the condition triggers whenever a set interval has elapsed since the previous trigger, it is effectively a timed task: the data processing queue is scanned periodically and target data is determined from it. The interval may be altered.
The "multiple time intervals" may differ from one another, for example 1 second, 2 seconds, 3 seconds: the condition triggers 1 second after the previous trigger, then again 2 seconds after that, then again 3 seconds after that, and so on.
(3) Reaching the next processing time determined from the number of actually corresponding positions at the previous processing-condition trigger.
That is, the next processing time is determined from the number of actually corresponding positions in the queue when the previous condition was triggered; generally, the more positions were corresponding at the previous trigger, the sooner the next processing time.
In the present specification, the "previous time" and the "next time" are two adjacent times. The time at which the "first" processing condition is triggered may be determined as desired.
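The three trigger kinds above can be sketched together. This is a hypothetical illustration; the threshold comparison and the inverse-proportional scheduling rule are assumptions consistent with "the more positions were corresponding, the sooner the next processing time":

```python
def should_trigger(occupied, first_limit, now, next_time):
    """A processing condition is triggered when the number of actually
    corresponding positions reaches the first limit, or when the scheduled
    processing time arrives."""
    return occupied >= first_limit or now >= next_time

def schedule_next(now, occupied, base_interval=10.0):
    """Dynamic scheduling (trigger kind 3): the more positions were occupied
    at the last trigger, the sooner the next processing time, floored at
    one time unit."""
    return now + max(1.0, base_interval / max(1, occupied))

assert should_trigger(occupied=5, first_limit=5, now=0.0, next_time=99.0)
assert not should_trigger(occupied=2, first_limit=5, now=0.0, next_time=99.0)
assert schedule_next(now=0.0, occupied=10) == 1.0   # busy queue: soon
assert schedule_next(now=0.0, occupied=2) == 5.0    # quiet queue: later
```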
In this embodiment, determining the target data according to the data processing queue includes:
and when any processing condition is triggered, taking the data to be processed corresponding to the first third number of positions in the data processing queue when the processing condition is triggered as target data. And if the number of the actual corresponding positions of the data processing queue is smaller than the third number, taking the data to be processed corresponding to all the actual corresponding positions in the data processing queue as target data. If no position in the data processing queue corresponds to the position number or the position number is zero or no data processing queue is formed, the determined target data is zero or uncertain target data. The third number may be changeable.
S103: and processing the target data.
In this embodiment, the target data may be processed in different manners according to the situation, for example sorted, deleted, or copied.
Further, processing the target data may include: processing the target data, releasing the queue positions corresponding to the target data that was processed successfully, and deciding whether to release, retain, or adjust the queue positions corresponding to target data that was not processed successfully.
For example, suppose the target data determined at a certain trigger is data A, B, C, and D; A and B are processed successfully while C and D are not. The queue positions of A and B are released, and the freed positions can be reused for other data to be processed; for C and D it is decided whether to release, retain, or adjust their positions. If the positions of C and D are retained or adjusted, C and D may be determined as target data again next time. The target data that failed last time can also be moved toward the front of the queue, making it more likely to be selected when the next processing condition is triggered.
Alternatively, processing the target data may include: processing the target data, releasing the queue positions corresponding to all of it, and deciding whether to re-assign queue positions to the target data that was not processed successfully.
For example, suppose the target data determined at a certain trigger is data A, B, C, and D; the data is processed and all of its queue positions are released, so the freed positions can be reused for other data to be processed. If A and B succeed while C and D fail, it is decided whether to map C and D to queue positions again and, if so, which positions. The failed data can also be placed toward the front of the queue so that it is more likely to be selected at the next trigger.
Before the first processing condition is triggered, the data processing queue is formed and the correspondence between data and positions may change dynamically; between any two adjacent triggers the correspondence may also change, for example when new data is mapped to existing or incremental positions after the last trigger, or when data that failed processing is re-mapped, producing the queue state seen at the next trigger.
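The first variant (release positions on success, retain on failure) can be sketched as follows; `handler` is a hypothetical callable standing in for whatever processing the embodiment performs:

```python
def process_targets(slots, targets, handler):
    """Process target data; release the positions of successfully processed
    data and retain the positions of data that failed, so it can be picked
    up again at the next processing-condition trigger."""
    failed = []
    for pos in sorted(slots):              # snapshot of positions, front first
        data = slots[pos]
        if data not in targets:
            continue
        if handler(data):
            del slots[pos]                  # release the queue position
        else:
            failed.append(data)             # position retained for retry
    return failed

slots = {1: "A", 2: "B", 3: "C", 4: "D"}
ok = {"A", "B"}                             # pretend only A and B process successfully
failed = process_targets(slots, targets=["A", "B", "C", "D"], handler=lambda d: d in ok)
assert failed == ["C", "D"]
assert slots == {3: "C", 4: "D"}            # A and B released, C and D retained
```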
In this embodiment, by constructing the data processing queue, managing the data to be processed according to the rules of the queue in a controllable way, and selecting the target data actually processed in the same way, the processing of data can be deferred, achieving a peak-clipping and valley-filling effect so that the amount of data processed per unit time tends to be stable. This avoids frequent calls to processing resources or capacity, keeps the amount of target data processed each round stable, and prevents sudden or instantaneous occupation of a large share of system or machine resources or large increases in system or machine load (i.e., load spikes), improving data processing efficiency and reliability.
As shown in fig. 5, in the third embodiment of the present specification, a memory management system forms a memory management queue, where a position in the memory management queue is used to correspond to a memory-occupying object; the memory management system determines a target object according to the memory management queue and processes the target object.
In this embodiment, by constructing the memory management queue, managing the memory-occupying objects according to the rules of the queue in a controllable manner, and selecting the target objects actually processed according to those rules, frequent invocation of processing resources or processing capacity is avoided, the stability of the number of memory-occupying objects processed each time is ensured, and situations in which a large amount of system or machine resources is occupied or the system or machine load increases sharply are avoided, thereby improving memory management efficiency and reliability.
From the program perspective, the execution subject of the above flow may be a computer or a server or a corresponding memory management system. In addition, the execution subject may also be assisted by a third-party application client to execute the above-mentioned flow.
Fig. 6 shows a schematic flow chart of a memory management method in a fourth embodiment of this specification, and fig. 7 shows a memory management process in the fourth embodiment of this specification, and with reference to fig. 6 and 7, the memory management method in this embodiment includes:
S201: forming a memory management queue, wherein a position in the memory management queue is used to correspond to a memory-occupying object.
In S201, the memory management queue may refer to the data processing queue, the memory-occupying object may refer to the data to be processed, and the process of forming the memory management queue may refer to the process of forming the data processing queue in S101. A memory-occupying object occupies memory; for example, while a business-system process runs, continuous business requests and various timed tasks cause a large number of temporary objects to be allocated in memory, and these temporary objects may be memory-occupying objects. A memory-occupying object may also be another application that occupies memory.
S202: determining a target object according to the memory management queue.
Like S102, in this embodiment, after the memory management queue is formed, the target object may be determined according to the memory management queue. Further, the target object may be determined according to the memory management queue when a processing condition is triggered. In this embodiment, the processing-condition trigger includes: the occupied amount of memory reaches a first limit value or a first percentage;
and/or,
the number of actually corresponding positions in the memory management queue reaches a second limit value;
and/or,
a predetermined processing time is reached;
and/or,
the next processing time determined by the number of actually corresponding positions in the data processing queue at the last processing-condition trigger is reached.
Each of the above processing conditions is further explained below:
(1) the occupied memory amount reaches a first limit or a first percentage.
Wherein both the first limit and the first percentage are modifiable.
(2) The number of actual corresponding positions of the memory management queue reaches a second limit value.
Reference may be made to (1) in S102; the "second limit value" here corresponds to the "first limit value" in S102 (1).
(3) A predetermined processing time is reached.
Reference may be made to (2) in S102.
(4) The next processing time determined by the actual corresponding position number in the data processing queue triggered by the last processing condition is reached.
Reference may be made to (3) in S102.
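The trigger conditions above can be combined into a single check, sketched below. The limit values, percentage, and interval are illustrative defaults only (the text states they are modifiable), and the function name and parameters are hypothetical:

```python
import time

def should_trigger(mem_used, mem_total, queue_len, last_trigger_time,
                   first_limit=None, first_percent=0.8,
                   second_limit=64, interval=5.0):
    """Return True when any configured processing condition holds:
    memory occupancy reaching a first limit value or first percentage,
    queue positions reaching a second limit value, or a predetermined
    processing time elapsing since the last trigger."""
    now = time.monotonic()
    if first_limit is not None and mem_used >= first_limit:
        return True                     # occupied memory reaches first limit value
    if mem_used / mem_total >= first_percent:
        return True                     # occupied memory reaches first percentage
    if queue_len >= second_limit:
        return True                     # corresponding positions reach second limit
    if now - last_trigger_time >= interval:
        return True                     # predetermined processing time reached
    return False
```

Since the conditions are combined with "and/or", the check short-circuits on the first condition that holds.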
In the present embodiment, the target object may refer to the target data in S102, and the determination of the target object may refer to the determination of the target data in S102. That is, when any processing condition is triggered, the memory-occupying objects corresponding to the front fourth number of positions in the memory management queue at that trigger are taken as target objects. If the number of actually corresponding positions in the memory management queue is smaller than the fourth number, the memory-occupying objects corresponding to all actually corresponding positions in the memory management queue are taken as target objects. If no position in the memory management queue corresponds to any object, or the number of positions is zero, or the memory management queue has not been formed, then zero target objects are determined, i.e. no target object is determined. The fourth number may be modified.
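The fourth-number selection rule, including the fewer-than-fourth-number fallback and the empty-queue case, can be sketched as (a minimal illustration; the function name is hypothetical):

```python
def select_targets(queue, fourth_number):
    """Take the objects at the front `fourth_number` positions as targets;
    if the queue holds fewer, take everything actually enqueued;
    if nothing corresponds, determine zero targets."""
    if not queue:
        return []                        # no positions correspond: zero targets
    count = min(fourth_number, len(queue))
    return [queue[i] for i in range(count)]
```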
S203: processing the target object.
In this embodiment, processing the target object may include: cleaning up the target object to release the memory it occupies, i.e., reclaiming the memory.
Further, processing the target object includes:
processing the target object, removing the queue positions corresponding to target objects that were successfully processed, and determining whether to remove, retain, or adjust the queue positions corresponding to target objects that were not successfully processed; or processing the target object, releasing the queue positions corresponding to the target objects, and determining whether to re-determine queue positions for target objects that were not successfully processed. The specific process may refer to S103.
In this embodiment, by constructing the memory management queue, managing the memory-occupying objects according to the rules of the queue in a controllable manner, and selecting the target objects actually processed according to those rules, the processing of memory-occupying objects can be delayed so as to achieve the effect of peak clipping and valley filling: the amount of memory-occupying objects processed per unit time tends to be stable. This avoids frequent invocation of processing resources or processing capacity, ensures the stability of the number of target objects processed each time, and prevents situations in which a large amount of system or machine resources is suddenly or instantaneously occupied or the system or machine load spikes, thereby improving memory management efficiency and reliability.
The subject (e.g., a processing system or processing machine) that processes the target object in this embodiment may be the same as or different from the subject that processes the target data in the second embodiment, and may be selected according to actual circumstances.
As shown in fig. 8, a fifth embodiment of the present specification provides a data processing apparatus including:
a queue module 301, configured to form a data processing queue, where a position in the data processing queue is used for corresponding to data to be processed;
a target module 302, configured to determine target data according to the data processing queue;
a processing module 303, configured to process the target data.
Optionally, forming the data processing queue includes:
a first number is determined, and a data processing queue having a first number of positions is generated.
Optionally, forming a data processing queue, where a position in the data processing queue is used for corresponding to the data to be processed includes:
after a certain data to be processed appears, determining the corresponding position of the data to be processed to form a data processing queue with the corresponding position;
determining incremental positions corresponding to data to be processed that existed before the certain data to be processed appeared and/or data to be processed that newly appears after it;
for any increment position, adding the increment position to an existing data processing queue before the increment position appears to form an updated data processing queue;
or,
forming a data processing queue, wherein positions in the data processing queue for corresponding to data to be processed comprise:
forming a data processing queue having a second number of locations, the locations in the data processing queue for corresponding data to be processed;
when the second number of positions are all corresponding, determining corresponding incremental positions of the data to be processed without corresponding positions;
and for any increment position, adding the increment position to the existing data processing queue before the increment position appears to form an updated data processing queue.
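The second variant above, a queue with a fixed second number of positions that grows by incremental positions once all existing positions correspond, can be sketched as follows. `GrowableQueue` and its method names are illustrative, not from the specification:

```python
class GrowableQueue:
    """A second number of positions is formed up front; once all are
    occupied, incremental positions are appended, forming the updated
    data processing queue."""
    def __init__(self, second_number):
        self.second_number = second_number
        self.slots = [None] * second_number   # pre-formed positions

    def correspond(self, item):
        """Correspond `item` to a free existing position, or add an
        incremental position; return the position index."""
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = item          # reuse an existing position
                return i
        self.slots.append(item)               # add an incremental position
        return len(self.slots) - 1
```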
Optionally, the queue position corresponding to the data to be processed is determined according to a time characteristic and/or a priority characteristic of the data to be processed;
wherein the time characteristic includes the generation time and/or the predicted processing time of the data to be processed.
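Ordering queue positions by a priority characteristic and a time characteristic can be sketched with a heap. The `(priority, generation_time)` key is one plausible reading of the text, and the class name is hypothetical:

```python
import heapq
import itertools

class PriorityPositionQueue:
    """Queue positions ordered by (priority, generation time) so that
    higher-priority (lower number), earlier-generated data sits nearer
    the front of the data processing queue."""
    def __init__(self):
        self._heap = []
        self._tick = itertools.count()   # tie-breaker for equal keys

    def push(self, data, priority, gen_time):
        heapq.heappush(self._heap, (priority, gen_time, next(self._tick), data))

    def pop_front(self):
        """Remove and return the data at the front queue position."""
        return heapq.heappop(self._heap)[-1]
```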
Optionally, determining the target data according to the data processing queue includes:
and when the processing condition is triggered, determining target data according to the data processing queue.
Optionally, the processing condition trigger includes:
the number of actually corresponding positions in the data processing queue reaches a first limit value; and/or,
a predetermined processing time is reached; and/or,
the next processing time determined by the number of actually corresponding positions in the data processing queue at the last processing-condition trigger is reached.
Optionally, the reaching of the predetermined processing time includes:
one or more time intervals set by the last processing condition trigger are reached.
Optionally, determining the target data according to the data processing queue includes:
taking the data to be processed corresponding to the front third number of positions in the data processing queue as target data;
and if the number of the positions actually corresponding to the data processing queue is smaller than the third number, taking the data to be processed corresponding to all the positions actually corresponding to the data processing queue as target data.
Optionally, processing the target data includes:
processing the target data, removing the queue position corresponding to the target data which is successfully processed, and determining whether to remove or reserve or adjust the queue position corresponding to the target data which is not successfully processed; or processing the target data, releasing the queue position corresponding to the target data, and determining whether to re-determine the queue position corresponding to the target data which is not processed successfully.
As shown in fig. 9, a sixth embodiment of the present disclosure provides a memory management device, including:
a queue module 401, configured to form a memory management queue, where a position in the memory management queue is used to correspond to a memory-occupying object;
a target module 402, configured to determine a target object according to the memory management queue;
a processing module 403, configured to process the target object.
Optionally, forming the memory management queue includes:
a first number is determined, and a memory management queue having a first number of locations is generated. The "first number" here may be the same as or different from the "first number" in the previous embodiments.
Optionally, forming a memory management queue, where a position in the memory management queue is used for a corresponding memory occupied object includes:
after a certain memory occupied object appears, determining the corresponding position of the memory occupied object to form a memory management queue with the corresponding position;
determining incremental positions corresponding to memory-occupying objects that existed before the certain memory-occupying object appeared and/or memory-occupying objects that newly appear after it;
for any increment position, adding the increment position to an existing memory management queue before the increment position appears to form an updated memory management queue;
or,
forming a memory management queue, wherein the position in the memory management queue is used for corresponding to a memory occupied object, and the method comprises the following steps:
forming a memory management queue with a second number of positions, wherein the positions in the memory management queue are used for corresponding memory occupied objects;
when the second number of positions are all corresponding, determining corresponding incremental positions of the memory occupied objects without corresponding positions;
and for any increment position, adding the increment position to the existing memory management queue before the increment position appears to form an updated memory management queue.
The "second number" here may be the same as or different from the "second number" in the previous embodiments.
Optionally, the queue position corresponding to the memory-occupying object is determined according to a time characteristic and/or a priority characteristic of the memory-occupying object;
wherein the time characteristic includes the generation time and/or the predicted processing time of the memory-occupying object.
Optionally, determining the target object according to the memory management queue includes:
and when the processing condition is triggered, determining a target object according to the memory management queue.
Optionally, the processing condition trigger includes:
the occupied amount of the memory reaches a first limit value or a first percentage; and/or,
the number of actually corresponding positions in the memory management queue reaches a second limit value; and/or,
a predetermined processing time is reached; and/or,
the next processing time determined by the number of actually corresponding positions in the data processing queue at the last processing-condition trigger is reached.
Optionally, the reaching of the predetermined processing time includes:
one or more time intervals set by the last processing condition trigger are reached.
Optionally, determining the target object according to the memory management queue includes:
taking the memory-occupying objects corresponding to the front fourth number of positions in the memory management queue as target objects;
and if the number of the actual corresponding positions of the memory management queue is smaller than the fourth number, taking all the memory occupied objects corresponding to the actual corresponding positions of the memory management queue as target objects.
Optionally, processing the target object includes:
and cleaning the target object to release the occupied memory of the target object.
Optionally, processing the target object includes:
processing the target object, removing the queue position corresponding to the target object which is successfully processed, and determining whether to remove or reserve or adjust the queue position corresponding to the target object which is not successfully processed; or processing the target object, releasing the queue position corresponding to the target object, and determining whether to re-determine the queue position corresponding to the target object with unsuccessful processing.
A seventh embodiment of the present specification provides a data processing apparatus comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
forming a data processing queue, wherein the position in the data processing queue is used for corresponding to data to be processed;
determining target data according to the data processing queue;
and processing the target data.
An eighth embodiment of the present disclosure provides a memory management device, including:
at least one processor;
and a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
forming a memory management queue, wherein the position in the memory management queue is used for corresponding to a memory occupied object;
determining a target object according to the memory management queue;
and processing the target object.
A ninth embodiment of the present specification provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, perform the steps of:
forming a data processing queue, wherein the position in the data processing queue is used for corresponding to data to be processed;
determining target data according to the data processing queue;
and processing the target data.
A tenth embodiment of the present specification provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, perform the steps of:
forming a memory management queue, wherein the position in the memory management queue is used for corresponding to a memory occupied object;
determining a target object according to the memory management queue;
and processing the target object.
The above embodiments may be used in combination.
While certain embodiments of the present disclosure have been described above, other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily have to be in the particular order shown or in sequential order to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, device, and non-volatile computer-readable storage medium embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and in relation to the description, reference may be made to some portions of the description of the method embodiments.
The apparatus, the device, the nonvolatile computer readable storage medium, and the method provided in the embodiments of the present specification correspond to each other, and therefore, the apparatus, the device, and the nonvolatile computer storage medium also have similar advantageous technical effects to the corresponding method.
In the 1990s, improvements to a technology could be clearly distinguished as improvements in hardware (e.g., improvements to circuit structures such as diodes, transistors, and switches) or improvements in software (improvements to method flows). However, as technology has advanced, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement to a method flow cannot be realized by hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program a digital system onto a single PLD themselves, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application-Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing a controller purely as computer-readable program code, the same functionality can be implemented entirely by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for performing the various functions may also be regarded as structures within the hardware component. Indeed, means for performing the functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, the present specification embodiments may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (22)

1. A data processing method, characterized by comprising:
forming a data processing queue, wherein the position in the data processing queue is used for corresponding to data to be processed;
determining target data according to the data processing queue;
and processing the target data.
2. The method of claim 1, wherein forming a data processing queue comprises:
a first number is determined, and a data processing queue having a first number of positions is generated.
3. The method of claim 1,
forming a data processing queue, wherein positions in the data processing queue for corresponding to data to be processed comprise:
after a certain data to be processed appears, determining the corresponding position of the data to be processed to form a data processing queue with the corresponding position;
determining an increment position corresponding to existing data to be processed before the certain data to be processed appears and/or newly appearing data to be processed after the certain data to be processed;
for any increment position, adding the increment position to an existing data processing queue before the increment position appears to form an updated data processing queue;
or,
forming a data processing queue, wherein positions in the data processing queue for corresponding to data to be processed comprise:
forming a data processing queue having a second number of locations, the locations in the data processing queue for corresponding data to be processed;
when the second number of positions are all corresponding, determining corresponding incremental positions of the data to be processed without corresponding positions;
and for any increment position, adding the increment position to the existing data processing queue before the increment position appears to form an updated data processing queue.
4. The method according to claim 1, wherein the queue position corresponding to the data to be processed is determined according to the time characteristic and/or the priority characteristic of the data to be processed;
wherein the time characteristic comprises the generation time and/or the predicted processing time of the data to be processed.
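One way to realize claim 4 is to derive a sort key from the priority and time characteristics and keep the queue ordered by it. The specific ordering below (higher priority first, then earlier generation time, then shorter predicted processing time) is an assumption for illustration only:

```python
import heapq

def position_key(priority, generation_time, predicted_time):
    # Hypothetical ordering for claim 4: a queue position is determined by
    # the priority characteristic and the time characteristics.
    # Python's heapq is a min-heap, so priority is negated.
    return (-priority, generation_time, predicted_time)

heap = []
heapq.heappush(heap, (position_key(1, 1.0, 0.5), "low"))
heapq.heappush(heap, (position_key(9, 2.0, 0.5), "high"))
first = heapq.heappop(heap)[1]  # highest-priority datum sits at the head
```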
5. The method of claim 1, wherein determining target data from the data processing queue comprises:
and when the processing condition is triggered, determining target data according to the data processing queue.
6. The method of claim 5, wherein the processing condition being triggered comprises:
the number of actually corresponded positions in the data processing queue reaches a first limit value;
and/or,
a predetermined processing time is reached;
and/or,
a next processing time is reached, the next processing time being determined from the number of actually corresponded positions in the data processing queue at the last processing-condition trigger.
7. The method of claim 6, wherein reaching the predetermined processing time comprises:
one or more time intervals set at the last processing-condition trigger are reached.
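The three alternative trigger conditions of claim 6 combine naturally into one predicate. A minimal sketch, with the function name and parameters chosen for illustration (the claim does not prescribe this interface):

```python
def condition_triggered(queue_len, first_limit, now, scheduled_time, next_time=None):
    # Claim 6 sketch: processing is triggered when the number of actually
    # corresponded positions reaches the first limit value, a predetermined
    # processing time is reached, and/or a next processing time derived
    # from the previous trigger is reached.
    return (queue_len >= first_limit
            or now >= scheduled_time
            or (next_time is not None and now >= next_time))

full = condition_triggered(5, 5, now=0.0, scheduled_time=60.0)   # count limit hit
idle = condition_triggered(2, 5, now=0.0, scheduled_time=60.0)   # nothing triggers
```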
8. The method of any of claims 1 to 7, wherein determining target data from the data processing queue comprises:
taking, as target data, the data to be processed corresponding to the leading third number of positions in the data processing queue;
and if the number of positions actually corresponded to in the data processing queue is smaller than the third number, taking, as target data, the data to be processed corresponding to all of the actually corresponded positions.
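Both branches of claim 8 (take the leading "third number" of positions, or everything if the queue holds fewer) reduce to a bounded slice. A sketch, with hypothetical naming:

```python
def determine_target_data(queue, third_number):
    # Claim 8 sketch: take the data in the leading `third_number` positions;
    # slicing already caps at the queue's actual length, which covers the
    # "fewer positions than the third number" branch.
    return list(queue)[:third_number]

targets = determine_target_data(["a", "b", "c"], 2)
everything = determine_target_data(["a"], 5)
```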
9. The method of any of claims 1 to 7, wherein processing the target data comprises:
processing the target data, removing the queue positions corresponding to target data that was processed successfully, and determining whether to remove, retain, or adjust the queue positions corresponding to target data that was not processed successfully; or processing the target data, releasing the queue positions corresponding to the target data, and determining whether to re-determine queue positions for target data that was not processed successfully.
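The first alternative of claim 9 — drop positions for successes, keep failures pending a later decision — can be sketched as below. The handler callback and return convention are assumptions for illustration:

```python
def process_and_update(queue, targets, handler):
    # Claim 9 sketch (first alternative): remove the queue position of each
    # successfully processed target; retain the positions of failed targets
    # so that removing, retaining, or adjusting them can be decided later.
    failed = []
    for target in targets:
        if handler(target):       # handler returns True on success
            queue.remove(target)  # success: drop the corresponding position
        else:
            failed.append(target)
    return failed

queue = ["a", "bad", "c"]
failed = process_and_update(queue, ["a", "bad"], lambda t: t != "bad")
```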
10. A memory management method, characterized by comprising:
forming a memory management queue, wherein a position in the memory management queue is used for corresponding to a memory-occupying object;
determining a target object according to the memory management queue;
and processing the target object.
11. The method of claim 10, wherein determining a target object from the memory management queue comprises:
taking, as target objects, the memory-occupying objects corresponding to the leading fourth number of positions in the memory management queue;
and if the number of positions actually corresponded to in the memory management queue is smaller than the fourth number, taking, as target objects, the memory-occupying objects corresponding to all of the actually corresponded positions.
12. The method of claim 10, wherein processing the target object comprises:
cleaning the target object so as to release the memory occupied by the target object.
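Claim 12's cleanup step can be modeled in a garbage-collected language as dropping the last reference to the target object, after which its memory can be reclaimed. The dictionary-as-memory model and the byte-counting are illustrative assumptions:

```python
def clean_targets(memory_map, target_keys):
    # Claim 12 sketch: cleaning a target object is modeled as removing the
    # reference held in `memory_map`, so the occupied memory can be released
    # (in CPython, freed once no references remain).
    released_bytes = 0
    for key in target_keys:
        obj = memory_map.pop(key, None)  # tolerate already-removed keys
        if obj is not None:
            released_bytes += len(obj)
    return released_bytes

memory = {"obj1": b"xxxx", "obj2": b"yy"}
freed = clean_targets(memory, ["obj1", "missing"])
```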
13. The method of any of claims 10 to 12, wherein processing the target object comprises:
processing the target object, removing the queue positions corresponding to target objects that were processed successfully, and determining whether to remove, retain, or adjust the queue positions corresponding to target objects that were not processed successfully; or processing the target object, releasing the queue positions corresponding to the target object, and determining whether to re-determine queue positions for target objects that were not processed successfully.
14. A data processing apparatus, comprising:
the queue module is used for forming a data processing queue, and the position in the data processing queue is used for corresponding to data to be processed;
the target module is used for determining target data according to the data processing queue;
and the processing module is used for processing the target data.
15. The apparatus of claim 14,
wherein forming a data processing queue, positions in which are used for corresponding to data to be processed, comprises:
after a certain item of data to be processed appears, determining its corresponding position, so as to form a data processing queue having that position;
determining increment positions corresponding to data to be processed that already existed before the certain item appeared and/or data to be processed that newly appears afterwards;
for any increment position, adding the increment position to the data processing queue existing before that increment position appeared, to form an updated data processing queue;
or,
forming a data processing queue, positions in which are used for corresponding to data to be processed, comprises:
forming a data processing queue having a second number of positions, the positions in the data processing queue being used for corresponding to data to be processed;
when all of the second number of positions have been corresponded to, determining increment positions corresponding to the data to be processed that has no corresponding position;
and for any increment position, adding the increment position to the data processing queue existing before that increment position appeared, to form an updated data processing queue.
16. The apparatus of claim 14 or 15, wherein determining target data from the data processing queue comprises:
taking, as target data, the data to be processed corresponding to the leading third number of positions in the data processing queue;
and if the number of positions actually corresponded to in the data processing queue is smaller than the third number, taking, as target data, the data to be processed corresponding to all of the actually corresponded positions.
17. A memory management device, comprising:
the queue module is used for forming a memory management queue, wherein a position in the memory management queue is used for corresponding to a memory-occupying object;
the target module is used for determining a target object according to the memory management queue;
and the processing module is used for processing the target object.
18. The apparatus of claim 17, wherein determining a target object from the memory management queue comprises:
taking, as target objects, the memory-occupying objects corresponding to the leading fourth number of positions in the memory management queue;
and if the number of positions actually corresponded to in the memory management queue is smaller than the fourth number, taking, as target objects, the memory-occupying objects corresponding to all of the actually corresponded positions.
19. A data processing apparatus, characterized by comprising:
at least one processor;
and,
a memory communicatively coupled to the at least one processor;
wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
forming a data processing queue, wherein the position in the data processing queue is used for corresponding to data to be processed;
determining target data according to the data processing queue;
and processing the target data.
20. A memory management device, comprising:
at least one processor;
and,
a memory communicatively coupled to the at least one processor;
wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
forming a memory management queue, wherein a position in the memory management queue is used for corresponding to a memory-occupying object;
determining a target object according to the memory management queue;
and processing the target object.
21. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, perform the steps of:
forming a data processing queue, wherein the position in the data processing queue is used for corresponding to data to be processed;
determining target data according to the data processing queue;
and processing the target data.
22. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, perform the steps of:
forming a memory management queue, wherein a position in the memory management queue is used for corresponding to a memory-occupying object;
determining a target object according to the memory management queue;
and processing the target object.
CN201811393931.7A 2018-11-21 2018-11-21 Data processing and memory management method, apparatus, device and medium Pending CN109471731A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811393931.7A CN109471731A (en) 2018-11-21 2018-11-21 Data processing and memory management method, apparatus, device and medium

Publications (1)

Publication Number Publication Date
CN109471731A (en) 2019-03-15

Family

ID=65674549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811393931.7A Pending CN109471731A (en) Data processing and memory management method, apparatus, device and medium

Country Status (1)

Country Link
CN (1) CN109471731A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1542623A (en) * 2003-04-29 2004-11-03 华为技术有限公司 Method for implementing memory management
CN104199790A (en) * 2014-08-21 2014-12-10 北京奇艺世纪科技有限公司 Data processing method and device
CN106802826A (en) * 2016-12-23 2017-06-06 中国银联股份有限公司 A kind of method for processing business and device based on thread pool
CN107885789A (en) * 2017-10-18 2018-04-06 上海瀚之友信息技术服务有限公司 A kind of data relay system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201012

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20201012

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: P.O. Box 847, fourth floor, Capital Building, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.
