CN111158610B - Method, device and equipment for synchronously setting cache acceleration and readable medium


Info

Publication number
CN111158610B
Authority
CN
China
Prior art keywords
client
command
pause
server
flag bit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911421298.2A
Other languages
Chinese (zh)
Other versions
CN111158610A (en)
Inventor
侯红生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN201911421298.2A
Publication of CN111158610A
Application granted
Publication of CN111158610B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • G06F3/0611Improving I/O performance in relation to response time
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659Command handling arrangements, e.g. command buffers, queues, command scheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a method for synchronously setting cache acceleration, which includes the following steps: the server issues a cache-acceleration setting command to the client; the client, in response to receiving the command, sets a pause and processes the command; the client, in response to completing the command, appends a success flag bit and feeds it back to the server; the server receives the success flag bit and confirms it; and the client cancels the pause in response to receiving the confirmation information. The invention also discloses a device, computer equipment and a readable storage medium. According to the invention, a pause operation is set at the client; after the client finishes processing, it reports its state to the server; the server receives the client's report and the pause operation is then cancelled, completing the whole start or stop procedure. This ensures that, each time a command to start or stop the cache is executed, the previous command has already been processed to completion, so conflicts are avoided.

Description

Method, device and equipment for synchronously setting cache acceleration and readable medium
Technical Field
The invention relates to the technical field of hard-disk caching, and in particular to a method, a device, equipment and a readable medium for synchronously setting cache acceleration.
Background
In Linux systems, the limited performance of HDDs makes the refreshing of cached data slow. To speed up the writing of cached data to disk, the existing approach is to accelerate the cache with an SSD. Existing methods for controlling the start of SSD cache acceleration mostly use asynchronous processing: the operation is treated as complete on the server side while a background thread keeps executing the related actions, which meets the relevant requirements only when there are no strict timing requirements.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, a device and a readable medium for synchronously setting cache acceleration. A pause operation is set at the client; after the client finishes processing, it reports its state to the server; the server receives the client's report and the pause operation is then cancelled, completing the whole start or stop procedure. This ensures that, each time a command to start or stop the cache is executed, the previous command has already been processed to completion, so conflicts are avoided.
Based on the above object, an aspect of the embodiments of the present invention provides a method for synchronously setting cache acceleration, including the following steps: the server issues a cache-acceleration setting command to the client; the client, in response to receiving the command, sets a pause and processes the command; the client, in response to completing the command, appends a success flag bit and feeds it back to the server; the server receives the success flag bit and confirms it; and the client cancels the pause in response to receiving the confirmation information.
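For illustration only, a minimal server-side sketch of this handshake is given below (in Python). It assumes a simple JSON-over-socket exchange; the command name, the field names and the socket transport are assumptions made for the sketch and are not the actual implementation of the embodiments.

    import json
    import socket

    # Illustrative protocol markers; the names are assumptions for this sketch.
    CMD_SET_CACHE_ACCEL = "set_cache_accel"   # action field carries "start" or "stop"
    MSG_CONFIRM = "confirm"                    # server acknowledgement of the success flag bit

    def server_set_cache_accel(sock: socket.socket, action: str) -> bool:
        """Issue a cache-acceleration setting command and complete the synchronous handshake."""
        # Step 1: issue the start/stop cache-acceleration command to the client.
        sock.sendall(json.dumps({"cmd": CMD_SET_CACHE_ACCEL, "action": action}).encode())

        # Step 4: wait for the client's reply and check the success flag bit.
        reply = json.loads(sock.recv(4096).decode())
        if not reply.get("success"):
            return False

        # Confirm the success flag bit so the client can cancel its pause (step 5).
        sock.sendall(json.dumps({"cmd": MSG_CONFIRM}).encode())
        return True

Because the call does not return until the confirmation has been sent, the server only ever has one start or stop command outstanding per client, which is the synchronous behaviour the method aims at.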
In some embodiments, the step of the server issuing the cache-acceleration setting command to the client further includes: the server issues a command to start or stop cache acceleration to the client.
In some embodiments, the client, in response to receiving the command, setting a pause and processing the command further includes: the client, in response to receiving the command, determines the client state; and in response to the state indicating that no pause is set, the client sets a pause and processes the command.
In some embodiments, the method further includes: in response to the state indicating that a pause is already set, the client refuses to process the command.
In some embodiments, the server receiving the success flag bit and confirming the success flag bit further includes: displaying, through a display module, that the operation to start or stop cache acceleration succeeded.
In some embodiments, the client cancelling the pause in response to receiving the confirmation information further includes: the client releases the paused state in response to receiving the confirmation information.
In some embodiments, the cache is configured as an SSD disk cache under a Linux system.
In another aspect of the embodiments of the present invention, a device for synchronously setting cache acceleration is also provided, including:
a server configured to issue a cache-acceleration setting command; and
a client in remote communication with the server and configured to: in response to receiving the command, set a pause and process the command; and in response to completing the command, append a success flag bit and feed it back to the server;
wherein the server is further configured to receive the success flag bit, confirm the success flag bit, and send the confirmation information to the client to cancel the client's pause.
In another aspect of the embodiments of the present invention, there is also provided a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions being executable by the processor to implement the method steps as above.
In a further aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, which stores a computer program that, when executed by a processor, implements the above method steps.
The invention has the following beneficial technical effects: a pause operation is set at the client; after the client finishes processing, it reports its state to the server; the server receives the client's report, the pause operation is then cancelled, and the whole start or stop procedure is completed. This ensures that, each time a command to start or stop the cache is executed, the previous command has already been processed to completion, so conflicts are avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present invention, and those skilled in the art can derive other embodiments from them without creative effort.
Fig. 1 is a schematic diagram of an embodiment of a method for setting cache acceleration in synchronization according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish between two entities of the same name that are not identical, or between non-identical parameters. "First" and "second" are merely for convenience of description and should not be construed as limiting the embodiments of the present invention; this is not repeated in the following embodiments.
In view of the foregoing, a first aspect of the embodiments of the present invention provides an embodiment of a method for setting cache acceleration synchronously. Fig. 1 is a schematic diagram illustrating an embodiment of a method for setting cache acceleration in synchronization according to the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
s1, the server side issues a cache acceleration setting command to the client side;
s2, the client responds to the received command, sets a pause and processes the command;
s3, the client responds to the completion of command processing, adds a success flag bit and feeds back the success flag bit to the server;
s4, the server receives the success flag bit and confirms the success flag bit; and
s5, the client cancels the pause in response to receiving the confirmation information.
In some embodiments of the present invention, when synchronous setting of SSD disk cache acceleration is performed in a Linux system, a pause action is introduced when the client executes the start or stop command. Once all of the client's commands have been executed, the success flag is appended and sent to the server; the server confirms it on receipt, the pause action is cancelled, and the whole setting process is complete.
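A matching client-side sketch is given below, again only as an assumed illustration: the pause is modelled as a boolean flag, and apply_cache_setting() stands in for whatever start or stop work the client actually performs on the SSD cache.

    import json
    import socket

    def apply_cache_setting(action: str) -> None:
        # Placeholder for the real work of starting or stopping SSD cache acceleration.
        print(f"cache acceleration set to: {action}")

    class CacheAccelClient:
        """Illustrative client that pauses itself while a setting command is processed."""

        def __init__(self, sock: socket.socket):
            self.sock = sock
            self.paused = False  # the pause introduced in step S2

        def handle_set_command(self, command: dict) -> None:
            # S2: set the pause, then process the start/stop command.
            self.paused = True
            apply_cache_setting(command["action"])

            # S3: append the success flag bit and feed the result back to the server.
            self.sock.sendall(json.dumps({**command, "success": True}).encode())

            # S5: cancel the pause only after the server's confirmation arrives.
            ack = json.loads(self.sock.recv(4096).decode())
            if ack.get("cmd") == "confirm":
                self.paused = False  # further commands may now be handled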
In some embodiments of the present invention, the server issuing the cache-acceleration setting command to the client further includes: the server issues a command to start or stop cache acceleration to the client.
In some embodiments of the invention, the client, in response to receiving the command, setting a pause and processing the command further includes: the client, in response to receiving the command, determines the client state; and in response to the state indicating that no pause is set, the client sets a pause and processes the command.
In some embodiments of the invention, the method further includes: in response to the state indicating that a pause is already set, the client refuses to process the command. For example, if a stop command is received while the start-acceleration command is still being executed, the stop command is not processed until the start task has completed, which avoids faults.
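This conflict-avoidance rule can be expressed as a small guard placed in front of the handler sketched earlier; the busy reply used here is only an assumed way of signalling the refusal back to the server.

    import json

    def dispatch(client, command: dict) -> None:
        """Entry point for every command the client receives from the server."""
        if client.paused:
            # State is "pause set": refuse the new command so that the one still
            # running (e.g. a start) completes before a stop is handled.
            client.sock.sendall(json.dumps({"success": False, "busy": True}).encode())
            return
        # State is "no pause set": proceed as in the handler sketched above.
        client.handle_set_command(command)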
In some embodiments of the present invention, the server receiving the success flag bit and confirming the success flag bit further includes: displaying, through a display module, that the operation to start or stop cache acceleration succeeded.
In some embodiments of the invention, the client cancelling the pause in response to receiving the confirmation information further includes: the client releases the paused state in response to receiving the confirmation information. At this point, the client may continue to process other commands issued by the server.
In some embodiments of the present invention, the cache is configured as an SSD disk cache under a Linux system.
It should be particularly noted that the steps in the embodiments of the above method for synchronously setting cache acceleration may be interchanged, replaced, added or deleted. Such reasonable permutations and combinations therefore also fall within the scope of the present invention, and the scope of the present invention should not be limited to the embodiments described.
In view of the foregoing, a second aspect of the embodiments of the present invention provides a device for synchronously setting cache acceleration, including:
a server configured to issue a cache-acceleration setting command; and
a client in remote communication with the server and configured to: in response to receiving the cache-acceleration command, set a pause and process the command; and in response to completing the cache-acceleration command, append a success flag bit and feed it back to the server;
wherein the server is further configured to receive the success flag bit, confirm the success flag bit, and send a confirmation message to the client to cancel the client's pause.
In view of the above object, a third aspect of the embodiments of the present invention provides a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions being executable by the processor to implement the method steps as above.
The invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, performs the method as above.
Finally, it should be noted that, as those of ordinary skill in the art will appreciate, all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the related hardware. The program of the method for synchronously setting cache acceleration may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium of the program may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like. The embodiments of the computer program may achieve the same or similar effects as any of the above-described method embodiments.
Furthermore, the methods disclosed according to embodiments of the present invention may also be implemented as a computer program executed by a processor, which may be stored in a computer-readable storage medium. Which when executed by a processor performs the above-described functions defined in the methods disclosed in embodiments of the invention.
Further, the above method steps and system elements may also be implemented using a controller and a computer readable storage medium for storing a computer program for causing the controller to implement the functions of the above steps or elements.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example, and not limitation, nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which can act as external cache memory. By way of example and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is meant to be exemplary only and is not intended to suggest that the scope of the disclosure of the embodiments of the invention, including the claims, is limited to these examples. Within the spirit of the embodiments of the invention, technical features in the above embodiment or in different embodiments may also be combined, and many other variations of the different aspects of the embodiments exist which, for brevity, are not described in detail. Therefore, any omissions, modifications, substitutions, improvements and the like made within the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (8)

1. A method for synchronously setting cache acceleration is characterized by comprising the following steps:
the server side issues a cache acceleration setting command to the client side;
the client responds to the received command, sets a pause and processes the command;
the client responds to the completion of the command processing, adds a successful flag bit and feeds back the successful flag bit to the server;
the server receives the successful flag bit and confirms the successful flag bit; and
the client cancels the pause in response to receiving the confirmation information;
wherein the client, in response to receiving the command, setting a pause and processing the command further comprises:
the client responds to the received command and judges the client state;
in response to the status being that the pause is not set, the client sets a pause and processes the command;
and in response to the status being set to suspend, the client refuses to process the command.
2. The method of claim 1, wherein the step of the server issuing the set cache acceleration command to the client further comprises:
and the server side issues a cache opening or closing acceleration command to the client side.
3. The method of claim 1, wherein the server receives the success flag bit and acknowledges the success flag bit further comprises:
and displaying that the cache starting or closing acceleration operation is successful through the display module.
4. The method of claim 1, wherein the client, in response to receiving the acknowledgement, cancels the suspension further comprising:
the client releases the suspended state in response to receiving the acknowledgement information.
5. The method of claim 1, wherein the cache is configured as a ssd disk cache under a Linux system.
6. An apparatus for synchronously setting cache acceleration, comprising:
the server is configured to issue a set cache acceleration command;
the client sides are in remote communication connection with the server side and are configured to respond to the received command, set a pause and process the command; responding to the completion of the command processing, adding a successful flag bit and feeding back the successful flag bit to the server;
the server is further configured to receive the successful flag bit and confirm the successful flag bit; and sending the confirmation information to the client to cancel the client pause; the client is further configured to determine the client state in response to receiving the command; in response to the status being that the pause is not set, the client sets a pause and processes the command; and in response to the status being set to suspend, the client refuses to process the command.
7. A computer device, comprising:
at least one processor; and
a memory storing computer instructions executable on the processor, wherein the instructions, when executed by the processor, implement the steps of the method according to any one of claims 1 to 5.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201911421298.2A 2019-12-31 2019-12-31 Method, device and equipment for synchronously setting cache acceleration and readable medium Active CN111158610B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911421298.2A CN111158610B (en) 2019-12-31 2019-12-31 Method, device and equipment for synchronously setting cache acceleration and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911421298.2A CN111158610B (en) 2019-12-31 2019-12-31 Method, device and equipment for synchronously setting cache acceleration and readable medium

Publications (2)

Publication Number Publication Date
CN111158610A CN111158610A (en) 2020-05-15
CN111158610B (en) 2022-02-22

Family

ID=70560560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911421298.2A Active CN111158610B (en) 2019-12-31 2019-12-31 Method, device and equipment for synchronously setting cache acceleration and readable medium

Country Status (1)

Country Link
CN (1) CN111158610B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111737201B (en) * 2020-06-05 2022-11-22 苏州浪潮智能科技有限公司 Method for closing opened file, computer equipment and storage medium
CN111858538B (en) * 2020-06-19 2022-05-24 苏州浪潮智能科技有限公司 Method, device, equipment and medium for configuring BeeGFS quota by cluster
CN112463528B (en) * 2020-11-20 2022-07-12 苏州浪潮智能科技有限公司 In-band and out-band data interaction method, device, equipment and readable medium
CN112748884B (en) * 2021-01-16 2022-07-29 苏州浪潮智能科技有限公司 Distributed storage online upgrading method, device, equipment and readable medium
CN113992768B (en) * 2021-10-27 2023-05-12 西安微电子技术研究所 TTE network remote management method, TTE network remote management system, TTE network remote management equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183315A (en) * 2007-12-10 2008-05-21 华中科技大学 Paralleling multi-processor virtual machine system
CN104834558A (en) * 2015-05-19 2015-08-12 北京京东尚科信息技术有限公司 Method and system for processing data
CN106293623A (en) * 2015-05-18 2017-01-04 北京忆芯科技有限公司 Microinstruction sequence performs method and device thereof
CN107147620A (en) * 2017-04-19 2017-09-08 武汉票据交易中心有限公司 A kind of interacting message processing method, client and service end
CN109150558A (en) * 2017-06-15 2019-01-04 北京京东尚科信息技术有限公司 The methods, devices and systems of managing message queues node
WO2019078951A1 (en) * 2017-10-16 2019-04-25 Dropbox, Inc. Workflow functions of content management system enforced by client device


Also Published As

Publication number Publication date
CN111158610A (en) 2020-05-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant