CN111831655A - Data processing method, device, medium and electronic equipment - Google Patents

Data processing method, device, medium and electronic equipment

Info

Publication number
CN111831655A
CN111831655A (application CN202010587489.2A; granted as CN111831655B)
Authority
CN
China
Prior art keywords
block
data structure
data
block index
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010587489.2A
Other languages
Chinese (zh)
Other versions
CN111831655B (en)
Inventor
姜哓庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010587489.2A
Publication of CN111831655A
Application granted
Publication of CN111831655B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval of structured data, e.g. relational data
    • G06F 16/22 - Indexing; Data structures therefor; Storage structures
    • G06F 16/2228 - Indexing structures
    • G06F 16/2246 - Trees, e.g. B+trees
    • G06F 16/2272 - Management of indexing structures
    • G06F 16/24 - Querying
    • G06F 16/245 - Query processing
    • G06F 16/2455 - Query execution
    • G06F 16/24552 - Database cache management
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G06F 9/452 - Remote windowing, e.g. X-Window System, desktop virtualisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a data processing method, apparatus, medium, and electronic device. Embodiments of the disclosure construct a shared block index data structure in a cache; the block index data structure builds a log-structured merge tree with a hierarchical structure on top of a skip-list data structure. The block index data structure in the cache is retrieved according to read request information sent by a child image application, and when block data in the block index data structure satisfies the read request information, that block data is returned as the read request data. Meanwhile, the block index data structure is arranged hierarchically across the physical storage module according to response speed. In scenarios where a large number of desktop clouds start and run concurrently, common software data is almost always loaded and hot-spot data is frequently accessed. Embodiments of the disclosure improve startup and application loading speed, reduce access traffic to the physical storage module, establish a hierarchical shared block cache, and reduce cache occupation.

Description

Data processing method, device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a medium, and an electronic device for data processing.
Background
Cloud Service is a model for the addition, usage, and interaction of network-related services that provides dynamic, easily scalable, virtualized resources over the internet. With computation spread over a large number of distributed computers rather than local computers or remote servers, an enterprise data center operates more like the internet: users can switch resources to the applications they need and access computers and storage systems on demand.
In cloud computing, Infrastructure as a Service (IaaS) and server virtualization technologies have developed and spread rapidly, and desktop virtualization has broad application prospects. A terminal connects to a virtual machine (a virtual desktop) provided by IaaS in place of a personal desktop computer, which undoubtedly saves considerable office cost.
However, when a large number of desktop clouds are accessed in a concentrated manner during the same time period, extremely high instantaneous access traffic hits the IaaS virtual machine storage system, putting pressure on back-end storage and slowing concurrent startup.
For this reason, caching technologies are applied in this scenario to improve the concurrent startup speed of virtual machines and relieve the access pressure on back-end storage.
In the prior art, however, these caching technologies suffer from images occupying too much memory and overly complex algorithms, making the resource cost extremely high.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
An object of the present disclosure is to provide a data processing method, apparatus, medium, and electronic device, which can solve at least one of the above-mentioned technical problems. The specific scheme is as follows:
according to a specific implementation manner of the present disclosure, in a first aspect, the present disclosure provides a data processing method, applied to a parent mirror of a cloud service, including:
acquiring read request information of a first application sent by a child image;
obtaining cache access parameters corresponding to the first application based on the read request information;
retrieving a block index data structure in the cache according to the cache access parameter; the block index data structure comprises a multi-level tree data structure;
and when all the block data meeting the cache access parameter exist in the block index data structure, acquiring read request data based on all the block data.
According to a second aspect, the present disclosure provides an apparatus for data processing, including:
a read request information acquiring unit, configured to acquire read request information of the first application sent by the child image;
a cache access parameter obtaining unit, configured to obtain a cache access parameter corresponding to the first application based on the read request information;
the retrieval unit is used for retrieving the block index data structure in the cache according to the cache access parameter; the block index data structure comprises a multi-level tree data structure;
and the block data acquisition unit is used for acquiring the read request data based on all the block data when all the block data meeting the cache access parameter exist in the block index data structure.
According to a third aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of data processing according to any of the first aspects.
According to a fourth aspect thereof, the present disclosure provides an electronic device, comprising: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out a method of data processing according to any one of the first aspect.
Compared with the prior art, the scheme of the embodiment of the disclosure at least has the following beneficial effects:
the present disclosure provides a data processing method, apparatus, medium, and electronic device. The embodiment of the disclosure constructs a shared block index data structure in a cache, and the block index data structure generates a structure merging tree with a hierarchical structure on the basis of a skip list data structure. And retrieving the block index data structure in the cache according to the read request information sent by the sub-mirror image application, and returning the block data serving as the read request data when the block data of the block index data structure meets the read request information. Meanwhile, the block index data structure is hierarchically arranged in the physical storage module according to the response speed. For a scene with more reads and less writes, the hot spot block data at the uppermost layer is directly put into the cache through the block index of the block index data structure, and the devices with the access speeds decreasing can be sequentially used at the lower layer. In a complex application scene, the cache write-back rate is greatly improved. In a scenario of concurrent startup and concurrent use of a large amount of desktop clouds, common software data is almost inevitably loaded, and hotspot block data is frequently accessed. The embodiment of the disclosure improves the starting and application program loading speed, reduces the access flow to the physical storage module, establishes the hierarchical shared block cache, and reduces the cache occupation.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
FIG. 1 shows a flow diagram of a method of data processing according to an embodiment of the present disclosure;
FIG. 2 illustrates a parent mirror and child mirror relationship diagram of a method of data processing according to an embodiment of the present disclosure;
FIG. 3 illustrates a block index data structure diagram of a method of data processing according to an embodiment of the present disclosure;
FIG. 4 illustrates a workflow diagram of a multi-level shared cache system according to an embodiment of the disclosure;
FIG. 5 shows a block diagram of elements of an apparatus for data processing according to an embodiment of the present disclosure;
fig. 6 shows an electronic device connection structure schematic according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Alternative embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
A first embodiment, namely, an embodiment of a method of data processing, is provided for the present disclosure. The embodiment of the disclosure is applied to the parent mirror image of the cloud service.
The embodiments of the present disclosure will be described in detail with reference to fig. 1 to 3.
As shown in fig. 1, in step S101, read request information of a first application issued by a child image is obtained.
As shown in fig. 2, the embodiment of the present disclosure provides a multi-level shared cache system in which a parent image is deployed on a mirror server to provide access service for a mirror website. The parent image has the same service content as the main server but is installed on a different server; the mirror server and the main server can be located at different sites so as to share the load of the main server. The services of the parent image are available copies, not the original services. For example, the specific applications that the parent image provides to users include a desktop office service, a system software service, and a kernel data service. The parent image establishes an upper-layer storage child image for each virtual machine. The child image directly provides the specific application service to the user, acquiring and mounting the user's specific application from the parent image; that is, the desktop cloud virtual machine is created based on the child image mount volume. For example, if the first application of user A is the desktop office service, child image 1 acquires and mounts the desktop office service from the parent image; if the second application of user B is the system software service, child image 2 acquires and mounts the system software service from the parent image; if the third application of user C is the kernel data service, child image 3 acquires and mounts the kernel data service from the parent image; and if the second application of user D is the system software service, child image 4 acquires and mounts the system software service from the parent image.
Different desktop cloud virtual machines correspond to different child images; for the child images to share the same multi-level shared cache system, they must be derived from the same parent image.
The first application refers to a user's specific application, such as the desktop office service. In practice, when a user issues read request information for a first application, the child image is searched first; if the child image holds the service the user needs, the child image returns the request data in response to the read request information. When the corresponding service does not exist in the child image, the read request information of the first application is sent to the parent image, and once the child image has acquired the service the user needs (i.e., the read request data of the first application), it stores that service locally.
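The child-image read path described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the names `ParentImage` and `child_read` are hypothetical, and the child store is modeled as a plain dictionary.

```python
class ParentImage:
    """Stand-in for the parent image that holds the shared application data."""
    def __init__(self, services):
        self.services = services
        self.reads = 0  # count of read requests that reach the parent

    def read(self, app):
        self.reads += 1
        return self.services[app]

def child_read(child_store, parent, app):
    """Serve a read request from the child image if the service is already
    mounted there; otherwise fetch it from the parent image and keep a
    local copy for subsequent requests."""
    if app in child_store:          # service already present in the child image
        return child_store[app]
    data = parent.read(app)         # forward the read request to the parent
    child_store[app] = data         # store locally, as described in the text
    return data
```

Under this sketch, only the first request for each service reaches the parent image; repeated reads are served entirely from the child image.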
Step S102, obtaining cache access parameters corresponding to the first application based on the read request information.
The disclosed embodiments provide a block index data structure in the cache of the parent image; the block index data structure comprises a multi-level tree data structure. The parent image stores the services used by applications in the cached block index data structure in units of blocks, typically 512 B or 4096 B in size. For example, a 10 GB (10737418240-byte) block device (i.e., the device that holds the block index data structure) contains 10737418240 / 512 = 20971520 blocks if the block size is 512 B. For example, the block index data structure includes a skip-list data structure, and on top of the skip list it builds a Log-Structured Merge Tree (LSM-Tree) with a hierarchical structure.
For example, as shown in fig. 2, block data 1 holds read request data of desktop office service, block data 2 holds read request data of system software service, and block data 3 holds read request data of kernel data service.
To improve retrieval efficiency, the block index data structure includes a block index.
Cache access parameters are provided for each application in the parent image to locate the required block data in the block index data structure. The cache access parameter includes a block offset and a block length.
The block offset is the sequence number of the first requested block in the block index data structure, and the block length is the number of consecutive blocks starting at that offset. For example, with a block size of 512 B and a cache access parameter for the first application of offset 1 and block length 2, the corresponding byte range in the block index data structure is the 1024 bytes starting at byte 512.
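The offset-and-length arithmetic above can be sketched as a one-line helper. The function name is illustrative; the 512 B block size is taken from the example in the text.

```python
BLOCK_SIZE = 512  # bytes per block, as in the example above

def block_params_to_byte_range(offset, length):
    """Map a cache access parameter (block offset, block length) to a
    (start byte, byte count) pair within the block device."""
    return offset * BLOCK_SIZE, length * BLOCK_SIZE

# Example from the text: offset 1, block length 2
# -> 1024 bytes starting at byte 512.
```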
And step S103, retrieving a block index data structure in the cache according to the cache access parameter.
Specifically, the method comprises the following steps:
and step S103-1, retrieving a block index data structure in the cache according to the block offset and the block length.
Further, the retrieving the block index data structure in the cache according to the block offset and the block length includes the following steps:
and step S103-1-1, retrieving the block index according to the block offset and the block length.
The block index includes block intervals, which indicate the range of block data retrievable under that block index. For example, a block interval denoted [12, 20] indicates that block data with sequence numbers 12 through 20 is included under the block index.
And step S104, when all the block data meeting the cache access parameter exist in the block index data structure, acquiring the read request data based on all the block data.
I.e. all block data needed for the read request information of the first application can be found in the cached block index data structure.
Optionally, when the block index includes a block interval, and when all block data meeting the cache access parameter exists in the block index data structure, acquiring read request data based on all block data, including the following steps:
and step S104-1, matching the block interval based on the block offset to obtain a matched block interval.
For example, as shown in FIG. 3, the block interval is denoted [12, 20], and the cache access parameter of the desktop office service is offset 16, block length 4. From the offset 16 it is determined that the block data of the desktop office service is associated with the block interval [12, 20], i.e., the matching block interval is [12, 20].
And step S104-2, acquiring a request block interval based on the block offset and the block length.
The request block interval is the block interval satisfying the cache access parameter.
For example, continuing the example above, from offset 16 and block length 4, the block data for the desktop office service is blocks 16, 17, 18, and 19; the request block interval is [16, 19].
And step S104-3, when the matching block interval range comprises the request block interval, acquiring the corresponding read request data based on the request block interval.
For example, continuing the example above, since the request block interval [16, 19] lies completely within the matching block interval [12, 20], the read request data of the desktop office service can be obtained directly from the block index data structure in the cache.
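Steps S104-1 through S104-3 can be sketched as the following lookup. This is a hedged illustration under the assumption that block intervals are inclusive pairs; the function and variable names are not from the patent.

```python
def lookup_full_hit(index_intervals, offset, length):
    """S104-1..S104-3: match a block interval by the offset, derive the
    request block interval, and check full containment. Returns the pair
    (matching interval, request interval) on a full hit, else None."""
    request = (offset, offset + length - 1)            # S104-2: request interval
    for lo, hi in index_intervals:
        if lo <= offset <= hi:                         # S104-1: match by offset
            if lo <= request[0] and request[1] <= hi:  # S104-3: containment
                return (lo, hi), request
    return None
```

With the example from the text, interval [12, 20] and access parameter (offset 16, length 4) yield the request interval [16, 19] and a full hit; offset 19 with length 4 would spill past block 20 and miss.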
When only partial block data needed by the read request information exists in the block index data structure, after retrieving the block index data structure in the cache according to the cache access parameter, the method further comprises the following steps:
step S105, when there is partial block data satisfying the cache access parameter in the block index data structure, acquiring partial read request data based on the partial block data, acquiring remaining read request data from the physical storage module, and loading the remaining read request data into the block index data structure.
A physical storage module comprising: memory and hard disks. The physical storage modules may be hierarchically arranged according to response speed.
That is, part of the read request data in the block index data structure and the rest of the read request data in the physical storage module constitute complete read request data responding to the read request information.
Meanwhile, the rest of the read request data is loaded into the block index data structure, so that the efficiency of subsequent application is improved.
When the block data needed by the read request information does not exist in the block index data structure, after the block index data structure in the cache is retrieved according to the cache access parameter, the method further comprises the following steps:
step S106, when the block data meeting the cache access parameter does not exist in the block index data structure, the read request data is obtained from a physical storage module, and the read request data is loaded into the block index data structure.
I.e., no read request data exists in the block index data structure, the complete read request data is retrieved from the physical storage module.
Meanwhile, the read request data is loaded into the block index data structure so as to improve the efficiency of subsequent application.
Based on this data processing method, the complete workflow of the multi-level shared cache system is as follows. As shown in fig. 4, the user sends read request information for a specific application (e.g., the desktop office service) to the child image. If the child image can serve the read request, it returns the corresponding read request data it has stored. If not, the read request information is sent to the parent image. The parent image searches the block index data structure in the cache using the offset and block length from the read request information: if the corresponding read request data is present, it is returned to the user; if some of it is missing, the remaining read request data is fetched from the physical storage module, loaded into the parent image, and the complete read request data is returned to the user. Meanwhile, the child image stores the read request data.
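The parent-image side of this workflow, covering the full-hit, partial-hit, and miss cases of steps S104 through S106, can be sketched as below. The cache and the physical storage module are modeled as dictionaries keyed by block number; the name `parent_read` is an assumption for illustration.

```python
def parent_read(cache, storage, blocks):
    """Serve the requested blocks from the cached block index data structure
    where possible (full or partial hit), fetch any missing blocks from the
    physical storage module (partial hit or miss), and load the fetched
    blocks into the cache so later requests hit directly."""
    missing = [b for b in blocks if b not in cache]
    for b in missing:
        cache[b] = storage[b]           # remaining read request data
    return [cache[b] for b in blocks]   # complete read request data
```

After a partial hit, the previously missing blocks reside in the cache, which matches the text's point that loading the remaining read request data improves the efficiency of subsequent requests.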
Embodiments of the disclosure construct a shared block index data structure in a cache; the block index data structure builds a log-structured merge tree with a hierarchical structure on top of a skip-list data structure. The block index data structure is arranged hierarchically across the physical storage module according to response speed. In read-heavy, write-light scenarios, the hottest block data in the top level is placed directly in the cache via the block index of the block index data structure, and devices of successively lower access speed serve the lower levels. In complex application scenarios, the cache write-back rate is greatly improved. Where a large number of desktop clouds start and run concurrently, common software data is almost always loaded and hot-spot block data is frequently accessed. Embodiments of the disclosure improve startup and application loading speed, reduce access traffic to the physical storage module, establish a hierarchical shared block cache, and reduce cache occupation.
Corresponding to the first embodiment provided by the present disclosure, the present disclosure also provides a second embodiment, namely an apparatus for data processing. Since the second embodiment is basically similar to the first, its description is brief; for relevant details, refer to the corresponding description of the first embodiment. The device embodiments described below are merely illustrative.
Fig. 5 illustrates an embodiment of a data processing apparatus provided by the present disclosure.
As shown in fig. 5, the present disclosure provides an apparatus for data processing, comprising:
a read request information obtaining unit 501, configured to obtain read request information of a first application sent by a child mirror;
a cache access parameter obtaining unit 502, configured to obtain a cache access parameter corresponding to the first application based on the read request information;
a retrieving unit 503, configured to retrieve a block index data structure in the cache according to the cache access parameter; the block index data structure comprises a multi-level tree data structure;
a block data acquiring unit 504, configured to acquire, when all block data satisfying the cache access parameter exists in the block index data structure, read request data based on all block data.
Optionally, the cache access parameter includes a block offset and a block length;
the search unit 503 includes:
and the retrieval subunit is used for retrieving the block index data structure in the cache according to the block offset and the block length.
Optionally, the block index data structure includes a block index;
in the retrieval subunit, the method comprises:
and the retrieval block index subunit is used for retrieving the block index according to the block offset and the block length.
Optionally, the block index includes a block interval;
in the block data acquiring unit 504, the method includes:
a matching block interval obtaining subunit, configured to match the block interval based on the block offset, and obtain a matching block interval;
an obtaining request block interval subunit configured to obtain a request block interval based on the block offset and the block length;
and the block data acquisition subunit is configured to, when the matching block interval range includes the request block interval, acquire the corresponding read request data based on the request block interval.
Optionally, the apparatus further includes:
and a partial block data acquiring unit, configured to, after retrieving the block index data structure in the cache according to the cache access parameter, acquire partial read request data based on the partial block data when there is partial block data satisfying the cache access parameter in the block index data structure, acquire the remaining read request data from the physical storage module, and load the remaining read request data into the block index data structure.
Optionally, the apparatus further includes:
and the physical storage data acquiring unit is used for acquiring the read request data from a physical storage module and loading the read request data into the block index data structure when the block data meeting the cache access parameter does not exist in the block index data structure after the block index data structure in the cache is retrieved according to the cache access parameter.
Optionally, the block index data structure includes a skip list data structure.
Embodiments of the disclosure construct a shared block index data structure in a cache; the block index data structure builds a log-structured merge tree with a hierarchical structure on top of a skip-list data structure. The block index data structure is arranged hierarchically across the physical storage module according to response speed. In read-heavy, write-light scenarios, the hottest block data in the top level is placed directly in the cache via the block index of the block index data structure, and devices of successively lower access speed serve the lower levels. In complex application scenarios, the cache write-back rate is greatly improved. Where a large number of desktop clouds start and run concurrently, common software data is almost always loaded and hot-spot block data is frequently accessed. Embodiments of the disclosure improve startup and application loading speed, reduce access traffic to the physical storage module, establish a hierarchical shared block cache, and reduce cache occupation.
The third embodiment of the present disclosure provides an electronic device for the data processing method, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of data processing according to the first embodiment.
A fourth embodiment of the present disclosure provides a computer storage medium for data processing, the computer storage medium storing computer-executable instructions which, when executed, perform the data processing method described in the first embodiment.
Referring now to FIG. 6, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features disclosed herein having similar functions.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A data processing method, applied to a parent mirror image of a cloud service, characterized by comprising:
acquiring read request information of a first application sent by a sub-mirror image;
obtaining cache access parameters corresponding to the first application based on the read request information;
retrieving a block index data structure in the cache according to the cache access parameter; the block index data structure comprises a multi-level tree data structure;
and when all the block data meeting the cache access parameter exist in the block index data structure, acquiring read request data based on all the block data.
2. The method of claim 1,
the cache access parameter comprises a block offset and a block length;
the retrieving a block index data structure in a cache according to the cache access parameter includes:
and retrieving a block index data structure in the cache according to the block offset and the block length.
3. The method of claim 2,
the block index data structure comprises a block index;
the retrieving a block index data structure in a cache according to the block offset and the block length comprises:
retrieving the block index according to the block offset and the block length.
4. The method of claim 3,
the block index comprises a block interval;
when all block data meeting the cache access parameter exist in the block index data structure, acquiring read request data based on all block data, including:
matching the block interval based on the block offset to obtain a matched block interval;
obtaining a request block interval based on the block offset and the block length;
and when the matching block interval range comprises the request block interval, acquiring the corresponding read request data based on the request block interval.
5. The method of claim 1, after retrieving the block index data structure in the cache according to the cache access parameter, further comprising:
and when partial block data meeting the cache access parameter exists in the block index data structure, acquiring partial read request data based on the partial block data, acquiring the rest read request data from a physical storage module, and loading the rest read request data into the block index data structure.
6. The method of claim 1, after retrieving the block index data structure in the cache according to the cache access parameter, further comprising:
and when the block data meeting the cache access parameter does not exist in the block index data structure, acquiring the read request data from a physical storage module, and loading the read request data into the block index data structure.
7. The method of claim 1, wherein the block index data structure comprises a skip list data structure.
8. An apparatus for data processing, comprising:
a read request information acquiring unit, configured to acquire the read request information of the first application sent by the sub-mirror image;
a cache access parameter obtaining unit, configured to obtain a cache access parameter corresponding to the first application based on the read request information;
the retrieval unit is used for retrieving the block index data structure in the cache according to the cache access parameter; the block index data structure comprises a multi-level tree data structure;
and the block data acquisition unit is used for acquiring the read request data based on all the block data when all the block data meeting the cache access parameter exist in the block index data structure.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1 to 7.
CN202010587489.2A 2020-06-24 2020-06-24 Data processing method, device, medium and electronic equipment Active CN111831655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010587489.2A CN111831655B (en) 2020-06-24 2020-06-24 Data processing method, device, medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN111831655A true CN111831655A (en) 2020-10-27
CN111831655B CN111831655B (en) 2024-04-09

Family

ID=72898924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010587489.2A Active CN111831655B (en) 2020-06-24 2020-06-24 Data processing method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111831655B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092775A (en) * 2013-01-31 2013-05-08 武汉大学 Spatial data double cache method and mechanism based on key value structure
US20140136510A1 (en) * 2012-11-13 2014-05-15 International Business Machines Corporation Hybrid table implementation by using buffer pool as permanent in-memory storage for memory-resident data
CN104503703A (en) * 2014-12-16 2015-04-08 华为技术有限公司 Cache processing method and device
CN105721485A (en) * 2016-03-04 2016-06-29 安徽大学 Secure nearest neighbor query method oriented to plurality of data owners in outsourcing cloud environment
CN105933376A (en) * 2016-03-31 2016-09-07 华为技术有限公司 Data manipulation method, server and storage system
CN108399263A (en) * 2018-03-15 2018-08-14 北京大众益康科技有限公司 The storage of time series data and querying method and storage and processing platform


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU, YIZHEN: "Design and Implementation of a Cloud Server Architecture Suitable for Universities", China Master's Electronic Journals, Information Science and Technology, page 3 *
WEI, XIAOHUI: "An Octree-Based Compression Method for 3D Seismic Data", Proceedings of the 2014 Annual Meeting of the Chinese Geoscience Union, Session 65: Deep Exploration Technology and Experiments - Exploration Instruments and Equipment *

Also Published As

Publication number Publication date
CN111831655B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN111581563B (en) Page response method and device, storage medium and electronic equipment
CN111475235B (en) Acceleration method, device, equipment and storage medium for function calculation cold start
CN112379982B (en) Task processing method, device, electronic equipment and computer readable storage medium
CN112035529A (en) Caching method and device, electronic equipment and computer readable storage medium
CN111400625A (en) Page processing method and device, electronic equipment and computer readable storage medium
WO2023174013A1 (en) Video memory allocation method and apparatus, and medium and electronic device
CN112099982A (en) Collapse information positioning method, device, medium and electronic equipment
CN111262907B (en) Service instance access method and device and electronic equipment
CN110704188B (en) Memory allocator optimization method, device, equipment and storage medium
CN110888773B (en) Method, device, medium and electronic equipment for acquiring thread identification
CN110545313B (en) Message push control method and device and electronic equipment
CN109614089B (en) Automatic generation method, device, equipment and storage medium of data access code
CN116541174A (en) Storage device capacity processing method, device, equipment and storage medium
WO2023273564A1 (en) Virtual machine memory management method and apparatus, storage medium, and electronic device
CN111831655B (en) Data processing method, device, medium and electronic equipment
CN111625745B (en) Recommendation method, recommendation device, electronic equipment and computer readable medium
CN111459893B (en) File processing method and device and electronic equipment
CN111581556B (en) Page data processing method, device, electronic equipment and readable medium
CN113971192A (en) Data processing method and device, readable medium and electronic equipment
CN112131181A (en) Storage path display method and device and electronic equipment
CN113391860A (en) Service request processing method and device, electronic equipment and computer storage medium
CN112163176A (en) Data storage method and device, electronic equipment and computer readable medium
CN110730251A (en) Method, device, medium and electronic equipment for analyzing domain name
CN112311840A (en) Multi-terminal data synchronization method, device, equipment and medium
CN113342837B (en) Data transmission method, device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant