CN111831655B - Data processing method, device, medium and electronic equipment - Google Patents

Data processing method, device, medium and electronic equipment

Info

Publication number
CN111831655B
CN111831655B (granted publication); application CN202010587489.2A
Authority
CN
China
Prior art keywords
block
data
data structure
read request
block index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010587489.2A
Other languages
Chinese (zh)
Other versions
CN111831655A
Inventor
姜哓庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202010587489.2A
Publication of CN111831655A
Application granted
Publication of CN111831655B


Classifications

    • G06F16/2246 — Information retrieval; indexing structures; trees, e.g. B+trees
    • G06F16/2272 — Information retrieval; indexing structures; management thereof
    • G06F16/24552 — Query execution; database cache management
    • G06F9/452 — Execution arrangements for user interfaces; remote windowing, e.g. X-Window System, desktop virtualisation


Abstract

The disclosure provides a data processing method, apparatus, medium, and electronic device. Embodiments of the disclosure build a shared block index data structure in a cache; the block index data structure is a hierarchical log-structured merge tree (LSM-tree) built on top of a skip-list data structure. The block index data structure in the cache is searched according to read request information sent by a child-image application, and when block data in the structure satisfies the read request information, that block data is returned as the read request data. The block index data structure is also tiered across the physical storage modules by response speed. In desktop-cloud scenarios with large-scale concurrent start-up and concurrent use, common software data is almost always loaded and hot-spot block data is accessed frequently. Embodiments of the disclosure speed up start-up and application loading, reduce access traffic to the physical storage modules, establish a tiered shared block cache, and reduce cache footprint.

Description

Data processing method, device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a medium, and an electronic device for data processing.
Background
Cloud service is a model for the addition, use, and delivery of internet-based services that can provide dynamic, easily scalable, and typically virtualized resources over the internet. By distributing computation across a large number of distributed computers rather than on local machines or remote servers, an enterprise data center operates much like the internet: users can switch resources to the applications they need and access computers and storage systems on demand.
In cloud computing, Infrastructure as a Service (IaaS) and server virtualization technology have matured and spread rapidly, and desktop virtualization has broad application prospects. Connecting terminal machines to virtual machines (virtual desktops) provided by IaaS, in place of personal desktop computers, saves considerable office cost.
However, a desktop cloud is accessed intensively and in large volume within the same time window, which often produces extremely high instantaneous access traffic against the IaaS virtual-machine storage system; this puts pressure on back-end storage and makes concurrent start-up slow.
For this reason, caching techniques are applied to this scenario to increase the concurrent start-up speed of virtual machines and relieve the access pressure on back-end storage.
However, existing caching techniques suffer from excessive per-image memory usage and complex algorithms, making their resource cost extremely high.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The present disclosure aims to provide a data processing method, a data processing device, a medium and an electronic device, which can solve at least one technical problem mentioned above. The specific scheme is as follows:
according to a first aspect of the present disclosure, the present disclosure provides a data processing method applied to a parent image of a cloud service, including:
acquiring read request information of a first application sent by a child mirror image;
acquiring cache access parameters corresponding to the first application based on the read request information;
retrieving a block index data structure in a cache according to the cache access parameter; the block index data structure comprises a multi-level tree data structure;
and when all block data satisfying the cache access parameters exists in the block index data structure, acquiring read request data based on all of the block data.
According to a second aspect of the present disclosure, there is provided an apparatus for data processing, comprising:
a read request information acquisition unit, configured to acquire read request information of a first application sent by a child image;
a cache access parameter acquisition unit, configured to acquire cache access parameters corresponding to the first application based on the read request information;
a retrieval unit, configured to retrieve a block index data structure in a cache according to the cache access parameters, the block index data structure comprising a multi-level tree data structure;
and a block data acquisition unit, configured to, when all block data satisfying the cache access parameters exists in the block index data structure, acquire read request data based on all of the block data.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of data processing according to any of the first aspects.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: one or more processors; storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the method of data processing as claimed in any of the first aspects.
Compared with the prior art, the scheme of the embodiment of the disclosure has at least the following beneficial effects:
the disclosure provides a data processing method, a data processing device, a medium and electronic equipment. The embodiment of the disclosure constructs a shared block index data structure in a cache, and the block index data structure generates a structure merging tree with a hierarchical structure on the basis of a skip list data structure. And retrieving the block index data structure in the cache according to the read request information sent by the sub-mirror application, and returning the block data as read request data when the block data of the block index data structure meets the read request information. Meanwhile, the block index data structure is hierarchically arranged in the physical storage module according to the response speed. For the scene of more reading and less writing, the hot spot block data of the uppermost layer is directly put into the cache through the block index of the block index data structure, and the lower layer can sequentially use the equipment with descending access speed in a layering manner. In a complex application scene, the cache write-back rate is greatly improved. And in the desktop cloud large-batch concurrent starting and concurrent use scenes, the common software data are almost necessarily loaded, and the hot spot block data are accessed frequently. The embodiment of the disclosure improves the starting and application program loading speed, reduces the access flow to the physical storage module, establishes the hierarchical shared block cache and reduces the cache occupation amount.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale. In the drawings:
FIG. 1 illustrates a flow chart of a method of data processing according to an embodiment of the present disclosure;
FIG. 2 illustrates a parent-child mirror relationship diagram of a method of data processing according to an embodiment of the present disclosure;
FIG. 3 illustrates a block index data structure diagram of a method of data processing according to an embodiment of the present disclosure;
FIG. 4 illustrates a workflow diagram of a multi-level shared cache system according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a unit of an apparatus for data processing according to an embodiment of the present disclosure;
fig. 6 illustrates a schematic diagram of an electronic device connection structure according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "an" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Alternative embodiments of the present disclosure are described in detail below with reference to the drawings.
The first embodiment provided in the present disclosure is an embodiment of a method of data processing. The embodiment of the disclosure is applied to the parent image of the cloud service.
Embodiments of the present disclosure are described in detail below with reference to fig. 1 through 3.
As shown in fig. 1, in step S101, read request information of a first application sent by a child mirror is obtained.
As shown in fig. 2, an embodiment of the present disclosure provides a multi-level shared cache system in which a parent image is deployed on a mirror server to provide access services. The parent image serves the same content as the main server, but the two are installed on different servers; the mirror servers and the main server can be located in different places so as to share the load of the main server. The parent image provides a usable copy of the service, not the original service itself. For example, the specific applications the parent image provides to users include a desktop office service, a system software service, and a kernel data service. The parent image establishes an upper-layer child image for each virtual machine. The child image directly provides the specific application service to the user, obtaining and mounting the user's specific application from the parent image; that is, a desktop cloud virtual machine is created based on a volume mounted from the child image. For example, if the first application of user A is the desktop office service, child image 1 obtains and mounts the desktop office service from the parent image; the second application of user B is the system software service, so child image 2 obtains and mounts it from the parent image; the third application of user C is the kernel data service, so child image 3 obtains and mounts it from the parent image; and the second application of user D is the system software service, so child image 4 obtains and mounts it from the parent image.
Different desktop cloud virtual machines correspond to different child images; for the child images to share the same multi-level shared cache system, they must be derived from the same parent image.
The first application refers to a specific application of a user, such as the desktop office service. In practice, when a user issues read request information for the first application, the child image is searched first; if the service the user requires exists in the child image, the child image returns request data in response to the read request information. When the corresponding service does not exist in the child image, the read request information of the first application is sent to the parent image, and after the child image obtains the service the user requires (that is, the read request data of the first application), it stores that service in the child image.
Step S102, obtaining a cache access parameter corresponding to the first application based on the read request information.
Embodiments of the present disclosure provide a block index data structure in the parent image's cache; the block index data structure comprises a multi-level tree data structure. The parent image stores the services used by applications in the cached block index data structure in units of blocks, where the block data size is typically 512 B or 4096 B. For example, a 10 GB (10737418240-byte) block device (that is, a device holding the block index data structure) with a block size of 512 B holds 10737418240 / 512 = 20971520 blocks of data. For example, the block index data structure includes a skip-list data structure, on top of which it builds a hierarchical log-structured merge tree (LSM-tree).
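The arithmetic above can be checked, and the cached block store can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the names (`block_count`, `BlockIndexStore`) are hypothetical, and a plain dict stands in for the skip-list/LSM-tree index described in the text.

```python
# Minimal sketch of a cached block store. The patent builds the index as a
# skip-list-backed log-structured merge tree (LSM-tree); a plain dict keyed
# by block sequence number stands in for that index here.
BLOCK_SIZE = 512  # bytes; the text also mentions 4096 B as a common choice

def block_count(device_bytes: int, block_size: int = BLOCK_SIZE) -> int:
    """Number of fixed-size blocks held by a block device."""
    return device_bytes // block_size

class BlockIndexStore:
    def __init__(self):
        self.blocks = {}  # block sequence number -> block bytes

    def put(self, seq: int, data: bytes) -> None:
        self.blocks[seq] = data

    def get(self, seq: int):
        return self.blocks.get(seq)  # None on a cache miss

# 10 GB device with 512 B blocks, matching the example in the text:
assert block_count(10737418240) == 20971520
```

The dict keeps the example short; the real structure's skip list and LSM-tree levels matter for ordered range scans and tiered placement, which the sketch ignores.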
For example, as shown in fig. 2, block data 1 holds read request data of a desktop office service, block data 2 holds read request data of a system software service, and block data 3 holds read request data of a kernel data service.
To improve retrieval efficiency, the block index data structure includes a block index.
Cache access parameters are provided in the parent image for each application to locate the required chunk data in the chunk index data structure. The cache access parameters include a block offset and a block length.
The block offset refers to the sequence number of the block data within the block index data structure, and the block length refers to the number of blocks counted from that offset. For example, with a block size of 512 B and cache access parameters for the first application of offset 1 and block length 2, the corresponding byte range in the block index data structure is the 1024 bytes of data starting at byte 512.
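The offset-and-length arithmetic above can be expressed as a small helper. `byte_range` is a hypothetical name; the mapping itself follows the worked example in the text.

```python
BLOCK_SIZE = 512  # bytes

def byte_range(block_offset: int, block_length: int,
               block_size: int = BLOCK_SIZE):
    """Translate a (block offset, block length) cache access parameter
    into a (start byte, byte count) pair on the block device."""
    return block_offset * block_size, block_length * block_size

# The example from the text: offset 1, block length 2 ->
# 1024 bytes of data starting at byte 512.
start, count = byte_range(1, 2)
assert (start, count) == (512, 1024)
```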
Step S103, retrieving the block index data structure in the cache according to the cache access parameters.
Specifically, the method comprises the following steps:
step S103-1, retrieving the block index data structure in the cache according to the block offset and the block length.
Further, the retrieving the block index data structure in the cache according to the block offset and the block length includes the following steps:
step S103-1-1, retrieving the block index according to the block offset and the block length.
The block index includes a block interval indicating the range of block data that can be retrieved under that block index. For example, a block interval denoted [12, 20] indicates that block data with sequence numbers 12 to 20 is included under the block index.
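The inclusive block interval can be modeled as a simple containment check; `interval_contains` is a hypothetical helper, not part of the patent.

```python
def interval_contains(interval, seq: int) -> bool:
    """True if block sequence number `seq` falls inside the inclusive
    block interval [lo, hi] carried by a block index node."""
    lo, hi = interval
    return lo <= seq <= hi

# Interval [12, 20] from the text covers sequence numbers 12 through 20.
assert interval_contains((12, 20), 12)
assert interval_contains((12, 20), 20)
assert not interval_contains((12, 20), 21)
```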
Step S104, when all block data satisfying the cache access parameters exists in the block index data structure, acquiring the read request data based on all of the block data.
That is, all of the block data required by the read request information of the first application can be found in the cached block index data structure.
Optionally, when the block index includes a block interval and all block data satisfying the cache access parameters exists in the block index data structure, acquiring read request data based on all of the block data includes the following steps:
and step S104-1, matching the block interval based on the block offset, and obtaining a matched block interval.
For example, as shown in fig. 3, the block interval is denoted [12, 20], and the cache access parameters of the desktop office service are offset 16 and block length 4. Based on offset 16, the block data of the desktop office service is determined to belong to block interval [12, 20]; that is, the matching block interval is [12, 20].
Step S104-2, acquiring a request block interval based on the block offset and the block length.
The request block interval is a block interval meeting the cache access parameters.
For example, continuing the example above, based on offset 16 and block length 4, the block data of the desktop office service is block data 16, block data 17, block data 18, and block data 19; the request block interval is [16, 19].
Step S104-3, when the matching block interval range includes the request block interval, acquiring the corresponding read request data based on the request block interval.
For example, continuing the above example, since the request block interval [16, 19] lies entirely within the matching block interval [12, 20], the read request data of the desktop office service can be obtained directly from the block index data structure in the cache.
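Steps S104-1 through S104-3 can be sketched as a single lookup function, assuming the block index is represented as a list of inclusive intervals; the function name and the representation are hypothetical.

```python
def lookup(block_intervals, block_offset: int, block_length: int):
    """Sketch of steps S104-1..3: find the block interval matching the
    offset, derive the request block interval, and report a full cache
    hit only when the match wholly contains the request. Returns the
    matching interval on a hit, None otherwise."""
    # S104-2: request block interval from offset and length (inclusive)
    req_lo, req_hi = block_offset, block_offset + block_length - 1
    for lo, hi in block_intervals:
        if lo <= block_offset <= hi:           # S104-1: match by offset
            if lo <= req_lo and req_hi <= hi:  # S104-3: containment check
                return (lo, hi)
    return None

# The desktop office service example: offset 16, length 4 -> blocks 16..19,
# fully inside the matching block interval [12, 20].
assert lookup([(12, 20)], 16, 4) == (12, 20)
assert lookup([(12, 20)], 18, 5) is None  # blocks 18..22 spill past 20
```

The second assertion shows the partial-hit case that falls through to step S105 below.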
When only part of the block data required by the read request information exists in the block index data structure, the method further includes the following step after retrieving the block index data structure in the cache according to the cache access parameters:
step S105, when there is partial block data satisfying the cache access parameter in the block index data structure, acquiring partial read request data based on the partial block data, and acquiring the rest of read request data from the physical storage module, and loading the rest of read request data into the block index data structure.
The physical storage module comprises memory and hard disk. Physical storage modules may be tiered according to response speed.
That is, the partial read request data in the block index data structure and the remaining read request data in the physical storage module together constitute the complete read request data in response to the read request information.
At the same time, the remaining read request data is loaded into the block index data structure to improve the efficiency of subsequent accesses.
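The partial-hit path of step S105 might look like the following sketch, where plain dicts of block data stand in for the cached block index structure and the (slower) physical storage module; all names are hypothetical.

```python
def read_blocks(cache: dict, physical: dict, seqs) -> bytes:
    """Step S105 sketch: serve each requested block from the cache when
    present; otherwise fetch it from physical storage and load it back
    into the cache so later requests hit."""
    out = []
    for seq in seqs:
        data = cache.get(seq)
        if data is None:
            data = physical[seq]  # remaining read request data
            cache[seq] = data     # back-fill the block index structure
        out.append(data)
    return b"".join(out)

# Blocks 16 and 17 are cached; 18 and 19 come from physical storage.
cache = {16: b"A" * 512, 17: b"B" * 512}
physical = {16: b"A" * 512, 17: b"B" * 512, 18: b"C" * 512, 19: b"D" * 512}
data = read_blocks(cache, physical, range(16, 20))
assert len(data) == 4 * 512
assert 18 in cache and 19 in cache  # remainder was loaded into the cache
```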
When no block data required by the read request information exists in the block index data structure, the method further includes the following step after retrieving the block index data structure in the cache according to the cache access parameters:
and step S106, when the block data meeting the cache access parameters does not exist in the block index data structure, acquiring the read request data from a physical storage module, and loading the read request data into the block index data structure.
I.e. there is no read request data in the block index data structure, the complete read request data is retrieved from the physical storage module.
Meanwhile, the read request data is loaded into the block index data structure so as to improve the efficiency of subsequent applications.
Based on the data processing method above, the workflow of the multi-level shared cache system is now described in full. As shown in fig. 4, a user issues read request information for a specific application (e.g., the desktop office service) to the child image. If the read request has been executed by the child image before, the child image already stores the corresponding read request data; if not, the read request information is sent to the parent image. The parent image searches the block index data structure in its cache using the offset and block length in the read request information: if the corresponding read request data exists in the parent image, it is returned to the user; if only part of it exists, the remaining read request data is obtained from the physical storage module, loaded into the parent image, and the complete read request data is returned to the user. The child image then stores the read request data.
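The workflow just described can be sketched as two cooperating classes. The class names are hypothetical and each storage tier is simplified to a dict, but the read path (child image, then shared parent-image cache, then physical storage, with back-fill at each level) follows the fig. 4 description.

```python
class ParentImage:
    """Shared parent image: one cache serving many child images."""
    def __init__(self, physical: dict):
        self.cache = {}          # shared block index structure (simplified)
        self.physical = physical  # physical storage module (simplified)

    def read(self, seq: int) -> bytes:
        if seq not in self.cache:                 # cache miss
            self.cache[seq] = self.physical[seq]  # load from physical storage
        return self.cache[seq]

class ChildImage:
    """Per-virtual-machine child image mounted from the parent image."""
    def __init__(self, parent: ParentImage):
        self.local = {}      # blocks this child image has already served
        self.parent = parent

    def read(self, seq: int) -> bytes:
        if seq in self.local:         # request executed by this child before
            return self.local[seq]
        data = self.parent.read(seq)  # forward the read to the parent image
        self.local[seq] = data        # child image stores the request data
        return data

physical = {n: bytes([n]) * 512 for n in range(4)}
parent = ParentImage(physical)
vm1, vm2 = ChildImage(parent), ChildImage(parent)
vm1.read(2)
vm2.read(2)  # the second virtual machine is served from the shared cache
assert 2 in parent.cache and 2 in vm1.local and 2 in vm2.local
```

The shared `parent.cache` is what lets many concurrently starting virtual machines reuse the same hot-spot blocks instead of each hitting physical storage.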
Embodiments of the disclosure build a shared block index data structure in a cache; the block index data structure is a hierarchical log-structured merge tree (LSM-tree) built on top of a skip-list data structure, tiered across the physical storage modules by response speed. For read-heavy, write-light scenarios, the hottest block data at the top tier is placed directly in the cache via the block index of the block index data structure, while lower tiers can use devices of successively slower access speed. In complex application scenarios this greatly improves the cache write-back rate. In desktop-cloud scenarios with large-scale concurrent start-up and concurrent use, common software data is almost always loaded and hot-spot block data is accessed frequently. Embodiments of the disclosure speed up start-up and application loading, reduce access traffic to the physical storage modules, establish a tiered shared block cache, and reduce cache footprint.
Corresponding to the first embodiment provided by the present disclosure, the present disclosure also provides a second embodiment, namely an apparatus for data processing. Since the second embodiment is substantially similar to the first, its description is relatively brief; for relevant details, refer to the corresponding description of the first embodiment. The apparatus embodiments described below are merely illustrative.
Fig. 5 illustrates an embodiment of an apparatus for data processing provided by the present disclosure.
As shown in fig. 5, the present disclosure provides an apparatus for data processing, including:
the read request information obtaining unit 501 is configured to obtain read request information of a first application sent by a child mirror;
a cache access parameter obtaining unit 502, configured to obtain a cache access parameter corresponding to the first application based on the read request information;
a retrieving unit 503, configured to retrieve the block index data structure in the cache according to the cache access parameter; the block index data structure comprises a multi-level tree data structure;
and a block data acquisition unit 504, configured to, when all block data satisfying the cache access parameters exists in the block index data structure, acquire read request data based on all of the block data.
Optionally, the cache access parameters include a block offset and a block length;
the search unit 503 includes:
and the searching subunit is used for searching the block index data structure in the cache according to the block offset and the block length.
Optionally, the block index data structure includes a block index;
in the retrieval subunit, it includes:
and a retrieval block index subunit, configured to retrieve the block index according to the block offset and the block length.
Optionally, the block index includes a block interval;
in the acquisition block data unit 504, it includes:
a matching block interval acquisition subunit, configured to match against the block intervals based on the block offset and obtain a matching block interval;
a request block interval acquisition subunit, configured to acquire a request block interval based on the block offset and the block length;
and a block data acquisition subunit, configured to acquire the corresponding read request data based on the request block interval when the matching block interval contains the request block interval.
Optionally, the apparatus further includes:
and acquiring a partial block data unit, configured to acquire partial read request data based on the partial block data when partial block data satisfying the cache access parameter exists in the block index data structure after the block index data structure in the cache is retrieved according to the cache access parameter, acquire remaining read request data from a physical storage module, and load the remaining read request data into the block index data structure.
Optionally, the apparatus further includes:
and acquiring a physical storage data unit, wherein the physical storage data unit is used for acquiring the read request data from a physical storage module and loading the read request data into a block index data structure when the block data meeting the cache access parameters does not exist in the block index data structure after the block index data structure in the cache is searched according to the cache access parameters.
Optionally, the block index data structure comprises a skip list data structure.
Embodiments of the disclosure build a shared block index data structure in a cache; the block index data structure is a hierarchical log-structured merge tree (LSM-tree) built on top of a skip-list data structure, tiered across the physical storage modules by response speed. For read-heavy, write-light scenarios, the hottest block data at the top tier is placed directly in the cache via the block index of the block index data structure, while lower tiers can use devices of successively slower access speed. In complex application scenarios this greatly improves the cache write-back rate. In desktop-cloud scenarios with large-scale concurrent start-up and concurrent use, common software data is almost always loaded and hot-spot block data is accessed frequently. Embodiments of the disclosure speed up start-up and application loading, reduce access traffic to the physical storage modules, establish a tiered shared block cache, and reduce cache footprint.
A third embodiment of the present disclosure provides an electronic device for the data processing method, the electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data processing method described in the first embodiment.
A fourth embodiment of the present disclosure provides a computer storage medium for data processing, the computer storage medium storing computer-executable instructions that, when executed, perform the data processing method described in the first embodiment.
Referring now to fig. 6, a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The names of the units do not, in some cases, constitute a limitation of the units themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, solutions formed by substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (10)

1. A method of data processing applied to a parent image of a cloud service, comprising:
when read request data of a first application does not exist in a sub-mirror, acquiring read request information of the first application sent by the sub-mirror;
acquiring cache access parameters corresponding to the first application based on the read request information;
retrieving a block index data structure in a cache according to the cache access parameter; the block index data structure comprises a multi-level tree data structure;
and when all the block data meeting the cache access parameters exist in the block index data structure, acquiring read request data based on all the block data, and storing the read request data in the sub-mirror.
2. The method of claim 1, wherein
the cache access parameters comprise block offset and block length;
the retrieving the block index data structure in the cache according to the cache access parameter includes:
and retrieving a block index data structure in a cache according to the block offset and the block length.
3. The method of claim 2, wherein
the block index data structure includes a block index;
the retrieving a block index data structure in a cache according to the block offset and the block length includes:
the block index is retrieved based on the block offset and the block length.
4. The method of claim 3, wherein
the chunk index includes chunk intervals;
and the acquiring read request data based on all the block data, when all block data meeting the cache access parameters exists in the block index data structure, comprises the following steps:
matching the block interval based on the block offset to obtain a matched block interval;
acquiring a request block interval based on the block offset and the block length;
and when the matched block interval contains the request block interval, acquiring the corresponding read request data based on the request block interval.
5. The method of claim 1, further comprising, after said retrieving a block index data structure in a cache according to said cache access parameter:
and when partial block data meeting the cache access parameters exists in the block index data structure, acquiring partial read request data based on the partial block data, acquiring the remaining read request data from a physical storage module, and loading the remaining read request data into the block index data structure.
6. The method of claim 1, further comprising, after said retrieving a block index data structure in a cache according to said cache access parameter:
and when the block data meeting the cache access parameters does not exist in the block index data structure, acquiring the read request data from a physical storage module, and loading the read request data into the block index data structure.
7. The method of claim 1, wherein the block index data structure comprises a skip list data structure.
8. An apparatus for data processing, comprising:
the method comprises the steps of acquiring a read request information unit, wherein the read request information unit is used for acquiring read request information of a first application sent by a sub-mirror when the read request data of the first application does not exist in the sub-mirror;
the cache access parameter obtaining unit is used for obtaining cache access parameters corresponding to the first application based on the read request information;
the retrieval unit is used for retrieving the block index data structure in the cache according to the cache access parameters; the block index data structure comprises a multi-level tree data structure;
and acquiring a block data unit, wherein when all block data meeting the cache access parameters exist in the block index data structure, the block data unit is used for acquiring read request data based on all the block data and storing the read request data in the sub-mirror image.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the method of any of claims 1 to 7.
CN202010587489.2A 2020-06-24 2020-06-24 Data processing method, device, medium and electronic equipment Active CN111831655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010587489.2A CN111831655B (en) 2020-06-24 2020-06-24 Data processing method, device, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111831655A CN111831655A (en) 2020-10-27
CN111831655B true CN111831655B (en) 2024-04-09

Family

ID=72898924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010587489.2A Active CN111831655B (en) 2020-06-24 2020-06-24 Data processing method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111831655B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092775A (en) * 2013-01-31 2013-05-08 武汉大学 Spatial data double cache method and mechanism based on key value structure
CN104503703A (en) * 2014-12-16 2015-04-08 华为技术有限公司 Cache processing method and device
CN105721485A (en) * 2016-03-04 2016-06-29 安徽大学 Secure nearest neighbor query method oriented to plurality of data owners in outsourcing cloud environment
CN105933376A (en) * 2016-03-31 2016-09-07 华为技术有限公司 Data manipulation method, server and storage system
CN108399263A (en) * 2018-03-15 2018-08-14 北京大众益康科技有限公司 The storage of time series data and querying method and storage and processing platform

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10114908B2 (en) * 2012-11-13 2018-10-30 International Business Machines Corporation Hybrid table implementation by using buffer pool as permanent in-memory storage for memory-resident data


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Cloud Server Architecture Suitable for Universities; Xu Yizhen; China Master's Theses Electronic Journal, Information Science and Technology Series; Section 3 of the thesis *
An Octree-Based 3D Seismic Data Compression Method; Wei Xiaohui; Proceedings of the 2014 Annual Meeting of the Chinese Geoscience Union, Session 65: Deep Exploration Technology and Experiments, Exploration Instruments and Equipment; main text of the paper *

Also Published As

Publication number Publication date
CN111831655A (en) 2020-10-27

Similar Documents

Publication Publication Date Title
CN111581563B (en) Page response method and device, storage medium and electronic equipment
US9742860B2 (en) Bi-temporal key value cache system
CN111400625B (en) Page processing method and device, electronic equipment and computer readable storage medium
CN112379982B (en) Task processing method, device, electronic equipment and computer readable storage medium
CN112035529A (en) Caching method and device, electronic equipment and computer readable storage medium
WO2023174013A1 (en) Video memory allocation method and apparatus, and medium and electronic device
CN111198777A (en) Data processing method, device, terminal and storage medium
CN112099982A (en) Collapse information positioning method, device, medium and electronic equipment
CN112416303B (en) Software development kit hot repair method and device and electronic equipment
CN110888773B (en) Method, device, medium and electronic equipment for acquiring thread identification
CN113391860B (en) Service request processing method and device, electronic equipment and computer storage medium
CN110545313B (en) Message push control method and device and electronic equipment
CN109614089B (en) Automatic generation method, device, equipment and storage medium of data access code
CN111831655B (en) Data processing method, device, medium and electronic equipment
WO2023273564A1 (en) Virtual machine memory management method and apparatus, storage medium, and electronic device
CN112100211B (en) Data storage method, apparatus, electronic device, and computer readable medium
CN111459893B (en) File processing method and device and electronic equipment
CN111625745B (en) Recommendation method, recommendation device, electronic equipment and computer readable medium
CN112084003B (en) Method, device, medium and electronic equipment for isolating data
CN116820354B (en) Data storage method, data storage device and data storage system
CN111626787B (en) Resource issuing method, device, medium and equipment
CN112948108B (en) Request processing method and device and electronic equipment
CN117130751A (en) Data processing method and device and electronic equipment
CN111209042B (en) Method, device, medium and electronic equipment for establishing function stack
CN111309549B (en) Monitoring method, monitoring system, readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant