CN110716885B - Data management method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN110716885B CN201911012350.9A
- Authority
- CN
- China
- Prior art keywords
- data
- subset
- cache
- target
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0804—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0868—Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G06F2212/1024—Latency reduction
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
An embodiment of the disclosure discloses a data management method and apparatus, an electronic device, and a storage medium, wherein the method includes the following steps: acquiring a data update request for an original data set in a hard disk; determining a target data subset corresponding to the data update request and a cache position corresponding to the target data subset, wherein the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache position is a first cache in the hard disk or a second cache arranged in a memory; and performing data update processing on the target data subset in the cache position, and returning the updated target data subset to replace the target data subset in the hard disk. The technical solution provided by the embodiment of the disclosure greatly reduces read and write operations on hard disk data, can resolve the network transmission bandwidth bottleneck of data management, makes full use of the caches, and reduces latency.
Description
Technical Field
The embodiment of the disclosure relates to the technical field of data processing, and in particular, to a data management method and apparatus, an electronic device, and a storage medium.
Background
With the development of internet technology, in order to deliver information to users in a more targeted manner, various types of data on a service platform or a client, such as user data, need to be acquired for analysis. The user data may include: user data uploaded by an information provider, including new user data and old user data; user data mined on the basis of tags, where the tags may cover high consumption, whether a car has been bought, marital status, games or novels, and the like; and user data transmitted by other platforms, such as financial user data with a credit level or game segment user data.
At present, data is stored in a hard disk in a service platform based on a distributed storage system. Since most distributed storage systems do not support direct data addition or data deletion, when an information delivery engine requests the service platform to perform data addition or data deletion, all of the data needs to be read, modified, and then written back to the hard disk. The amount of data in the service platform keeps growing as the service develops, and when the data volume is large, a great many hard disk read and write operations are required, which easily creates a network transmission bandwidth bottleneck.
In order to resolve the network transmission bandwidth bottleneck, the data request volume of each storage-type server is generally reduced by means of capacity expansion, so as to implement data management. However, the capacity expansion method currently adopted requires additional machines, increases cost, and only alleviates the problem temporarily; when the data continues to grow, the network transmission bandwidth bottleneck still exists.
Disclosure of Invention
The embodiment of the disclosure provides a data management method and device, an electronic device and a storage medium, so as to solve the network transmission bandwidth bottleneck of data management and reduce delay.
In a first aspect, an embodiment of the present disclosure provides a data management method, including:
acquiring a data updating request of an original data set in a hard disk;
determining a target data subset corresponding to the data updating request and a cache position corresponding to the target data subset, wherein the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache position is a first cache in the hard disk or a second cache arranged in a memory;
and performing data updating processing on the target data subset in the cache position, and returning the target data subset after data updating to replace the target data subset in the hard disk.
In a second aspect, an embodiment of the present disclosure further provides a data management apparatus, where the apparatus includes:
the updating request acquisition module is used for acquiring a data updating request for the original data set in the hard disk;
a cache determining module, configured to determine a target data subset corresponding to the data update request and a cache position corresponding to the target data subset, where the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache position is a first cache in the hard disk or a second cache set in an internal memory;
and the data updating module is used for performing data updating processing on the target data subset in the cache position and returning the target data subset after data updating to replace the target data subset in the hard disk.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data management method described above.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the data management method as described above.
The method includes: acquiring a data update request for an original data set in a hard disk; determining a target data subset corresponding to the data update request and a cache position corresponding to the target data subset, where the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache position is a first cache in the hard disk or a second cache arranged in a memory; and performing data update processing on the target data subset in the cache position, and returning the updated target data subset to replace the target data subset in the hard disk. In the technical solution provided by the embodiments of the present disclosure, a cache is newly added in the memory, one part of the preprocessed original data set is stored in the memory cache, and the other part is stored in the hard disk cache. When data in the hard disk is to be updated, the corresponding part of the data in the cache is updated first and then returned and stored in the hard disk, so that the update of the hard disk data is achieved. This greatly reduces read and write operations on hard disk data, can resolve the network transmission bandwidth bottleneck of data management, makes full use of the caches, and reduces latency.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow chart of a data management method in an embodiment of the present disclosure;
FIG. 2 is a flow chart of another method of data management in an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a data management apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" mentioned in this disclosure are illustrative rather than restrictive, and those skilled in the art should understand that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flowchart of a data management method in an embodiment of the present disclosure. The embodiment is applicable to the case of managing data in a service platform, and the method may be executed by a data management apparatus. The apparatus may be implemented in software and/or hardware, and may be configured in an electronic device, such as a server or a terminal device; typical terminal devices include mobile terminals, specifically a mobile phone, a computer, or a tablet computer. As shown in fig. 1, the method may specifically include:
S110, acquiring a data updating request for the original data set in the hard disk.
The original data set may be various types of data collected by the service platform, and the user data is used as an example in the present scheme for description. The data volume of the original data set is generally large and is stored in a hard disk.
The original data set may include two or more user data, and the number of user data is not limited. Each user data may be configured with a unique data identifier, and the data identifiers of the user data are incremented. The user data refers to attribute data or behavior data related to the user; in this embodiment, the user data may be transmitted in the form of a crowd packet, where a crowd packet is a data set comprising a plurality of user data, and the number of users corresponding to a crowd packet can reach the ten-million level.
The data update request is an instruction for instructing the service platform to modify data in the original data set, and the data update request may include a data addition request and a data deletion request.
Specifically, a data update request for an original data set in the hard disk is obtained based on an operation of a user on a setting key, where the setting key may be a virtual key in a service platform, virtual keys corresponding to different data update requests may be different, and an operation form of the setting key may be set according to an actual situation, for example, the operation form may be a single click, a double click, a long press, and the like on the setting key.
S120, determining a target data subset corresponding to the data updating request and a cache position corresponding to the target data subset.
The target data subset is one of at least two data subsets obtained after preprocessing the original data set, namely the target data subset is one part of the original data set. The cache position may be a first cache in the hard disk or a second cache set in the memory, where the second cache in the memory is a new cache in the present scheme. In the scheme, after the original data set is preprocessed, a part of data subsets are stored in a first cache, and the other part of data subsets are stored in a second cache.
After a data update request for an original data set in a hard disk is acquired, request information included in the data update request can be extracted, and a corresponding target data subset and a cache position corresponding to the target data subset are determined according to the request information. The request information may include a request identifier, an update data identifier, update data, and the like, and different data update requests correspond to different request information.
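By way of illustration only, the request information described above can be modeled as a small record that the platform parses before locating the target data subset; the field names below are assumptions made for this sketch, not terms defined by the embodiment:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataUpdateRequest:
    """Illustrative request record parsed by the service platform (hypothetical fields)."""
    request_id: str                                            # request identifier, e.g. "add" or "delete"
    update_data_ids: List[int] = field(default_factory=list)   # update data identifiers (e.g. identifiers to delete)
    update_data: Optional[List[dict]] = None                   # update data payload for a data adding request
```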
S130, performing data update processing on the target data subset in the cache position, and returning the updated target data subset to replace the target data subset in the hard disk.
After the target data subset corresponding to the data update request and the cache position corresponding to the target data subset are determined, the target data subset may first be read from the cache position and subjected to data update processing according to the request information in the data update request; the updated target data subset is then written back into the cache position and returned to replace the original target data subset in the hard disk, thereby implementing the data update on the original data set in the hard disk.
In this scheme, when the original data set in the hard disk is updated, the data update can be performed on the target data subset in the cache in the hard disk or in the memory, and the updated target data subset is returned to replace the target data subset in the hard disk. Because the target data subset is only a small part of the original data set, the cache can be fully utilized, read and write operations on hard disk data are greatly reduced, and efficiency is improved.
According to the technical scheme of this embodiment, a data update request for an original data set in a hard disk is acquired; a target data subset corresponding to the data update request and a cache position corresponding to the target data subset are determined, where the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache position is a first cache in the hard disk or a second cache arranged in a memory; data update processing is performed on the target data subset in the cache position, and the updated target data subset is returned to replace the target data subset in the hard disk. In the embodiment of the disclosure, by adding a cache in the memory, one part of the preprocessed original data set is stored in the memory cache and the other part is stored in the hard disk cache; when the data in the hard disk is updated, the data subset in the cache is updated first and then returned and stored in the hard disk, so that the update of the hard disk data is achieved, read and write operations on hard disk data are greatly reduced, the network transmission bandwidth bottleneck of data management can be resolved, the cache is fully utilized, and latency is reduced.
In addition, in this scheme, before acquiring the data update request for the original data set in the hard disk, the method may further include: preprocessing the original data set stored in the hard disk based on data identifiers to obtain at least two data subsets, where the preprocessing includes grouping and sorting, and the identification range of each data subset is different; determining the data subset sorted last as a dynamic data subset; and storing the dynamic data subset in a first cache, and storing the data subsets other than the dynamic data subset in a second cache.
Fig. 2 is a flow chart of another data management method in the embodiments of the present disclosure. On the basis of the above embodiments, the present embodiment further optimizes the data management method. Correspondingly, as shown in fig. 2, the method of the embodiment specifically includes:
s201, preprocessing an original data set stored in the hard disk based on the data identification to obtain at least two data subsets.
The preprocessing may include grouping and sorting, and since each user data in the original data set is configured with a unique data identifier, the identification range of each data subset is different.
The original data set in the hard disk is grouped according to a set number to obtain at least two data subsets, where the set number refers to the number of user data in each group. After grouping, all the data subsets are sorted from small to large according to their identification ranges. For example, if the original data set includes 5 user data with data identifiers 1, 2, 3, 4, and 5, and the data is grouped with a set number of 3, two data subsets are obtained; after sorting, the identification range of the first data subset is 1-3 and the identification range of the second data subset is 4-5, as sketched below.
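The grouping and sorting step can be sketched as follows; this is an illustrative reading of the example only, and the helper name and data layout are assumptions rather than part of the embodiment:

```python
from typing import Dict, List

def preprocess(original_data_set: Dict[int, dict], set_number: int) -> List[Dict[int, dict]]:
    """Group the original data set by ascending data identifier into subsets of at
    most `set_number` user data, ordered by identification range from small to large."""
    sorted_ids = sorted(original_data_set)                 # data identifiers are unique and incremented
    return [
        {i: original_data_set[i] for i in sorted_ids[start:start + set_number]}
        for start in range(0, len(sorted_ids), set_number)
    ]

# The example from the text: identifiers 1-5 grouped with a set number of 3
# yield two data subsets with identification ranges 1-3 and 4-5.
original = {i: {"user": f"u{i}"} for i in (1, 2, 3, 4, 5)}
subsets = preprocess(original, set_number=3)
assert [sorted(s) for s in subsets] == [[1, 2, 3], [4, 5]]
```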
S202, determining the data subset sorted last as a dynamic data subset.
After the original data set is grouped and sorted, the data subset sorted last may be determined as the dynamic data subset; the identification range corresponding to the dynamic data subset is the largest, and the number of user data it contains is smaller than or equal to the set number. That is, each data subset other than the dynamic data subset contains exactly the set number of user data, while the dynamic data subset may contain fewer user data than the set number.
S203, storing the dynamic data subset in the first cache, and storing the data subsets other than the dynamic data subset in the second cache.
The first cache is a cache arranged in the hard disk, and the second cache is a cache arranged in the memory. In this scheme, a cache is added in the memory, and the caches in the hard disk and the memory are fully utilized during data management so as to reduce latency.
Optionally, in this scheme, the identification range of each data subset and the cache position of each data subset may also be stored in a data index, as sketched below.
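One possible shape for such a data index is sketched below; the entry layout and names are assumptions made for illustration, not a structure defined by the embodiment:

```python
# Illustrative data index, kept in sort order: the last entry is the dynamic data
# subset held in the first cache (hard disk); earlier entries are held in the
# second cache (memory).
data_index = [
    {"id_range": (1, 3), "cache": "second"},   # second cache, in memory
    {"id_range": (4, 5), "cache": "first"},    # first cache, in the hard disk (dynamic data subset)
]

def lookup(data_id: int):
    """Return the index entry whose identification range covers data_id, or None."""
    for entry in data_index:
        low, high = entry["id_range"]
        if low <= data_id <= high:
            return entry
    return None
```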
S204, acquiring a data updating request for the original data set in the hard disk.
After S204, S205-S207 may be performed, or S208-S209 may be performed, depending on the specific type of data update request. The specific type may be determined according to a request identifier in the data update request.
When the data update request is a data adding request, S205-S207 are executed, specifically:
and S205, if the data updating request is a data adding request, the target data subset is a dynamic data subset, and the cache position corresponding to the target data subset is a first cache.
The data adding request is an instruction for instructing the service platform to add new data to the original data set. When the data update request is a data adding request, since the dynamic data subset is the data subset with a small amount of user data and the largest identification range, the new data can be added to the dynamic data subset; therefore, the target data subset is the dynamic data subset, and the corresponding cache position is the first cache.
S206, reading the dynamic data subset in the first cache.
According to the pre-constructed data index, the dynamic data subset in the first cache can be read, and the identification range of the dynamic data subset can be determined.
S207, adding the newly added data in the data adding request to the dynamic data subset, and writing the dynamic data subset after the data addition into the first cache.
After the dynamic data subset in the first cache is read, the new data in the data adding request can be extracted, a data identifier is configured for the new data according to the identification range of the dynamic data subset, and the new data is added at the last position in the dynamic data subset. The dynamic data subset to which the new data has been added is then written into the first cache. For example, if the data identification range of the dynamic data subset is 10-15, the new data is assigned the data identifier 16.
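A minimal sketch of this addition path, reusing the illustrative index entry above; the function and cache names are assumptions, and only the dynamic data subset in the first cache is touched:

```python
def handle_add(first_cache: dict, index_entry: dict, new_record: dict) -> None:
    """Append new data to the dynamic data subset in the first cache and extend
    the recorded identification range (e.g. range 10-15 -> new identifier 16)."""
    dynamic_subset = first_cache["dynamic"]        # read the dynamic data subset from the hard-disk cache
    low, high = index_entry["id_range"]
    new_id = high + 1                              # configure the data identifier for the new data
    dynamic_subset[new_id] = new_record            # add the new data at the last position
    index_entry["id_range"] = (low, new_id)        # extend the recorded identification range
    first_cache["dynamic"] = dynamic_subset        # write the updated dynamic data subset back to the first cache
```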
In this scheme, when the data update request is a data adding request, only the dynamic data subset in the first cache in the hard disk is read and written, without affecting the data subsets in the second cache in the memory, so that read and write operations during data addition are reduced.
When the data update request is a data deleting request, S208-S209 are executed, specifically:
and S208, if the data updating request is a data deleting request, determining the target data subset and the cache position corresponding to the target data subset according to the to-be-deleted data identifier in the data deleting request.
The data deleting request is an instruction used for instructing the service platform to delete the data in the original data set.
When the data updating request is a data deleting request, the data identifier to be deleted in the data deleting request can be extracted, and the target data subset and the cache position corresponding to the target data subset are determined according to the data identifier to be deleted.
Further, determining the target data subset and the cache position corresponding to the target data subset according to the to-be-deleted data identifier in the data deletion request may include: matching the to-be-deleted data identifier against the identification range of each data subset; determining the successfully matched data subset as the target data subset; and determining the corresponding cache position according to the sort position of the target data subset. If the to-be-deleted data identifier falls within the identification range of one data subset, that data subset is determined as the target data subset. It is then judged whether the sort position of the target data subset is the last position: if so, the target data subset is the dynamic data subset and the corresponding cache position is the first cache; otherwise, the corresponding cache position is the second cache in the memory, as sketched below.
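A sketch of this matching step, assuming the same ordered index layout as above (the names are illustrative):

```python
def locate_for_delete(data_index: list, delete_id: int):
    """Match the to-be-deleted data identifier against each subset's identification
    range and derive the cache position from the subset's sort position."""
    for position, entry in enumerate(data_index):
        low, high = entry["id_range"]
        if low <= delete_id <= high:
            is_last = position == len(data_index) - 1
            cache_position = "first" if is_last else "second"   # last subset = dynamic subset, first cache
            return entry, cache_position
    return None, None
```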
S209, deleting the data to be deleted in the target data subset according to the to-be-deleted data identifier.
After the target data subset and the cache position corresponding to the target data subset are determined, the target data subset can be read according to the pre-constructed data index, the data to be deleted corresponding to the to-be-deleted data identifier is deleted from the target data subset, and the target data subset from which the data has been deleted is returned and written into the cache position.
Optionally, deleting the data to be deleted in the target data subset includes: storing the to-be-deleted data identifier into a deletion list; and if the data identifier corresponding to a newly received data processing request successfully matches a to-be-deleted data identifier in the deletion list, deleting the data to be deleted, where data processing requests include data update requests and data read requests. The deletion list may be stored in the memory or the hard disk, or may be stored in a third-party distributed storage space to improve reliability. With the deletion list, the data to be deleted in the target data subset is removed when the data identifier carried by the next received data processing request is a to-be-deleted data identifier, which can improve the reliability of data deletion.
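The deletion list can be sketched as a simple set of marked identifiers whose physical removal is deferred to the next data processing request that touches them; the structure below is an assumption made for illustration only:

```python
deletion_list = set()   # to-be-deleted data identifiers; could also be kept on disk or in a third-party store

def mark_for_deletion(data_id: int) -> None:
    """Record the to-be-deleted data identifier instead of deleting immediately."""
    deletion_list.add(data_id)

def on_data_processing_request(target_subset: dict, data_id: int) -> None:
    """If a later update or read request carries a marked identifier, remove the data to be deleted."""
    if data_id in deletion_list:
        target_subset.pop(data_id, None)
        deletion_list.discard(data_id)
```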
In this scheme, when the data update request is a data deleting request, read and write operations are performed on only one target data subset in the hard disk, and the other data subsets are not affected, so that read and write operations during data deletion are reduced.
After S207 or S209, S210 may be performed.
S210, returning the updated target data subset to replace the target data subset in the hard disk.
Specifically, if the data update request is a data adding request, the target data subset to which the newly added data has been added is returned to replace the target data subset in the original data set in the hard disk. If the data update request is a data deleting request, the target data subset in the first cache or the second cache from which the data to be deleted has been removed is returned to replace the target data subset in the original data set in the hard disk. Optionally, when the data update request is a data deleting request and the cache position corresponding to the target data subset is the second cache in the memory, the deletion may also proceed as follows: the data to be deleted is deleted directly from the target data subset in the hard disk, and the target data subset from which the data has been deleted then replaces the target data subset in the second cache in the memory.
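By way of illustration only, a minimal sketch of this replacement step, assuming each data subset is persisted as its own file on the hard disk keyed by its sort position; the file layout and helper names are assumptions, not part of the embodiment:

```python
import json
from pathlib import Path

def replace_subset_on_disk(disk_dir: Path, subset_position: int, updated_subset: dict) -> None:
    """Overwrite only the file holding the target data subset; the rest of the
    original data set on the hard disk is left untouched."""
    target_file = disk_dir / f"subset_{subset_position}.json"
    tmp_file = target_file.with_suffix(".tmp")
    tmp_file.write_text(json.dumps(updated_subset))
    tmp_file.replace(target_file)            # swap in the updated file for the target data subset
```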
In the embodiment of the disclosure, the original data set stored in the hard disk is preprocessed based on data identifiers to obtain at least two data subsets, the data subset sorted last is determined as the dynamic data subset, the dynamic data subset is stored in the first cache, and the data subsets other than the dynamic data subset are stored in the second cache; a data update request for the original data set in the hard disk is acquired. If the data update request is a data adding request, the target data subset is the dynamic data subset and the cache position corresponding to the target data subset is the first cache; the dynamic data subset in the first cache is read, the newly added data in the data adding request is added to the dynamic data subset, and the dynamic data subset after the data addition is written into the first cache. If the data update request is a data deleting request, the target data subset and the cache position corresponding to the target data subset are determined according to the to-be-deleted data identifier in the data deleting request, and the data to be deleted in the target data subset is deleted according to the to-be-deleted data identifier. Finally, the updated target data subset is returned to replace the target data subset in the hard disk.
In the embodiment of the disclosure, by adding a cache in the memory, one part of the preprocessed original data set is stored in the memory cache and the other part is stored in the hard disk cache; when the data in the hard disk is updated, the data subset in the cache is updated first and then returned and stored in the hard disk, so that the update of the hard disk data is achieved, read and write operations on hard disk data are greatly reduced, the network transmission bandwidth bottleneck of data management can be resolved, the cache is fully utilized, and latency is reduced.
Fig. 3 is a schematic structural diagram of a data management apparatus in this embodiment of the disclosure, which is applicable to a case of managing data in a service platform. The data management device provided by the embodiment of the disclosure can execute the data management method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
The apparatus specifically includes an update request obtaining module 310, a cache determining module 320, and a data updating module 330, where:
an update request obtaining module 310, configured to obtain a data update request for an original data set in a hard disk;
a cache determining module 320, configured to determine a target data subset corresponding to the data update request and a cache location corresponding to the target data subset, where the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache location is a first cache in the hard disk or a second cache set in an internal memory;
and the data updating module 330 is configured to perform data updating processing on the target data subset in the cache location, and return the target data subset after data updating to replace the target data subset in the hard disk.
Optionally, the update request obtaining module 310 is specifically configured to:
and acquiring a data updating request of the original data set in the hard disk based on the operation of the user on the set key.
Optionally, the apparatus further includes a data preprocessing module, where the data preprocessing module is specifically configured to:
preprocessing the original data set stored in the hard disk based on data identification to obtain at least two data subsets, wherein the preprocessing comprises grouping and sequencing, and the identification range of each data subset is different;
determining the data subset which is sorted last as a dynamic data subset;
storing the dynamic data subsets in the first cache, and storing other data subsets than the dynamic data subsets in the second cache.
Optionally, the cache determining module 320 includes a first determining unit, and the first determining unit is specifically configured to:
if the data updating request is a data adding request, the target data subset is the dynamic data subset, and the cache position corresponding to the target data subset is the first cache.
Optionally, the data updating module 330 includes a first updating unit, where the first updating unit is specifically configured to:
reading the dynamic data subset in the first cache;
and adding the newly added data in the data newly adding request to the dynamic data subset, and writing the dynamic data subset after data addition into the first cache.
Optionally, the cache determination module 320 includes a second determination unit, and the second determination unit is configured to:
and if the data updating request is a data deleting request, determining the target data subset and the cache position corresponding to the target data subset according to the data identifier to be deleted in the data deleting request.
Optionally, the second determining unit is specifically configured to:
matching the data identification to be deleted with the identification range of each data subset;
determining the data subset successfully matched as the target data subset;
and determining a corresponding cache position according to the sequencing position of the target data subset.
Optionally, the data updating module 330 includes a second updating unit, and the second updating unit is configured to:
and deleting the data to be deleted in the target data subset according to the data identification to be deleted.
Optionally, the second updating unit is specifically configured to:
storing the data identifier to be deleted into a deletion list;
and if the data identifier corresponding to the new data processing request is successfully matched with the data identifier to be deleted in the deletion list, deleting the data to be deleted, wherein the data processing request comprises the data updating request and the data reading request.
Optionally, the original data set includes two or more user data, the data identifier of the user data is incremented, and the user data is a crowd packet.
The data management device provided by the embodiment of the disclosure can execute the data management method provided by the embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
Fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure. Referring now specifically to fig. 4, a schematic diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), and the like, and fixed terminals such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 406 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 406 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 409, or from the storage means 406, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a data updating request of an original data set in a hard disk; determining a target data subset corresponding to the data updating request and a cache position corresponding to the target data subset, wherein the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache position is a first cache in the hard disk or a second cache arranged in a memory; and performing data updating processing on the target data subset in the cache position, and returning the target data subset after data updating to replace the target data subset in the hard disk.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself; for example, the text broadcast module may also be described as a "module for broadcasting the text to be dictated in a voice broadcast manner".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a data management method including:
acquiring a data updating request of an original data set in a hard disk;
determining a target data subset corresponding to the data updating request and a cache position corresponding to the target data subset, wherein the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache position is a first cache in the hard disk or a second cache arranged in a memory;
and performing data updating processing on the target data subset in the cache position, and returning the target data subset after data updating to replace the target data subset in the hard disk.
According to one or more embodiments of the present disclosure, a data management method provided by the present disclosure, acquiring a data update request for an original data set in a hard disk, includes:
and acquiring a data updating request of the original data set in the hard disk based on the operation of the user on the set key.
According to one or more embodiments of the present disclosure, before acquiring a data update request for an original data set in a hard disk, the data management method further includes:
preprocessing the original data set stored in the hard disk based on data identification to obtain at least two data subsets, wherein the preprocessing comprises grouping and sequencing, and the identification range of each data subset is different;
determining the data subset which is sorted last as a dynamic data subset;
storing the dynamic data subsets in the first cache, and storing other data subsets than the dynamic data subsets in the second cache.
According to one or more embodiments of the present disclosure, in a data management method provided by the present disclosure, determining a target data subset corresponding to the data update request and a cache location corresponding to the target data subset includes:
if the data updating request is a data adding request, the target data subset is the dynamic data subset, and the cache position corresponding to the target data subset is the first cache.
According to one or more embodiments of the present disclosure, in a data management method provided by the present disclosure, performing data update processing on a target data subset in the cache location includes:
reading the dynamic data subset in the first cache;
and adding the newly added data in the data newly adding request to the dynamic data subset, and writing the dynamic data subset after data addition into the first cache.
According to one or more embodiments of the present disclosure, in a data management method provided by the present disclosure, determining a target data subset corresponding to the data update request and a cache location corresponding to the target data subset includes:
and if the data updating request is a data deleting request, determining the target data subset and the cache position corresponding to the target data subset according to the data identifier to be deleted in the data deleting request.
According to one or more embodiments of the present disclosure, in the data management method provided by the present disclosure, determining the target data subset and the cache location corresponding to the target data subset according to the to-be-deleted data identifier in the data deletion request includes:
matching the data identification to be deleted with the identification range of each data subset;
determining the data subset successfully matched as the target data subset;
and determining a corresponding cache position according to the sequencing position of the target data subset.
According to one or more embodiments of the present disclosure, in a data management method provided by the present disclosure, performing data update processing on a target data subset in the cache location includes:
and deleting the data to be deleted in the target data subset according to the data identification to be deleted.
According to one or more embodiments of the present disclosure, in a data management method provided by the present disclosure, deleting data to be deleted in the target data subset includes:
storing the data identifier to be deleted into a deletion list;
and if the data identifier corresponding to the new data processing request is successfully matched with the data identifier to be deleted in the deletion list, deleting the data to be deleted, wherein the data processing request comprises the data updating request and the data reading request.
According to one or more embodiments of the present disclosure, in the data management method provided by the present disclosure, the original data set includes two or more user data, the data identifier of the user data is incremented, and the user data is a crowd packet.
According to one or more embodiments of the present disclosure, there is provided a data management apparatus including:
the updating request acquisition module is used for acquiring a data updating request for the original data set in the hard disk;
a cache determining module, configured to determine a target data subset corresponding to the data update request and a cache position corresponding to the target data subset, where the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache position is a first cache in the hard disk or a second cache set in an internal memory;
and the data updating module is used for performing data updating processing on the target data subset in the cache position and returning the target data subset after data updating to replace the target data subset in the hard disk.
According to one or more embodiments of the present disclosure, in the data management apparatus provided by the present disclosure, the update request obtaining module is specifically configured to:
and acquiring a data updating request of the original data set in the hard disk based on the operation of the user on the set key.
According to one or more embodiments of the present disclosure, in the data management apparatus provided in the present disclosure, the apparatus further includes a data preprocessing module, where the data preprocessing module is specifically configured to:
preprocessing the original data set stored in the hard disk based on data identification to obtain at least two data subsets, wherein the preprocessing comprises grouping and sequencing, and the identification range of each data subset is different;
determining the data subset which is sorted last as a dynamic data subset;
storing the dynamic data subsets in the first cache, and storing other data subsets than the dynamic data subsets in the second cache.
According to one or more embodiments of the present disclosure, in the data management apparatus provided by the present disclosure, the cache determination module includes a first determination unit, where the first determination unit is specifically configured to:
if the data updating request is a data adding request, the target data subset is the dynamic data subset, and the cache position corresponding to the target data subset is the first cache.
According to one or more embodiments of the present disclosure, in the data management apparatus provided by the present disclosure, the data update module includes a first update unit, where the first update unit is specifically configured to:
reading the dynamic data subset in the first cache;
and adding the newly added data in the data newly adding request to the dynamic data subset, and writing the dynamic data subset after data addition into the first cache.
According to one or more embodiments of the present disclosure, in the data management apparatus provided by the present disclosure, the cache determination module includes a second determination unit, and the second determination unit is configured to:
and if the data updating request is a data deleting request, determining the target data subset and the cache position corresponding to the target data subset according to the data identifier to be deleted in the data deleting request.
According to one or more embodiments of the present disclosure, in the data management apparatus provided by the present disclosure, the second determining unit is specifically configured to:
matching the data identification to be deleted with the identification range of each data subset;
determining the data subset successfully matched as the target data subset;
and determining a corresponding cache position according to the sequencing position of the target data subset.
According to one or more embodiments of the present disclosure, in the data management apparatus provided by the present disclosure, the data updating module includes a second updating unit, and the second updating unit is configured to:
and deleting the data to be deleted in the target data subset according to the data identification to be deleted.
According to one or more embodiments of the present disclosure, in the data management apparatus provided by the present disclosure, the second updating unit is specifically configured to:
storing the data identifier to be deleted into a deletion list;
and if the data identifier corresponding to the new data processing request is successfully matched with the data identifier to be deleted in the deletion list, deleting the data to be deleted, wherein the data processing request comprises the data updating request and the data reading request.
According to one or more embodiments of the present disclosure, in the data management apparatus provided by the present disclosure, the original data set includes two or more user data, the data identifier of the user data is incremented, and the user data is a crowd packet.
In accordance with one or more embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any one of the data management methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a data management method as any one of the data management methods provided by the present disclosure.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (12)
1. A method for managing data, comprising:
acquiring a data updating request of an original data set in a hard disk;
determining a target data subset corresponding to the data updating request and a cache position corresponding to the target data subset, wherein the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache position is a first cache in the hard disk or a second cache arranged in a memory; the determining a target data subset corresponding to the data updating request and a cache position corresponding to the target data subset comprises: extracting request information included in the data updating request, and determining the corresponding target data subset and the cache position corresponding to the target data subset according to the request information;
performing data updating processing on the target data subset in the cache position, and returning the target data subset after data updating to replace the target data subset in the hard disk;
before the acquiring of the data updating request for the original data set in the hard disk, the method further comprises:
preprocessing the original data set stored in the hard disk based on data identifiers to obtain at least two data subsets, wherein the preprocessing comprises grouping and sorting, and the identifier range of each data subset is different;
determining the data subset which is sorted last as a dynamic data subset;
storing the dynamic data subset in the first cache, and storing the data subsets other than the dynamic data subset in the second cache.
2. The method of claim 1, wherein acquiring a data updating request of an original data set in a hard disk comprises:
acquiring the data updating request for the original data set in the hard disk based on an operation of the user on a preset key.
3. The method of claim 1, wherein determining the target data subset corresponding to the data updating request and the cache position corresponding to the target data subset comprises:
if the data updating request is a data adding request, the target data subset is the dynamic data subset, and the cache position corresponding to the target data subset is the first cache.
4. The method of claim 3, wherein performing the data updating processing on the target data subset in the cache position comprises:
reading the dynamic data subset in the first cache;
and adding the newly added data carried in the data adding request to the dynamic data subset, and writing the dynamic data subset after the data addition into the first cache.
5. The method of claim 1, wherein determining the target data subset corresponding to the data updating request and the cache position corresponding to the target data subset comprises:
if the data updating request is a data deleting request, determining the target data subset and the cache position corresponding to the target data subset according to the data identifier to be deleted in the data deleting request.
6. The method of claim 5, wherein determining the target data subset and the cache position corresponding to the target data subset according to the data identifier to be deleted in the data deleting request comprises:
matching the data identifier to be deleted with the identifier range of each data subset;
determining the successfully matched data subset as the target data subset;
and determining the corresponding cache position according to the sorting position of the target data subset.
7. The method of claim 5, wherein performing the data updating processing on the target data subset in the cache position comprises:
deleting the data to be deleted from the target data subset according to the data identifier to be deleted.
8. The method of claim 7, wherein deleting the data to be deleted from the target data subset comprises:
storing the data identifier to be deleted in a deletion list;
and if the data identifier corresponding to a newly received data processing request matches a data identifier to be deleted in the deletion list, deleting the corresponding data to be deleted, wherein data processing requests include the data updating request and the data reading request.
9. The method according to any one of claims 1-8, wherein the original data set comprises two or more pieces of user data, wherein the data identifiers of the user data increase monotonically, and wherein each piece of user data is a crowd packet.
10. A data management apparatus, comprising:
the updating request acquisition module is used for acquiring a data updating request for the original data set in the hard disk;
a cache determining module, configured to determine a target data subset corresponding to the data updating request and a cache position corresponding to the target data subset, wherein the target data subset is one of at least two data subsets obtained after preprocessing the original data set, and the cache position is a first cache in the hard disk or a second cache arranged in a memory; the determining a target data subset corresponding to the data updating request and a cache position corresponding to the target data subset comprises: extracting request information included in the data updating request, and determining the corresponding target data subset and the cache position corresponding to the target data subset according to the request information;
the data updating module is used for performing data updating processing on the target data subset in the cache position and returning the target data subset after data updating to replace the target data subset in the hard disk;
the device further comprises a data preprocessing module, wherein the data preprocessing module is specifically used for:
preprocessing the original data set stored in the hard disk based on data identifiers to obtain at least two data subsets, wherein the preprocessing comprises grouping and sorting, and the identifier range of each data subset is different;
determining the data subset which is sorted last as a dynamic data subset;
storing the dynamic data subset in the first cache, and storing the data subsets other than the dynamic data subset in the second cache.
11. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a data management method as recited in any one of claims 1-9.
12. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, carries out the data management method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911012350.9A CN110716885B (en) | 2019-10-23 | 2019-10-23 | Data management method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110716885A (en) | 2020-01-21 |
CN110716885B (en) | 2022-02-18 |
Family
ID=69213148
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911012350.9A Active CN110716885B (en) | 2019-10-23 | 2019-10-23 | Data management method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110716885B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156753B (en) * | 2011-04-29 | 2012-11-14 | 中国人民解放军国防科学技术大学 | Data page caching method for file system of solid-state hard disc |
CN103885728B (en) * | 2014-04-04 | 2016-08-17 | 华中科技大学 | A kind of disk buffering system based on solid-state disk |
CN104699422B (en) * | 2015-03-11 | 2018-03-13 | 华为技术有限公司 | Data cached determination method and device |
CN109885786B (en) * | 2019-01-23 | 2021-06-08 | 聚好看科技股份有限公司 | Data caching processing method and device, electronic equipment and readable storage medium |
2019-10-23: CN application CN201911012350.9A filed; granted as patent CN110716885B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9104580B1 (en) * | 2010-07-27 | 2015-08-11 | Apple Inc. | Cache memory for hybrid disk drives |
CN103207836A (en) * | 2012-01-16 | 2013-07-17 | 百度在线网络技术(北京)有限公司 | Write method and write device for solid-state storage hard disk |
CN105117180A (en) * | 2015-09-28 | 2015-12-02 | 联想(北京)有限公司 | Data storing method and device and solid state disc |
CN107220001A (en) * | 2017-05-18 | 2017-09-29 | 记忆科技(深圳)有限公司 | A kind of solid state hard disc cache implementing method and solid state hard disc |
CN110019361A (en) * | 2017-10-30 | 2019-07-16 | 北京国双科技有限公司 | A kind of caching method and device of data |
Non-Patent Citations (1)
Title |
---|
Research on Big Data Processing Technology and Systems (大数据处理技术与系统研究); Gu Rong (顾荣); China Doctoral Dissertations Full-text Database, Information Science and Technology Series (《中国博士学位论文全文数据库 信息科技辑》); 2017-03-15; I138-17 *
Also Published As
Publication number | Publication date |
---|---|
CN110716885A (en) | 2020-01-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109656923B (en) | Data processing method and device, electronic equipment and storage medium | |
CN110704833A (en) | Data permission configuration method, device, electronic device and storage medium | |
CN111209306A (en) | Business logic judgment method and device, electronic equipment and storage medium | |
CN110781373A (en) | List updating method and device, readable medium and electronic equipment | |
CN112035529A (en) | Caching method and device, electronic equipment and computer readable storage medium | |
CN111680799A (en) | Method and apparatus for processing model parameters | |
CN111597107A (en) | Information output method and device and electronic equipment | |
CN111241137A (en) | Data processing method and device, electronic equipment and storage medium | |
CN113918659A (en) | Data operation method and device, storage medium and electronic equipment | |
CN110781066B (en) | User behavior analysis method, device, equipment and storage medium | |
CN116541174A (en) | Storage device capacity processing method, device, equipment and storage medium | |
CN111580883A (en) | Application program starting method, device, computer system and medium | |
CN111597439A (en) | Information processing method and device and electronic equipment | |
CN110716885B (en) | Data management method and device, electronic equipment and storage medium | |
CN112100211B (en) | Data storage method, apparatus, electronic device, and computer readable medium | |
CN113362097B (en) | User determination method and device | |
CN115794876A (en) | Fragment processing method, device, equipment and storage medium for service data packet | |
CN111581930A (en) | Online form data processing method and device, electronic equipment and readable medium | |
CN113760178A (en) | Cache data processing method and device, electronic equipment and computer readable medium | |
CN110941683A (en) | Method, device, medium and electronic equipment for acquiring object attribute information in space | |
CN111143355A (en) | Data processing method and device | |
CN114040014B (en) | Content pushing method, device, electronic equipment and computer readable storage medium | |
CN112966008B (en) | Data caching method, loading method, updating method and related devices | |
CN115994120B (en) | Data file merging method, device, electronic equipment and computer readable medium | |
CN110619087B (en) | Method and apparatus for processing information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |