CN113297226A - Data storage method, data reading method, data storage device, electronic device and medium - Google Patents


Info

Publication number
CN113297226A
Authority
CN
China
Prior art keywords
data storage
data
field
target
group
Prior art date
Legal status
Granted
Application number
CN202110649490.8A
Other languages
Chinese (zh)
Other versions
CN113297226B (en)
Inventor
刘畅
刘伟
张谦
陈正亮
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110649490.8A
Publication of CN113297226A
Application granted
Publication of CN113297226B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/22: Indexing; Data structures therefor; Storage structures
    • G06F 16/2228: Indexing structures
    • G06F 16/2255: Hash tables
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2471: Distributed queries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a data storage method, a data reading method, an apparatus, an electronic device and a medium, and relates to the field of data processing, in particular to distributed storage and data retrieval. A method of data storage, comprising: determining a target data storage group of a plurality of data storage groups based on a value of a first field of data, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions disposed therein; determining a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field of the data, the second field being different from the first field; and storing the data in the target data storage partition.

Description

Data storage method, data reading method, data storage device, electronic device and medium
Technical Field
The present disclosure relates to the field of data processing technologies, in particular to distributed storage and data retrieval, and more particularly to a data storage method, a data reading method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Distributed storage and hierarchical routing play an important role in the storage and reading of large amounts of data. For scenarios with massive data volumes or unevenly distributed data, a scheme is needed that can effectively perform hierarchical storage and reading of distributed data.
Disclosure of Invention
The present disclosure provides a data storage method, a data reading method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a data storage method, including: determining a target data storage group of a plurality of data storage groups based on a value of a first field of data, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions disposed therein; determining a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field of the data, the second field being different from the first field; and storing the data in the target data storage partition.
According to an aspect of the present disclosure, there is also provided a data storage device, including a storage group determining unit configured to determine a target data storage group of a plurality of data storage groups, each of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions provided therein, based on a value of a first field of data; a storage partition determination unit configured to determine a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field of the data, the second field being different from the first field; and a data storage unit configured to store data into the target data storage partition.
According to an aspect of the present disclosure, there is provided a data reading method including: determining a target data storage group of a plurality of data storage groups based on a value of a first field in a data processing request, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions disposed therein; determining a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field in the data processing request, the second field being different from the first field; and reading data from the target data storage partition based on the data processing request.
According to an aspect of the present disclosure, there is also provided a data reading apparatus including: a storage group determination unit configured to determine a target data storage group of a plurality of data storage groups, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions provided therein, based on a value of a first field in the data processing request; a storage partition determination unit configured to determine a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field in the data processing request, the second field being different from the first field; and a data reading unit configured to read data from the target data storage partition based on the data processing request.
According to an aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a data storage method or a data reading method according to embodiments of the present disclosure.
According to an aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a data storage method or a data reading method according to an embodiment of the present disclosure.
According to an aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements a data storage method or a data reading method according to embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, the number of layers of data sharding can be increased, and the fan-out of each layer can be reasonably adjusted.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 2 shows a flow diagram of a data storage method according to an embodiment of the present disclosure;
FIGS. 3A-3C illustrate schematic diagrams of data storage architectures according to embodiments of the present disclosure;
FIG. 4 shows a flow diagram of a data reading method according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a data storage device according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of a data reading apparatus according to an embodiment of the present disclosure; and
FIG. 7 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In an embodiment of the present disclosure, the server 120 may run one or more services or software applications that enable the data storage method or the data reading method according to the present disclosure to be performed.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with the server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
A user may use client devices 101, 102, 103, 104, 105, and/or 106 to read data, store data, retrieve data, delete data, and so forth. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., Google Chrome OS); or include various Mobile operating systems, such as Microsoft Windows Mobile OS, iOS, Windows Phone, Android. Portable handheld devices may include cellular telephones, smart phones, tablets, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays and other devices. The gaming system may include a variety of handheld gaming devices, internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or a smart cloud computing server or smart cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system and addresses the drawbacks of high management difficulty and weak service scalability in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The data store 130 may reside in various locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 130 may be of different types. In certain embodiments, the data store used by the server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to the command.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
A data storage method 200 according to an embodiment of the present disclosure is described below with reference to fig. 2.
At step 210, a target data storage group of a plurality of data storage groups is determined based on a value of a first field of data, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions disposed therein.
At step 220, a target data storage partition of the data storage partitions in the target data storage group is determined based on the value of the second field of the data. The second field of data is different from the first field of data.
At step 230, the data is stored into the target data storage partition.
According to an embodiment of the present disclosure, a method is provided for storing retrieval data in groups and performing targeted queries according to two-level routing parameters. The data storage method addresses the problem of hierarchical routing of data in distributed storage. In the prior art, when data is stored, a hash operation is performed on a single routing parameter or routing field to distribute the data across the data shards. As a result, when the stored data is recalled or read, the request must be fanned out to all possible data shards via that routing parameter, so a single fan-out has to process a large amount of data, and the small number of sharding layers leads to unstable processing results. In contrast, the data storage method according to the embodiment of the present disclosure provides a hierarchical routing mechanism in which the group of storage hardware is determined based on the first field, and the target data storage partition within the hardware group selected in the previous step is determined based on the second field. With such a multi-layer distribution mechanism, the fan-out interval of each layer can be reduced by increasing the number of layers, and the multi-layer distribution also supports flexible capacity adjustment and flexible data migration. As will be understood by those skilled in the art, fan-out refers to the number of subordinate modules directly called by a module; an excessive fan-out means that too many subordinate modules must be controlled and coordinated, which easily leads to an excessive computational load on a single module and even to operational errors. The scheme of the present disclosure can reasonably control the fan-out of each layer.
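A minimal sketch of such two-level routing is given below. The hash function, the helper name route, and the parameters group_num and slots_per_group are illustrative assumptions made for the example and are not prescribed by the present disclosure.

```python
import hashlib


def _stable_hash(value: str) -> int:
    # A deterministic hash so routing stays stable across processes;
    # the disclosure does not mandate a particular hash function.
    return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)


def route(first_field: str, second_field: str,
          group_num: int, slots_per_group: int) -> tuple[int, int]:
    """Two-level routing: the first field selects the data storage group,
    the second field selects the data storage partition inside that group."""
    group_id = _stable_hash(first_field) % group_num          # inter-group routing
    slot_id = _stable_hash(second_field) % slots_per_group    # intra-group routing
    return group_id, slot_id


# Example: route a commodity record by store ID (first field) and doc ID (second field).
group_id, slot_id = route("store-42", "doc-0001", group_num=4, slots_per_group=64)
```

Because both levels are computed independently, a read request that carries only the first field can still be confined to one group, while a request carrying both fields pinpoints a single partition.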
A data storage method 200 according to an embodiment of the present disclosure is further described below in conjunction with fig. 3A-3C.
According to some embodiments, each data storage group of the plurality of data storage groups may include a plurality of data storage devices. Having each group comprise a plurality of devices further deepens the data distribution hierarchy, ensures a more appropriate fan-out partitioning, and improves computational efficiency and stability.
For example, as shown in fig. 3A, the data storage group 300 may include data storage devices 301, 302, ..., 332, etc., which may correspond to scenarios where the amount of data is large, there are many data storage hardware devices, or the capacity of an individual data storage hardware device is small relative to the total amount of data.
It will be appreciated that while only one data storage group 300 is shown, there may be other data storage groups, each comprising at least one data storage device.
Further, it is understood that while the data storage group is shown to include 32 hardware storage devices, the disclosure is not so limited. The number of devices in each data storage group is not limited and may be chosen depending on the desired layering effect. For example, the data storage groups may be configured such that a first storage group includes only devices 301 and 302, and so on. At the other extreme, each of the data storage devices 301, 302, ..., 332 may itself be treated as a data storage "group", with the first field used for inter-group routing and each group containing one device.
The number of groups and possible ways of configuring the number of devices in each group will be described below in connection with further embodiments.
With continued reference to FIG. 3A, the data storage device may include a data storage area, which may be referred to as a DataSlice or DataRegion, abbreviated DR in the figures for purposes of distinction. A data storage area refers to a logical data slice. The data storage area is located on a hardware device and can cooperate with data storage areas on other hardware devices to implement the functions of distributed storage. As one example, in the commodity domain, data for commodity articles may be stored in a distributed manner in the storage region DR1-1 provided on the device 301, the storage region DR1-2 provided on the device 302, ..., the storage region DR1-32 provided on the device 332, and so on.
It will be appreciated by those skilled in the art that a hardware device may have one or more different data storage areas DR depending on its function. The device 301 may further include a storage area DR2-1 for storing goods purchase data and a storage area DR3-1 for storing goods recommendation data. Similarly, the device 302 may further include a storage area DR2-2 for storing goods purchase data and a storage area DR3-2 for storing goods recommendation data, and so on; the device 332 may further include a storage area DR2-32 for storing goods purchase data and a storage area DR3-32 for storing goods recommendation data, and so on. It is to be understood that the above configuration is merely an example, the present disclosure does not limit the number of data shards in a hardware device, and the data storage regions are described in detail below by taking as an example the case in which each hardware device includes only one data storage region (DR1-1, DR1-2, ..., DR1-32).
As described above, each data storage area may be partitioned into a plurality of data storage partitions. To provide individual control (read, write, etc.) of these data storage partitions, an associated data processing instance may be set for each data storage partition. A data processing instance refers to a basic unit carrying data, providing capabilities for data processing such as data storage and data retrieval, and one example may be the Lucene instance.
As described above, each data storage area may be partitioned into a plurality of data storage partitions. To provide individual control (read, write, etc.) of these data storage partitions, an associated data processing instance may be set up for each data storage partition. A data processing instance refers to a basic unit carrying data and providing data processing capabilities such as data storage and data retrieval; one example is a Lucene instance. According to some embodiments, determining a target data storage partition of the data storage partitions in the target data storage group comprises: determining a target data processing instance of a plurality of data processing instances, each of the plurality of data processing instances respectively associated with one of the data storage partitions in the target data storage group; and storing data into the target data storage partition comprises: storing the data into the target data storage partition using the target data processing instance. Thus, data can be distributed among multiple data storage partitions by scattering it to the target ones of the multiple data processing instances. For convenience, the basic unit carrying the data may be referred to as a DataSlot. Each piece of data is routed hierarchically into a unique DataSlot, which provides the underlying indexing capability for that data. That is, each hardware storage device includes one or more logical data storage regions (DataSlices), and each data storage region may include 1 to N data processing units (DataSlots). In other words, the data storage area on a hardware storage device may be composed of 1 to N data storage partitions. For example, as shown in FIG. 3B, each data storage region (e.g., DR1-1) may include a plurality of basic data units DS1-DS8, each corresponding to a data storage partition; it is understood that the numbers here are merely examples.
FIG. 3C shows the overall grouping of data under the above data architecture. The data storage areas DR (or DataSlices as described above) on the hardware devices are also divided into groups, called SliceGroups or DataGroups (abbreviated DG in the figures), corresponding to the groups of hardware devices. For example, the first group DG1 includes 32 data storage regions DR1-1 through DR1-32, the second group DG2 includes 32 data storage regions DR1-33 through DR1-64, and the third group DG3 includes 32 data storage regions DR1-65 through DR1-96. As described above, the method 200 may utilize the first field to identify, from the plurality of data groups, the target group in which to store the data. The number of groups of retrieval data, GroupNum, may be determined according to the maximum data length supported by a single DataSlot and the data size, as will be further described below. It is understood that, in the example of FIG. 3C, GroupNum is 3, and the present disclosure is not limited thereto. Further, although FIG. 3C illustrates each group as including 32 data storage areas, this is merely an example, and the present disclosure is not limited thereto.
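The hierarchy described above (data storage groups containing data storage regions, each holding several DataSlots) can be modeled with plain data classes, as in the following sketch. The class names, the helper build_group, and the region naming scheme are assumptions made for illustration only.

```python
from dataclasses import dataclass, field


@dataclass
class DataSlot:
    """Basic unit carrying data (e.g., backed by one Lucene instance)."""
    slot_id: int
    documents: dict = field(default_factory=dict)


@dataclass
class DataRegion:
    """Logical data slice (DataSlice / DR) located on one hardware device."""
    region_id: str
    slots: list


@dataclass
class DataGroup:
    """SliceGroup / DataGroup (DG): the group of regions addressed by the first field."""
    group_id: int
    regions: list


def build_group(group_id: int, regions_per_group: int, slots_per_region: int) -> DataGroup:
    regions = [
        DataRegion(
            region_id=f"DR1-{group_id * regions_per_group + r + 1}",
            slots=[DataSlot(slot_id=s) for s in range(slots_per_region)],
        )
        for r in range(regions_per_group)
    ]
    return DataGroup(group_id=group_id, regions=regions)


# Example matching FIGS. 3B-3C: three groups, 32 regions per group, 8 slots per region.
groups = [build_group(g, regions_per_group=32, slots_per_region=8) for g in range(3)]
```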
As the primary parameter for route location, the first field may be referred to as the SliceKey; the SliceGroup interval over which the retrieval data is distributed is determined by the SliceKey. Therefore, when the database is built, i.e., when data are initially acquired and the data storage method according to the embodiment of the present disclosure is performed for the first time to build the database, the data distribution interval can be controlled by using the SliceKey as an index.
According to some embodiments, the first field may be used to identify a retrieval dimension of the data. Those skilled in the art will understand that, as a term of art, a retrieval dimension means a retrieval range or retrieval interval, and a field identifying a retrieval dimension may be understood as a field indicating a retrieval range. For example, in a shopping scenario the field may be a store ID, in a library scenario it may be a user ID, in a knowledge-domain scenario it may be a discipline, and so on. Scattering the data across different hardware devices using the field that identifies the retrieval dimension allows multiple records belonging to the same store, the same user, etc. to be placed in the same hardware group, which preserves the distributed nature of the data while facilitating accurate fan-out in later queries.
Compared with data distribution based on a single routing parameter, as in a conventional ES deployment, the present scheme realizes hierarchical data routing with two levels of routing parameters. According to some embodiments, the second field may be a globally unique identifier of the data. The globally unique identifier is used to scatter the data into different partitions within the group, which guarantees a uniform distribution of the data to the greatest extent possible without requiring additional sharding parameters.
According to some embodiments, the data may be for an inverted index document, and wherein the number of data processing instances associated with each data storage group may be determined based on a data length of a maximum inverted index and an upper data limit supported by a single data processing instance.
Therefore, using the data processing scheme according to embodiments of the present disclosure is particularly effective for the problem of zipper length imbalance or data aggregation in inverted index documents.
Indexing based on inverted zippers (posting lists) is one of the important recall mechanisms of a retrieval system, and recall latency depends heavily on the lengths of the inverted zippers being merged. An inverted zipper refers to the candidate set that forms the posting list for an index term. Due to the clustered nature of index data, the distribution of zipper lengths can be extremely unbalanced; for example, some extra-long zippers may exist. In this case, the latency of a request that hits an extra-long zipper is high, which greatly degrades retrieval performance and user experience. With the data storage method according to the embodiment of the present disclosure, the zipper length can be dispersed across the data storage group. The method is particularly suitable for scenarios in which the index zippers are extremely unevenly distributed, and hierarchical routing based on zipper length prevents retrieval performance and user experience from being affected by extra-long zippers.
According to some embodiments, the number of data storage groups may be determined based on an estimated data growth multiple at the time of library creation, the number of data storage devices in each data storage group may be determined based on an initial amount of data at the time of library creation, a storage capacity limit of a single data storage device, and the number of data storage groups, and the number of data processing instances to be run on each data storage device may be determined based on the number of data processing instances in each data storage group.
In the conventional method, a corresponding hierarchy scale needs to be set manually and a priori for each routing parameter, so the cost of capacity planning is high, and a fixed routing-parameter hierarchy and capacity-expansion scheme cannot adapt well to scenarios in which the data distribution keeps changing. In contrast, with the configuration of the present disclosure, capacity can be planned flexibly.
An example of data capacity and grouping is described below. In this example, the database is built with a total of 5 billion records, each record carries a store ID, the total index size is 600 GB, the maximum inverted chain length of some index fields is 50 million, and a single machine provides 25 GB of storage. The maximum chain length supported by a basic retrieval unit, or data processing instance, is 1 million; that is, zippers longer than 1 million have an obvious impact on retrieval latency and stability.
From the above information, the following results can be calculated:
1) Number of data storage slices: 64. Since each basic retrieval unit supports a zipper length of 1 million and the longest index zipper is 50 million, at least 50 data storage slices are required. Allowing for appropriate headroom, 64 data storage slices may be set. It can be calculated that the longest zipper in each data storage slice is then only about 780,000.
2) Number of data groups DG: 4. This is determined based on the expected future data growth of 4 times. The number may be adjusted according to the properties of the data.
3) First field, or primary parameter: the store ID. Upon storage (and subsequent reading or retrieval), the data group into which the data falls (or the data group to be queried) is determined by hashing the store ID field.
4) Second field, or secondary parameter: the identifier doc_ID. The data is uniformly scattered within the determined data group by hashing this field.
5) Number of data storage slices in each data storage region: 8. Each data group DG is expected to carry a quarter of the total amount of data, and each data storage area is 25 GB in size (assuming only one functional data storage area per hardware device), so at least 6 data storage areas, i.e., 6 hardware devices, are required per group; allowing for appropriate headroom, this can be set to 8, meaning that the fan-out number is 8. Since each data group has been calculated to hold 64 data storage slices in total, each data storage area then needs to accommodate 8 data storage slices.
Thus, the 5 billion records can be distributed into 4 groups, each group having 8 data storage areas (data storage hardware devices) and 64 data storage slices.
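The capacity-planning arithmetic of this example can be reproduced with a short sketch. The helper name plan_capacity, the power-of-two rounding of the slot count, and the fixed headroom of 8 devices per group follow the numbers of the example above; they are assumptions for illustration, not general rules of the disclosure.

```python
import math


def plan_capacity(index_size_gb: float, max_chain_len: int, chain_len_per_slot: int,
                  device_capacity_gb: float, growth_multiple: int) -> dict:
    # 1) Slots per group: split the longest posting list so every slot stays under its limit.
    min_slots = math.ceil(max_chain_len / chain_len_per_slot)                 # 50
    slots_per_group = 1 << (min_slots - 1).bit_length()                       # 64, with headroom
    # 2) Number of data groups: headroom reserved for expected data growth.
    group_num = growth_multiple                                               # 4
    # 3) Devices (data storage areas) per group: the group's share of the index / device size.
    min_devices = math.ceil(index_size_gb / group_num / device_capacity_gb)   # 6
    devices_per_group = 8                                                     # chosen with headroom, as in the example
    # 4) Slots hosted by each device's data storage area.
    slots_per_device = slots_per_group // devices_per_group                   # 8
    return {
        "slots_per_group": slots_per_group,
        "group_num": group_num,
        "min_devices_per_group": min_devices,
        "devices_per_group": devices_per_group,
        "slots_per_device": slots_per_device,
    }


print(plan_capacity(index_size_gb=600, max_chain_len=50_000_000,
                    chain_len_per_slot=1_000_000, device_capacity_gb=25,
                    growth_multiple=4))
```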
Compared with a scheme that supports only a single-level index, the method according to the embodiment of the present disclosure balances the cost of accurate fan-out against the cost of capacity adjustment: it narrows the retrieval fan-out interval of the routing parameters and supports capacity expansion while avoiding extra-long zippers in the inverted index, and it also greatly reduces the cost of configuring and dynamically adjusting the routing parameters.
A data reading method 400 according to another embodiment of the present disclosure is described below in conjunction with fig. 4.
At step 410, a target data storage group of a plurality of data storage groups is determined based on a value of a first field in the data processing request, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions disposed therein.
At step 420, a target data storage partition of the data storage partitions in the target data storage group is determined based on a value of a second field in the data processing request, the second field being different from the first field.
At step 430, data is read from the target data storage partition based on the data processing request.
It is understood that the read logic of data read method 400 is in one-to-one correspondence with the store logic of data store method 200. A group (e.g., SliceGroup) in which the data resides is determined from a first field (e.g., SliceKey), after which the appropriate data storage partition for that group interval is fanned out according to a second field. Therefore, since the features of the data reading method 400 and various modifications described below are similar to those of the data storage method 200 and the modifications thereof, the similar effects are not repeated.
According to some embodiments, each data storage group of the plurality of data storage groups may include a plurality of data storage devices. For example, the data storage group 300 of FIG. 3A may include data storage devices 301, 302, ..., 332, and the like. This can further increase the level of data distribution. As will be appreciated by those skilled in the art, this may be adjusted depending on the application and the amount of data.
Referring back to fig. 3B, each data storage area may include a plurality of base data units, each corresponding to a data processing instance. According to some embodiments, determining a target data storage partition of the data storage partitions in the target data storage group may comprise: a target one of the plurality of data processing instances is determined, each of the plurality of data processing instances being respectively associated with one of the data storage partitions in the target data storage group. In such embodiments, reading data from the target data storage partition based on the data processing request may include: data is read from the target data storage partition based on the data processing request using the target data processing instance.
As the primary parameter for route location, the first field may be referred to as the SliceKey. According to some embodiments, the first field of the data processing request may be a field that identifies a retrieval dimension of the data processing request. The retrieval dimension may indicate a read or retrieval range. For example, in a shopping scenario the field may be a store ID, in a library scenario it may be a user ID, in a knowledge-domain scenario it may be a discipline, and so on. In a specific retrieval scenario, it is often desired to read only a specific range of data, such as the data of a certain store or the data of a certain user. Distribution based on the retrieval dimension is therefore very useful and supports retrieval over massive data scoped by dimension. During reading and querying, the query can be directed to a specified SliceGroup according to the routing parameter, so the fan-out interval of the data distribution is located precisely and full fan-out is avoided. As a result, the interval and scale to be screened during retrieval are greatly reduced, accurate fan-out is achieved, the amount of computation is reduced, and computational efficiency and stability are improved.
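The sketch below, reusing the _stable_hash helper and the DataGroup structure from the sketches above, shows how a read is first routed to one group by the first field and then either pinned to a single slot by the second field or fanned out only within that group. All helper names are illustrative assumptions.

```python
def read(first_field: str, second_field, groups: list):
    """Route a read by the first field to one DataGroup, then either pin a single
    slot with the second field or fan out across that group's slots only."""
    group = groups[_stable_hash(first_field) % len(groups)]
    slots = [slot for region in group.regions for slot in region.slots]
    if second_field is not None:
        # Point read: the second field (e.g., a doc ID) pins one data storage partition.
        target = slots[_stable_hash(second_field) % len(slots)]
        return target.documents.get(second_field)
    # Retrieval scoped by the first field (e.g., one store ID): fan out only over the
    # slots of the selected group, instead of over all groups (full fan-out is avoided).
    return [slot.slot_id for slot in slots]


# Example: a point read and a scoped retrieval against the structure built above.
read("store-42", "doc-0001", groups)
read("store-42", None, groups)
```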
In addition, using the second field (e.g., hashing the second field) to determine the particular partition in which the data is stored (and the particular data processing instance that should be invoked) enables more precise screening, saving computational resources. According to some embodiments, the second field of the data processing request may be a globally unique identifier of the data to be read by the data processing request.
As already mentioned above, in the field of data retrieval technology, the inverted zipper-based indexing technology is one of the important recall ways of retrieval systems. When the data reading or retrieval involves an ultra-long zipper, the request delay can be high, greatly affecting the retrieval performance and the user experience. The data reading method according to the embodiment of the present disclosure can effectively avoid such a defect. According to some embodiments, the data may be data for inverted index documents, and wherein the number of data processing instances associated with each data storage group may be determined based on a data length of a maximum inverted index and an upper data limit supported by a single data processing instance. Further, accurate positioning and capacity adjustment costs can also be balanced.
According to some embodiments, the number of data storage groups may be determined based on an estimated data growth multiple at the time of library creation, the number of data storage devices in each data storage group may be determined based on an initial amount of data at the time of library creation, a storage capacity limit of a single data storage device, and the number of data storage groups, and the number of data processing instances to be run on each data storage device may be determined based on the number of data processing instances in each data storage group. Configuration examples of the data storage architecture have been described in connection with method 200 and with reference to fig. 3A-3C and are therefore not described in detail herein.
Data storage device 500 according to an embodiment of the present disclosure is described below in conjunction with FIG. 5. The apparatus 500 may include a memory group determination unit 510, a memory partition determination unit 520, and a data storage unit 530. The storage group determination unit 510 may be configured to determine a target data storage group of a plurality of data storage groups, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions disposed therein, based on a value of a first field of data. The storage partition determination unit 520 may be configured to determine a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field of the data, the second field being different from the first field. Data storage unit 530 may be configured to store data into a target data storage partition.
A data reading apparatus 600 according to an embodiment of the present disclosure is described below with reference to fig. 6. The apparatus 600 may include a storage group determination unit 610, a storage partition determination unit 620, and a data reading unit 630. The storage group determination unit 610 may be configured to determine a target data storage group of a plurality of data storage groups, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions disposed therein, based on a value of a first field in the data processing request. The storage partition determination unit 620 may be configured to determine a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field in the data processing request, the second field being different from the first field. The data reading unit 630 may be configured to read data from the target data storage partition based on the data processing request.
It should be understood that the various modules of the apparatus 500 shown in fig. 5 may correspond to the various steps in the method 200 described with reference to fig. 2, and the various modules of the apparatus 600 shown in fig. 6 may correspond to the various steps in the method 400 described with reference to fig. 4. Thus, the operations, features and advantages described above with respect to the method 200 are equally applicable to the apparatus 500 and the modules comprised thereby, and the operations, features and advantages described above with respect to the method 400 are equally applicable to the apparatus 600 and the modules comprised thereby. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
According to an embodiment of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product.
Referring to fig. 7, a block diagram of an electronic device 700 will now be described. The electronic device 700 may be a server or a client of the present disclosure and is an example of a hardware device to which aspects of the present disclosure may be applied. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processing devices, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read-Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The input unit 706 may be any type of device capable of inputting information to the device 700; the input unit 706 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote controller. The output unit 707 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 708 may include, but is not limited to, magnetic or optical disks. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth (TM) devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 performs the various methods and processes described above, such as the method 200 or 400, and so on. For example, in some embodiments, methods 200 or 400, etc. may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into RAM703 and executed by the computing unit 701, one or more steps of the methods 200 or 400, etc. described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g., by means of firmware) to perform the method 200 or 400, or the like.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems and apparatus are merely exemplary embodiments or examples and that the scope of the present invention is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. It is important that as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (17)

1. A method of data storage, comprising:
determining a target data storage group of a plurality of data storage groups based on a value of a first field of data, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions disposed therein;
determining a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field of the data, the second field being different from the first field; and
storing the data in the target data storage partition.
2. The method of claim 1, wherein determining a target data storage partition of the data storage partitions in the target data storage group comprises:
determining a target one of a plurality of data processing instances, each of the plurality of data processing instances respectively associated with one of the data storage partitions in the target data storage group, and
storing the data into the target data storage partition comprises:
storing the data in the target data storage partition using the target data processing instance.
3. The method of claim 1 or 2, wherein the first field is used to identify a retrieval dimension of the data.
4. The method of any of claims 1-3, wherein the second field is a globally unique identifier of the data.
5. The method of any of claims 1-4, wherein the data is for an inverted index document, and wherein the number of data processing instances associated with each data storage group is determined based on the data length of the largest inverted index and an upper data limit supported by a single data processing instance.
6. The method of any of claims 1-5, wherein the number of the plurality of data storage groups is determined based on an estimated data growth multiple at the time of database establishment, the number of data storage devices in each data storage group is determined based on an initial amount of data at the time of database establishment, a storage capacity limit of a single data storage device, and the number of data storage groups, and the number of data processing instances to be run on each data storage device is determined based on the number of data processing instances in each data storage group.
7. A data reading method comprising:
determining a target data storage group of a plurality of data storage groups based on a value of a first field in a data processing request, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions disposed therein;
determining a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field in the data processing request, the second field being different from the first field; and
reading data from the target data storage partition based on the data processing request.
8. The method of claim 7, wherein determining a target data storage partition of the data storage partitions in the target data storage group comprises:
determining a target data processing instance of a plurality of data processing instances, each of the plurality of data processing instances being respectively associated with one of the data storage partitions in the target data storage group, and
reading data from the target data storage partition based on the data processing request comprises:
reading, using the target data processing instance, data from the target data storage partition based on the data processing request.
9. The method of any of claims 7-8, wherein the first field of the data processing request is used to identify a retrieval dimension of the data processing request.
10. The method of any of claims 7-9, wherein the second field of the data processing request is a globally unique identifier of the data to be read by the data processing request.
11. The method of any of claims 7-10, wherein the data is for an inverted index document, and wherein the number of data processing instances associated with each data storage group is determined based on the data length of the largest inverted index and an upper data limit supported by a single data processing instance.
12. The method of any of claims 7-11, wherein the number of the plurality of data storage groups is determined based on an estimated data growth multiple at the time of database establishment, the number of data storage devices in each data storage group is determined based on an initial amount of data at the time of database establishment, a storage capacity limit of a single data storage device, and the number of data storage groups, and the number of data processing instances to be run on each data storage device is determined based on the number of data processing instances in each data storage group.
13. A data storage apparatus, comprising:
a storage group determination unit configured to determine, based on a value of a first field of data, a target data storage group of a plurality of data storage groups, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions provided therein;
a storage partition determination unit configured to determine a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field of the data, the second field being different from the first field; and
a data storage unit configured to store the data into the target data storage partition.
14. A data reading apparatus comprising:
a storage group determination unit configured to determine, based on a value of a first field in a data processing request, a target data storage group of a plurality of data storage groups, each data storage group of the plurality of data storage groups including at least one data storage device and having two or more data storage partitions provided therein;
a storage partition determination unit configured to determine a target data storage partition of the data storage partitions in the target data storage group based on a value of a second field in the data processing request, the second field being different from the first field; and
a data read unit configured to read data from the target data storage partition based on the data processing request.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6 or 7-12.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-6 or 7-12.
17. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-6 or 7-12.
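
To make the two-level routing of claims 1-2 and 7-8 concrete, the following is a minimal sketch in Python. The hash-modulo mapping, the example field values ("term:database" as the retrieval-dimension field and "doc-000123" as the globally unique identifier), and the in-memory dictionaries standing in for data processing instances are illustrative assumptions; the claims do not prescribe a particular hashing scheme or storage backend.

import hashlib
from typing import Any, Dict, List

def bucket(value: str, buckets: int) -> int:
    # Stable mapping from a field value to a bucket index.
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets

class TwoFieldRouter:
    def __init__(self, num_groups: int, partitions_per_group: int):
        self.num_groups = num_groups
        self.partitions_per_group = partitions_per_group
        # Each inner dict stands in for a data processing instance bound to
        # one data storage partition of one data storage group.
        self.partitions: List[List[Dict[str, Any]]] = [
            [dict() for _ in range(partitions_per_group)] for _ in range(num_groups)
        ]

    def route(self, first_field: str, second_field: str):
        group = bucket(first_field, self.num_groups)                 # retrieval dimension selects the group
        partition = bucket(second_field, self.partitions_per_group)  # unique identifier selects the partition
        return group, partition

    def store(self, first_field: str, second_field: str, data: Any) -> None:
        group, partition = self.route(first_field, second_field)
        self.partitions[group][partition][second_field] = data

    def read(self, first_field: str, second_field: str) -> Any:
        group, partition = self.route(first_field, second_field)
        return self.partitions[group][partition].get(second_field)

router = TwoFieldRouter(num_groups=4, partitions_per_group=8)
router.store("term:database", "doc-000123", {"title": "example document"})
print(router.read("term:database", "doc-000123"))

Because the group is chosen from the first field alone, all data sharing a retrieval dimension lands in a single group and a query on that dimension need not fan out to every group, while the second field spreads that data evenly across the partitions inside the group.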
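Claims 5-6 (and the corresponding claims 11-12) name only the inputs from which each deployment quantity is derived, so the ceiling divisions below are one assumed reading of those rules, worked through with made-up example figures.

import math

initial_data_tb = 20.0             # initial amount of data at database establishment
growth_multiple = 4                # estimated data growth multiple
device_capacity_tb = 2.0           # storage capacity limit of a single data storage device
largest_inverted_index_gb = 64.0   # data length of the largest inverted index
per_instance_limit_gb = 16.0       # upper data limit supported by a single data processing instance

num_groups = growth_multiple  # group count tracks the expected growth multiple
devices_per_group = math.ceil(initial_data_tb / (device_capacity_tb * num_groups))
instances_per_group = math.ceil(largest_inverted_index_gb / per_instance_limit_gb)
# Claim 6 only says the per-device instance count follows from the per-group count;
# dividing by devices_per_group is an added assumption here.
instances_per_device = math.ceil(instances_per_group / devices_per_group)

print(num_groups, devices_per_group, instances_per_group, instances_per_device)
# With the figures above: 4 groups, 3 devices per group, 4 instances per group, 2 instances per device.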
CN202110649490.8A 2021-06-10 2021-06-10 Data storage method, data reading device, electronic equipment and medium Active CN113297226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110649490.8A CN113297226B (en) 2021-06-10 2021-06-10 Data storage method, data reading device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN113297226A true CN113297226A (en) 2021-08-24
CN113297226B (en) 2023-10-03

Family

ID=77327879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110649490.8A Active CN113297226B (en) 2021-06-10 2021-06-10 Data storage method, data reading device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN113297226B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101578579A (en) * 2007-01-10 2009-11-11 微软公司 Taxonomy object modeling
US20160011911A1 (en) * 2014-07-10 2016-01-14 Oracle International Corporation Managing parallel processes for application-level partitions
US20170068678A1 (en) * 2015-09-03 2017-03-09 Oracle International Corporation Methods and systems for updating a search index
CN111581216A (en) * 2020-05-09 2020-08-25 北京百度网讯科技有限公司 Data processing method, device, equipment and storage medium
CN112347118A (en) * 2021-01-08 2021-02-09 阿里云计算有限公司 Data storage, query and generation method, database engine and storage medium
CN112925792A (en) * 2021-03-26 2021-06-08 北京中经惠众科技有限公司 Data storage control method, device, computing equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QI, Xiaodong; ZHANG, Zhao; JIN, Cheqing; ZHOU, Aoying: "BFT-Store: Storage Partition for Permissioned Blockchain via Erasure Coding", 2020 IEEE 36th International Conference on Data Engineering (ICDE) *
JIANG, Shaohua; WU, Zheng: "Research on cloud storage and retrieval methods for BIM spatial relationship data" (BIM空间关系数据的云存储与检索方法研究), Journal of Graphics (图学学报), no. 05 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002017A (en) * 2022-05-19 2022-09-02 北京思特奇信息技术股份有限公司 Two-field routing method of distributed database

Also Published As

Publication number Publication date
CN113297226B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
US9767174B2 (en) Efficient query processing using histograms in a columnar database
US20190385347A1 (en) Graph partitioning for massive scale graphs
US9501562B2 (en) Identification of complementary data objects
US11665064B2 (en) Utilizing machine learning to reduce cloud instances in a cloud computing environment
JP5950267B2 (en) Database management apparatus, database management method, and storage medium
CN112532748B (en) Message pushing method, device, equipment, medium and computer program product
US10417192B2 (en) File classification in a distributed file system
US11307984B2 (en) Optimized sorting of variable-length records
US10885009B1 (en) Generating aggregate views for data indices
CN112148693A (en) Data processing method, device and storage medium
US20210149903A1 (en) Successive database record filtering on disparate database types
CN113297226B (en) Data storage method, data reading device, electronic equipment and medium
US11816130B2 (en) Generating and controlling an elastically scalable stamp data structure for storing data
US11714829B2 (en) Parallel calculation of access plans delimitation using table partitions
Dory Study and Comparison of Elastic Cloud Databases: Myth or Reality?
CN115878627A (en) Database partitioning method, device, equipment and storage medium
CN113254469A (en) Data screening method and device, equipment and medium
CN115981839A (en) Memory allocation method and device, equipment and medium
CN113239258A (en) Method, device, electronic equipment and storage medium for providing query suggestion
CN115809304A (en) Method and device for analyzing field-level blood margin, computer equipment and storage medium
CN115587228A (en) Object query method, object storage method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant