CN107276912B - Memory, message processing method and distributed storage system - Google Patents


Info

Publication number
CN107276912B
CN107276912B (application CN201610213662.6A)
Authority
CN
China
Prior art keywords
target data
key
memory
message
computing chip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610213662.6A
Other languages
Chinese (zh)
Other versions
CN107276912A (en)
Inventor
陈昊
郭海涛
张羽
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201610213662.6A
Publication of CN107276912A
Application granted
Publication of CN107276912B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/9063: Intermediate storage in different physical parts of a node or terminal
    • H04L 49/9078: Intermediate storage in different physical parts of a node or terminal using an external memory or storage device
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/22: Parsing or analysis of headers

Abstract

The invention discloses a memory, a message processing method, and a distributed storage system, belonging to the field of data storage. The memory includes a computing chip and a storage medium. The computing chip parses a first KV message received through a Key-Value (KV) interface to obtain a Key and operation information for target data, reads the target data from the storage medium according to the Key, performs computing processing on the target data according to the operation information to obtain processed target data, generates a second KV message with the processed target data as its Value, and sends the second KV message through the KV interface. This solves the problem in the related art of large network overhead caused by conventional memories having only storage capability, which forces a large amount of data to be gathered at an application node with computing capability to complete the computing processing of target data. The memory itself performs the computing processing and returns the processed target data directly to the application node, thereby reducing network overhead.

Description

Memory, message processing method and distributed storage system
Technical Field
The embodiment of the invention relates to the field of data storage, in particular to a memory, a message processing method and a distributed storage system.
Background
When a non-relational database (Not Only SQL, NoSQL) is used in the distributed storage system, Key-Value (KV) storage is usually used as a data storage mechanism.
Referring to fig. 1, current distributed storage systems typically include two layers: an application layer 10 and a storage layer 20. The application layer 10 includes a plurality of application nodes 12 deployed in a distributed manner, each provided with a computing module for processing data; the storage layer 20 includes a plurality of storages 22 deployed in a distributed manner, each for storing data. Each application node 12 and each storage 22 are connected by a wired network or an optical fiber. The storage 22 provides the application node 12 with a KV interface, which is a software interface. The application node 12 may send a KV message to the storage 22 through the KV interface. A KV message mainly includes a Key field and a Value field: the Key field indicates the location information of data in the storage, and the Value field indicates the value or attribute of the data. Interaction between the application node 12 and the storage 22 is implemented through the KV interface and KV messages.
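The Key/Value message layout described above can be sketched in C. This is a minimal illustration only; the field names, buffer sizes, and helper function are assumptions of this sketch, not part of the patent.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative sizes; the patent does not specify field widths. */
#define KV_KEY_MAX   64
#define KV_VALUE_MAX 256

/* A KV message: a Key field locating the data in the storage,
 * and a Value field carrying the data itself. */
struct kv_message {
    char   key[KV_KEY_MAX];     /* location information of the data */
    char   value[KV_VALUE_MAX]; /* value or attribute of the data */
    size_t value_len;
};

/* Build a KV message carrying `len` bytes of data under `key`. */
static struct kv_message kv_make(const char *key, const char *value, size_t len) {
    struct kv_message m;
    memset(&m, 0, sizeof m);
    strncpy(m.key, key, KV_KEY_MAX - 1);
    if (len > KV_VALUE_MAX)
        len = KV_VALUE_MAX;
    memcpy(m.value, value, len);
    m.value_len = len;
    return m;
}
```

An application node would fill such a structure and send it through the KV interface; the storage locates the data by the Key field.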
Existing KV messages mainly support data reading and data writing, implementing the create, read, update, and delete operations. When data in the storage 22 needs computing processing, the application node 12 first sends a KV message for reading the target data to the storage 22 through the KV interface; the storage 22 carries the read target data in another KV message and transmits it to the application node 12 through the KV interface; the application node 12 then performs the computing processing on the target data.
Some computing processing involves a large amount of target data. For example, to sort one hundred thousand pieces of target data and obtain the top 10, the storage 22 must transmit all one hundred thousand pieces to the application node 12, which then performs the sorting computation. Aggregating the data from the storages 22 at the application node clearly incurs large network overhead, especially when the target data is spread across multiple storages 22.
Disclosure of Invention
In order to solve the problem of high network overhead caused by the fact that a memory needs to gather target data to an application node with computing capacity for computing processing, the embodiment of the invention provides the memory, a message processing method and a distributed storage system. The technical scheme is as follows:
in a first aspect, a memory is provided, the memory comprising a computing chip and a storage medium; the computing chip is used for receiving the first KV message through a Key-Value (KV) interface, analyzing the first KV message and obtaining Key and operation information of target data; the computing chip is used for reading target data from the storage medium according to Key; the computing chip is used for computing the target data according to the operation information to obtain processed target data; and the computing chip is used for generating a second KV message by taking the processed target data as Value, and transmitting the second KV message through the KV interface.
In this memory, the computing chip can perform computing processing on the target data according to the operation information and feed back the processed target data. This solves the problem in the related art that a conventional memory has only storage capability, so that when target data needs computing processing, a large amount of data must be gathered at an application node with computing capability, incurring high network overhead. The memory performs the computing processing itself and returns the processed target data directly to the application node, thereby reducing network overhead.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the computing chip is configured to parse the first KV packet, obtain a Key field and a Context field, obtain a Key of the target data from the Key field, and obtain the operation information from the Context field.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the operation information includes: the type of operation, or, the type of operation and the operating parameters; the operation types include: at least one of format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation; the operation type and the operation parameter include at least one of the following information: format conversion and format conversion rules, encryption and encryption keys, encryption and encryption algorithm types, decryption and decryption keys, decryption and decryption algorithm types, compression and compression ratios, compression and compression algorithm types, comparison and comparison rules, sorting and ordering rules, hash calculations and hash algorithm types.
In a second aspect, a memory is provided, the memory comprising: a computing chip and a storage medium; the computing chip is used for receiving the KV message through the KV interface, analyzing the KV message and obtaining Key of target data, the target data and operation information; the computing chip is used for computing the target data according to the operation information to obtain processed target data; and the computing chip is used for writing the processed target data into the storage medium according to the Key.
In this memory, the computing chip can perform computing processing on the target data according to the operation information and store the result. This solves the problem in the related art that a conventional memory has only storage capability, so that when target data needs computing processing, an application node must gather a large amount of data at other application nodes with computing capability to complete the processing and then send the processed data to the memory for storage; transferring large amounts of data among multiple application nodes incurs high network overhead. Instead, the application node can send the target data directly to the memory, which performs the computing processing and stores the result in the storage medium, thereby reducing network overhead.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the computing chip is configured to parse the KV message to obtain a Key field, a Value field, and a Context field, obtain the Key of the target data from the Key field, obtain the target data from the Value field, and obtain the operation information from the Context field.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the operation information includes: the type of operation, or, the type of operation and the operating parameters; the operation types include: at least one of format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation; the operation type and the operation parameter include at least one of the following information: format conversion and format conversion rules, encryption and encryption keys, encryption and encryption algorithm types, decryption and decryption keys, decryption and decryption algorithm types, compression and compression ratios, compression and compression algorithm types, comparison and comparison rules, sorting and ordering rules, hash calculations and hash algorithm types.
In a third aspect, a message processing method is provided, where the message processing method is applied to a memory having a computing chip and a storage medium, and the method includes: receiving a first KV message through a KV interface; analyzing the first KV message to obtain Key and operation information of the target data; reading target data from a storage medium according to Key; calculating the target data according to the operation information to obtain processed target data; and generating a second KV message by using the processed target data as Value, and transmitting the second KV message through the KV interface.
In this message processing method, the computing chip of the memory can perform computing processing on the target data according to the operation information and feed back the processed target data. This solves the problem in the related art that a conventional memory has only storage capability, so that when target data needs computing processing, a large amount of data must be gathered at an application node with computing capability, incurring high network overhead. The memory performs the computing processing and returns the processed target data directly to the application node, thereby reducing network overhead.
With reference to the third aspect, in a first possible implementation manner of the third aspect, the analyzing the first KV packet to obtain the Key and the operation information of the target data includes: analyzing the first KV message to obtain a Key field and a Context field; obtaining a Key of the target data from a Key field; the operation information is obtained from the Context field.
With reference to the first possible implementation manner of the third aspect, in a second possible implementation manner of the third aspect, the operation information includes: the type of operation, or, the type of operation and the operating parameters; the operation types include: at least one of format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation; the operation type and the operation parameter include at least one of the following information: format conversion and format conversion rules, encryption and encryption keys, encryption and encryption algorithm types, decryption and decryption keys, decryption and decryption algorithm types, compression and compression ratios, compression and compression algorithm types, comparison and comparison rules, sorting and ordering rules, hash calculations and hash algorithm types.
In a fourth aspect, a message processing method is provided, applied to a memory having a computing chip and a storage medium, the method including: receiving a KV message through a KV interface; parsing the KV message to obtain the Key of the target data, the target data, and operation information; performing computing processing on the target data according to the operation information to obtain processed target data; and writing the processed target data into the storage medium according to the Key.
In this message processing method, the computing chip of the memory can perform computing processing on the target data according to the operation information and store the result. This solves the problem in the related art that a conventional memory has only storage capability, so that when target data needs computing processing, an application node must gather a large amount of data at other application nodes with computing capability to complete the processing and then send the processed data to the memory for storage; transferring large amounts of data among multiple application nodes incurs high network overhead. Instead, the application node can send the target data directly to the memory, which performs the computing processing and stores the result in the storage medium, thereby reducing network overhead.
With reference to the fourth aspect, in a first possible implementation manner of the fourth aspect, parsing the KV packet to obtain a Key, target data, and operation information of the target data includes: analyzing the KV message to obtain a Key field, a Value field and a Context field; obtaining a Key of the target data from a Key field; obtaining target data from a Value field; the operation information is obtained from the Context field.
With reference to the first possible implementation manner of the fourth aspect, in a second possible implementation manner of the fourth aspect, the operation information includes: the type of operation, or, the type of operation and the operating parameters; the operation types include: at least one of format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation; the operation type and the operation parameter include at least one of the following information: format conversion and format conversion rules, encryption and encryption keys, encryption and encryption algorithm types, decryption and decryption keys, decryption and decryption algorithm types, compression and compression ratios, compression and compression algorithm types, comparison and comparison rules, sorting and ordering rules, hash calculations and hash algorithm types.
In a fifth aspect, a distributed storage system is provided, the system comprising: the system comprises at least one application node and at least one memory, wherein the application node is connected with the memory through a network; the memory is a memory as in the first or second aspect described above; the application node is used for sending the KV message to the memory; the KV message carries the Key and the operation information of the target data, or the KV message carries the Key, the target data and the operation information of the target data.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of a conventional distributed storage system in the related art;
FIG. 2A is a schematic diagram of a memory according to an exemplary embodiment of the present invention;
FIG. 2B is a schematic diagram of a memory according to an exemplary embodiment of the present invention;
FIG. 2C is a schematic diagram of a memory according to an exemplary embodiment of the present invention;
FIG. 3 is a schematic diagram of a memory according to an exemplary embodiment of the present invention;
FIG. 4 is a schematic diagram of a memory according to another exemplary embodiment of the present invention;
FIG. 5 is a block diagram of a distributed storage system provided by an exemplary embodiment of the present invention;
FIG. 6 is a block diagram of a distributed storage system provided by another exemplary embodiment of the present invention;
FIG. 7 is a block diagram of a distributed storage system provided by another exemplary embodiment of the present invention;
FIG. 8 is a flowchart of a message processing method according to an exemplary embodiment of the present invention;
FIG. 9 is a flowchart of a message processing method according to an exemplary embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described below with reference to the accompanying drawings.
Referring to fig. 2A, 2B and 2C, a schematic structural diagram of a memory 20 according to an exemplary embodiment of the present invention is shown, where the memory 20 includes: a computing chip 210 and a storage medium 220.
Optionally, the memory 20 is a storage medium 220 with a computing chip 210 built therein, as shown in fig. 2A.
Optionally, the memory 20 is a storage medium group 220 (a set of storage media) with a computing chip 210 built therein, as shown in fig. 2B.
Optionally, the memory 20 is a combination of an external computing chip 210 and at least one storage medium 220, as shown in fig. 2C, and the number of the storage media is not limited in this embodiment.
The computing chip 210 is a chip having data reading, writing, and/or computing capabilities.
The storage medium 220 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, a hard disk, flash memory, or a magnetic or optical disk. In the embodiment of the present invention, the storage medium 220 is, for example, a hard disk or a hard disk group; the computing chip 210 may be a single chip with computing capability, and is either integrated in the hard disk or hard disk group or electrically connected to it.
Referring to fig. 3, a schematic structural diagram of a memory 30 according to an exemplary embodiment of the present invention is shown. The memory 30 includes a computing chip 310 and a storage medium 320 that are electrically connected, corresponding to the form shown in fig. 2C.
The computing chip 310 includes a processing module 330 and a storage module 340.
The processing module 330 includes one or more processing cores, and the processing module 330 executes various functional applications and information processing by running software programs and units.
The storage module 340 may store at least one application unit 350, and the application unit 350 may include a first Key-Value (KV) interface 351 and a calculation unit 352.
The first KV interface 351 is an application unit implemented in software; it is the novel KV interface used in this embodiment.
The processing module 330 calls the first KV interface 351 in the storage module 340 to receive and/or send a KV message. The KV message is a novel KV message used to instruct that computing processing be performed on target data in the storage medium and the processed target data be read, or that the processed target data be written into the storage medium.
When the KV message instructs the memory to perform computing processing on target data and read the processed target data, the KV message includes a Key field and a Context field: the Key field indicates the Key of the target data, the Context field indicates the operation information, and the target data is data already stored in the storage medium.
When the KV message instructs the memory to perform computing processing on target data and store the processed target data, the KV message includes a Key field, a Value field, and a Context field: the Key field indicates the Key of the target data, the Context field indicates the operation information, and the Value field indicates the target data, which is data carried in the KV message and to be stored in the memory.
The processing module 330 invokes the calculation unit 352 in the storage module 340 to perform calculation processing on the target data.
Based on the memory shown in fig. 3, in this embodiment, a description is given by taking an example that the first KV message is used to instruct to perform calculation processing on target data in the memory, and read the processed target data.
And the computing chip is used for receiving the first KV message through the first KV interface, analyzing the first KV message and obtaining Key and operation information of the target data.
Optionally, the first KV message includes a Key field and a Context field.
And the computing chip is used for analyzing the first KV message, acquiring a Key field and a Context field, acquiring a Key of the target data from the Key field, and acquiring the operation information from the Context field.
In an illustrative example, the data structure of the first KV message received by the computing chip through the first KV interface is as follows:
[data structure figure in the original publication, not reproduced here]
The data structure of the first KV message includes POOL_ID_S, VOL_KEY_S, and CONTEXT_S. When the memory 30 includes a plurality of storage media 320, POOL_ID_S indicates the storage medium where the target data is located; for example, different storage media are represented by different values in POOL_ID_S, and the computing chip determines the storage medium where the target data is located according to the value parsed from POOL_ID_S. For example, if the memory 30 includes a storage medium 1 and a storage medium 2, a value of 1 parsed from POOL_ID_S indicates that the target data is in the storage medium 1, and a value of 2 indicates that the target data is in the storage medium 2.
VOL_KEY_S indicates the Key of the target data, and CONTEXT_S indicates the operation information; the computing chip parses VOL_KEY_S to obtain the Key field, and parses the Context field from CONTEXT_S.
Optionally, the first KV message may further include VOL_VALUE_S, which indicates the target data; the computing chip may parse the Value field from VOL_VALUE_S. In this embodiment, the Value field is empty.
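Since the original publication shows the message layout only as a figure, the fields named in the surrounding text can be collected into a hedged C reconstruction. The field types and buffer sizes below are assumptions of this sketch; only the names POOL_ID_S, VOL_KEY_S, VOL_VALUE_S, and CONTEXT_S come from the description.

```c
#include <stdint.h>

typedef uint16_t UINT16;
typedef uint32_t UINT32;

/* Operation information carried in the message (matches the struct shown
 * further below): opcode is the operation type, kv_private the optional
 * operation parameters. */
struct CONTEXT_S {
    UINT16 opcode;
    void  *kv_private;
};

/* Hedged reconstruction of the first KV message's data structure. */
struct KV_MSG_S {
    UINT32           pool_id;    /* POOL_ID_S: storage medium holding the data */
    char             key[64];    /* VOL_KEY_S: Key of the target data */
    char             value[256]; /* VOL_VALUE_S: target data; empty on a read */
    struct CONTEXT_S context;    /* CONTEXT_S: operation information */
};
```

On a read request, only pool_id, key, and context are filled; on a write request, value carries the target data as well.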
In the first KV message, the data structure of CONTEXT_S is schematically as follows:
struct CONTEXT_S {
    UINT16 opcode;
    void *kv_private;
};
Here, the opcode field indicates the operation type, and the kv_private field indicates the operation parameter; the kv_private field is optional.
In the above example, the computing chip obtains the operation type by parsing the opcode field in the Context field, and obtains the operation parameter by parsing the kv_private field in the Context field.
Optionally, the operation information includes: the type of operation, or, the type of operation and the operating parameters.
Optionally, the operation types include: at least one of format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation;
optionally, the operation type and the operation parameter include at least one of the following information: format conversion and format conversion rules, encryption and encryption keys, encryption and encryption algorithm types, decryption and decryption keys, decryption and decryption algorithm types, compression and compression ratios, compression and compression algorithm types, comparison and comparison rules, sorting and ordering rules, hash calculations and hash algorithm types.
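The operation types listed above could be encoded in the opcode field along the following lines. The numeric codes and names here are illustrative assumptions; the patent does not specify actual opcode values.

```c
#include <string.h>

/* Illustrative opcode values for the operation types listed above. */
enum kv_opcode {
    OP_FORMAT_CONVERT = 1,
    OP_ENCRYPT,
    OP_DECRYPT,
    OP_COMPRESS,
    OP_COMPARE,
    OP_SORT,
    OP_HASH
};

/* Map an opcode to a human-readable operation-type name. */
static const char *kv_opcode_name(unsigned op) {
    switch (op) {
    case OP_FORMAT_CONVERT: return "format conversion";
    case OP_ENCRYPT:        return "encryption";
    case OP_DECRYPT:        return "decryption";
    case OP_COMPRESS:       return "compression";
    case OP_COMPARE:        return "comparison";
    case OP_SORT:           return "sorting";
    case OP_HASH:           return "hash calculation";
    default:                return "unknown";
    }
}
```

The matching operation parameter (conversion rule, key, algorithm type, ratio, or sorting rule) would then travel in the kv_private field.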
When the operation type is format conversion, it may be a currency conversion, such as converting RMB to US dollars, with the format conversion rule being the exchange rate, for example converting at 1 RMB = 0.1544 dollars; the format conversion may also be converting a decimal value into a hexadecimal value, and so on.
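The two format-conversion examples above can be sketched as follows. The exchange rate 1 RMB = 0.1544 USD comes from the text; the function names are illustrative, and working in integer cents is this sketch's choice to avoid floating-point rounding.

```c
#include <stdio.h>
#include <string.h>

/* Currency conversion at 1 RMB = 0.1544 USD, in integer cents. */
static long rmb_cents_to_usd_cents(long rmb_cents) {
    /* 100.00 RMB (10000 cents) -> 15.44 USD (1544 cents) */
    return rmb_cents * 1544 / 10000;
}

/* Decimal-to-hexadecimal conversion of a numeric value to text. */
static void decimal_to_hex(unsigned value, char *out, size_t out_len) {
    snprintf(out, out_len, "%X", value);
}
```

Either conversion would run inside the computing chip, with the rule (the rate or the target base) supplied as the operation parameter.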
When the operation type is sorting, numerical values can be sorted, with sorting rules such as largest-to-smallest or smallest-to-largest; times can also be sorted, with sorting rules such as most recent first or oldest first.
And the computing chip is used for reading the target data from the storage medium according to the Key.
Optionally, the memory uses KV storage as its data storage mechanism: the Key indicates the location where the target data is stored, and the Key is stored in correspondence with the target data, so the computing chip obtains the target data corresponding to the Key from the storage medium.
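The Key-to-data correspondence described above can be sketched minimally in C. A linear scan stands in for whatever index a real storage medium would use; the structure and function names are illustrative assumptions.

```c
#include <stddef.h>
#include <string.h>

/* A (Key, data) pair as stored in the storage medium. */
struct kv_pair {
    const char *key;
    const char *data;
};

/* Fetch the data stored in correspondence with `key`, or NULL if absent. */
static const char *kv_read(const struct kv_pair *table, size_t n, const char *key) {
    for (size_t i = 0; i < n; i++)
        if (strcmp(table[i].key, key) == 0)
            return table[i].data;
    return NULL; /* no data stored under this Key */
}
```

The computing chip would perform such a lookup before applying the requested operation to the returned data.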
And the computing chip is used for computing the target data according to the operation information to obtain the processed target data.
Optionally, the computing chip calls the calculation unit 352 to perform computing processing on the target data.
In an illustrative example, the Key field included in the first KV message is "Andy's consumption records", indicating the consumption records of the user Andy; the opcode field in the Context field is "sorting", indicating that the operation type is a sort operation; and the kv_private field is "sort by time from most recent to oldest, Top(10)", indicating that the sorting rule is to sort by time from most recent to oldest and take the first 10 records. The first KV message thus means "look up the last 10 consumption records of the user Andy". The computing chip obtains the data corresponding to the Key from the storage medium according to the Key field in the first KV message, namely all consumption records of the user Andy, sorts them by time from most recent to oldest, and takes the first 10 records as the processed target data, namely Andy's last 10 consumption records.
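The sort-and-Top(10) step in this example can be sketched with the C standard library's qsort. The record layout and function names are illustrative assumptions; only the "newest first, keep 10" rule comes from the text.

```c
#include <stdlib.h>

/* One consumption record; fields are illustrative. */
struct record {
    long timestamp; /* when the consumption happened */
    long amount;    /* what was spent */
};

/* qsort comparator: newer records (larger timestamps) sort first. */
static int by_time_desc(const void *a, const void *b) {
    long ta = ((const struct record *)a)->timestamp;
    long tb = ((const struct record *)b)->timestamp;
    return (ta < tb) - (ta > tb);
}

/* Sort records newest-first in place; return how many to keep (at most `top`). */
static size_t top_recent(struct record *recs, size_t n, size_t top) {
    qsort(recs, n, sizeof *recs, by_time_desc);
    return n < top ? n : top;
}
```

With top = 10, the first ten entries of the sorted array are exactly the processed target data the computing chip would place in the second KV message.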
And the computing chip is used for generating a second KV message by taking the processed target data as Value, and transmitting the second KV message through the first KV interface.
Optionally, the second KV message includes a Value field, or includes a Key field and a Value field, where the value of the Value field is the processed target data.
In the above example, the 10 records included in the processed target data are used as the Value of the second KV message, and the second KV message is sent; that is, the last 10 consumption records of the user Andy are read from the memory.
In summary, in the memory provided in this embodiment, the computing chip can perform computing processing on the target data according to the operation information and feed back the processed target data. This solves the problem in the related art that a conventional memory has only storage capability, so that when target data needs computing processing, a large amount of data must be gathered at an application node with computing capability, incurring high network overhead. The memory performs the computing processing on the target data and returns the processed target data directly to the application node, thereby reducing network overhead.
Based on the memory shown in fig. 3, in this embodiment, an example will be described in which the third KV message is used to instruct the calculation processing on the target data, and the processed target data is written into the memory.
And the computing chip is used for receiving the third KV message through the first KV interface and parsing the third KV message to obtain the Key of the target data, the target data, and the operation information.
In this embodiment, the third KV packet includes a Key field, a Value field, and a Context field.
And the computing chip is used for analyzing the third KV message to obtain a Key field, a Value field and a Context field, obtaining a Key of the target data from the Key field, obtaining the target data from the Value field and obtaining the operation information from the Context field.
In an exemplary example, the data structure of the third KV packet received by the computing chip through the first KV interface is as follows:
(data-structure diagram not reproduced here)
The third KV message includes POOL_ID_S, VOL_KEY_S, VOL_VALUE_S, and CONTEXT_S; the specific meaning of each field may be understood with reference to the above embodiment, and the data structure of CONTEXT_S is the same as that of CONTEXT_S shown in the above embodiment.
In this embodiment, the computing chip parses the opcode field in the Context field to obtain the operation type, and parses the kv_private field in the Context field to obtain the operation parameter.
Optionally, the operation information includes: the type of operation, or, the type of operation and the operating parameters.
Optionally, the operation types include: at least one of format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation.
Optionally, the operation type and the operation parameter include at least one of the following combinations: format conversion and a format conversion rule; encryption and an encryption key; encryption and an encryption algorithm type; decryption and a decryption key; decryption and a decryption algorithm type; compression and a compression ratio; compression and a compression algorithm type; comparison and a comparison rule; sorting and a sort rule; hash calculation and a hash algorithm type.
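One way such operation-type/operation-parameter pairs could be organized inside the computing chip is a dispatch table mapping each operation type to a handler taking (data, params). The table below is a toy sketch: the operation names mirror the list above, but the implementations are stand-ins chosen for brevity, not the patent's actual algorithms.

```python
# Illustrative dispatch table: operation type -> handler(data, params).
import hashlib
import zlib

OPERATIONS = {
    "format_conversion": lambda data, params: params["rule"](data),        # rule is a callable
    "compression":       lambda data, params: zlib.compress(data),         # data must be bytes
    "comparison":        lambda data, params: data == params["other"],
    "sorting":           lambda data, params: sorted(data, **params),      # e.g. reverse=True
    "hash":              lambda data, params: hashlib.new(params["algorithm"], data).hexdigest(),
}

digest = OPERATIONS["hash"](b"Andy", {"algorithm": "sha256"})
ordered = OPERATIONS["sorting"]([3, 1, 2], {"reverse": True})
```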
And the computing chip is used for computing the target data according to the operation information to obtain the processed target data.
Optionally, the computing chip performs computing processing on the target data to obtain processed target data.
In an illustrative example, the Key field in the third KV message is "Andy's tracking records", indicating the consumption records of the user Andy; the Value field is "200 RMB", indicating 200 RMB; the opcode field in the Context field is "RMB to USD", indicating that the operation type is converting RMB to US dollars; and the kv_private field is "1 RMB = 0.1544 USD", indicating that the exchange rate is 1 RMB = 0.1544 USD. The third KV message is thus used to indicate "convert 200 RMB to US dollars at an exchange rate of 1 RMB = 0.1544 USD, and store the result as a consumption record of the user Andy". The computing chip performs format conversion on the target data according to the third KV message, converting the target data "200 RMB" into the processed target data "30.88 USD".
And the computing chip is used for writing the processed target data into the storage medium according to the Key.
Optionally, the data in the memory is stored using KV as the data storage mechanism, the Key is used to indicate the storage location information of the processed target data, and the computing chip stores the processed target data in correspondence with the Key.
In the above illustrative example, the computing chip stores the processed target data "30.88 USD" in the storage medium in correspondence with the Key; that is, storing the processed target data of 30.88 US dollars as a consumption record of the user Andy is realized.
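A minimal sketch of this write-with-conversion path, assuming a dict-like storage medium and the same hypothetical message layout as earlier; the opcode string and field names are illustrative.

```python
# Hypothetical handling of the third KV message: apply the conversion named in
# Context to Value, then store the processed target data under Key.

def handle_third_kv_message(msg, medium):
    key = msg["key"]
    value = msg["value"]              # e.g. 200 (RMB)
    ctx = msg["context"]
    if ctx["opcode"] == "RMB to USD":
        rate = ctx["kv_private"]      # e.g. 0.1544 USD per RMB
        value = round(value * rate, 2)
    medium[key] = value               # processed target data written to the medium
    return medium

medium = {}
handle_third_kv_message(
    {"key": "Andy", "value": 200,
     "context": {"opcode": "RMB to USD", "kv_private": 0.1544}},
    medium,
)
```

With the example's figures, 200 RMB at 0.1544 USD/RMB stores 30.88 under the Key.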
To sum up, in the memory provided in this embodiment of the present disclosure, the computing chip can perform computing processing on the target data according to the operation information and store the processed target data. This solves a problem in the related art: a conventional memory has only storage capability, so that when the target data needs to be computed, an application node must gather a large amount of data to other application nodes with computing capability to complete the computing processing and then send the processed data to the memory for storage; that is, a large amount of data must be transmitted among multiple application nodes, causing large network overhead. With the provided memory, the application node can directly send the target data to the memory, and the memory performs the computing processing on the target data and stores it in the storage medium, thereby reducing the network overhead.
In another alternative embodiment based on the above embodiment, as shown in fig. 4, the application unit 350 in the storage module 340 further includes: a distribution unit 353 and a second KV interface 354.
The second KV interface 354 is an application unit written by software, and is a conventional KV interface.
The processing module 330 analyzes the received KV message by calling the distribution unit 353 in the storage module 340.
In an illustrative example, the schematic data structure of CONTEXT_S included in the KV message received by the computing chip is as follows:
(data-structure diagram not reproduced here)
The meaning of the opcode field and the kv_private field may refer to the above embodiment. The sig field is used to indicate the type of the interface receiving the KV message; for example, when sig is 1, it indicates that the first KV interface is used to receive the KV message, and when sig is 2, it indicates that the second KV interface is used to receive the KV message.
The type field is used to indicate the type of the KV message; for example, when type is 1, it indicates that the KV message is used to read target data, and when type is 2, it indicates that the KV message is used to write data.
The version field is used to indicate a version number of the memory, for example, version 1.0, version 2.0, or the like.
The reserve field is a reserved field, which can be customized to be a new field.
Optionally, the version field and the reserve field are both optional.
After calling the distribution unit 353 to parse the KV message, the processing module 330 receives the KV message through the first KV interface or the second KV interface according to a predetermined field in the KV message. In the above illustrative example, the predetermined field is the sig field: when the parsed sig is 1, the KV message is received through the first KV interface; when sig is 2, the KV message is received through the second KV interface. The predetermined field may also be the type field, the opcode field, or the like, which is not limited in this embodiment.
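A toy version of the distribution unit's routing decision, assuming the sig values given above (1 for the first KV interface, 2 for the second); the function and constant names are invented for illustration.

```python
# Sketch of the dispatch step: route a parsed message to the first (novel)
# or second (conventional) KV interface based on the sig field.

FIRST_KV, SECOND_KV = 1, 2

def dispatch(msg):
    sig = msg["context"]["sig"]
    if sig == FIRST_KV:
        return "first_kv_interface"
    if sig == SECOND_KV:
        return "second_kv_interface"
    raise ValueError(f"unknown sig value: {sig}")

route = dispatch({"context": {"sig": 2}})
```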
When the processing module uses the first KV interface to receive a novel KV message for instructing that computing processing be performed on the target data and the processed target data be read, or that the processed target data be stored in the memory, the method by which the computing chip performs computing processing on the target data according to the novel KV message and reads the processed target data, or stores the processed target data in the storage medium, may refer to the above illustrative embodiments.
When the processing module calls the second KV interface to receive a traditional KV message for indicating to read the target data or writing the target data into the memory, the method comprises the following steps:
in one possible implementation, the computing chip receives a first conventional KV message through the second KV interface, where the first conventional KV message is used to instruct data reading from the memory.
And the computing chip is used for analyzing the first traditional KV message, obtaining a Key field and obtaining a Key of the target data from the Key field.
In an exemplary example, the data structure of the first conventional KV packet received by the computing chip through the second KV interface is as follows:
(data-structure diagram not reproduced here)
The first conventional KV message includes POOL_ID_S and VOL_KEY_S; optionally, the first conventional KV message further includes VOL_VALUE_S. The specific meaning of each field may be understood with reference to the above embodiments.
And the computing chip is used for acquiring the target data according to the Key, generating a second traditional KV message by using the target data as Value, and sending the second traditional KV message through a second KV interface.
Optionally, the second conventional KV packet includes a Value field, or includes a Key field and a Value field.
In an illustrative example, the Key field in the first conventional KV message is "Andy's tracking records", indicating the consumption records of the user Andy; the first conventional KV message is thus used to indicate "search for the most recent consumption records of the user Andy". The computing chip obtains the target data corresponding to the Key from the storage medium, namely the consumption records of the user Andy, and sends the second conventional KV message with the obtained target data as its Value; that is, reading the consumption records of the user Andy from the memory is realized.
In another possible implementation, the computing chip receives a third conventional KV message through the second KV interface, where the third conventional KV message is used to instruct to write data into the memory.
And the computing chip is used for analyzing the third traditional KV message, acquiring a Key field and a Value field, acquiring a Key of the target data from the Key field, and acquiring the target data from the Value field.
In an exemplary example, the data structure of the third conventional KV packet received by the computing chip is as follows:
(data-structure diagram not reproduced here)
The third conventional KV message includes POOL_ID_S, VOL_KEY_S, and VOL_VALUE_S; the specific meaning of each field may be understood with reference to the above embodiments.
And the computing chip is used for writing the target data into the storage medium according to the Key.
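The conventional (second) KV interface can be summarized as a plain read/write pair with no Context or operation handling; in the sketch below a dict stands in for the storage medium, and the class and method names are invented for illustration.

```python
# Toy model of the second KV interface: read returns target data for Key
# (first conventional KV message), write stores Value under Key (third
# conventional KV message). No computing processing is performed.

class SecondKvInterface:
    def __init__(self, medium):
        self.medium = medium

    def read(self, key):
        """Return a second conventional KV message carrying the data for Key."""
        return {"key": key, "value": self.medium.get(key)}

    def write(self, key, value):
        """Write the target data under Key, with no processing applied."""
        self.medium[key] = value

iface = SecondKvInterface({})
iface.write("Andy", ["record-1", "record-2"])
reply = iface.read("Andy")
```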
To sum up, the memory provided in the embodiment of the present disclosure further includes a conventional KV interface, and the computing chip may further receive a conventional KV packet through the conventional KV interface, and read and/or write the target data according to the conventional KV packet, so that an effect that the novel KV interface is compatible with the conventional KV interface is achieved.
Referring to fig. 5, a schematic structural diagram of a distributed storage system according to an exemplary embodiment of the present invention is shown. The distributed storage system includes: at least one application node 510 and at least one first memory 520.
The first memory 520 is a memory including a first KV interface, the first memory 520 may be the memory shown in fig. 3 or the memory shown in fig. 4, and the first memory 520 is described as an example shown in fig. 4 in this embodiment.
Optionally, the distributed storage system provided by the embodiment of the present disclosure is a non-relational database with an architecture such as Hadoop, MongoDB, Membase, Cassandra, BeansDB, Redis, and the like.
Optionally, in the distributed storage system, at least one second storage 530 is further included, and the second storage 530 is a storage that includes only the second KV interface and does not include the first KV interface.
The application nodes 510 are nodes in the distributed storage system that provide database services. A distributed storage system may include one or more application nodes 510, and different application nodes 510 may have the same or different functions; for example, an application node 510 may have the capability of sending database service requests, or may have computing capability. The meaning of an application node may differ across distributed architectures. For example, in a MongoDB scenario, an application node is a master request access unit (Master), a slave request access unit (Slave), or a slave transaction processing unit (Executor); in a Hadoop scenario, an application node is a master request access unit (Master), a slave request access unit (Slave), a slave transaction processing unit (Executor), or a storage management unit (Storage). This embodiment does not limit the specific meaning of an application node.
Optionally, one application node 510 is connected to one or more first memories 520 and/or second memories 530, and one first memory 520 or one second memory 530 is connected to one or more application nodes 510, which is not limited in this embodiment.
The application node 510 is configured to send a new KV packet to the first memory 520 connected to the application node, where the new KV packet carries the Key and the operation information of the target data, or the new KV packet carries the Key, the target data, and the operation information of the target data.
The application node 510 is further configured to send a traditional KV packet to the first memory 520 connected to the application node, where the traditional KV packet carries a Key of the target data, or the traditional KV packet carries the Key of the target data and the target data.
The application node 510 is further configured to send a traditional KV packet to the second memory 530 connected to the application node, where the traditional KV packet carries the Key of the target data, or the traditional KV packet carries the Key of the target data and the target data.
Referring to fig. 6, it shows a schematic structural diagram of a distributed storage system in a MongoDB scenario according to another exemplary embodiment of the present invention.
The distributed storage system includes: a user unit (User Program) 610, a master request access unit (Master) 620, a slave request access unit (Slave) 630, a slave transaction processing unit (Executor) 640, a first memory 650, and a second memory 660.
There are N slave request access units 630; fig. 6 shows slave request access unit 1 631, slave request access unit 2 632, and slave request access unit N 633, where N is a positive integer, and the value of N is not limited in this embodiment.
Optionally, the first memory 650 is a memory including the first KV interface, and may be a memory as shown in fig. 3 or fig. 4, and the second memory 660 is a memory including only the second KV interface and not including the first KV interface.
It should be noted that, a plurality of master request access units 620 may be included in the distributed storage system, each master request access unit may be connected to one or more slave request access units, and fig. 6 exemplarily shows a case where one master request access unit 620 is included, which is not limited in this embodiment.
Optionally, the main request access unit 620 is connected to at least one subscriber unit 610, and is configured to receive and process a service instruction sent by the subscriber unit 610, where the subscriber unit 610 may be an application program, and the service instruction is used to instruct to read and/or write and/or perform calculation processing on target data.
Optionally, the master request access unit 620 is connected to at least one slave request access unit, and the master request access unit 620 distributes the received service instruction to the slave request access unit according to a predetermined rule, where the predetermined rule may be any type of rule such as age, gender, and region, and the predetermined rule is not limited in this embodiment.
In an illustrative example, the master request access unit 620 is connected to slave request access unit 1 631, slave request access unit 2 632, and slave request access unit N 633, and the service instruction received by the master request access unit 620 is "search for the last 10 consumption records of the user Andy":
When the predetermined rule is distribution by age, the business instruction "search for the last 10 consumption records of the user Andy, where the age of the user Andy is less than 10 years" is distributed to slave request access unit 1; the business instruction "search for the last 10 consumption records of the user Andy, where the age of the user Andy is greater than or equal to 10 years and less than 20 years" is distributed to slave request access unit 2; and "search for the last 10 consumption records of the user Andy, where the age of the user Andy is greater than or equal to 50 years and less than 60 years" is distributed to slave request access unit 3.
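The age-based distribution above can be sketched as follows; the bucket width, the sub-instruction phrasing, and the slave naming are assumptions made for illustration only.

```python
# Hypothetical distribution of one service instruction across slave request
# access units by age range, one contiguous bucket per slave.

def distribute_by_age(instruction, num_slaves, bucket_width=10):
    """Return one scoped sub-instruction per slave request access unit."""
    jobs = {}
    for i in range(num_slaves):
        low, high = i * bucket_width, (i + 1) * bucket_width
        jobs[f"slave_{i + 1}"] = f"{instruction} where {low} <= age < {high}"
    return jobs

jobs = distribute_by_age("search for the last 10 consumption records of user Andy", 3)
```

Any other predetermined rule (gender, region) would follow the same shape: a pure function from the master's instruction to one sub-instruction per slave.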
The slave request access unit is connected with one or more storage resource pools, one storage resource pool comprises one or more first memories and/or second memories, and the slave request access unit searches the first memories and/or the second memories in the storage resource pool connected with the slave request access unit according to the received service instruction.
When the storage resource pool searched by the slave request access unit includes the first memory, the slave request access unit is directly connected to the storage resource pool; when the storage resource pool searched by the slave request access unit includes the second memory, the slave request access unit is connected to the slave transaction processing unit, which in turn is connected to the storage resource pool. The slave transaction processing unit is a unit with the capability of performing computing processing on data.
As shown in fig. 6, slave request access unit 1 is connected to storage resource pool 1 and searches the first memory 650 included in storage resource pool 1; slave request access unit 2 is connected to storage resource pool 2 and searches the first memory 650 included in storage resource pool 2; slave request access unit N is connected to slave transaction processing unit N and searches the second memory 660 in storage resource pool N connected to slave transaction processing unit N.
In this embodiment, the master request access unit 620, slave request access unit 1 631, slave request access unit 2 632, slave request access unit N 633, and slave transaction processing unit N 640 are all application nodes.
After the slave request access unit receives the service instruction sent by the master request access unit, there are two possible implementations, depending on whether the searched storage resource pool includes the first memory 650 or the second memory 660:
In a first possible implementation, the slave request access unit searches the first memory in the storage resource pool; as shown in fig. 6, storage resource pool 1 searched by slave request access unit 1 631 and storage resource pool 2 searched by slave request access unit 2 632 include the first memory 650.
The slave request access unit generates a KV message from the received service instruction sent by the master request access unit and sends the KV message to the first memory in the storage resource pool. The computing chip in the first memory receives the KV message through the distribution unit, and there are two different processing modes depending on whether the received KV message is a novel KV message or a conventional KV message:
In the first processing mode, the KV message sent by the slave request access node is a novel KV message; the computing chip receives the novel KV message through the first KV interface, performs computing processing on the target data according to the novel KV message, and then stores the processed target data in the storage medium or returns it to the slave request access node. The specific implementation may refer to the above embodiments.
In the second processing mode, the KV message sent by the slave request access node is a conventional KV message; the computing chip receives the conventional KV message through the second KV interface and, according to the conventional KV message, writes the target data into the storage medium or returns the target data to the slave request access node. The specific implementation may refer to the above embodiments.
In a second possible implementation, the slave request access unit searches the second memory in the storage resource pool; as shown in fig. 6, storage resource pool N searched by slave request access unit N 633 includes the second memory 660.
After receiving the service instruction sent by the master request access unit, the slave request access unit transmits the service instruction to the slave transaction processing unit according to the communication rules in the MongoDB scenario. The slave transaction processing unit then has two different processing modes:
In the first processing mode, the slave transaction processing unit generates a conventional KV message and operation information according to the service instruction and sends the conventional KV message to the second memory. The second memory reads the target data from the storage medium according to the conventional KV message and returns it to the slave transaction processing unit, which performs computing processing on the target data according to the operation information and then returns the processed target data to the slave request access unit or sends it to the second memory for storage. Alternatively, the slave transaction processing unit processes the target data in the conventional KV message according to the operation information and sends the processed target data to the second memory for storage.
In the second processing mode, the slave transaction processing unit generates a conventional KV message according to the service instruction and forwards it to the second memory. The second memory reads the target data from the storage medium according to the conventional KV message and returns it to the slave transaction processing unit, which forwards the target data to the slave request access unit; alternatively, the second memory stores the target data in the storage medium.
When the distributed storage system includes at least two slave request access units, each slave request access unit feeds back its access result, as returned by the first memory or the slave transaction processing unit, to the master request access unit. After receiving the access results fed back by each slave request access unit, the master request access unit processes and summarizes all the access results to obtain the final result.
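A sketch of the master's summarizing step for the Top-10 example, assuming each slave returns a locally sorted (newest-first) list of (time, record) pairs; the data layout is an assumption made for illustration.

```python
# Toy merge-and-summarize at the master request access unit: combine each
# slave's newest-first result list and keep the overall newest top_n records.
import heapq

def summarize(slave_results, top_n=10):
    """Merge per-slave sorted lists (newest first) into a global top_n."""
    merged = heapq.merge(*slave_results, key=lambda r: r[0], reverse=True)
    return list(merged)[:top_n]

slave_results = [
    [(14, "a"), (12, "b")],   # slave 1's result, sorted newest first
    [(13, "c"), (11, "d")],   # slave 2's result, sorted newest first
]
final = summarize(slave_results, top_n=3)
```

`heapq.merge` requires each input already sorted in the merge order, which holds here because every slave (or first memory) returns its own sorted result.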
Referring to fig. 7, a schematic structural diagram of a distributed storage system in a Hadoop scenario according to another exemplary embodiment of the present invention is shown.
The distributed storage system includes: a user unit (User Program) 710, a master request access unit (Master) 720, a slave request access unit (Slave) 730, a slave transaction processing unit (Executor) 740, a storage management unit (Storage) 750, a first memory 760, and a second memory 770.
There are M slave request access units 730 and M storage management units 750; fig. 7 shows slave request access unit 1 731, slave request access unit 2 732, and slave request access unit M 733, as well as storage management unit 1 751, storage management unit 2 752, and storage management unit M 753, where M is a positive integer, and the value of M is not limited in this embodiment.
Optionally, the first memory 760 is a memory including the first KV interface, and may be a memory as shown in fig. 3 or fig. 4, and the second memory 770 is a memory including only the second KV interface and not including the first KV interface.
It should be noted that a plurality of master request access units 720 may be included in the distributed storage system, each master request access unit may be connected to one or more slave request access units, and fig. 7 exemplarily shows a case where one master request access unit 720 is included, which is not limited in this embodiment.
Optionally, the slave request access unit is connected to the storage management unit, and is connected to one or more storage resource pools through the storage management unit, where one storage resource pool includes one or more first storages and/or second storages, and the slave request access unit searches the first storages and/or the second storages in the storage resource pool connected to the slave request access unit according to the received service instruction.
When the storage resource pool searched by the slave request access unit includes the first memory, the slave request access unit generates Hadoop Distributed File System (HDFS) signaling according to the service instruction and sends it to the storage management unit, and the storage management unit generates a KV message according to the HDFS signaling and sends the KV message to the first memory. When the storage resource pool searched by the slave request access unit includes the second memory, the slave request access unit transmits the service instruction to the slave transaction processing unit according to the transmission rules in the Hadoop architecture; the slave transaction processing unit generates HDFS signaling according to the service instruction and sends it to the storage management unit, and the storage management unit generates a KV message according to the HDFS signaling and sends the KV message to the second memory. The slave transaction processing unit has the capability of performing computing processing on data.
As shown in fig. 7, slave request access unit 1 is connected to storage management unit 1, storage management unit 1 is connected to storage resource pool 1, and slave request access unit 1 searches the first memory 760 included in storage resource pool 1; slave request access unit 2 is connected to slave transaction processing unit 2, slave transaction processing unit 2 is connected to storage management unit 2, storage management unit 2 is connected to storage resource pool 2, and slave request access unit 2 searches the second memory 770 included in storage resource pool 2; slave request access unit M is connected to slave transaction processing unit M, slave transaction processing unit M is connected to storage management unit M, storage management unit M is connected to storage resource pool M, and slave request access unit M searches the second memory 770 in storage resource pool M.
In this embodiment, the master request access unit 720, slave request access unit 1 731, slave request access unit 2 732, slave request access unit M 733, slave transaction processing unit 2 741, slave transaction processing unit 3 742, storage management unit 1, storage management unit 2, and storage management unit M are all application nodes.
The storage management unit generates a KV message. The KV message is either a novel KV message, used to instruct that computing processing be performed on the target data and the processed target data be read, or that the processed target data be stored in the memory; or a conventional KV message, used only to instruct reading or writing of the target data. Depending on whether the storage resource pool searched by the slave request access unit includes the first memory 760 or the second memory 770, there are two possible implementations:
In a first possible implementation, the slave request access unit searches the first memory in the storage resource pool; as shown in fig. 7, storage resource pool 1 searched by slave request access unit 1 731 includes the first memory 760.
The storage management unit sends the generated KV message to a first storage in a storage resource pool, and a computing chip in the first storage has two different processing modes according to the difference that the received KV message is a novel KV message or a traditional KV message:
In the first processing mode, the KV message sent by the storage management unit is a novel KV message; the computing chip receives the novel KV message through the first KV interface, performs computing processing on the target data according to the novel KV message, and then writes the processed target data into the storage medium or returns it to the storage management unit, which forwards it to the slave request access node. The specific implementation may refer to the above embodiments.
In the second processing mode, the KV message sent by the storage management unit is a conventional KV message; the computing chip receives the conventional KV message through the second KV interface and, according to the conventional KV message, writes the target data into the storage medium or returns the target data to the storage management unit, which forwards it to the slave request access node.
In a second possible implementation, the slave request access unit searches the second memory in the storage resource pool; as shown in fig. 7, storage resource pool 2 searched by slave request access unit 2 732 and storage resource pool M searched by slave request access unit M 733 include the second memory 770.
The slave request access unit sends the service instruction to the slave transaction processing unit, and there are two different processing modes:
In the first processing mode, the service instruction sent by the slave request access unit is used to instruct that computing processing be performed on the target data and the processed target data be read, or that the processed target data be stored in the memory. The slave transaction processing unit parses the service instruction to obtain a data part and an operation part and sends the data part to the storage management unit; the storage management unit generates a conventional KV message according to the data part and sends it to the second memory; the second memory reads the target data from the storage medium according to the conventional KV message and returns it to the storage management unit; the storage management unit sends the target data to the slave transaction processing unit; and the slave transaction processing unit performs computing processing on the target data according to the operation part and returns the processed target data to the slave request access unit, or forwards it through the storage management unit to the second memory for storage. Alternatively, the slave transaction processing unit performs computing processing on the data part according to the operation part and sends the processed target data to the storage management unit, which generates a conventional KV message and sends it to the second memory for storage.
In the second processing mode, the service instruction sent by the slave request access unit is used to instruct that the target data be read or be written into the memory. The slave transaction processing unit parses the service instruction to obtain the data part and sends it to the storage management unit; the storage management unit generates a conventional KV message according to the data part and sends it to the second memory; the second memory reads the target data from the storage medium according to the conventional KV message and returns it to the storage management unit; the storage management unit sends the target data to the slave transaction processing unit, which returns it to the slave request access unit. Alternatively, the second memory writes the target data into the storage medium according to the conventional KV message.
When the distributed storage system includes at least two slave request access units, each slave request access unit obtains an access result from the KV message returned by the first memory or from the transaction processing unit, and feeds the access result back to the master request access unit. After receiving the access results fed back by every slave request access unit, the master request access unit aggregates all the access results to obtain a final result.
In an illustrative example, taking a MongoDB scenario as an example, the structure of the distributed storage system is shown in FIG. 6 above.
Assume that the service instruction received by the transaction processing unit N is "search for the 10 most recent consumption records of user Andy". The transaction processing unit N generates a traditional KV message and operation information from the service instruction: the Key field in the traditional KV message is "Andy's tracking records", indicating the consumption records of user Andy, and the operation information specifies that the operation type is a sort operation and that the sort rule is to take the Top(10) ordered by time from nearest to farthest.
The transaction processing unit N sends the traditional KV message to the second memory, and the second memory obtains the target data corresponding to the Key, namely the consumption records of user Andy, from its storage medium. Supposing that 10000 records are obtained, the second memory returns all 10000 records to the transaction processing unit N; the transaction processing unit N sorts the 10000 records by time from nearest to farthest, takes the first 10 as the processed target data, and returns those 10 records to the slave request access unit.
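The network cost of this traditional flow can be illustrated with a small model in which all records cross the network before the application-side sort; the record layout (`{"time": ...}`) and the function name are assumptions for illustration only:

```python
def traditional_top10(storage, key):
    """Model the traditional flow: every matching record is transferred
    to the transaction processing unit, which sorts and keeps 10."""
    records = storage[key]  # all records cross the network to the app side
    newest_first = sorted(records, key=lambda r: r["time"], reverse=True)
    # Return the 10 most recent records plus the number transferred.
    return newest_first[:10], len(records)
```

In the example above, 10000 records are transferred over the network even though only 10 are ultimately returned to the slave request access unit.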
Assume that the service instruction received by the request access unit 1 is also "search for the 10 most recent consumption records of user Andy". The request access unit 1 generates a new KV message from the service instruction and sends it to the first memory. The computing chip in the first memory parses the new KV message to obtain: the Key of the target data, "Andy's tracking records", indicating the consumption records of user Andy; the option field in the Context field, "sorting", indicating that the operation type is a sort operation; and the KV_private field, "sort by time from nearest to farthest, Top(10)", indicating that the sort rule is to take the first 10 records ordered by time from nearest to farthest. The computing chip obtains the target data corresponding to the Key, namely the consumption records of user Andy, from the storage medium. Supposing there are 10000 records, the computing chip sorts them by time from nearest to farthest, takes the first 10 as the processed target data, and returns those 10 records directly to the request access unit 1.
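For contrast, a hedged sketch of the new-KV-message path: the computing chip parses the Key, the option field, and the KV_private field, sorts inside the memory, and returns only the Top(10). The concrete dict layout of the message is an assumption based on the field names in the example above, not a specification:

```python
def chip_handle(message, medium):
    """Illustrative computing-chip handler: only the Top(10) result
    leaves the memory, instead of all matching records."""
    key = message["Key"]
    option = message["Context"]["option"]        # e.g. "sorting"
    rule = message["Context"]["KV_private"]      # assumed sort-rule layout
    records = medium[key]
    if option == "sorting":
        ordered = sorted(records, key=lambda r: r["time"],
                         reverse=(rule["order"] == "newest_first"))
        return ordered[:rule["top"]]             # only 10 records returned
    raise ValueError("unsupported operation: " + option)
```

Under this model only 10 records cross the network back to the request access unit, versus 10000 in the traditional flow.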
Referring to FIG. 8, which shows a flowchart of a message processing method according to an exemplary embodiment of the present invention, this embodiment is illustrated with the method applied to the memory shown in FIG. 3. The method includes the following steps:
Step 801: the computing chip receives a first KV message through the KV interface.
Optionally, the first KV message is used to instruct the memory to perform computing processing on target data in the storage medium and to read the processed target data.
Step 802: the computing chip parses the first KV message to obtain the Key of the target data and the operation information.
Optionally, the computing chip parses the first KV message to obtain a Key field and a Context field, obtains the Key of the target data from the Key field, and obtains the operation information from the Context field.
Optionally, the operation information includes an operation type, or an operation type and an operation parameter.
Optionally, the operation type includes at least one of: format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation.
Optionally, the operation type and the operation parameter include at least one of the following combinations: format conversion and a format conversion rule, encryption and an encryption key, encryption and an encryption algorithm type, decryption and a decryption key, decryption and a decryption algorithm type, compression and a compression ratio, compression and a compression algorithm type, comparison and a comparison rule, sorting and a sorting rule, and hash calculation and a hash algorithm type.
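As a non-authoritative illustration of how some of the listed operation types and parameters might be dispatched, the following sketch maps a few of them onto Python standard-library implementations; the concrete algorithm choices (zlib, SHA-256, max-comparison) merely stand in for the unspecified algorithm types, and the handler table itself is hypothetical:

```python
import hashlib
import zlib

# Each handler takes (data, params); params carries the operation parameter.
OPERATIONS = {
    "compression": lambda data, params:
        zlib.compress(data, params.get("level", 6)),
    "hash calculation": lambda data, params:
        hashlib.new(params.get("algorithm", "sha256"), data).digest(),
    "sorting": lambda data, params:
        sorted(data, reverse=params.get("reverse", False)),
    "comparison": lambda data, params:
        max(data) if params.get("rule", "max") == "max" else min(data),
}

def apply_operation(op_type, data, params=None):
    # Look up and apply the handler for the requested operation type.
    return OPERATIONS[op_type](data, params or {})
```

The operation parameter (compression level, hash algorithm name, sort rule) is simply threaded through to the chosen handler, mirroring the type-plus-parameter pairs listed above.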
Step 803: the computing chip reads the target data from the storage medium according to the Key.
Step 804: the computing chip performs computing processing on the target data according to the operation information to obtain the processed target data.
Step 805: the computing chip generates a second KV message with the processed target data as the Value, and sends the second KV message through the KV interface.
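Steps 801 to 805 can be sketched end to end as follows; the message field names mirror those used earlier in this description, while everything else (function name, handler table, `params` layout) is an illustrative assumption:

```python
def process_first_kv_message(message, medium, operations):
    # Steps 801-802: receive the message and parse Key and operation info.
    key = message["Key"]
    ctx = message["Context"]
    op_type, params = ctx["type"], ctx.get("params", {})
    # Step 803: read the target data from the storage medium by Key.
    target = medium[key]
    # Step 804: perform the computing processing on the target data.
    processed = operations[op_type](target, params)
    # Step 805: build the second KV message with the result as its Value.
    return {"Key": key, "Value": processed}
```

Here `operations` would be a table of handlers such as the one sketched after the operation-type list above, though any mapping from type to callable fits.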
In summary, in the message processing method provided in this embodiment, the computing chip of the memory can perform computing processing on the target data according to the operation information and feed back the processed target data. This solves the problem that a conventional memory only has storage capability, so that when target data needs to be computed, a large amount of data must first be gathered at an application node with computing capability, incurring large network overhead. The memory itself performs the computing processing and directly returns the processed target data, thereby reducing network overhead.
Referring to FIG. 9, which shows a flowchart of a message processing method according to an exemplary embodiment of the present invention, this embodiment is illustrated with the method applied to the memory shown in FIG. 3. The method includes the following steps:
Step 901: the computing chip receives a KV message through the KV interface.
Optionally, the KV message is used to instruct the memory to perform computing processing on target data and to store the processed target data in the storage medium.
Step 902: the computing chip parses the KV message to obtain the Key of the target data, the target data, and the operation information.
Optionally, the computing chip parses the KV message to obtain a Key field, a Value field, and a Context field, obtains the Key of the target data from the Key field, obtains the target data from the Value field, and obtains the operation information from the Context field.
Optionally, the operation information includes an operation type, or an operation type and an operation parameter.
Optionally, the operation type includes at least one of: format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation.
Optionally, the operation type and the operation parameter include at least one of the following combinations: format conversion and a format conversion rule, encryption and an encryption key, encryption and an encryption algorithm type, decryption and a decryption key, decryption and a decryption algorithm type, compression and a compression ratio, compression and a compression algorithm type, comparison and a comparison rule, sorting and a sorting rule, and hash calculation and a hash algorithm type.
Step 903: the computing chip performs computing processing on the target data according to the operation information to obtain the processed target data.
Step 904: the computing chip writes the processed target data into the storage medium according to the Key.
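Steps 901 to 904 can likewise be sketched as a write path; as with the read-path sketch, the field layout and names are illustrative assumptions rather than the patent's definitions:

```python
def process_write_kv_message(message, medium, operations):
    # Steps 901-902: receive and parse Key, target data (Value), op info.
    key = message["Key"]
    target = message["Value"]
    ctx = message["Context"]
    # Step 903: compute on the target data before it is stored.
    processed = operations[ctx["type"]](target, ctx.get("params", {}))
    # Step 904: write the processed data into the storage medium by Key.
    medium[key] = processed
    return medium
```

Note that, unlike the FIG. 8 path, nothing is returned to the application node here; the processed result stays in the storage medium.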
In summary, in the message processing method provided in this embodiment, the computing chip of the memory can perform computing processing on the target data according to the operation information before storing it. This solves the problem in the related art that, when target data needs to be computed before storage, an application node must first gather a large amount of data at another application node with computing capability for processing and then send the processed data to the memory for storage, transmitting large amounts of data among multiple application nodes at high network overhead. Instead, the application node can send the target data directly to the memory, which performs the computing processing and stores the result in the storage medium.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (13)

1. A memory, the memory comprising: a computing chip and a storage medium;
the computing chip is used for receiving a first KV message sent by an application node through a key-value KV interface, and analyzing the first KV message to obtain a Key of target data and operation information;
the computing chip is used for reading the target data from the storage medium according to the Key;
the computing chip is used for computing the target data according to the operation information to obtain the processed target data;
and the computing chip is used for generating a second KV message by using the processed target data as Value and sending the second KV message to the application node through the KV interface.
2. The memory of claim 1,
the computing chip is configured to parse the first KV message to obtain a Key field and a Context field, obtain the Key of the target data from the Key field, and obtain the operation information from the Context field.
3. The memory of claim 2, wherein the operation information comprises: an operation type, or an operation type and an operation parameter;
the operation type comprises: at least one of format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation;
the operation type and the operation parameter comprise at least one of the following combinations:
format conversion and a format conversion rule, encryption and an encryption key, encryption and an encryption algorithm type, decryption and a decryption key, decryption and a decryption algorithm type, compression and a compression ratio, compression and a compression algorithm type, comparison and a comparison rule, sorting and a sorting rule, and hash calculation and a hash algorithm type.
4. A memory, the memory comprising: a computing chip and a storage medium;
the computing chip is used for receiving a KV message sent by an application node through a key-value KV interface, and analyzing the KV message to obtain a Key of target data, the target data, and operation information;
the computing chip is used for computing the target data according to the operation information to obtain the processed target data;
and the computing chip is used for writing the processed target data into the storage medium according to the Key.
5. The memory of claim 4,
the computing chip is configured to parse the KV message to obtain a Key field, a Value field, and a Context field, obtain the Key of the target data from the Key field, obtain the target data from the Value field, and obtain the operation information from the Context field.
6. The memory of claim 5, wherein the operation information comprises: an operation type, or an operation type and an operation parameter;
the operation type comprises: at least one of format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation;
the operation type and the operation parameter comprise at least one of the following combinations:
format conversion and a format conversion rule, encryption and an encryption key, encryption and an encryption algorithm type, decryption and a decryption key, decryption and a decryption algorithm type, compression and a compression ratio, compression and a compression algorithm type, comparison and a comparison rule, sorting and a sorting rule, and hash calculation and a hash algorithm type.
7. A message processing method, applied to a memory having a computing chip and a storage medium, the method comprising:
the computing chip receives a first KV message sent by an application node through a key-value KV interface;
the computing chip analyzes the first KV message to obtain a Key of target data and operation information;
the computing chip reads the target data from the storage medium according to the Key;
the computing chip performs computing processing on the target data according to the operation information to obtain the processed target data;
and the computing chip generates a second KV message by using the processed target data as a Value, and sends the second KV message to the application node through the KV interface.
8. The method according to claim 7, wherein the analyzing, by the computing chip, the first KV message to obtain the Key of the target data and the operation information comprises:
the computing chip analyzes the first KV message to obtain a Key field and a Context field;
the computing chip obtains the Key of the target data from the Key field;
and the computing chip acquires the operation information from the Context field.
9. The method of claim 8, wherein the operation information comprises: an operation type, or an operation type and an operation parameter;
the operation type comprises: at least one of format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation;
the operation type and the operation parameter comprise at least one of the following combinations:
format conversion and a format conversion rule, encryption and an encryption key, encryption and an encryption algorithm type, decryption and a decryption key, decryption and a decryption algorithm type, compression and a compression ratio, compression and a compression algorithm type, comparison and a comparison rule, sorting and a sorting rule, and hash calculation and a hash algorithm type.
10. A message processing method, applied to a memory having a computing chip and a storage medium, the method comprising:
the computing chip receives a KV message sent by an application node through a key-value KV interface;
the computing chip analyzes the KV message to obtain a Key of target data, the target data, and operation information;
the computing chip performs computing processing on the target data according to the operation information to obtain the processed target data;
and the computing chip writes the processed target data into the storage medium according to the Key.
11. The method according to claim 10, wherein the analyzing, by the computing chip, the KV message to obtain the Key of the target data, the target data, and the operation information comprises:
the computing chip analyzes the KV message to obtain a Key field, a Value field and a Context field;
the computing chip obtains the Key of the target data from the Key field;
the computing chip obtains the target data from the Value field;
and the computing chip acquires the operation information from the Context field.
12. The method of claim 11, wherein the operation information comprises: an operation type, or an operation type and an operation parameter;
the operation type comprises: at least one of format conversion, encryption, decryption, compression, comparison, sorting, and hash calculation;
the operation type and the operation parameter comprise at least one of the following combinations:
format conversion and a format conversion rule, encryption and an encryption key, encryption and an encryption algorithm type, decryption and a decryption key, decryption and a decryption algorithm type, compression and a compression ratio, compression and a compression algorithm type, comparison and a comparison rule, sorting and a sorting rule, and hash calculation and a hash algorithm type.
13. A distributed storage system, the system comprising: the system comprises at least one application node and at least one memory, wherein the application node is connected with the memory through a network;
the memory is as claimed in any one of claims 1 to 6;
the application node is used for sending a key-value KV message to the memory;
the KV packet carries a Key of target data and operation information, or the KV packet carries the Key of the target data, the target data and the operation information.
CN201610213662.6A 2016-04-07 2016-04-07 Memory, message processing method and distributed storage system Active CN107276912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610213662.6A CN107276912B (en) 2016-04-07 2016-04-07 Memory, message processing method and distributed storage system


Publications (2)

Publication Number Publication Date
CN107276912A CN107276912A (en) 2017-10-20
CN107276912B true CN107276912B (en) 2021-08-27


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108023885A (en) * 2017-12-05 2018-05-11 北京知道创宇信息技术有限公司 Information processing method, device, electronic equipment and storage medium
CN109446464B (en) * 2018-11-09 2021-02-02 深圳高灯计算机科技有限公司 Concurrency number determination method and device and server
CN112055058A (en) * 2020-08-19 2020-12-08 广东省新一代通信与网络创新研究院 Data storage method and device and computer readable storage medium
CN113204535B (en) * 2021-05-20 2024-02-02 中国工商银行股份有限公司 Routing method and device, electronic equipment and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169717A (en) * 2011-05-06 2011-08-31 西安华芯半导体有限公司 Memory device with data processing function
CN103473272A (en) * 2013-08-20 2013-12-25 小米科技有限责任公司 Data processing method, device and system
CN104158875A (en) * 2014-08-12 2014-11-19 上海新储集成电路有限公司 Method and system for sharing and reducing tasks of data center server
CN104980462A (en) * 2014-04-04 2015-10-14 深圳市腾讯计算机系统有限公司 Distributed computation method, distributed computation device and distributed computation system
CN105427193A (en) * 2015-12-17 2016-03-23 山东鲁能软件技术有限公司 Device and method for big data analysis based on distributed time sequence data service

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000016281A1 (en) * 1998-09-11 2000-03-23 Key-Trak, Inc. Mobile object tracking system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant