Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It should be understood that these embodiments are discussed only to enable those skilled in the art to better understand and thereby implement the subject matter described herein, and are not intended to limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as needed. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with respect to some examples may also be combined in other examples.
As used herein, the term "include" and its variants are open-ended terms meaning "including, but not limited to". The term "based on" means "based at least in part on". The terms "one embodiment" and "an embodiment" mean "at least one embodiment". The term "another embodiment" means "at least one other embodiment". The terms "first," "second," and the like may refer to different objects or to the same object. Other definitions, whether explicit or implicit, may be included below. Unless the context clearly dictates otherwise, the definition of a term is consistent throughout the specification.
As used herein, the term "transaction" refers to an occurrence, operation, or sequence of operations that a user is prepared to perform against a database during OLTP, such as a data query operation, a data delete operation, etc. Transactions are typically described using high-level program code written in a high-level database manipulation language or programming language such as SQL, C++, or Java.
In performing the transaction request processing, upon receiving a transaction request from the client, the transaction request processing apparatus in the server acquires the high-level program code corresponding to the received transaction request, then performs the transaction processing based on the acquired high-level program code (i.e., executes the program code using the CPU), and then returns the obtained transaction processing result to the client.
To increase transaction processing speed, a CPU cache is usually added to the CPU. The CPU cache (cache memory) is a temporary memory located between the CPU and main memory; it has a smaller capacity than main memory but a faster exchange speed. The data in the cache is a small part of the data stored in memory, but it is the part the CPU is about to access in the short term, so that when the CPU calls a large amount of data, the data can be fetched directly from the cache instead of from memory, which accelerates reading.
The CPU cache includes an instruction cache and a data cache. The instruction cache is used to cache instructions that have been called by the CPU and the data cache is used to cache data that has been accessed by the CPU so that the CPU can retrieve from the instruction cache and the data cache without a memory access when subsequently calling the same instructions and data.
The cache is added in the CPU, so that a storage system (cache + memory) in the computing system has the high speed of the cache and the large capacity of the memory. FIG. 1 shows a schematic diagram of one example of a hierarchy of a storage system 10 of a computer system.
As shown in FIG. 1, storage system 10 includes a 7-level architecture: registers, L1 cache, L2 cache, L3 cache, main memory, local secondary storage, and remote secondary storage. Among them, the L1 cache, the L2 cache, and the L3 cache belong to a CPU cache, which is usually implemented by an SRAM memory. Main memory may also be referred to as memory. The local secondary storage may also be referred to as a local hard disk, and the remote secondary storage refers to a remote storage device, such as a distributed storage system or a Web server. Local secondary storage and remote secondary storage belong to mass storage devices.
In the hierarchy shown in FIG. 1, moving upward, each storage structure is more tightly integrated with the CPU, its capacity becomes smaller, its access speed becomes faster, and its cost per unit of capacity becomes higher.
According to the order in which data is read and how closely each level is integrated with the CPU, the CPU cache may be divided into a level-one cache (L1 cache) and a level-two cache (L2 cache); some high-end CPUs also have a level-three cache (L3 cache). All data stored at each level of cache is a part of the data stored at the next level, and from L1 to L3 the technical difficulty and manufacturing cost decrease while the capacity increases. When the CPU needs to read a piece of data, it first searches the level-one cache; if the data is not found there, it searches the level-two cache; and if the data is still not found, it searches the level-three cache or memory. Generally, the hit rate of each level of cache is about 80%, that is, 80% of the data the CPU needs can be found in the level-one cache, and only 20% remains to be read from the level-two cache, the level-three cache, or memory. Therefore, most of the data to be retrieved is served from the level-one cache.
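The level-by-level lookup order described above can be sketched as a simplified software model (a hypothetical illustration only; the level contents, addresses, and values are invented, and real hardware performs this search in silicon, not software):

```python
def lookup(address, levels, memory):
    """Search each cache level in order; on a miss at every level,
    fall through to main memory."""
    for name, cache in levels:
        if address in cache:
            return name, cache[address]      # hit at this level
    return "memory", memory[address]         # miss in all caches

# Illustrative contents: each level's data is a subset of the next level's.
l1 = {0x10: "a"}
l2 = {0x10: "a", 0x20: "b"}
l3 = {0x10: "a", 0x20: "b", 0x30: "c"}
memory = {0x10: "a", 0x20: "b", 0x30: "c", 0x40: "d"}
levels = [("L1", l1), ("L2", l2), ("L3", l3)]

print(lookup(0x10, levels, memory))  # found in L1
print(lookup(0x30, levels, memory))  # misses L1 and L2, found in L3
print(lookup(0x40, levels, memory))  # misses all caches, read from memory
```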
In CPU products, the L1 cache is relatively small, typically between 4 KB and 64 KB. The L2 cache may be 128 KB, 256 KB, 512 KB, 1 MB, 2 MB, etc., and the L3 cache may be as large as 6 MB.
When new instructions or data need to be cached and the cache is already full, the oldest instructions or data in the cache are evicted from the cache, and the new instructions or data are saved in their place.
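The eviction behavior described above can be sketched as a small software analogy (a simplified, hypothetical model; real CPU caches implement replacement in hardware within set-associative structures, which this sketch does not attempt to model):

```python
from collections import OrderedDict

class SimpleCache:
    """Software analogy of oldest-first eviction: when the cache is full,
    the entry cached earliest is removed to make room for the new one."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def put(self, key, value):
        if key not in self.entries and len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the oldest entry
        self.entries[key] = value

    def get(self, key):
        return self.entries.get(key)           # None models a cache miss

cache = SimpleCache(capacity=2)
cache.put("insn1", "ADD")
cache.put("insn2", "MOV")
cache.put("insn3", "JMP")        # capacity exceeded: "insn1" is evicted
print(cache.get("insn1"))        # None -> miss
print(cache.get("insn3"))        # hit
```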
FIG. 2 shows a schematic diagram of a high level program code execution process.
As shown in FIG. 2, the fetched high-level program code 210 is compiled using a compiler 220 to obtain a machine instruction code set 230. The resulting machine instruction code set 230 is then provided to the CPU 240 for execution, thereby producing the program code execution result. Specifically, during execution, for each piece of machine instruction code 230 to be executed, the CPU 240 first accesses the instruction cache 250 to see whether the corresponding machine instruction is present in the instruction cache 250. If so, the machine instruction is fetched from there. If not, the corresponding machine instruction is fetched from memory/external storage 270. In addition, the CPU 240 accesses the data cache 260 to see whether the corresponding data exists in the data cache 260. If so, the data is retrieved from there. If not, the corresponding data is retrieved from memory/external storage 270. After retrieving the machine instructions and the data needed for their execution, the CPU 240 executes the machine instructions to obtain the instruction execution results. After the CPU 240 has executed all the machine instructions in the machine instruction code set 230, the program code execution result is obtained.
However, as the volume of transaction requests and the complexity of transaction operations increase, more and more instructions need to be called and more and more data needs to be accessed, while the capacities of the instruction cache and the data cache are limited, so instruction cache misses and data cache misses become very serious. For example, during an access operation in online transaction processing (OLTP), since the operation code is much larger than the capacity of the instruction cache, instruction prefetching (pre-fetch) often fails, and the miss rate of the instruction cache sometimes reaches as high as 40%.
In particular, assume that the transaction request processing device receives a plurality of transaction requests, e.g., transaction requests 1-3, where the transactions requested by transaction request 1 and transaction request 3 are substantially the same, i.e., their primary transaction processing functions are the same. In this case, the transaction request processing device generally performs processing in units of the transaction processing functions of a transaction request; for example, the CPU may retrieve the instruction blocks of a transaction processing function from the instruction cache for processing. As described above, due to the limited capacity of the instruction cache (which typically stores 8 operation instructions), it is generally not possible to cache the operation instructions of more than one processing function in the instruction cache. Thus, even if the instructions corresponding to a transaction processing function of transaction request 1 are cached in the instruction cache after that function is processed, those instructions are evicted from the instruction cache when the next transaction processing function of transaction request 1, or a transaction processing function of transaction request 2, is executed. The corresponding instructions therefore cannot hit in the instruction cache when the same transaction processing function of transaction request 3 is processed, so the hit rate of the instruction cache is low.
To solve the above problem, the present disclosure provides a transaction request processing method and apparatus. In big data processing, the main processing functions required by many concurrent transaction requests are the same, and the CPU supports out-of-order execution and parallel processing. In the transaction request processing method and apparatus provided by the present disclosure, multiple transaction requests having the same processing function are therefore merged, so that the multiple transaction requests are processed within the same processing function. As a result, once the instructions of the first transaction request hit in the instruction cache, the instruction fetches for the subsequent transaction requests, which are located in the same processing function, can be executed immediately and can all hit, thereby improving the hit rate of the instruction cache.
A transaction request processing method and apparatus according to an embodiment of the present disclosure will be described in detail below with reference to the accompanying drawings.
FIG. 3 shows a flow chart of a transaction request processing method executed by a transaction request processing device at a server according to an embodiment of the present disclosure. The transaction request processing device is provided with a CPU, an instruction cache, a data cache, and a memory.
As shown in FIG. 3, at block 310, the transaction request processing device receives a plurality of transaction requests from at least one client. Here, the plurality of transaction requests may be concurrent transaction requests (with respect to the transaction request processing device). In one example, the plurality of transaction requests may be submitted from one client. In other examples, the plurality of transaction requests may be submitted from a plurality of clients, respectively, or via an intermediate device. The intermediate device may be, for example, a transaction request receiving device that receives transaction requests submitted by various users via different clients and then sends the received transaction requests to the transaction request processing device in batches. Generally, the number of transaction requests received by the transaction request processing device may exceed its processing capability.
Next, at block 320, a plurality of transaction handlers corresponding to the plurality of transaction requests are obtained, each transaction handler including at least one transaction processing function. FIG. 5A illustrates an example schematic diagram of program code for a transaction function according to an embodiment of the disclosure.
In one example, a plurality of transaction processing programs may be stored in advance in the transaction request processing apparatus, each transaction processing program including at least one transaction processing function and being stored in association with a transaction request. After a transaction request is received, a lookup is performed in the transaction request processing apparatus to obtain the transaction processing program corresponding to that transaction request.
Alternatively, in another example, a plurality of transaction handler templates, each stored in association with a transaction request, may be stored in advance in the transaction request processing apparatus. After a transaction request is received, a lookup is performed in the transaction request processing apparatus to obtain the transaction handler template corresponding to that transaction request. Then, the transaction parameters in the transaction request (i.e., the input parameter information of each transaction processing function, etc.) are provided to the transaction handler template to automatically generate the transaction processing program, i.e., the corresponding transaction processing functions.
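A minimal sketch of such template-based generation could look as follows (every name here, including the template table, the helper function, and the SQL text, is a hypothetical illustration rather than part of the disclosed apparatus):

```python
# Hypothetical sketch: templates are stored keyed by transaction type,
# and the parameters carried in the request are substituted into the
# template to produce the handler code.
TEMPLATES = {
    "query": "SELECT {columns} FROM {table} WHERE id = {key}",
    "delete": "DELETE FROM {table} WHERE id = {key}",
}

def make_handler(request):
    template = TEMPLATES[request["type"]]        # look up the stored template
    return template.format(**request["params"])  # fill in transaction parameters

request = {"type": "query",
           "params": {"columns": "name", "table": "users", "key": 42}}
print(make_handler(request))  # SELECT name FROM users WHERE id = 42
```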
After the plurality of transaction processing programs are obtained as described above, at block 330, at least one merged transaction handler set is determined from the obtained plurality of transaction processing programs, the transaction processing programs in each merged transaction handler set having at least one common transaction processing function between them. Here, the term "common transaction processing function" refers to transaction processing functions whose program code is the same; the input parameters of the functions may be the same or different. For example, if the program code of transaction processing functions A and B is the same, then A and B are common transaction processing functions regardless of whether their input parameters are the same.
Specifically, assume that 6 transaction handlers are acquired: transaction handler 1, transaction handler 2, transaction handler 3, transaction handler 4, transaction handler 5, and transaction handler 6. Transaction handler 1 has transaction processing functions 1, 2, and 3; transaction handler 2 has transaction processing functions 1, 2, and 4; transaction handler 3 has transaction processing functions 1, 2, and 5; transaction handler 4 has transaction processing functions 1, 2, and 6; transaction handler 5 has transaction processing functions 7 and 8; and transaction handler 6 has transaction processing functions 8 and 9. As can be seen by comparison, transaction handlers 1, 2, 3, and 4 have common transaction processing functions 1 and 2 between them, and transaction handlers 5 and 6 have a common transaction processing function 8 between them, so that transaction handlers 1, 2, 3, and 4 constitute a merged transaction handler set 1, and transaction handlers 5 and 6 constitute a merged transaction handler set 2.
The determination of the merged transaction handler sets may be performed by a traversal comparison of the plurality of acquired transaction handlers. For example, one transaction handler may be randomly selected from the plurality of transaction handlers as an initial transaction handler, and the initial transaction handler may then be compared pairwise with the remaining transaction handlers to determine whether there is at least one common transaction processing function between the two. If there is at least one common transaction processing function, the two transaction handlers may form a merged transaction handler set. After the pairwise comparison based on the initial transaction handler is completed, another transaction handler is randomly selected from the remaining transaction handlers not yet determined to belong to a merged transaction handler set, and the pairwise comparison process is performed again. This process is repeated until all of the plurality of transaction handlers have been processed. In addition, the merged transaction handler sets formed by pairwise comparison can be further clustered based on whether they share a common transaction processing function.
For example, if transactions a and B have the same processing functions 1, 2 and 3, thus forming a merged transaction set 3, and transactions B and C also have the same processing functions 1, 2 and 3, thus forming a merged transaction set 4, in accordance with the pairwise comparison described above, merged transaction sets 3 and 4 may be further combined to obtain a new merged transaction set 3', which merged transaction set 3' includes transactions a, B and C.
Alternatively, if, according to the pairwise comparison described above, transaction handlers A and B have the same processing functions 1, 2, and 3, thereby constituting a merged transaction handler set 5, and transaction handlers B and C have the same processing functions 1, 4, and 5, thereby constituting a merged transaction handler set 6, then merged transaction handler sets 5 and 6 may be further combined to obtain a new merged transaction handler set 5', which includes transaction handlers A, B, and C.
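The traversal comparison described above can be sketched as follows (a simplified, hypothetical Python model using the example of transaction handlers 1-6; each handler is represented only by the set of identifiers of its transaction processing functions, and this one-pass grouping does not implement the further clustering step for chains of shared functions):

```python
def common_functions(funcs1, funcs2):
    """Transaction processing functions (identified by their program code)
    shared by two transaction handlers."""
    return set(funcs1) & set(funcs2)

def merge_sets(handlers):
    """One-pass pairwise grouping: a handler joins the first existing group
    with which it shares at least one transaction processing function."""
    groups = []
    for name, funcs in handlers.items():
        for group in groups:
            if any(common_functions(funcs, handlers[other]) for other in group):
                group.append(name)
                break
        else:
            groups.append([name])
    # Only groups with more than one handler form merged sets.
    return [g for g in groups if len(g) > 1]

handlers = {
    "P1": {1, 2, 3}, "P2": {1, 2, 4}, "P3": {1, 2, 5},
    "P4": {1, 2, 6}, "P5": {7, 8},    "P6": {8, 9},
}
print(merge_sets(handlers))  # [['P1', 'P2', 'P3', 'P4'], ['P5', 'P6']]
```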
Next, at block 340, for each merged transaction handler set in the determined at least one merged transaction handler set, the at least one common transaction processing function in that set is merged to obtain at least one merged transaction processing function. For example, for merged transaction handler set 1, composed of transaction handlers 1, 2, 3, and 4, transaction processing functions 1 and 2 may be merged to obtain merged transaction processing functions 1' and 2', respectively.
In one example, merging the at least one common transaction processing function in each merged transaction handler set may include: for each of the at least one common transaction processing function, merging the corresponding transaction processing functions of all the transaction handlers in the merged transaction handler set to obtain a merged transaction processing function, wherein the input parameters of the merged transaction processing function include the input parameters of the corresponding transaction processing functions of all the transaction handlers.
FIG. 5B illustrates an example diagram of a merged transaction processing function according to an embodiment of the present disclosure. In the merged transaction processing function shown in FIG. 5B, the input parameter BplusTree Tmp1 is the input parameter of the transaction processing function, shown in FIG. 5A, of the transaction handler corresponding to one transaction request, and the input parameter BplusTree Tmp2 is the input parameter of the corresponding transaction processing function of another transaction handler (corresponding to another transaction request) in the merged transaction handler set.
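Since the code of FIGS. 5A and 5B is not reproduced here, the following hypothetical Python sketch only illustrates the idea of a merged transaction processing function whose parameter list includes the input parameters of both original calls (Tmp1 and Tmp2); the accumulation over a Key[] array is an invented stand-in for the real per-request work:

```python
def sum_keys(tree):
    """Illustrative stand-in for the per-request work of FIG. 5A:
    accumulate the Key[] entries of one B+ tree node."""
    return sum(tree["Key"])

def merged_sum_keys(tmp1, tmp2):
    """Merged form in the spirit of FIG. 5B: one function body serves both
    transaction requests, taking the input parameters of each original call."""
    value1 = sum(tmp1["Key"])   # work for the first transaction request
    value2 = sum(tmp2["Key"])   # work for the second transaction request
    return value1, value2

tmp1 = {"Key": [1, 2, 3]}
tmp2 = {"Key": [10, 20, 30]}
print(merged_sum_keys(tmp1, tmp2))  # (6, 60)
# The merged function returns the same results as two separate calls,
# but both requests execute within one function body.
assert merged_sum_keys(tmp1, tmp2) == (sum_keys(tmp1), sum_keys(tmp2))
```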
After the merge processing is performed as above, at block 350, transaction processing is performed based on the obtained merged transaction processing functions and the un-merged transaction processing functions of the plurality of transaction handlers, to obtain the transaction processing results of the plurality of transaction requests. FIG. 6 shows a flow diagram of one example of the transaction processing according to an embodiment of the present disclosure.
As shown in FIG. 6, when performing transaction processing based on the obtained merged transaction processing functions and the un-merged transaction processing functions in the plurality of transaction handlers, first, at block 610, instruction compiling is performed on the obtained merged transaction processing functions and the un-merged transaction processing functions. Then, at block 620, the corresponding instructions and data are invoked for transaction processing based on the instruction compilation results.
In another example of the present disclosure, when instruction compiling is performed on the obtained merged transaction processing functions and the un-merged transaction processing functions, if the same operation instruction occurs more than once in a merged transaction processing function, the identical operation instructions are compiled into a SIMD instruction, thereby increasing instruction density and further increasing the instruction cache hit rate.
For example, the operation instruction "value1 = value1 + Tmp1->Key[i++]" in FIG. 5B is the same operation as the instruction "value2 = value2 + Tmp2->Key[i++]", and the two may be merged to generate a SIMD instruction during instruction compilation.
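The effect of such merging can be illustrated in software terms: two identical scalar accumulation loops over different operands are replaced by one loop that applies the same operation to both operand streams at once. The Python sketch below is only an analogy with invented data; the actual SIMD packing is performed by the compiler at the machine-instruction level:

```python
keys1 = [1, 2, 3, 4]      # operands of the first request (illustrative)
keys2 = [10, 20, 30, 40]  # operands of the second request (illustrative)

# Scalar form: two separate, identical accumulation loops.
value1 = 0
for k in keys1:
    value1 = value1 + k
value2 = 0
for k in keys2:
    value2 = value2 + k

# "SIMD-style" form: one loop applies the same add operation to both
# operand streams in each step, analogous to packing both accumulations
# into a single packed-add instruction.
value1_simd, value2_simd = 0, 0
for k1, k2 in zip(keys1, keys2):
    value1_simd, value2_simd = value1_simd + k1, value2_simd + k2

assert (value1_simd, value2_simd) == (value1, value2)  # same results, fewer loop passes
```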
After the transaction results of the multiple transaction requests are obtained as described above, at block 360, the transaction results of the multiple transaction requests are sent to a corresponding client of the at least one client.
In the transaction request processing method shown in FIG. 3, for the plurality of transaction handlers corresponding to the plurality of received transaction requests, the merged transaction handler sets having the same transaction processing functions are determined, and those same transaction processing functions are then merged to obtain merged transaction processing functions, so that the plurality of transaction requests can be processed within a single merged transaction processing function. As a result, once the instructions of the first transaction request hit in the instruction cache, the instruction fetches for the subsequent transaction requests, which are located in the same transaction processing function, can be executed immediately and can all hit, thereby improving the hit rate of the instruction cache.
In addition, in another embodiment of the present disclosure, when determining a merged transaction handler set, in addition to requiring at least one common transaction processing function between the constituent transaction handlers, the determination is also made in combination with the transaction cost ratio of the at least one common transaction processing function in the respective transaction handlers. FIG. 4 shows a flowchart of the merged transaction handler set determining process under this embodiment.
As shown in FIG. 4, at block 410, at least one candidate merged transaction handler set is determined from the retrieved plurality of transaction handlers, the transaction handlers in each candidate merged transaction handler set having at least one common transaction function therebetween. Here, the determination process of the candidate merged transaction handler set may refer to the description above with reference to block 330 of fig. 3.
Next, at block 420, for each candidate merged transaction handler set, a transaction cost ratio of at least one common transaction function among the respective transaction handlers of the candidate merged transaction handler set is calculated. Here, the transaction cost may be, for example, transaction time, transaction resource consumption, or the like.
Then, at block 430, a candidate merged transaction handler set in which the number of transaction handlers whose transaction cost ratio exceeds a predetermined ratio is not less than a predetermined number is determined to be a merged transaction handler set. Here, the predetermined ratio may be determined based on the application scenario of the transaction request processing and may be, for example, 51%. In one example, the predetermined number ranges from 1 to the number of transaction handlers in the candidate merged transaction handler set.
For example, assume that the number of transaction handlers in candidate merged transaction handler set 1 is 3, i.e., there are 3 transaction handlers a, B, and C, and the common transaction function between transaction handlers a, B, and C is transaction functions 1 and 2.
If the predetermined number is 1, the candidate merged transaction program set 1 is considered to be the merged transaction program set as long as the transaction cost ratios of the transaction functions 1 and 2 in any one of the transaction programs a, B, and C exceed the predetermined ratio.
If the predetermined number is 2, the candidate merged transaction program set 1 is considered to be the merged transaction program set as long as the transaction cost ratios of the transaction functions 1 and 2 in any two of the transaction programs a, B, and C exceed the predetermined ratio.
If the predetermined number is 3, the candidate merged transaction processing set 1 is considered as the merged transaction processing set only if the transaction cost ratios of the transaction processing functions 1 and 2 in all of the transaction processing programs a, B and C exceed the predetermined ratio.
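The decision rule of block 430 applied in these three cases can be sketched as follows (the cost-ratio values are hypothetical, and a single ratio per handler stands in for the combined ratio of common functions 1 and 2):

```python
def is_merged_set(cost_ratios, predetermined_ratio, predetermined_number):
    """Decision at block 430: a candidate set qualifies as a merged set if
    the number of transaction handlers whose common-function cost ratio
    exceeds the predetermined ratio is not less than the predetermined number."""
    qualifying = sum(1 for r in cost_ratios if r > predetermined_ratio)
    return qualifying >= predetermined_number

# Hypothetical cost ratios of common functions 1 and 2 in handlers A, B, C.
ratios = [0.70, 0.55, 0.40]   # A: 70%, B: 55%, C: 40%

print(is_merged_set(ratios, 0.51, 1))  # True: handler A alone is enough
print(is_merged_set(ratios, 0.51, 2))  # True: A and B both exceed 51%
print(is_merged_set(ratios, 0.51, 3))  # False: C falls below 51%
```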
With the transaction request processing method of the embodiment shown in FIG. 4, multiple transaction handlers are merged only when they have the same transaction processing function and the transaction cost ratio of that function meets the predetermined requirement. This avoids merges in which the cost ratio of the merged transaction processing function is low, and for which the gain in instruction cache hit rate would therefore be poor.
The transaction request processing method according to the present disclosure is described above with reference to FIGS. 1 to 6. Embodiments of a transaction request processing apparatus according to the present disclosure will be described below with reference to FIGS. 7 to 9.
FIG. 7 illustrates a block diagram of a transaction request processing device 700 according to an embodiment of the disclosure. As shown in FIG. 7, the transaction request processing apparatus 700 includes a transaction request receiving unit 710, a program acquiring unit 720, a merged program set determining unit 730, a merge processing unit 740, a transaction processing unit 750, and a processing result sending unit 760.
The transaction request receiving unit 710 is configured to receive a plurality of transaction requests from at least one client. The operation of the transaction request receiving unit 710 may refer to the description of block 310 described above with reference to fig. 3.
The program obtaining unit 720 is configured to obtain a plurality of transaction processing programs corresponding to the plurality of transaction requests, each transaction processing program including at least one transaction processing function. The operation of the program obtaining unit 720 may refer to the description of the block 320 described above with reference to fig. 3.
The merged program set determining unit 730 is configured to determine at least one merged transaction handler set from the acquired plurality of transaction handlers, the transaction handlers in each merged transaction handler set having at least one common transaction processing function between them. The operation of the merged program set determining unit 730 may refer to the description of block 330 described above with reference to fig. 3.
The merge processing unit 740 is configured to, for each merged transaction handler set of the determined at least one merged transaction handler set, merge the at least one common transaction processing function of the merged transaction handler set to obtain at least one merged transaction processing function. In one example, the merge processing unit 740 is configured to: for each of the at least one common transaction processing function, merge the corresponding transaction processing functions of all the transaction handlers in the merged transaction handler set to obtain a merged transaction processing function, wherein the input parameters of the merged transaction processing function include the input parameters of the corresponding transaction processing functions of all the transaction handlers. The operation of the merge processing unit 740 may refer to the description of block 340 described above with reference to fig. 3.
The transaction processing unit 750 is configured to perform a transaction processing based on the obtained merged transaction processing function and the un-merged transaction processing function in the plurality of transaction processing programs to obtain a transaction processing result of the plurality of transaction requests. The operation of transaction unit 750 may refer to the description of block 350 described above with reference to FIG. 3.
The processing result sending unit 760 is configured to send the transaction processing result to a corresponding client of the at least one client. The operation of the processing result transmitting unit 760 may refer to the description of block 360 described above with reference to fig. 3.
In addition, the merged program set determining unit 730 may also be implemented in other manners. FIG. 8 illustrates a block diagram of one example of the merged program set determining unit 730 according to an embodiment of the present disclosure. As shown in FIG. 8, the merged program set determining unit 730 includes a candidate merged program set determining module 731, a processing cost ratio calculating module 733, and a merged program set determining module 735.
The candidate merged set determining module 731 is configured to determine at least one candidate merged transaction set from the obtained plurality of transactions, the transactions in each candidate merged transaction set having at least one common transaction function between them. The operation of the candidate merged program set determining module 731 may refer to the description of block 410 described above with reference to fig. 4.
The processing cost ratio calculation module 733 is configured to calculate, for each candidate merged transaction handler set, a transaction cost ratio of the at least one common transaction function among the respective transaction handlers of the candidate merged transaction handler set. The operation of the processing cost duty calculation module 733 may refer to the description of block 420 described above with reference to fig. 4.
The merged program set determining module 735 is configured to determine, as a merged transaction handler set, a candidate merged transaction handler set in which the number of transaction handlers whose transaction cost ratio exceeds the predetermined ratio is not less than a predetermined number. Here, the predetermined number ranges from 1 to the number of transaction handlers in the candidate merged transaction handler set. Further, the predetermined ratio may be determined based on the application scenario of the transaction request processing. The operation of the merged program set determining module 735 may refer to the description of block 430 described above with reference to FIG. 4.
The transaction processing unit 750 may also be implemented in other ways. FIG. 9 illustrates a block diagram of one example of the transaction processing unit 750 according to an embodiment of the disclosure. As shown in FIG. 9, the transaction processing unit 750 includes a program compiling module 751 and an instruction calling module 753.
The program compiling module 751 is configured to perform instruction compilation on the obtained merged transaction processing functions and the unmerged transaction processing functions of the plurality of transaction processing programs.
Further, in another example, the program compiling module 751 may be further configured to: when identical operation instructions exist in a merged transaction processing function, compile the identical operation instructions into a SIMD instruction.
The instruction calling module 753 is configured to call the corresponding instructions and data to perform transaction processing based on the instruction compilation results of the transaction processing functions.
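The idea of compiling identical operation instructions into a single SIMD instruction can be illustrated with a toy model. The sketch below packs identical per-transaction operations into one combined instruction over an operand vector; it is a conceptual stand-in only, since an actual program compiling module would emit hardware SIMD instructions, and the instruction representation here is hypothetical.

```python
def compile_merged(instruction_streams):
    """Toy instruction compilation for a merged transaction processing
    function. `instruction_streams` holds one (operation, operand)
    list per transaction. When every stream issues the same operation
    at a given step, a single packed ("SIMD-like") instruction is
    emitted over all operands; otherwise scalar instructions are kept.
    """
    compiled = []
    for step in zip(*instruction_streams):
        ops = {op for op, _ in step}
        if len(ops) == 1:
            # Identical operation across all streams: emit one SIMD
            # instruction carrying the packed operand vector.
            compiled.append((ops.pop(), [arg for _, arg in step]))
        else:
            # Differing operations: fall back to scalar instructions.
            compiled.extend((op, [arg]) for op, arg in step)
    return compiled
```

Under this model, two transactions that both begin with an `add` would share one packed `add`, while their differing subsequent operations remain scalar.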
The embodiments of the transaction request processing method and the transaction request processing apparatus according to the present disclosure have been described above with reference to Figs. 1 to 9. The transaction request processing apparatus described above may be implemented by hardware, by software, or by a combination of hardware and software.
Fig. 10 illustrates a hardware architecture diagram of a computing device 1000 for transaction request processing according to an embodiment of the disclosure. As shown in Fig. 10, the computing device 1000 may include at least one processor 1010, storage (e.g., non-volatile storage) 1020, memory 1030, and a communication interface 1040, and the at least one processor 1010, the storage 1020, the memory 1030, and the communication interface 1040 are coupled together via a bus 1060. The at least one processor 1010 executes at least one computer-readable instruction (i.e., an element described above as being implemented in software) stored or encoded in memory.
In one embodiment, computer-executable instructions are stored in the memory that, when executed, cause the at least one processor 1010 to: receiving a plurality of transaction requests from at least one client; obtaining a plurality of transaction processing programs corresponding to the plurality of transaction requests, wherein each transaction processing program comprises at least one transaction processing function; determining at least one merged transaction processing program set from the acquired plurality of transaction processing programs, wherein the transaction processing programs in each merged transaction processing program set have at least one common transaction processing function; for each merged transaction processing program set in the determined at least one merged transaction processing program set, merging the at least one common transaction processing function in the merged transaction processing program set to obtain at least one merged transaction processing function; performing transaction processing based on the obtained merged transaction processing function and the uncombined transaction processing functions in the plurality of transaction processing programs to obtain transaction processing results of the plurality of transaction requests; and sending the transaction processing result to a corresponding client in the at least one client.
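The sequence of operations above can be sketched end to end on toy data. This is a minimal illustration of the merged-execution idea under stated assumptions: the `run` callback, the list-of-function-names representation of a transaction processing program, and the single merged set covering all programs are hypothetical simplifications not fixed by the disclosure.

```python
def execute_merged(programs, run):
    """Execute one merged transaction processing program set.

    `programs` maps a transaction name to its ordered list of function
    names; `run(func, txns)` performs `func` once for all transactions
    in `txns`. Common functions run once for the whole set; unmerged
    functions run per transaction. Returns a log of (function, batch
    size) pairs showing where work was shared.
    """
    common = set.intersection(*(set(p) for p in programs.values()))
    log = []
    # Each common transaction processing function executes once for
    # the entire merged set ...
    for f in sorted(common):
        run(f, sorted(programs))
        log.append((f, len(programs)))
    # ... while the unmerged functions execute per transaction.
    for txn, funcs in programs.items():
        for f in funcs:
            if f not in common:
                run(f, [txn])
                log.append((f, 1))
    return log
```

In this sketch, two transactions sharing a `scan` function trigger a single batched `scan` invocation instead of two, which is the source of the processing savings the embodiment targets.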
It should be understood that the computer-executable instructions stored in the memory, when executed, cause the at least one processor 1010 to perform the various operations and functions described above in connection with Figs. 1 to 9 in the various embodiments of the present disclosure.
In the present disclosure, the computing device 1000 may include, but is not limited to: personal computers, server computers, workstations, desktop computers, laptop computers, notebook computers, mobile computing devices, smart phones, tablet computers, cellular phones, personal digital assistants (PDAs), handsets, messaging devices, wearable computing devices, consumer electronic devices, and the like.
According to one embodiment, a program product, such as a machine-readable medium (e.g., a non-transitory machine-readable medium), is provided. The machine-readable medium may have instructions (i.e., the elements described above as being implemented in software) that, when executed by a machine, cause the machine to perform the various operations and functions described above in connection with Figs. 1 to 9 in the various embodiments of the present disclosure. Specifically, a system or apparatus equipped with a readable storage medium may be provided, the readable storage medium storing software program code that implements the functions of any of the above embodiments, and a computer or processor of the system or apparatus may read out and execute the instructions stored in the readable storage medium.
In this case, the program code read from the readable storage medium can itself realize the functions of any of the above-described embodiments, and thus the machine-readable code and the readable storage medium storing the machine-readable code constitute a part of the present invention.
Examples of the readable storage medium include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-Rs, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer or from the cloud via a communications network.
It will be understood by those skilled in the art that various changes and modifications may be made in the above-disclosed embodiments without departing from the spirit of the invention. Accordingly, the scope of the invention should be limited only by the attached claims.
It should be noted that not all of the steps and units in the above flows and system structure diagrams are necessary, and some steps or units may be omitted according to actual needs. The order in which the steps are executed is not fixed and may be determined as needed. The apparatus structures described in the above embodiments may be physical structures or logical structures; that is, some units may be implemented by the same physical entity, some units may be implemented respectively by a plurality of physical entities, or some units may be implemented jointly by components in a plurality of independent devices.
In the above embodiments, the hardware units or modules may be implemented mechanically or electrically. For example, a hardware unit, module, or processor may comprise permanently dedicated circuitry or logic (such as a dedicated processor, an FPGA, or an ASIC) to perform the corresponding operations. A hardware unit or processor may also comprise programmable logic or circuitry (such as a general-purpose processor or another programmable processor) that is temporarily configured by software to perform the corresponding operations. The specific implementation (mechanical means, dedicated permanent circuitry, or temporarily configured circuitry) may be determined based on cost and time considerations.
The detailed description set forth above in connection with the appended drawings describes example embodiments but is not intended to represent all embodiments which may be practiced or which fall within the scope of the appended claims. The term "exemplary" used throughout this specification means "serving as an example, instance, or illustration," and does not mean "preferred" or "advantageous" over other embodiments. The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.