WO2020093714A1 - Data processing method, apparatus, device, and readable storage medium - Google Patents


Info

Publication number
WO2020093714A1
WO2020093714A1 (PCT/CN2019/094372)
Authority
WO
WIPO (PCT)
Prior art keywords
data processing
access request
data
training
request
Prior art date
Application number
PCT/CN2019/094372
Other languages
English (en)
French (fr)
Inventor
廖卓凡
王进
陈沅涛
熊兵
曹敦
王磊
Original Assignee
长沙理工大学
Priority date
Filing date
Publication date
Application filed by 长沙理工大学
Publication of WO2020093714A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0611: Improving I/O performance in relation to response time
    • G06F 3/0629: Configuration or reconfiguration of storage systems
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals

Definitions

  • the present invention relates to the technical field of big data processing and, more specifically, to a data processing method, apparatus, device, and readable storage medium.
  • in the prior art, when data needs to be read from or written to a storage system, the master node of the storage system generally determines the storage node corresponding to the current request.
  • for example, the distributed file system HDFS (Hadoop Distributed File System).
  • the master node manages and distributes data (performing operations such as opening, closing, and renaming), while the slave nodes store data.
  • the way a storage node is determined in a distributed file system mainly includes: when the access request is a read request, determining the storage node from the storage path of the data corresponding to the current read request and then returning the data to the client; when the access request is a write request, randomly determining the storage node based on the system's currently available storage resources and the data to be stored.
  • for both read and write requests, the manner in which the master node determines the storage node depends on experience and on the amount of data currently being processed, with no well-founded processing rules to support it. Therefore, when the volume of data is large, processing efficiency is naturally low and response times are long, i.e. read and write operations suffer long delays, so the system cannot provide users with a good service experience.
  • the object of the present invention is to provide a data processing method, apparatus, device, and readable storage medium that improve data processing efficiency and reduce delay.
  • a data processing method, including: receiving an access request sent by a client;
  • determining the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and a preset data processing model, where the type of the access request includes at least a read request and a write request, and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm;
  • the transmitting of the access request to the destination node for corresponding processing includes: when the type of the access request is a read request, reading the data corresponding to the read request from the destination node and returning the data to the client.
  • the transmitting of the access request to the destination node for corresponding processing includes: when the type of the access request is a write request, storing the data corresponding to the write request to the destination node.
  • the receiving of the access request sent by the client includes: receiving a write request sent by the client and returning response information corresponding to the write request, so that the client divides the data corresponding to the write request into multiple data blocks according to the response information.
  • the training process of the data processing model includes: acquiring historical data information of the current system and using it as training samples; processing the training samples with the sparse dimensionality reduction method to obtain target samples; and training the target samples with the Q-learning algorithm, where training is complete and the data processing model is obtained once the difference between the currently obtained feedback and the previous feedback is less than a preset threshold.
  • the training of the target samples with the Q-learning algorithm includes: adjusting the learning rate and discount rate of the Q-learning algorithm during training and determining their optimal values.
  • the processing of the training samples with the sparse dimensionality reduction method to obtain the target samples includes: determining, according to Zipf's law, the data in the training samples whose frequency of occurrence is below a preset threshold, and removing that data from the training samples to obtain the target samples.
  • a data processing apparatus, including:
  • a receiving module, used to receive the access request sent by the client;
  • a determining module, configured to determine the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and a preset data processing model, where the type of the access request includes at least a read request and a write request, and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm;
  • a processing module, configured to transmit the access request to the destination node for corresponding processing.
  • a data processing device, including:
  • a memory, used to store a computer program;
  • a processor, configured to implement the steps of any of the data processing methods described above when executing the computer program.
  • a readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of any of the data processing methods described above.
  • a data processing method includes: receiving an access request sent by a client; determining the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and a preset data processing model, where the type of the access request includes at least a read request and a write request and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm; and transmitting the access request to the destination node for corresponding processing.
  • on receiving an access request, the method thus determines the destination node corresponding to the current request and finally transmits the access request to the destination node for corresponding processing.
  • because the data processing model is trained with a sparse dimensionality reduction method and the Q-learning algorithm, the sparse dimensionality reduction method improves the convergence speed of the Q-learning algorithm and reduces delay, thereby improving the model's processing efficiency; that is, this solution processes access requests with the Q-learning algorithm, so efficiency does not degrade even when the volume of data is large.
  • since the sparse dimensionality reduction method improves the convergence speed of the Q-learning algorithm and reduces read and write delay, the response time of access requests is also short, giving users a good service experience.
  • the data processing apparatus, device, and readable storage medium provided by embodiments of the present invention likewise have the above technical effects.
  • FIG. 1 is a flowchart of a data processing method disclosed in an embodiment of the present invention.
  • FIG. 2 is a flowchart of another data processing method disclosed in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a data processing apparatus disclosed in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a data processing device disclosed in an embodiment of the present invention.
  • embodiments of the present invention disclose a data processing method, apparatus, device, and readable storage medium to improve data processing efficiency and reduce delay.
  • a data processing method provided by an embodiment of the present invention includes:
  • S101: receive the access request sent by the client;
  • S102: determine the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and the preset data processing model, where the type of the access request includes at least a read request and a write request and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm;
  • when the type of the access request is a read request, the destination node is the storage node holding the data corresponding to the current read request; when the type is a write request, the destination node is the storage node that will store the data corresponding to the current write request. Specifically, when determining the destination node for a read request, it is also necessary to look up the storage nodes' address list according to the metadata corresponding to the current read request, and then determine the destination node from that list.
  • the configuration information of the current system includes the available storage space of each storage node in the current system, the data protection level, storage performance indicators, usage status, and other related information.
  • the data processing model is the model obtained by training with the sparse dimensionality reduction method and the Q-learning algorithm; before the data is trained with the Q-learning algorithm, its dimensionality is first reduced with the sparse dimensionality reduction method to reduce delay.
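The selection step described above (map the request's context to a destination node via the trained model) might look like the following minimal sketch. The dictionary-backed Q-table, the `choose_destination` name, and the state/node encodings are illustrative assumptions, not the patent's actual implementation:

```python
def choose_destination(Q, state, candidate_nodes):
    """Pick the candidate storage node with the highest learned Q-value for the
    current system state. Unseen (state, node) pairs default to a value of 0.0,
    so unexplored nodes are neither favored nor ruled out."""
    return max(candidate_nodes, key=lambda n: Q.get((state, n), 0.0))
```

A caller would encode the request type and the system's configuration information into `state` before the lookup; that encoding is left open here.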
  • S103: transmit the access request to the destination node for corresponding processing. Preferably, when the type of the access request is a read request, the data corresponding to the read request is read from the destination node and returned to the client.
  • the number of destination nodes can be multiple.
  • this embodiment thus provides a data processing method which, on receiving an access request, determines the destination node corresponding to the current request according to the request's type, the configuration information of the current system, and the data processing model obtained by training with the sparse dimensionality reduction method and the Q-learning algorithm, and finally transmits the access request to the destination node for corresponding processing.
  • because the data processing model used here is trained with the sparse dimensionality reduction method and the Q-learning algorithm, the sparse dimensionality reduction method improves the convergence speed of the Q-learning algorithm and reduces delay, thereby improving the model's processing efficiency; that is, this solution processes access requests with the Q-learning algorithm, so efficiency does not degrade even when the volume of data is large.
  • since the sparse dimensionality reduction method improves the convergence speed of the Q-learning algorithm and reduces delay, the response time of access requests is also short, giving users a good service experience.
  • This embodiment of the present invention discloses another data processing method. Compared with the previous embodiment, this embodiment further describes and optimizes the technical solution.
  • another data processing method provided by an embodiment of the present invention includes:
  • S201: receive the write request sent by the client and return the response information corresponding to the write request, so that the client divides the data corresponding to the write request into multiple data blocks according to the response information;
  • the check result carries an indication of whether data can be transmitted, along with related content.
  • after receiving the indication that data can be transmitted, the client first splits the data to be stored. For example, if the data is 300 MB and a data block is 128 MB, the data is split into three blocks of 128 MB, 128 MB, and 44 MB. Each data block can be requested for storage separately (first request storage of the first block, then the second, and so on); alternatively, storage of all blocks can be requested at once, or multi-threaded processing can be used to improve efficiency.
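The splitting rule in the example above (fixed-size blocks, with the remainder in the last block) can be sketched as follows; the function name and MB-based sizes are illustrative:

```python
def split_into_blocks(total_size_mb, block_size_mb=128):
    """Split a payload into fixed-size blocks; the last block holds the remainder.

    Mirrors the example above: 300 MB with 128 MB blocks -> [128, 128, 44].
    """
    blocks = []
    remaining = total_size_mb
    while remaining > 0:
        # Each block is at most block_size_mb; the final one may be smaller.
        blocks.append(min(block_size_mb, remaining))
        remaining -= block_size_mb
    return blocks
```

Whether the client then requests storage block by block, all at once, or across multiple threads is an orthogonal choice, as the text notes.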
  • S202: determine the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and the preset data processing model, where the type of the access request includes at least a read request and a write request and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm;
  • S203: store the data corresponding to the write request to the destination node.
  • in this embodiment, when the write request sent by the client is received, the data to be stored is also received; to adapt the data to the current system, it must be split, and the resulting data blocks are then stored. If the current system is a distributed file system, its master node determines the destination node for storing the data and splits the data, after which the data is transmitted from the master node to the destination node for storage.
  • the information of the destination node is returned to the client, so that the client establishes a data transmission channel with the destination node, and then transmits the data to the destination node for storage through the data transmission channel.
  • the data is transferred from the destination node to the preset backup node for backup.
  • each storage node has a preset corresponding backup node; when data is stored to a storage node, it is simultaneously copied to the backup node.
  • the backup node is also a storage node in the system, that is, each storage node can serve as a backup node.
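A minimal sketch of the store-and-backup behavior described above, with simple in-memory dictionaries standing in for storage nodes (all names here are illustrative assumptions):

```python
def store_with_backup(storage, backup_map, node, key, value):
    """Write data to the chosen storage node and, per the scheme above,
    copy it to that node's preset backup node at the same time.

    storage:    maps node name -> dict of stored (key, value) pairs
    backup_map: maps node name -> its preset backup node's name
    """
    storage[node][key] = value
    storage[backup_map[node]][key] = value
```

Because every storage node can itself serve as a backup node, `backup_map` is just a mapping between peers rather than a separate tier.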
  • this embodiment thus provides another data processing method: on receiving a write request, the destination node corresponding to the current write request is determined from the write request, the configuration information of the current system, and the data processing model trained with the sparse dimensionality reduction method and the Q-learning algorithm, and the data corresponding to the write request is finally stored to that node.
  • because the data processing model is trained with the sparse dimensionality reduction method and the Q-learning algorithm, the sparse dimensionality reduction method improves the convergence speed of the Q-learning algorithm and reduces delay, thereby improving the model's processing efficiency; that is, this solution processes write requests with the Q-learning algorithm, so efficiency does not degrade even when the volume of data is large.
  • since the sparse dimensionality reduction method improves the convergence speed of the Q-learning algorithm and reduces write delay, the response time of write requests is short, giving users a good service experience.
  • the training process of the data processing model includes: training the target samples with the Q-learning algorithm, where training is complete and the data processing model is obtained if the difference between the currently obtained feedback and the previous feedback is less than a preset threshold.
  • the historical data information records the type, volume, write timestamp, and write frequency of data previously written to the current system, as well as the type, volume, read timestamp, and read frequency of data read from it.
  • the feedback can be viewed as the effect the master node's choice of destination node has on the whole system.
  • specifically, the master node is treated as a trained agent, and each selection of a storage node is treated as one of the agent's actions. The action affects the storage environment; after the environment accepts the action, the system state changes, and that change is the reward, which is fed back to the agent. The agent then chooses its next action according to the reward.
  • the selection principle is to increase the probability of positive reinforcement (reward) until the growth of the reward is no longer significant.
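The stopping rule above, train until the change in feedback between consecutive rounds drops below a threshold, can be sketched as follows; `step_fn` is a stand-in for one full training pass that returns its feedback value:

```python
def train_until_converged(step_fn, threshold=1e-3, max_iters=10000):
    """Repeat training passes until the absolute change in feedback between
    consecutive iterations falls below `threshold` (the preset threshold in
    the text), then return the final feedback value. A max_iters cap guards
    against non-converging runs."""
    prev = step_fn()
    for _ in range(max_iters):
        cur = step_fn()
        if abs(cur - prev) < threshold:
            return cur
        prev = cur
    return prev
```

In the patent's setting, each `step_fn` call would run the Q-learning agent over the target samples and report the accumulated reward.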
  • the training of the target sample by the Q-learning algorithm includes:
  • the learning rate and discount rate of the Q-learning algorithm are adjusted, and the optimal values of the learning rate and the discount rate are determined.
  • the learning rate and discount rate are two important parameters of the Q-learning algorithm; see the update formula: Q(s,a) ← (1-α)·Q(s,a) + α·[R + γ·max_a Q(s',a)], where Q(s,a) is the reward of taking action a (action) in the current state s (status), R is the immediate reward (the reward of the current action), max_a Q(s',a) is the reward predicted from past experience, s' is the next state, α is the learning rate, and γ is the discount factor. It can be seen that the larger the learning rate α, the less of the previous training is retained; and the larger the discount rate γ, the greater the role played by max_a Q(s',a).
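The update formula above transcribes directly into code. A dictionary-backed Q-table is an illustrative representation choice, not something the text specifies:

```python
def q_update(Q, s, a, reward, s_next, alpha=0.8, gamma=0.8):
    """One Q-learning step:
        Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (R + gamma * max_a' Q(s', a'))

    Q maps (state, action) -> value; the max over the next state scans the
    entries recorded for s_next (0.0 if none exist yet). The alpha/gamma
    defaults follow the optimal values of 0.8 reported below in the text."""
    future = max((v for (st, _), v in Q.items() if st == s_next), default=0.0)
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * (reward + gamma * future)
    return Q[(s, a)]
```

Scanning the whole dictionary for `s_next` is O(|Q|); a production table would index actions per state, but the arithmetic is the point here.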
  • to determine the optimal values, the learning rate is first fixed at 0.75 while the discount rate is varied over [0.6, 0.9]; multiple experiments are run and the results recorded. Comprehensive analysis of the experiments shows that convergence is fastest and the reward highest when the discount rate is around 0.8, so the optimal value of the discount rate is set to 0.8. Then, with the discount rate fixed at 0.75, the learning rate is varied over [0.6, 0.9] in multiple experiments whose results are recorded; analysis likewise shows that convergence is fastest and the reward highest when the learning rate is around 0.8, so the optimal value of the learning rate is set to 0.8.
  • since both parameters need some room for adjustment, the value range of the learning rate α is set to [0.7, 0.8] and the value range of the discount rate γ to [0.7, 0.8], to improve the training efficiency and training quality of the Q-learning algorithm.
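The tuning procedure described above is essentially a one-parameter-at-a-time sweep, which generalizes to a grid search over α and γ. A minimal sketch, with `train_fn` standing in for a full training run that returns its final reward (the helper name and search shape are illustrative):

```python
import itertools

def sweep(train_fn, alphas, gammas):
    """Grid-search the learning rate and discount rate; return the
    (alpha, gamma) pair whose training run yields the highest reward."""
    return max(itertools.product(alphas, gammas),
               key=lambda pair: train_fn(*pair))
```

With a reward surface peaking near (0.8, 0.8), as the experiments above report, the sweep recovers exactly those optimal values.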
  • using the sparse dimensionality reduction method to process the training samples to obtain the target samples includes: determining, according to Zipf's law, the data in the training samples whose frequency of occurrence is below a preset threshold, and removing that data from the training samples to obtain the target samples.
  • the core of the Q-learning algorithm is the Q-table.
  • the rows and columns of the Q-table represent the values of states and actions respectively, and the Q-table entry Q(s,a) measures the payoff of taking the action in the current state.
  • the Q-table has a problem: in real situations the number of states may be effectively infinite and the number of possible actions large, so the Q-table can grow without bound.
  • suppose there are 32 nodes Node and n data items data; for example, f_{2,1} represents the frequency with which data2 appears on node Node1, giving a matrix A.
  • by Zipf's law (the 80/20 principle) applied to the data, some data appear very frequently but are a minority, while the remaining data may appear rarely but are the majority; these are the data we must find.
  • the specific method is given in the detailed description below.
  • the present invention can also be applied to big data network centers, such as wireless sensor consoles and data processing in edge computing, to optimize data deployment and reduce data read and write response delay.
  • the following describes a data processing apparatus provided by an embodiment of the present invention.
  • a data processing apparatus described below and a data processing method described above can be cross-referenced.
  • a data processing apparatus provided by an embodiment of the present invention includes:
  • the receiving module 301 is used to receive the access request sent by the client;
  • the determining module 302 is configured to determine the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and the preset data processing model, where the type of the access request includes at least a read request and a write request and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm;
  • the processing module 303 is configured to transmit the access request to the destination node for corresponding processing.
  • the processing module is specifically used for: when the type of the access request is a read request, reading the data corresponding to the read request from the destination node and returning the data to the client;
  • the processing module is also specifically used for: when the type of the access request is a write request, storing the data corresponding to the write request to the destination node;
  • the receiving module is specifically used for: receiving a write request sent by the client and returning the response information corresponding to the write request, so that the client divides the data corresponding to the write request into multiple data blocks according to the response information.
  • the apparatus further includes a training module, used to train the data processing model, which includes:
  • an acquiring unit, configured to acquire historical data information of the current system and use the historical data information as training samples;
  • a dimensionality reduction unit, configured to process the training samples with the sparse dimensionality reduction method to obtain target samples;
  • a training unit, configured to train the target samples with the Q-learning algorithm, where training is complete and the data processing model is obtained if the difference between the currently obtained feedback and the previous feedback is less than a preset threshold.
  • the training unit is specifically used for:
  • the learning rate and discount rate of the Q-learning algorithm are adjusted, and the optimal values of the learning rate and the discount rate are determined.
  • the dimensionality reduction unit is specifically used for:
  • determining, according to Zipf's law, the data in the training samples whose frequency of occurrence is below a preset threshold, and removing the data whose frequency of occurrence is below the preset threshold from the training samples to obtain the target samples.
  • this embodiment provides a data processing apparatus, including: a receiving module, a determining module, and a processing module.
  • first the receiving module receives the access request sent by the client; then the determining module determines the destination node corresponding to the request according to the type of the access request, the configuration information of the current system, and the preset data processing model, where the type of the access request includes at least a read request and a write request and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm; finally, the processing module transmits the access request to the destination node for corresponding processing.
  • through the division of labor and cooperation among the modules, each performing its own duty, data processing efficiency is improved, delay is reduced, and users enjoy a good service experience.
  • the following describes a data processing device provided by an embodiment of the present invention.
  • a data processing device described below and a data processing method and apparatus described above can be referred to each other.
  • a data processing device provided by an embodiment of the present invention includes:
  • the memory 401 is used to store a computer program;
  • the processor 402 is configured to implement the steps of the data processing method described in any of the foregoing embodiments when executing the computer program.
  • the following describes a readable storage medium provided by an embodiment of the present invention.
  • the readable storage medium described below and the data processing method, apparatus, and device described above can be cross-referenced.
  • a readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the data processing method described in any of the foregoing embodiments.


Abstract

The present invention discloses a data processing method, comprising: receiving an access request sent by a client; determining, according to the type of the access request, the configuration information of the current system, and a preset data processing model, the destination node corresponding to the access request, where the type of the access request includes at least a read request and a write request, and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm; and transmitting the access request to the destination node for corresponding processing. The method processes access requests according to the Q-learning algorithm, which can improve data processing efficiency; and because the sparse dimensionality reduction method improves the convergence speed of the Q-learning algorithm and reduces delay, the response time of access requests is also short, giving users a good service experience. The data processing apparatus, device, and readable storage medium disclosed by the present invention likewise have the above technical effects.

Description

A Data Processing Method, Apparatus, Device, and Readable Storage Medium

Technical Field
The present invention relates to the technical field of big data processing and, more specifically, to a data processing method, apparatus, device, and readable storage medium.
Background Art
In the prior art, when data needs to be read from or written to a storage system, the master node of the storage system generally determines the storage node corresponding to the current request. For example, the distributed file system HDFS (Hadoop Distributed File System) consists of a master node and slave nodes: the master node manages and distributes data (performing operations such as opening, closing, and renaming), while the slave nodes store data.
The way a storage node is determined in a distributed file system mainly includes: when the access request is a read request, determining the storage node from the storage path of the data corresponding to the current read request and then returning the data to the client; when the access request is a write request, randomly determining the storage node based on the system's currently available storage resources and the data to be stored. For both read and write requests, the manner in which the master node determines the storage node depends on experience and on the amount of data currently being processed, with no well-founded processing rules to support it. Therefore, when the volume of data is large, processing efficiency is naturally low and response times are long, i.e. read and write operations suffer long delays, so the system cannot provide users with a good service experience.
Therefore, how to improve data processing efficiency and reduce delay is a problem that those skilled in the art need to solve.
Summary of the Invention
The object of the present invention is to provide a data processing method, apparatus, device, and readable storage medium that improve data processing efficiency and reduce delay.
To achieve the above object, embodiments of the present invention provide the following technical solutions:
A data processing method, including:
receiving an access request sent by a client;
determining the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and a preset data processing model, where the type of the access request includes at least a read request and a write request, and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm;
transmitting the access request to the destination node for corresponding processing.
The transmitting of the access request to the destination node for corresponding processing includes:
when the type of the access request is a read request, reading the data corresponding to the read request from the destination node and returning the data to the client.
The transmitting of the access request to the destination node for corresponding processing includes:
when the type of the access request is a write request, storing the data corresponding to the write request to the destination node.
The receiving of the access request sent by the client includes:
receiving a write request sent by the client and returning response information corresponding to the write request, so that the client divides the data corresponding to the write request into multiple data blocks according to the response information.
The training process of the data processing model includes:
acquiring historical data information of the current system and using the historical data information as training samples;
processing the training samples with the sparse dimensionality reduction method to obtain target samples;
training the target samples with the Q-learning algorithm; if the difference between the currently obtained feedback and the previous feedback is less than a preset threshold, training is complete and the data processing model is obtained.
The training of the target samples with the Q-learning algorithm includes:
during the training of the target samples with the Q-learning algorithm, adjusting the learning rate and discount rate of the algorithm and determining their optimal values.
The processing of the training samples with the sparse dimensionality reduction method to obtain the target samples includes:
determining, according to Zipf's law, the data in the training samples whose frequency of occurrence is below a preset threshold;
removing the data whose frequency of occurrence is below the preset threshold from the training samples to obtain the target samples.
A data processing apparatus, including:
a receiving module, used to receive an access request sent by a client;
a determining module, configured to determine the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and a preset data processing model, where the type of the access request includes at least a read request and a write request, and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm;
a processing module, configured to transmit the access request to the destination node for corresponding processing.
A data processing device, including:
a memory, used to store a computer program;
a processor, configured to implement the steps of any of the data processing methods described above when executing the computer program.
A readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of any of the data processing methods described above.
As the above solutions show, the data processing method provided by an embodiment of the present invention includes: receiving an access request sent by a client; determining the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and a preset data processing model, where the type of the access request includes at least a read request and a write request and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm; and transmitting the access request to the destination node for corresponding processing.
It can be seen that on receiving an access request, the method determines the destination node corresponding to the current request according to the request's type, the configuration information of the current system, and the data processing model obtained by training with the sparse dimensionality reduction method and the Q-learning algorithm, and finally transmits the access request to the destination node for corresponding processing. Because the data processing model used by this method is trained with the sparse dimensionality reduction method and the Q-learning algorithm, the sparse dimensionality reduction method improves the convergence speed of the Q-learning algorithm and reduces delay, thereby improving the model's processing efficiency. That is, this solution processes access requests with the Q-learning algorithm, so data processing efficiency does not degrade even when the volume of data is large; and since the sparse dimensionality reduction method improves the algorithm's convergence speed and reduces read and write delay, the response time of access requests is also short, giving users a good service experience.
Accordingly, the data processing apparatus, device, and readable storage medium provided by embodiments of the present invention likewise have the above technical effects.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a data processing method disclosed in an embodiment of the present invention;
FIG. 2 is a flowchart of another data processing method disclosed in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a data processing apparatus disclosed in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a data processing device disclosed in an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiments of the present invention disclose a data processing method, apparatus, device, and readable storage medium to improve data processing efficiency and reduce delay.
Referring to FIG. 1, a data processing method provided by an embodiment of the present invention includes:
S101: receive an access request sent by a client;
S102: determine the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and the preset data processing model, where the type of the access request includes at least a read request and a write request and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm;
When the type of the access request is a read request, the destination node is the storage node holding the data corresponding to the current read request. When the type of the access request is a write request, the destination node is the storage node that will store the data corresponding to the current write request. Specifically, when determining the destination node for a read request, it is also necessary to look up the storage nodes' address list according to the metadata corresponding to the current read request, and then determine the destination node from that list.
Specifically, the configuration information of the current system includes the available storage space of each storage node in the current system, the data protection level, storage performance indicators, usage status, and other related information. The data processing model is the model obtained by training with the sparse dimensionality reduction method and the Q-learning algorithm; before the data is trained with the Q-learning algorithm, its dimensionality is first reduced with the sparse dimensionality reduction method to reduce delay.
S103: transmit the access request to the destination node for corresponding processing.
Preferably, when the type of the access request is a read request, the data corresponding to the read request is read from the destination node and returned to the client.
Note that there may be multiple destination nodes.
It can be seen that this embodiment provides a data processing method which, on receiving an access request, determines the destination node corresponding to the current request according to the request's type, the configuration information of the current system, and the data processing model obtained by training with the sparse dimensionality reduction method and the Q-learning algorithm, and finally transmits the access request to the destination node for corresponding processing. Because the data processing model used by this method is trained with the sparse dimensionality reduction method and the Q-learning algorithm, the sparse dimensionality reduction method improves the convergence speed of the Q-learning algorithm and reduces delay, thereby improving the model's processing efficiency. That is, this solution processes access requests with the Q-learning algorithm, so data processing efficiency does not degrade even when the volume of data is large; and since the sparse dimensionality reduction method improves the algorithm's convergence speed and reduces delay, the response time of access requests is also short, giving users a good service experience.
An embodiment of the present invention discloses another data processing method; compared with the previous embodiment, this embodiment further describes and optimizes the technical solution.
Referring to FIG. 2, another data processing method provided by an embodiment of the present invention includes:
S201: receive a write request sent by the client and return response information corresponding to the write request, so that the client divides the data corresponding to the write request into multiple data blocks according to the response information;
Specifically, receiving a write request from the client means that certain data needs to be stored in the current system. It is first necessary to check whether the data to be stored already exists in the current system, and then return the check result to the client. If the data already exists, the current write operation is an update, i.e. a modification; if it does not, the current write operation is an addition. The check result carries an indication of whether data can be transmitted, along with related content.
After receiving the indication that data can be transmitted, the client first splits the data to be stored. For example, if the data is 300 MB and a data block is 128 MB, the data is split into three blocks of 128 MB, 128 MB, and 44 MB. Each data block can be requested for storage separately (first request storage of the first block, then the second, and so on); alternatively, storage of all blocks can be requested at once, or multi-threaded processing can be used to improve efficiency.
S202: determine the destination node corresponding to the access request according to the type of the access request, the configuration information of the current system, and the preset data processing model, where the type of the access request includes at least a read request and a write request and the data processing model is obtained by training with a sparse dimensionality reduction method and the Q-learning algorithm;
S203: store the data corresponding to the write request to the destination node.
In this embodiment, when the write request sent by the client is received, the data to be stored is also received. To adapt the data to the current system, it must be split, and the resulting data blocks are then stored. If the current system is a distributed file system, its master node determines the destination node for storing the data and splits the data; the data is then transmitted from the master node to the destination node for storage.
After the destination node corresponding to the access request is determined, the destination node's information is returned to the client so that the client can establish a data transmission channel with the destination node and transmit the data through that channel to the destination node for storage; at the same time, the data is transmitted from the destination node to a preset backup node for backup.
Note that each storage node has a preset corresponding backup node; when data is stored to a storage node, it is simultaneously copied to the backup node. A backup node is also a storage node in the system, i.e. each storage node can serve as a backup node.
It can be seen that this embodiment provides another data processing method which, on receiving a write request, determines the destination node corresponding to the current write request according to the write request, the configuration information of the current system, and the data processing model obtained by training with the sparse dimensionality reduction method and the Q-learning algorithm, and finally stores the data corresponding to the write request to the destination node. Because the data processing model used by this method is trained with the sparse dimensionality reduction method and the Q-learning algorithm, the sparse dimensionality reduction method improves the convergence speed of the Q-learning algorithm and reduces delay, thereby improving the model's processing efficiency. That is, this solution processes write requests with the Q-learning algorithm, so data processing efficiency does not degrade even when the volume of data is large; and since the sparse dimensionality reduction method improves the algorithm's convergence speed and reduces write delay, the response time of write requests is also short, giving users a good service experience.
Based on any of the above embodiments, it should be noted that the training process of the data processing model includes:
obtaining historical data information of the current system and using the historical data information as training samples;
processing the training samples with the sparse dimensionality-reduction method to obtain target samples;
training the target samples with the Q-learning algorithm; if the difference between the currently obtained feedback and the feedback obtained the previous time is smaller than a preset threshold, training is complete and the data processing model is obtained.
Specifically, the historical data information records the type, volume, write timestamps and write frequency of data previously written to the current system, as well as the type, volume, read timestamps and read frequency of data read from the current system. The feedback can be regarded as the effect on the whole system of the master node's choice of destination node. Concretely: the master node is treated as a trained agent, and each selection of a storage node is an action of that agent. An action affects the storage environment; after the environment accepts the action, the system state changes, and this change constitutes the reward. The reward is fed back to the agent, which then selects its next action so as to increase the probability of positive reinforcement, until the growth of the reward is no longer significant.
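The stopping criterion described above (training ends once successive feedback values differ by less than a preset threshold) can be sketched as a simple loop. The environment interface, threshold value and toy feedback sequence below are hypothetical placeholders, not the patent's actual system:

```python
def train_until_converged(step_fn, threshold=1e-3, max_iters=10_000):
    """Run training steps until the change in feedback between successive
    iterations falls below threshold; step_fn() performs one training step
    and returns the feedback (reward) it produced."""
    prev_feedback = step_fn()
    for _ in range(max_iters):
        feedback = step_fn()
        if abs(feedback - prev_feedback) < threshold:
            return feedback  # reward growth is no longer significant: converged
        prev_feedback = feedback
    return prev_feedback  # safety cap reached without convergence

# Toy feedback sequence that plateaus, standing in for the real environment.
values = iter([1.0, 1.5, 1.8, 1.9, 1.9005])
result = train_until_converged(lambda: next(values))
```

The `max_iters` cap is an added safeguard; the text itself only specifies the threshold condition.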
Training the target samples with the Q-learning algorithm includes:
in the course of training the target samples with the Q-learning algorithm, adjusting the learning rate and the discount factor of the Q-learning algorithm and determining the optimal values of the learning rate and the discount factor.
Specifically, the learning rate and the discount factor are two important parameters of the Q-learning algorithm; see the formula: Q(s,a) ← (1-α)·Q(s,a) + α·[R + γ·max_a Q(s',a)]. Here Q(s,a) is the reward for taking action a in the current state s, R is the immediate reward (the reward for the current action), max_a Q(s',a) is the reward predicted from past experience, s' is the next state, α is the learning rate, and γ is the discount factor. It can be seen that the larger the learning rate α, the less of the previous training is retained; and the larger the discount factor γ, the greater the effect of max_a Q(s',a).
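The update rule quoted above translates directly into code. The tabular Q representation, the toy state/action spaces and the numeric example below are illustrative assumptions; α = 0.75 and γ = 0.8 are taken from the parameter ranges discussed in the text:

```python
def q_update(Q, s, a, r, s_next, alpha=0.75, gamma=0.8):
    """One Q-learning step: Q(s,a) <- (1-alpha)*Q(s,a) + alpha*(r + gamma*max_a' Q(s',a'))."""
    best_next = max(Q[s_next].values())  # max_a' Q(s', a'): best predicted future reward
    Q[s][a] = (1 - alpha) * Q[s][a] + alpha * (r + gamma * best_next)
    return Q[s][a]

# Two states, two actions, all Q-values initially zero.
Q = {s: {a: 0.0 for a in range(2)} for s in range(2)}
q_update(Q, s=0, a=1, r=10.0, s_next=1)
# 0.25*0 + 0.75*(10 + 0.8*0) = 7.5
```

With all future values at zero, only the immediate reward R contributes, scaled by the learning rate.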
To determine the optimal learning rate and discount factor, the learning rate was first fixed at 0.75 while the discount factor varied over [0.6, 0.9]; multiple experiments were run and the results recorded. A combined analysis of the experiments concluded that convergence is fastest and the reward highest when the discount factor is around 0.8, so the optimal discount factor was determined to be 0.8. The discount factor was then fixed at 0.75 while the learning rate varied over [0.6, 0.9]; again, multiple experiments were run and the results recorded. A combined analysis concluded that convergence is fastest and the reward highest when the learning rate is around 0.8, so the optimal learning rate was determined to be 0.8. It should be noted that both the learning rate and the discount factor need some room for adjustment; the range of the learning rate α was therefore set to [0.7, 0.8] and the range of the discount factor γ to [0.7, 0.8], improving the training efficiency and training quality of the Q-learning algorithm.
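The parameter sweep described above (fix one parameter, vary the other over [0.6, 0.9], keep the value giving the best result) can be sketched as follows. The evaluation function is a hypothetical stand-in for a full training run; the toy score peaking near 0.8 mimics the "fastest convergence, highest reward" outcome reported in the text:

```python
def sweep(evaluate, fixed_alpha=0.75,
          candidates=(0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9)):
    """Return the candidate discount factor that maximizes evaluate(alpha, gamma),
    with the learning rate held fixed (the converse sweep is symmetric)."""
    scores = {gamma: evaluate(fixed_alpha, gamma) for gamma in candidates}
    return max(scores, key=scores.get)

# Illustrative score surface with its maximum at gamma = 0.8.
best_gamma = sweep(lambda alpha, gamma: -(gamma - 0.8) ** 2)
print(best_gamma)  # 0.8
```

The same routine, with the roles of α and γ swapped, covers the second half of the experiment.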
Processing the training samples with the sparse dimensionality-reduction method to obtain target samples includes:
determining, according to Zipf's law, the data in the training samples whose frequency of occurrence is below a preset threshold;
removing the data whose frequency is below the preset threshold from the training samples to obtain the target samples.
Specifically, the core of the Q-learning algorithm is the Q-table. The rows and columns of the Q-table represent the values of state and action respectively, and the entry Q(s,a) measures the payoff of taking action a in the current state s. The Q-table has a problem: in practice there may be infinitely many states, and very many actions may be taken, so the Q-table can grow without bound.
Suppose there are 32 nodes (Node) and n data items (data); for example, f_2,1 denotes the frequency with which data2 appears on node Node1, yielding a matrix A. According to Zipf's law for the data, i.e. the 80/20 rule, some data items always occur with very high frequency but are few in number, while the remaining data may occur with very low frequency but make up the majority. We want to find the frequent data. The concrete method is as follows:
1) Compute the maximum of row i of matrix A and denote it f_max. Matrix A is shown in Table 1:
Table 1
Frequency | Node1  | Node2  | ... | Node32
data1     | f_1,1  | f_1,2  | ... | f_1,32
data2     | f_2,1  | f_2,2  | ... | f_2,32
...       | ...    | ...    | ... | ...
datan     | f_n,1  | f_n,2  | ... | f_n,32
2) Then compute the distance (variance) between each element f_i,j of row i and the row maximum f_max. If (f_max - f_i,j)^2 < d is satisfied, where d is set according to actual operation, f_i,j is regarded as "active" data and recorded. Conversely, if (f_max - f_i,j)^2 > d, the datum is regarded as "inactive" and represented by 0.
3) Finally, a matrix as follows is obtained, in which k < n and a 0 indicates that data_i does not appear frequently on node Node_j. This sparsifies the input space of Q-learning; as shown in Table 2, the nonzero parts of the matrix are used as the input.
Table 2
Frequency | Node1  | Node2  | ... | Node32
data1     | 0      | g_1,2  | ... | 0
data2     | g_2,1  | 0      | ... | g_2,32
...       | ...    | ...    | ... | ...
datak     | 0      | 0      | ... | g_k,32
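Steps 1)–3) above can be sketched as a small pass over the frequency matrix. The frequency threshold implements the Zipf-based removal of rare data items (which is how the row count drops from n to k), and the distance threshold d is the empirically chosen constant mentioned in step 2); all concrete values below are illustrative assumptions:

```python
def sparsify(A, freq_threshold, d):
    """Drop rows (data items) whose peak frequency is below freq_threshold
    (rare data removed per the Zipf rule), then zero out 'inactive' entries
    whose squared distance from the row maximum is at least d."""
    out = []
    for row in A:
        f_max = max(row)
        if f_max < freq_threshold:
            continue  # low-frequency data item: removed from the training sample
        out.append([f if (f_max - f) ** 2 < d else 0 for f in row])
    return out

# Rows = data items, columns = nodes (3 nodes instead of 32 for brevity).
A = [
    [90, 85, 2],   # active on Node1 and Node2
    [3, 2, 1],     # rare everywhere -> whole row dropped (k < n)
    [1, 1, 88],    # active on Node3 only
]
print(sparsify(A, freq_threshold=10, d=30))
# [[90, 85, 0], [0, 0, 88]]
```

The nonzero entries of the result are what would be fed to Q-learning as the sparsified input.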
After sparsification, feature selection is complete and the model could be trained directly. However, the feature matrix may still be so large that computation is heavy and training slow, so reducing the dimensionality of the feature matrix is also indispensable. The immediate benefit of dimensionality reduction is that fewer dimensions make computation and visualization easier; its deeper significance lies in extracting and consolidating the useful information while discarding the useless.
It should be noted that the present invention can also be applied to big-data network centers, for example consoles of wireless sensors or data processing in edge computing, to optimize data placement and reduce the response latency of data reads and writes.
A data processing apparatus provided by an embodiment of the present invention is described below; the data processing apparatus described below and the data processing method described above may be cross-referenced.
Referring to FIG. 3, a data processing apparatus provided by an embodiment of the present invention includes:
a receiving module 301, configured to receive an access request sent by a client;
a determining module 302, configured to determine, according to the type of the access request, the configuration information of the current system and a preset data processing model, the destination node corresponding to the access request, the types of access request including at least read requests and write requests, and the data processing model being obtained through training with a sparse dimensionality-reduction method and the Q-learning algorithm;
a processing module 303, configured to transmit the access request to the destination node for corresponding processing.
The processing module is specifically configured to:
when the access request is a read request, read the data corresponding to the read request from the destination node and return the data to the client.
The processing module is also specifically configured to:
when the access request is a write request, store the data corresponding to the write request to the destination node.
The receiving module is specifically configured to:
receive the write request sent by the client and return response information corresponding to the write request, so that the client splits the data corresponding to the write request into multiple data blocks according to the response information.
The apparatus further includes a training module for training the data processing model, the training module including:
an obtaining unit, configured to obtain historical data information of the current system and use the historical data information as training samples;
a dimensionality-reduction unit, configured to process the training samples with the sparse dimensionality-reduction method to obtain target samples;
a training unit, configured to train the target samples with the Q-learning algorithm; if the difference between the currently obtained feedback and the feedback obtained the previous time is smaller than a preset threshold, training is complete and the data processing model is obtained.
The training unit is specifically configured to:
in the course of training the target samples with the Q-learning algorithm, adjust the learning rate and the discount factor of the Q-learning algorithm and determine the optimal values of the learning rate and the discount factor.
The dimensionality-reduction unit is specifically configured to:
determine, according to Zipf's law, the data in the training samples whose frequency of occurrence is below a preset threshold, and remove those data from the training samples to obtain the target samples.
It can be seen that this embodiment provides a data processing apparatus comprising a receiving module, a determining module and a processing module. The receiving module first receives an access request sent by a client; the determining module then determines, according to the type of the access request, the configuration information of the current system and a preset data processing model, the destination node corresponding to the access request, the types of access request including at least read requests and write requests and the data processing model being trained with a sparse dimensionality-reduction method and the Q-learning algorithm; finally, the processing module transmits the access request to the destination node for corresponding processing. With the modules dividing the work in this way, each performing its own duty, data processing efficiency is improved, latency is reduced, and users receive a good quality of service.
A data processing device provided by an embodiment of the present invention is described below; the data processing device described below and the data processing method and apparatus described above may be cross-referenced.
Referring to FIG. 4, a data processing device provided by an embodiment of the present invention includes:
a memory 401, configured to store a computer program;
a processor 402, configured to implement, when executing the computer program, the steps of the data processing method of any of the above embodiments.
A readable storage medium provided by an embodiment of the present invention is described below; the readable storage medium described below and the data processing method, apparatus and device described above may be cross-referenced.
A readable storage medium stores a computer program which, when executed by a processor, implements the steps of the data processing method of any of the above embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts they have in common the embodiments may be cross-referenced.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

  1. A data processing method, characterized by comprising:
    receiving an access request sent by a client;
    determining, according to the type of the access request, configuration information of the current system and a preset data processing model, a destination node corresponding to the access request, the type of the access request comprising at least a read request and a write request, and the data processing model being obtained through training with a sparse dimensionality-reduction method and a Q-learning algorithm;
    transmitting the access request to the destination node for corresponding processing.
  2. The data processing method according to claim 1, characterized in that transmitting the access request to the destination node for corresponding processing comprises:
    when the access request is a read request, reading the data corresponding to the read request from the destination node and returning the data to the client.
  3. The data processing method according to claim 1, characterized in that transmitting the access request to the destination node for corresponding processing comprises:
    when the access request is a write request, storing the data corresponding to the write request to the destination node.
  4. The data processing method according to claim 3, characterized in that receiving the access request sent by the client comprises:
    receiving the write request sent by the client and returning response information corresponding to the write request, so that the client splits the data corresponding to the write request into multiple data blocks according to the response information.
  5. The data processing method according to any one of claims 1 to 4, characterized in that the training process of the data processing model comprises:
    obtaining historical data information of the current system and using the historical data information as training samples;
    processing the training samples with the sparse dimensionality-reduction method to obtain target samples;
    training the target samples with the Q-learning algorithm; if the difference between the currently obtained feedback and the feedback obtained the previous time is smaller than a preset threshold, training is complete and the data processing model is obtained.
  6. The data processing method according to claim 5, characterized in that training the target samples with the Q-learning algorithm comprises:
    in the course of training the target samples with the Q-learning algorithm, adjusting the learning rate and the discount factor of the Q-learning algorithm and determining the optimal values of the learning rate and the discount factor.
  7. The data processing method according to claim 6, characterized in that processing the training samples with the sparse dimensionality-reduction method to obtain target samples comprises:
    determining, according to Zipf's law, the data in the training samples whose frequency of occurrence is below a preset threshold;
    removing the data whose frequency is below the preset threshold from the training samples to obtain the target samples.
  8. A data processing apparatus, characterized by comprising:
    a receiving module, configured to receive an access request sent by a client;
    a determining module, configured to determine, according to the type of the access request, configuration information of the current system and a preset data processing model, a destination node corresponding to the access request, the type of the access request comprising at least a read request and a write request, and the data processing model being obtained through training with a sparse dimensionality-reduction method and a Q-learning algorithm;
    a processing module, configured to transmit the access request to the destination node for corresponding processing.
  9. A data processing device, characterized by comprising:
    a memory, configured to store a computer program;
    a processor, configured to implement, when executing the computer program, the steps of the data processing method according to any one of claims 1 to 7.
  10. A readable storage medium, characterized in that the readable storage medium stores a computer program which, when executed by a processor, implements the steps of the data processing method according to any one of claims 1 to 7.
PCT/CN2019/094372 2018-11-09 2019-07-02 Data processing method, apparatus and device, and readable storage medium WO2020093714A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811332391.1A CN109407997B (zh) 2018-11-09 2018-11-09 Data processing method, apparatus and device, and readable storage medium
CN201811332391.1 2018-11-09

Publications (1)

Publication Number Publication Date
WO2020093714A1 true WO2020093714A1 (zh) 2020-05-14

Family

ID=65472409

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/094372 WO2020093714A1 (zh) 2018-11-09 2019-07-02 Data processing method, apparatus and device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN109407997B (zh)
WO (1) WO2020093714A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109407997B (zh) * 2018-11-09 2021-04-23 长沙理工大学 Data processing method, apparatus and device, and readable storage medium
CN110134688B (zh) * 2019-05-14 2021-06-01 北京科技大学 Hotspot event data storage management method and system in an online social network
CN112529169A (zh) * 2019-09-18 2021-03-19 华为技术有限公司 Data processing method, model optimization apparatus, and model execution apparatus
CN113037891B (zh) * 2021-03-26 2022-04-08 腾讯科技(深圳)有限公司 Access method and apparatus for stateful applications in an edge computing system, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102541475A (zh) * 2012-03-12 2012-07-04 成都市华为赛门铁克科技有限公司 Data storage method and data storage apparatus
US20140330807A1 (en) * 2012-04-26 2014-11-06 Christoph Weyerhaeuser Rule-Based Extendable Query Optimizer
CN104811493A (zh) * 2015-04-21 2015-07-29 华中科技大学 Network-aware virtual machine image storage system and read/write request handling method
CN107038252A (zh) * 2017-05-04 2017-08-11 沈阳航空航天大学 Method for generating routing metrics based on multimodal data
CN109407997A (zh) * 2018-11-09 2019-03-01 长沙理工大学 Data processing method, apparatus and device, and readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107517266A (zh) * 2017-09-05 2017-12-26 江苏电力信息技术有限公司 Instant messaging method based on distributed caching
CN108112082B (zh) * 2017-12-18 2021-05-25 北京工业大学 Distributed autonomous resource allocation method for wireless networks based on stateless Q-learning
CN108446340B (zh) * 2018-03-02 2019-11-05 哈尔滨工业大学(威海) User hotspot data access prediction method for massive small files
CN108510082B (zh) * 2018-03-27 2022-11-11 苏宁易购集团股份有限公司 Method and apparatus for processing machine learning models
CN108769026B (zh) * 2018-05-31 2022-02-15 康键信息技术(深圳)有限公司 User account detection system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI, CHUAN ET AL: "Online Hierarchical Reinforcement Learning Based on Path-matching", JOURNAL OF COMPUTER RESEARCH AND DEVELOPMENT, no. 9, 15 September 2008 (2008-09-15), pages 1470 - 1476 *

Also Published As

Publication number Publication date
CN109407997B (zh) 2021-04-23
CN109407997A (zh) 2019-03-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19883051; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19883051; Country of ref document: EP; Kind code of ref document: A1)