CN114021708B - Data processing method, device and system, electronic equipment and storage medium - Google Patents

Data processing method, device and system, electronic equipment and storage medium

Info

Publication number
CN114021708B
Authority
CN
China
Prior art keywords: computing, core, computing core, calculation, cores
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111165135.XA
Other languages
Chinese (zh)
Other versions
CN114021708A (en)
Inventor
董刚
赵雅倩
李仁刚
杨宏斌
刘海威
蒋东东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IEIT Systems Co Ltd
Original Assignee
Inspur Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Electronic Information Industry Co Ltd
Priority to CN202111165135.XA
Publication of CN114021708A
Priority to PCT/CN2022/090194 (WO2023050807A1)
Application granted
Publication of CN114021708B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00: Digital computers in general; Data processing equipment in general
    • G06F15/76: Architectures of general purpose stored program computers
    • G06F15/80: Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F15/8046: Systolic arrays
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a data processing method, device, system, electronic device and computer-readable storage medium. The method includes: obtaining a setting instruction and configuring a computing network according to it, where the setting instruction sets the data flow direction between the computing cores of the computing network; obtaining at least one feature value and feeding each feature value into at least one starting computing core of the computing network; transmitting the feature values along the configured data flow direction, starting from the starting computing cores; and using each computing core to generate a computation result from the feature value and the corresponding weight value. By setting the data flow direction between computing cores through the setting instruction, data can flow between different levels or within the same level, so that the entire computing network can be fully (100%) utilized regardless of the shape of the network model being processed.

Description

A data processing method, device, system, electronic device and storage medium

Technical Field

The present application relates to the technical field of deep learning, and in particular to a data processing method, a data processing device, a data processing system, an electronic device and a computer-readable storage medium.

Background

The development of deep learning currently places very high demands on computing power, and various ASIC (Application Specific Integrated Circuit) architectures have emerged. A representative example is the Tensor Processing Unit (TPU) introduced by Google, a custom ASIC chip designed specifically for machine-learning workloads. The TPU is not a general-purpose processor but a matrix processor dedicated to neural-network workloads; its main task is matrix processing, and its hardware designers know every step of that computation. They therefore placed tens of thousands of multipliers and adders and connected them directly, building a physical matrix of those operators, known as a systolic array architecture. A systolic array is a set of processing elements (PEs) arranged regularly in a grid, and each PE exchanges data with its neighboring PEs in predetermined steps. Other architectures similar to the TPU, such as the Da Vinci architecture, likewise adopt a regular two- or three-dimensional grid of PEs. However, the shapes of neural networks, that is, the numbers of their specification parameters, vary greatly, while the hardware scale of a TPU and similar chips is fixed. Hardware resource utilization can only be maximized when the shape of the network exactly matches the chip; for a neural network that does not match the chip, utilization depends on the size of the network and the hardware resources cannot be fully exploited, so there is a problem of insufficient resource utilization.

Summary of the Invention

In view of this, the purpose of the present application is to provide a data processing method, device, system, electronic device and computer-readable storage medium that improve resource utilization.

To solve the above technical problem, the present application provides a data processing method, including:

obtaining a setting instruction, and configuring a computing network according to the setting instruction, where the setting instruction is used to set the data flow direction between the computing cores in the computing network;

obtaining at least one feature value, and inputting the at least one feature value into at least one starting computing core in the computing network respectively;

transmitting the feature value along the data flow direction, taking the starting computing core as the starting point;

using each of the computing cores to generate a computation result based on the feature value and the corresponding weight value;

wherein the computing network includes one first computing core, several second computing cores and several third computing cores; the starting computing core includes the first computing core, or includes the first computing core and the third computing cores, or includes the first computing core, the second computing cores and the third computing cores; any second computing core corresponds to one upper-level computing core and two lower-level computing cores, the lower-level computing core being a second computing core or a third computing core; the third computing cores have no lower-level computing cores, and the first computing core is the upper-level computing core of a target second computing core.

Optionally, the method further includes:

obtaining configuration information, and storing the configuration information in each of the computing cores, where the configuration information includes several pieces of identification information and the corresponding data flow directions;

Correspondingly, configuring the computing network according to the setting instruction includes:

sending the setting instruction to each of the computing cores, and determining the data flow direction corresponding to each computing core by using the target identification information in the setting instruction and the configuration information.

Optionally, when the setting instruction is used to configure the computing network as a single computation path, the data flow direction is a first flow direction from upper level to lower level, and the starting computing core is the first computing core;

Correspondingly, inputting the at least one feature value into the at least one starting computing core in the computing network respectively, and transmitting the feature value along the data flow direction with the starting computing core as the starting point, includes:

inputting the feature value into the first computing core, and using the first computing core to send the feature value to the target second computing core;

based on the first flow direction, starting from the target second computing core, sending the feature value to the corresponding lower-level computing cores in sequence until the feature value is sent to the third computing cores.

Optionally, when the setting instruction is used to configure the computing network as at least three computation paths, the data flow directions are a first flow direction from upper level to lower level and a second flow direction between cores of the same level; the starting computing cores include the first computing core and a starting third computing core, the first computing core corresponding to the first flow direction and the starting third computing core corresponding to the second flow direction;

Correspondingly, inputting the at least one feature value into the at least one starting computing core in the computing network respectively, and transmitting the feature value along the data flow direction with the starting computing core as the starting point, includes:

inputting a first feature value into the first computing core, and using the first computing core to send the first feature value to the target second computing core;

based on the first flow direction, starting from the target second computing core, sending the first feature value to the corresponding lower-level computing cores in sequence until the first feature value is sent to the second computing core at the end of the first flow direction;

inputting a second feature value into the starting third computing core, and, starting from the starting third computing core, sending the second feature value to the subsequent same-level computing cores in sequence based on the second flow direction, until the second feature value is sent to the third computing core at the end of the second flow direction.

Optionally, the starting computing cores further include a starting second computing core, the second computing core corresponding to the second flow direction;

Correspondingly, inputting the at least one feature value into the at least one starting computing core in the computing network respectively, and transmitting the feature value along the data flow direction with the starting computing core as the starting point, includes:

inputting the second feature value into the starting second computing core, and, starting from the starting second computing core, sending the second feature value to the subsequent same-level computing cores in sequence based on the second flow direction, until the second feature value is sent to the second computing core at the end of the second flow direction.

Optionally, the method further includes:

determining the weight value corresponding to each of the computing cores;

sending the weight value to the corresponding computing core and storing it there.

Optionally, determining the weight value corresponding to each of the computing cores includes:

obtaining initial weight values;

based on the data flow direction, determining the correspondence between each of the computing cores and the initial weight values, thereby completing the determination of the weight values.

Optionally, using each of the computing cores to generate the computation result based on the feature value and the corresponding weight value includes:

controlling each of the computing cores to multiply the feature value by the weight value to obtain a target result;

adding the target result to the historical computation result stored in the computing core to obtain the computation result.

Optionally, the computing core includes an arithmetic logic unit, a weight value interface, a feature value interface, a control module and a storage unit; the weight value interface includes an external input port and a same-level input port, and the feature value interface includes an external input port, an upper-level input port and a same-level input port; the control module is used to store the configuration information and determine the data flow direction according to the setting instruction, and the storage unit is used to store the weight value, the feature value and the computation result.

The present application further provides a data processing device, including:

a setting module, configured to obtain a setting instruction and configure a computing network according to the setting instruction, where the setting instruction is used to set the data flow direction between the computing cores in the computing network;

an input module, configured to obtain at least one feature value and input the at least one feature value into at least one starting computing core in the computing network respectively;

a transmission module, configured to transmit the feature value along the data flow direction, taking the starting computing core as the starting point;

a computation module, configured to use each of the computing cores to generate a computation result based on the feature value and the corresponding weight value;

wherein the computing network includes one first computing core, several second computing cores and several third computing cores; the starting computing core includes the first computing core, or includes the first computing core and the third computing cores, or includes the first computing core, the second computing cores and the third computing cores; any second computing core corresponds to one upper-level computing core and two lower-level computing cores, the lower-level computing core being a second computing core or a third computing core; the third computing cores have no lower-level computing cores, and the first computing core is the upper-level computing core of a target second computing core.

The present application further provides a data processing system, including a computing network, where the computing network includes one first computing core, several second computing cores and several third computing cores; any second computing core corresponds to one upper-level computing core and two lower-level computing cores, the lower-level computing core being a second computing core or a third computing core; the third computing cores have no lower-level computing cores, and the first computing core is the upper-level computing core of a target second computing core.

The present application further provides an electronic device, including a memory and a processor, where:

the memory is used to store a computer program;

the processor is used to execute the computer program to implement the above data processing method.

The present application further provides a computer-readable storage medium for storing a computer program, where the above data processing method is implemented when the computer program is executed by a processor.

In the data processing method provided by the present application, a setting instruction is obtained and a computing network is configured according to it; the setting instruction is used to set the data flow direction between the computing cores in the computing network; at least one feature value is obtained and input into at least one starting computing core in the computing network respectively; the feature value is transmitted along the data flow direction, starting from the starting computing core; and each computing core generates a computation result based on the feature value and the corresponding weight value. The computing network includes one first computing core, several second computing cores and several third computing cores; the starting computing core includes the first computing core, or includes the first computing core and the third computing cores, or includes the first, second and third computing cores; any second computing core corresponds to one upper-level computing core and two lower-level computing cores, the lower-level computing core being a second or third computing core; the third computing cores have no lower-level computing cores, and the first computing core is the upper-level computing core of the target second computing core.

It can be seen that the method adopts a computing network with a special architecture in which there are three kinds of computing cores with a preset hierarchical relationship. The first computing core forms a level by itself; one target second computing core is the lower-level computing core of the first computing core; two second computing cores are the lower-level computing cores of the target second computing core; four further second computing cores are the lower-level computing cores of those two second computing cores, and so on, until all the third computing cores, in pairs, are the lower-level computing cores of the second computing cores of a certain level. All computing cores are arranged by level, with cores of the same level lying in the same plane, so the network takes on a pyramid shape. The number of computing cores in the computing network is 1+1+2+4+…+2^n = 2^(n+1), and the number of computing cores at each level equals the sum of the numbers of cores at all higher levels. Since the number of channels of a neural network is usually 2^k, when not all computing cores are needed for a computation, that is, when the shape of the network model does not match the size of the computing network, the cores of the first several levels can be used as one computation path that produces 2^k computation results, while the lower-level cores are split into at least two further computation paths that process other feature values; the paths are independent of each other and compute in parallel. Specifically, the setting instruction sets the data flow direction between the computing cores of the computing network so that data flows between different levels or within the same level, which allows the entire computing network to be fully (100%) utilized regardless of the shape of the network model being processed, solving the problem of insufficient hardware utilization in the related art.

In addition, the present application further provides a data processing device, an electronic device and a computer-readable storage medium, which likewise have the above beneficial effects.

Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present application or in the related art more clearly, the drawings required in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic structural diagram of a computing network provided by an embodiment of the present application;

FIG. 2 is a flowchart of a data processing method provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of the level structure between second computing cores and third computing cores provided by an embodiment of the present application;

FIG. 4 is a schematic structural diagram of a specific computing core provided by an embodiment of the present application;

FIG. 5 is a schematic structural diagram of a data processing device provided by an embodiment of the present application;

FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.

Detailed Description of the Embodiments

To make the purposes, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

The present application proposes a computing network with a new structure; please refer to FIG. 1, which is a schematic structural diagram of a computing network provided by an embodiment of the present application. The computing network includes one first computing core, several second computing cores and several third computing cores. Any second computing core corresponds to one upper-level computing core and two lower-level computing cores, the lower-level computing core being a second computing core or, possibly, a third computing core. The third computing cores have no lower-level computing cores, that is, they are the lowest-level cores of the entire network; the first computing core is the upper-level computing core of the target second computing core, the target second computing core being the highest-level one among the second computing cores.

As can be seen from FIG. 1, dividing the entire computing network by computing-core level yields the pyramid structure shown in FIG. 1. The topmost computing core is the first computing core (which can be regarded as level 0); its lower-level computing core is the target second computing core (which can be regarded as level 1), and there is one target second computing core. The target second computing core is itself a second computing core and has the characteristics of a second computing core, namely one upper-level computing core (level 0) and two lower-level computing cores (which can be regarded as level 2). The lower-level computing cores of the target second computing core (level 2) likewise each have two lower-level computing cores (level 3), and so on, until the third computing cores serve as the lower-level computing cores of second computing cores; the third computing cores have no lower-level computing cores. In addition, in FIG. 1 the nodes connected by solid lines are computing cores, and the nodes connected by dotted lines are transmission nodes used to transmit weight values.

It follows that, starting from the first computing core, if the third computing cores are at level n, the entire computing network contains 1+1+2+4+…+2^n = 2^(n+1) computing cores, where n is a positive integer. Moreover, the number of computing cores at each level equals the sum of the numbers of all computing cores at higher levels; for example, the number of computing cores at level 5 equals the sum of the numbers of all computing cores at levels 0, 1, 2, 3 and 4. Since the number of channels of a neural network is usually 2^k (k being a positive integer), when k = n all computing cores of the entire computing network can participate in the computation. When not all computing cores are needed, that is, when the shape of the network model does not match the size of the computing network (k < n), the computing cores of the first k-1 levels can be used as one computation path that produces 2^k computation results, while the computing cores at level k and below are split into at least two further computation paths that process other feature values; the paths are independent of each other and compute in parallel.
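For illustration only, the following Python sketch (not part of the patent; the function and variable names are the editor's assumptions) reproduces the counting described above: the per-level core counts, the total of 2^(n+1), and the property that each level's count equals the sum of the counts of all higher levels.

    def core_counts(n):
        """Per-level core counts, mirroring the series 1 + 1 + 2 + 4 + ... + 2**n."""
        return [1] + [2 ** i for i in range(n + 1)]

    counts = core_counts(4)           # [1, 1, 2, 4, 8, 16]
    total = sum(counts)               # 2 ** (4 + 1) = 32
    # each level's count equals the sum of the counts of all higher levels
    assert all(counts[i] == sum(counts[:i]) for i in range(1, len(counts)))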

For example, if n = 6 and k = 6, the number of computing cores in the entire computing network is 2^(6+1) = 2^7, and the number of channels of the network model is 2^6. The computing cores before level 6 in the computing network can then be used as one computation path, while the level-7 computing cores (which are in fact the third computing cores), whose number is twice as large, can be split into two independent computation paths, and the three computation paths compute in parallel.

On the basis of the above computing network, this embodiment further provides a processing method that uses it for data processing. Please refer to FIG. 2, which is a flowchart of a data processing method provided by an embodiment of the present application. The method includes:

S101: obtain a setting instruction, and configure the computing network according to the setting instruction.

The setting instruction is used to set the data flow direction between the computing cores in the computing network; the data flow direction refers to the direction in which feature values are transmitted between computing cores. It can be understood that once the structure of the computing network and the number of channels of the network model are determined, that is, once n and k are determined, it is also determined whether the computing network exactly matches the network model. Therefore, before the computing network is used for data processing, it needs to be configured accordingly. By setting the data flow direction between the computing cores, the computing network can be configured as one or several parallel computation paths, thereby making full use of the computing network's resources.
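As a rough illustration of what a setting instruction might carry, the following Python sketch defines a flow-direction enum and a minimal instruction record. The patent does not specify any concrete encoding, so all type and field names here are assumptions.

    from dataclasses import dataclass
    from enum import Enum, auto

    class FlowDirection(Enum):
        UPPER_TO_LOWER = auto()   # "first flow direction": from an upper-level core to its lower-level cores
        SAME_LEVEL = auto()       # "second flow direction": between computing cores of the same level

    @dataclass
    class SettingInstruction:
        target_id: int            # target identification info selecting one preset data-flow layout
        num_paths: int            # number of independent computation paths the network is split into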

S102: obtain at least one feature value, and input the at least one feature value into at least one starting computing core in the computing network respectively.

A starting computing core is the first computing core into which a feature value is input. It can be understood that the computation paths compute in parallel and each path has exactly one starting computing core, so the number of starting computing cores equals the number of computation paths. After a feature value is obtained, it is input into a starting computing core, which can then use it for data processing and also pass it on according to the configured data flow direction.

S103: taking the starting computing core as the starting point, transmit the feature value along the data flow direction.

The data flow direction specifies the transmission direction of the feature values, so the starting computing core can determine where to send a feature value and forward it to the next computing core, until all computing cores have received it.

S104: use each computing core to generate a computation result based on the feature value and the corresponding weight value.

After obtaining a feature value, each computing core can compute with it and its own corresponding weight value to obtain the corresponding computation result. The specific computation procedure is not limited and can be set as required. The weight values are stored in the computing cores in advance, and their number and size are not limited.

It should be noted that the execution order of steps S102, S103 and S104 is not limited in the present application. In one implementation, the three steps are executed serially, that is, step S102 is executed first, then step S103, and finally step S104. In another implementation, the three steps can be executed in parallel, that is, steps S103 and/or S104 can be executed while step S102 is being executed.

The data processing method provided by the embodiments of the present application adopts a computing network with a special architecture in which there are three kinds of computing cores with a preset hierarchical relationship. The first computing core forms a level by itself; one target second computing core is the lower-level computing core of the first computing core; two second computing cores are the lower-level computing cores of the target second computing core; four further second computing cores are the lower-level computing cores of those two second computing cores, and so on, until all the third computing cores, in pairs, are the lower-level computing cores of the second computing cores of a certain level. All computing cores are arranged by level, with cores of the same level lying in the same plane, so the network takes on a pyramid shape. The number of computing cores in the computing network is 1+1+2+4+…+2^n = 2^(n+1), and the number of computing cores at each level equals the sum of the numbers of cores at all higher levels. Since the number of channels of a neural network is usually 2^k, when not all computing cores are needed for a computation, that is, when the shape of the network model does not match the size of the computing network, the cores of the first several levels can be used as one computation path that produces 2^k computation results, while the lower-level cores are split into at least two further computation paths that process other feature values; the paths are independent of each other and compute in parallel. Specifically, the setting instruction sets the data flow direction between the computing cores of the computing network so that data flows between different levels or within the same level, which allows the entire computing network to be fully (100%) utilized regardless of the shape of the network model being processed, solving the problem of insufficient hardware utilization in the related art.

Based on the above embodiment, this embodiment further describes the steps of the data processing method. Regarding the specific content of the setting instruction, in one implementation it may directly specify the data flow directions between all computing cores in the computing network. In another implementation, before the data flow direction is set, the method may further include the following steps:

Step 11: obtain configuration information, and store the configuration information in each computing core.

The configuration information includes several pieces of identification information and the corresponding data flow directions.

Correspondingly, the process of configuring the computing network according to the setting instruction may include:

Step 12: send the setting instruction to each computing core, and determine the data flow direction corresponding to each computing core by using the target identification information in the setting instruction and the configuration information.

The configuration information defines several data flow directions and the unique identification information corresponding to each of them. The configuration information is preset in each computing core, and after the setting instruction is obtained, the configuration information can be filtered by the target identification information in the setting instruction to obtain the corresponding data flow direction. Specifically, after the data flow direction is determined, each computing core can filter it with its own identity information to obtain its own data flow direction, that is, the direction in which it should send feature values.
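The lookup in step 12 could be sketched as below, reusing the FlowDirection enum from the earlier sketch; the table contents and core identifiers are invented purely for illustration and are not defined in the patent.

    CONFIG_TABLE = {
        # target identification info -> {core id: flow direction for that core}
        0: {"core_a": FlowDirection.UPPER_TO_LOWER, "core_b": FlowDirection.UPPER_TO_LOWER},
        1: {"core_a": FlowDirection.UPPER_TO_LOWER, "core_b": FlowDirection.SAME_LEVEL},
    }

    def resolve_flow(core_id, target_id):
        """Each core filters the stored configuration by the target id and its own identity."""
        return CONFIG_TABLE[target_id][core_id]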

There are several possibilities for the specific content of the data flow direction. In one implementation, the setting instruction is used to configure the computing network as a single computation path; the data flow direction is then a first flow direction from upper level to lower level, and the starting computing core is the first computing core.

Correspondingly, inputting the at least one feature value into the at least one starting computing core in the computing network respectively, and transmitting the feature value along the data flow direction with the starting computing core as the starting point, may specifically include:

Step 21: input the feature value into the first computing core, and use the first computing core to send the feature value to the target second computing core.

Step 22: based on the first flow direction, starting from the target second computing core, send the feature value to the corresponding lower-level computing cores in sequence until the feature value is sent to the third computing cores.

In this implementation, the entire computing network computes as one computation path. In this case there is only one starting computing core, namely the first computing core, and the data flow direction is the first flow direction from upper level to lower level. After the feature value is input into the first computing core, the first computing core can therefore be controlled to send the feature value to its corresponding lower-level computing core, namely the target second computing core. Starting from the target second computing core, each computing core sends the feature value it obtains to its corresponding lower-level computing cores until the feature value has been sent to all third computing cores. The third computing cores have no lower-level computing cores, so the transfer of the feature value ends there.
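A simplified sketch of this single-path flow (steps 21 and 22) is given below; the CoreNode class and the tiny three-level tree are illustrative assumptions by the editor, not the patent's hardware.

    class CoreNode:
        """Minimal stand-in for one computing core of the pyramid."""
        def __init__(self, name):
            self.name = name
            self.children = []        # lower-level computing cores (at most two for a second core)
            self.feature = None

        def forward_down(self, feature):
            """Keep a copy of the feature value, then pass it to every lower-level core."""
            self.feature = feature
            for child in self.children:
                child.forward_down(feature)

    first = CoreNode("first_core")
    target_second = CoreNode("target_second_core")
    first.children = [target_second]
    target_second.children = [CoreNode("third_core_a"), CoreNode("third_core_b")]
    first.forward_down(0.5)           # the value reaches every core down to the third computing cores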

In a second implementation, the setting instruction is used to configure the computing network as at least three computation paths; the data flow directions are then a first flow direction from upper level to lower level and a second flow direction between cores of the same level; the starting computing cores include the first computing core and starting third computing cores, the first computing core corresponding to the first flow direction and the starting third computing cores corresponding to the second flow direction.

Correspondingly, inputting the at least one feature value into the at least one starting computing core in the computing network respectively, and transmitting the feature value along the data flow direction with the starting computing core as the starting point, may include:

Step 31: input the first feature value into the first computing core, and use the first computing core to send the first feature value to the target second computing core.

Step 32: based on the first flow direction, starting from the target second computing core, send the first feature value to the corresponding lower-level computing cores in sequence until the first feature value is sent to the second computing core at the end of the first flow direction.

Step 33: input the second feature value into the starting third computing core, and, starting from the starting third computing core, send the second feature value to the subsequent same-level computing cores in sequence based on the second flow direction, until the second feature value is sent to the third computing core at the end of the second flow direction.

In this implementation, the entire computing network can be divided into three computation paths, or into more than three. In this case, besides the first flow direction from upper level to lower level, the data flow also includes a second flow direction between cores of the same level, that is, feature values flow between computing cores of the same level. It can be understood that, since there are multiple computation paths, there are also multiple starting computing cores, which include the first computing core and at least two starting third computing cores.

A starting third computing core is a third computing core designated as a starting computing core; the number of starting third computing cores can vary with the number of computation paths. It can be understood that when there are three computation paths, all the third computing cores are divided into two independent computation paths, so there are at least two starting third computing cores, and more when there are more computation paths.

In this case, for the computation path whose starting computing core is the first computing core, this implementation is the same as the first implementation described above. The feature value input into the first computing core is the first feature value; after the first computing core sends the first feature value to the target second computing core, the first feature value is passed down level by level as specified by the first flow direction until it is sent to the second computing core at the end of the first flow direction. For a computation path whose starting computing core is a third computing core, the feature value input into the starting third computing core is the second feature value. The second flow direction likewise specifies the order of data transmission between computing cores, so, according to the second flow direction, the subsequent same-level computing core of each third computing core, starting from the starting third computing core, can be determined; each third computing core sends the second feature value to its subsequent same-level computing core until the second feature value is sent to the third computing core at the end of the second flow direction.
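The same-level ("second") flow direction used by the additional computation paths can be sketched as follows, reusing the CoreNode class from the previous sketch; the chain ordering and values are assumed for illustration only.

    def forward_same_level(path, feature):
        """Hand the feature value along an ordered chain of same-level computing cores."""
        for core in path:
            core.feature = feature    # each core in the path works on this feature value

    extra_path = [CoreNode("third_1"), CoreNode("third_2"), CoreNode("third_3")]
    forward_same_level(extra_path, 1.25)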

On the basis of the second implementation, there is also a third implementation. Specifically, the starting computing cores further include a starting second computing core, the second computing core corresponding to the second flow direction.

Correspondingly, inputting the at least one feature value into the at least one starting computing core in the computing network respectively, and transmitting the feature value along the data flow direction with the starting computing core as the starting point, may include:

Step 41: input the second feature value into the starting second computing core, and, starting from the starting second computing core, send the second feature value to the subsequent same-level computing cores in sequence based on the second flow direction, until the second feature value is sent to the second computing core at the end of the second flow direction.

In this embodiment, the starting computing cores further include a starting second computing core; that is, in addition to the lowest-level third computing cores being split into at least two computation paths, several levels of second computing cores above the third computing cores are likewise split into at least two computation paths. Please refer to FIG. 3, which is a schematic diagram of the level structure between second computing cores and third computing cores provided by an embodiment of the present application. The level-6 computing cores are the third computing cores, and the level-1 to level-5 computing cores are second computing cores; in addition, there is a level-0 first computing core (not shown in the figure). For example, the cores from the level-0 first computing core to the level-4 computing cores can be configured as one computation path, that is, the network has 16 channels. On this basis, the level-5 second computing cores can be divided into one computation path, and the level-6 third computing cores can be divided into two computation paths.

When a second computing core serves as a starting computing core, the flow of its feature values is the same as when a third computing core serves as the starting computing core. For a computation path whose starting computing core is a second computing core, the feature value input into the starting second computing core is the second feature value. The second flow direction likewise specifies the order of data transmission between computing cores, so, according to the second flow direction, the subsequent same-level computing core of each second computing core, starting from the starting second computing core, can be determined; each second computing core sends the second feature value to its subsequent same-level computing core until the second feature value is sent to the second computing core at the end of the second flow direction.

After obtaining a feature value, each computing core computes with it and its weight value to obtain a computation result. It can be understood that, before the weight values are used to generate computation results, each computing core needs to be assigned its corresponding weight value. Specifically, the method may further include:

Step 51: determine the weight value corresponding to each computing core.

Step 52: send the weight value to the corresponding computing core and store it there.

Each computing core needs to store its own corresponding weight value, which is used together with the feature value to generate the output value of the corresponding channel, that is, the computation result. According to how well the computing network matches the number of output channels of the model, the data flow direction can be determined, and then the weight value corresponding to each computing core. Each weight value is sent to its computing core and stored there, so that the core can directly use the weight value for computation once it obtains a feature value.

Further, the process of determining the weight value corresponding to each computing core may include:

Step 61: obtain initial weight values.

Step 62: based on the data flow direction, determine the correspondence between each computing core and the initial weight values, thereby completing the determination of the weight values.

An initial weight value is a weight value that has not yet been assigned to a computing core; each initial weight value corresponds to a different output channel of the model, and the initial weight values can be arranged according to the data flow direction. Therefore, according to the data flow directions of the computing network, including the first flow direction and the second flow direction, the correspondence between the computing cores and the initial weight values is determined, and the initial weight value corresponding to a computing core is taken as that core's weight value, completing the determination of the weight values.
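A possible sketch of steps 61 and 62 is shown below, again reusing the CoreNode class from the earlier sketches; the ordering of cores and the example weights are assumptions for illustration.

    def assign_weights(cores_in_flow_order, initial_weights):
        """Pair each computing core, taken in data-flow order, with one output channel's weight."""
        for core, weight in zip(cores_in_flow_order, initial_weights):
            core.weight = weight      # stored locally so the core can use it once a feature value arrives

    path_cores = [CoreNode("core_0"), CoreNode("core_1")]
    assign_weights(path_cores, [0.3, -0.7])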

This embodiment does not limit the specific way in which the computing cores compute. Current research on deep learning mainly focuses on CNNs (Convolutional Neural Networks). Because processing scenarios differ, the performance requirements on CNNs also differ, which has led to a variety of network structures. However, the basic composition of a CNN is fixed: an input layer, convolutional layers, activation layers, pooling layers and fully connected layers. The most computation-intensive part is the convolutional layer, whose main function is to perform the convolution operation between the feature map (feature) and the neurons (filters). Therefore, in one implementation, the computing network can be used to perform convolution computations. After the weight values have been assigned, the process of using each computing core to generate a computation result based on the feature value and the corresponding weight value may include:

Step 71: control each computing core to multiply the feature value by the weight value to obtain a target result.

Step 72: add the target result to the historical computation result stored in the computing core to obtain the computation result.

It can be understood that convolution computation is a multiply-accumulate computation. Specifically, the target result obtained by multiplying the feature value by the weight value is added to the historical computation result stored in the computing core, yielding the final required computation result. Correspondingly, the new computation result should likewise be stored in the computing core, so that it can serve as the historical computation result for subsequent computations or be read out after the feature values of all input channels have been traversed.
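The multiply-accumulate of steps 71 and 72 can be sketched as follows (reusing the CoreNode class from the earlier sketch); the attribute names are assumptions, and the loop simply accumulates one product per input channel.

    def multiply_accumulate(core, feature):
        """Step 71: multiply the feature value by the stored weight;
        step 72: add the product to the partial result kept in the core."""
        target = feature * core.weight
        core.partial = getattr(core, "partial", 0.0) + target
        return core.partial

    core = CoreNode("core_0")
    core.weight = 0.5
    for feature in [1.0, 2.0, 3.0]:   # one feature value per input channel
        result = multiply_accumulate(core, feature)
    print(result)                     # 0.5*1.0 + 0.5*2.0 + 0.5*3.0 = 3.0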

The specific structure of the computing core is not limited in this embodiment. In one implementation, please refer to FIG. 4, which is a schematic structural diagram of a specific computing core provided by an embodiment of the present application. The computing core includes an arithmetic logic unit (ALU), a weight value interface, a feature value interface, a control module and a storage unit. The weight value interface includes an external input port and a same-level input port: the external input port is used to obtain externally input weight values, and the same-level input port is used to obtain weight values input by a transmission node at the same level. The feature value interface includes an external input port, an upper-level input port and a same-level input port: the external input port is used to obtain externally input feature values and is used when the core serves as a starting computing core, the upper-level input port (i.e. the preceding-stage input port) is used to obtain feature values sent by the upper-level computing core, and the same-level input port is used to obtain feature values sent by a computing core at the same level. The control module is used to store configuration information and determine the data flow direction according to the setting instruction, and the storage unit is used to store the weight values, feature values and calculation results. In addition, the core also includes selectors that choose which input port's data is used as the feature value and weight value for calculation; the selection made by the selectors depends on the data flow direction.
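The class below is a rough software analogue of the core structure in FIG. 4, offered only as a reading aid; the attribute and method names are assumptions, and a real hardware core would differ.

    # Hypothetical software analogue of the computing core shown in FIG. 4.
    class ComputeCore:
        def __init__(self):
            self.config = {}      # control module: identification info -> data flow
            self.flow = None      # flow direction chosen by the setting instruction
            self.weight = None    # storage unit: locally stored weight value
            self.history = 0.0    # storage unit: accumulated calculation result

        def configure(self, config, target_id):
            """Control module: store configuration and select the data flow direction."""
            self.config = config
            self.flow = config[target_id]

        def select_feature(self, external=None, upper=None, same_level=None):
            """Selector: choose the feature input port according to the data flow."""
            if self.flow == "external":
                return external
            return upper if self.flow == "first" else same_level

        def compute(self, feature):
            """ALU: multiply-accumulate with the locally stored weight."""
            self.history += feature * self.weight
            return self.history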

The data processing device provided by the embodiments of the present application is introduced below; the data processing device described below and the data processing method described above may be referred to in correspondence with each other.

Please refer to FIG. 5, which is a schematic structural diagram of a data processing device provided by an embodiment of the present application, including:

A setting module 110, configured to obtain a setting instruction and set up the computing network according to the setting instruction; the setting instruction is used to set the data flow direction between the computing cores in the computing network;

An input module 120, configured to obtain at least one feature value and input the at least one feature value into at least one starting computing core in the computing network;

A transmission module 130, configured to transmit the feature value along the data flow direction, starting from the starting computing core;

A calculation module 140, configured to use each computing core to generate a calculation result based on the feature value and the corresponding weight value;

Here, the computing network includes one first computing core, several second computing cores and several third computing cores; the starting computing core includes the first computing core, or includes the first computing core and a third computing core, or includes the first computing core, a second computing core and a third computing core; any second computing core corresponds to one upper-level computing core and two lower-level computing cores, a lower-level computing core being a second computing core or a third computing core; a third computing core has no lower-level computing core, and the first computing core is the upper-level computing core of the target second computing core.

Optionally, the device further includes:

A configuration storage module, configured to obtain configuration information and store the configuration information in each computing core; the configuration information includes several pieces of identification information and the corresponding data flow directions;

Correspondingly, the setting module 110 includes:

A configuration indication unit, configured to send the setting instruction to each computing core, and to determine the data flow direction corresponding to each computing core by using the target identification information in the setting instruction together with the configuration information.
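A small sketch of how a core might resolve its data flow direction from the stored configuration and the target identification information carried by the setting instruction; the dictionary layout and names below are assumptions, not the patent's format.

    # Hypothetical sketch: look up the data flow direction from configuration info.
    # config maps identification info to the flow direction(s) used by this core.
    config = {
        "one_way": "first_flow",                     # single-path computation
        "three_way": ("first_flow", "second_flow"),  # at least three-way computation
    }

    def resolve_flow(config, target_id):
        """Return the flow direction selected by the setting instruction."""
        if target_id not in config:
            raise ValueError(f"unknown identification info: {target_id}")
        return config[target_id]

    # Usage: resolve_flow(config, "three_way") -> ("first_flow", "second_flow")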

Optionally, when the setting instruction is used to set the computing network to one-way computation, the data flow direction is the first flow direction, flowing from the upper level to the lower level, and the starting computing core is the first computing core;

Correspondingly, the input module 120 includes:

A first input unit, configured to input the feature value into the first computing core;

Correspondingly, the transmission module 130 includes:

A first transmission unit, configured to use the first computing core to send the feature value to the target second computing core;

A second transmission unit, configured to send the feature value, based on the first flow direction and starting from the target second computing core, to the corresponding lower-level computing cores in sequence, until the feature value has been sent to the third computing cores.
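To make the one-way case concrete, the sketch below pushes a feature value from the first computing core down the first flow direction; it assumes each core records its lower-level cores in a children list, which is a modelling choice for this sketch rather than the patent's implementation.

    # Hypothetical sketch: propagate a feature value along the first flow direction.
    # Each node is a dict with a name and a list of lower-level cores ("children").
    def propagate_first_flow(core, feature, visit):
        """Push the feature value from a core down to all of its lower-level cores."""
        visit(core["name"], feature)               # the core computes with the value
        for child in core.get("children", []):     # third computing cores have none
            propagate_first_flow(child, feature, visit)

    # Usage with a tiny network: first core -> target second core -> two third cores
    network = {"name": "first", "children": [
        {"name": "second", "children": [{"name": "third_a"}, {"name": "third_b"}]}]}
    propagate_first_flow(network, 0.5, lambda name, value: print(name, value))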

Optionally, when the setting instruction is used to set the computing network to at least three-way computation, the data flow directions are the first flow direction, flowing from the upper level to the lower level, and the second flow direction, flowing between cores at the same level; the starting computing cores include the first computing core and a starting third computing core, the first computing core corresponding to the first flow direction and the starting third computing core corresponding to the second flow direction;

Correspondingly, the input module 120 includes:

A second input unit, configured to input the first feature value into the first computing core;

Correspondingly, the transmission module 130 includes:

A third transmission unit, configured to use the first computing core to send the first feature value to the target second computing core;

A fourth transmission unit, configured to send the first feature value, based on the first flow direction and starting from the target second computing core, to the corresponding lower-level computing cores in sequence, until the first feature value has been sent to the second computing core at the end of the first flow direction;

A fifth transmission unit, configured to input the second feature value into the starting third computing core and, starting from the starting third computing core and based on the second flow direction, send the second feature value to the subsequent same-level computing cores in sequence, until the second feature value has been sent to the third computing core at the end of the second flow direction.
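For the second flow direction the transfer is a chain across same-level cores; the short sketch below (with assumed names) forwards the second feature value along that chain until the end of the flow.

    # Hypothetical sketch: pass the second feature value along a same-level chain.
    def propagate_second_flow(same_level_cores, feature, visit):
        """Forward the value from the starting core to each subsequent peer in order."""
        for core in same_level_cores:   # ordered from the starting core to the chain end
            visit(core, feature)        # each peer computes with its own stored weight

    # Usage: the starting third computing core first, then its same-level successors
    propagate_second_flow(["third_0", "third_1", "third_2"], 1.25,
                          lambda core, value: print(core, value))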

Optionally, the starting computing cores further include a starting second computing core, the second computing core corresponding to the second flow direction;

Correspondingly, the input module 120 includes:

A third input unit, configured to input the second feature value into the starting second computing core;

Correspondingly, the transmission module 130 includes:

A sixth transmission unit, configured to send the second feature value, starting from the starting second computing core and based on the second flow direction, to the subsequent same-level computing cores in sequence, until the second feature value has been sent to the second computing core at the end of the second flow direction.

Optionally, the device further includes:

A weight determination module, configured to determine the weight value corresponding to each computing core;

A weight storage module, configured to send the weight value to the corresponding computing core for storage.

Optionally, the weight determination module includes:

A weight acquisition unit, configured to obtain the initial weight values;

A correspondence determination unit, configured to determine, based on the data flow direction, the correspondence between each computing core and the initial weight values, completing the determination of the weight values.

Optionally, the calculation module 140 includes:

A multiplication unit, configured to control each computing core to multiply the feature value by the weight value to obtain a target result;

An addition unit, configured to add the target result to the historical calculation result stored in the computing core to obtain the calculation result.

Optionally, the computing core includes an arithmetic logic unit, a weight value interface, a feature value interface, a control module and a storage unit; the weight value interface includes an external input port and a same-level input port, and the feature value interface includes an external input port, an upper-level input port and a same-level input port; the control module is used to store configuration information and determine the data flow direction according to the setting instruction, and the storage unit is used to store the weight values, the feature values and the calculation results.

The data processing system provided by the embodiments of the present application is introduced below; the data processing system described below and the data processing method described above may be referred to in correspondence with each other.

The present application also provides a data processing system including a computing network; the computing network includes one first computing core, several second computing cores and several third computing cores; any second computing core corresponds to one upper-level computing core and two lower-level computing cores, a lower-level computing core being a second computing core or a third computing core; a third computing core has no lower-level computing core, and the first computing core is the upper-level computing core of the target second computing core.
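Purely to illustrate the topology of this system (one first computing core, second computing cores each with one upper-level core and two lower-level cores, third computing cores as leaves), the sketch below builds such a tree; the builder name, the depth parameter and the choice of a single target second core under the first core are assumptions for the sketch.

    # Hypothetical sketch: build the computing-network topology as a tree.
    # Level 0 is the first computing core, inner levels hold second computing cores,
    # and the deepest level holds the third computing cores (no lower-level cores).
    def build_network(depth, level=0):
        kind = "first" if level == 0 else ("third" if level == depth else "second")
        fanout = 1 if level == 0 else 2   # first core feeds the target second core;
                                          # each second core has two lower-level cores
        children = ([] if level == depth
                    else [build_network(depth, level + 1) for _ in range(fanout)])
        return {"kind": kind, "children": children}

    # Usage: build_network(depth=3) yields one first core, three second cores and
    # four third cores as leaves
    net = build_network(depth=3)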

The electronic device provided by the embodiments of the present application is introduced below; the electronic device described below and the data processing method described above may be referred to in correspondence with each other.

Please refer to FIG. 6, which is a schematic structural diagram of an electronic device provided by an embodiment of the present application. The electronic device 100 may include a processor 101 and a memory 102, and may further include one or more of a multimedia component 103, an information input/output (I/O) interface 104 and a communication component 105.

The processor 101 is used to control the overall operation of the electronic device 100 so as to complete all or part of the steps of the data processing method described above. The memory 102 is used to store various types of data to support operation on the electronic device 100; such data may include, for example, instructions for any application or method operating on the electronic device 100, as well as application-related data. The memory 102 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as one or more of a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disk.

The multimedia component 103 may include a screen and an audio component. The screen may, for example, be a touch screen, and the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may be further stored in the memory 102 or sent via the communication component 105. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 104 provides an interface between the processor 101 and other interface modules, such as a keyboard, a mouse or buttons; the buttons may be virtual or physical. The communication component 105 is used for wired or wireless communication between the electronic device 100 and other devices. Wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G or 4G, or a combination of one or more of them; accordingly, the communication component 105 may include a Wi-Fi component, a Bluetooth component and an NFC component.

The electronic device 100 may be implemented by one or more of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor or other electronic components, so as to perform the data processing method given in the above embodiments.

The computer-readable storage medium provided by the embodiments of the present application is introduced below; the computer-readable storage medium described below and the data processing method described above may be referred to in correspondence with each other.

The present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the data processing method described above are implemented.

The computer-readable storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.

The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. As the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.

Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present application.

The steps of the methods or algorithms described in conjunction with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Finally, it should also be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device.

Specific examples are used herein to explain the principles and implementations of the present application; the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and the scope of application based on the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (9)

1. A data processing method, characterized by comprising: obtaining a setting instruction and setting up a computing network according to the setting instruction, the setting instruction being used to set the data flow direction between the computing cores in the computing network; obtaining at least one feature value and inputting the at least one feature value into at least one starting computing core in the computing network; starting from the starting computing core, transmitting the feature value according to the data flow direction; and using each computing core to generate a calculation result based on the feature value and the corresponding weight value; wherein the computing network includes one first computing core, several second computing cores and several third computing cores; the starting computing core includes the first computing core, or includes the first computing core and the third computing core, or includes the first computing core, the second computing core and the third computing core; any second computing core corresponds to one upper-level computing core and two lower-level computing cores, the lower-level computing core being the second computing core or the third computing core; the third computing core has no lower-level computing core, and the first computing core is the upper-level computing core of a target second computing core; wherein the data processing method further comprises: determining the weight value corresponding to each computing core, and sending the weight value to the corresponding computing core for storage; the determining the weight value corresponding to each computing core comprises: obtaining initial weight values, and determining, based on the data flow direction, the correspondence between each computing core and the initial weight values to complete the determination of the weight values; when the setting instruction is used to set the computing network to at least three-way computation, the data flow directions are a first flow direction flowing from the upper level to the lower level and a second flow direction flowing between cores at the same level, and the starting computing cores include the first computing core and a starting third computing core, the first computing core corresponding to the first flow direction and the starting third computing core corresponding to the second flow direction; correspondingly, the inputting the at least one feature value into at least one starting computing core in the computing network and, starting from the starting computing core, transmitting the feature value according to the data flow direction comprises: inputting a first feature value into the first computing core, and using the first computing core to send the first feature value to the target second computing core; based on the first flow direction and starting from the target second computing core, sending the first feature value to the corresponding lower-level computing cores in sequence, until the first feature value has been sent to the second computing core at the end of the first flow direction; inputting a second feature value into the starting third computing core and, starting from the starting third computing core and based on the second flow direction, sending the second feature value to the subsequent same-level computing cores in sequence, until the second feature value has been sent to the third computing core at the end of the second flow direction; the starting computing cores further include a starting second computing core, the second computing core corresponding to the second flow direction; correspondingly, the inputting the at least one feature value into at least one starting computing core in the computing network and, starting from the starting computing core, transmitting the feature value according to the data flow direction comprises: inputting the second feature value into the starting second computing core and, starting from the starting second computing core and based on the second flow direction, sending the second feature value to the subsequent same-level computing cores in sequence, until the second feature value has been sent to the second computing core at the end of the second flow direction.

2. The data processing method according to claim 1, further comprising: obtaining configuration information and storing the configuration information in each computing core, the configuration information including several pieces of identification information and the corresponding data flow directions; correspondingly, the setting up a computing network according to the setting instruction comprises: sending the setting instruction to each computing core, and determining the data flow direction corresponding to each computing core by using the target identification information in the setting instruction and the configuration information.

3. The data processing method according to claim 1, wherein, when the setting instruction is used to set the computing network to one-way computation, the data flow direction is the first flow direction flowing from the upper level to the lower level, and the starting computing core is the first computing core; correspondingly, the inputting the at least one feature value into at least one starting computing core in the computing network and, starting from the starting computing core, transmitting the feature value according to the data flow direction comprises: inputting the feature value into the first computing core, and using the first computing core to send the feature value to the target second computing core; based on the first flow direction and starting from the target second computing core, sending the feature value to the corresponding lower-level computing cores in sequence, until the feature value has been sent to the third computing core.

4. The data processing method according to claim 1, wherein the using each computing core to generate a calculation result based on the feature value and the corresponding weight value comprises: controlling each computing core to multiply the feature value by the weight value to obtain a target result; and adding the target result to the historical calculation result stored in the computing core to obtain the calculation result.

5. The data processing method according to claim 1, wherein the computing core includes an arithmetic logic unit, a weight value interface, a feature value interface, a control module and a storage unit; the weight value interface includes an external input port and a same-level input port, and the feature value interface includes an external input port, an upper-level input port and a same-level input port; the control module is used to store configuration information and determine the data flow direction according to the setting instruction, and the storage unit is used to store the weight values, the feature values and the calculation results.

6. A data processing device, characterized by comprising: a setting module, configured to obtain a setting instruction and set up a computing network according to the setting instruction, the setting instruction being used to set the data flow direction between the computing cores in the computing network; an input module, configured to obtain at least one feature value and input the at least one feature value into at least one starting computing core in the computing network; a transmission module, configured to transmit the feature value according to the data flow direction, starting from the starting computing core; and a calculation module, configured to use each computing core to generate a calculation result based on the feature value and the corresponding weight value; wherein the computing network includes one first computing core, several second computing cores and several third computing cores; the starting computing core includes the first computing core, or includes the first computing core and the third computing core, or includes the first computing core, the second computing core and the third computing core; any second computing core corresponds to one upper-level computing core and two lower-level computing cores, the lower-level computing core being the second computing core or the third computing core; the third computing core has no lower-level computing core, and the first computing core is the upper-level computing core of a target second computing core; the data processing device is specifically configured to: determine the weight value corresponding to each computing core, and send the weight value to the corresponding computing core for storage; obtain initial weight values, and determine, based on the data flow direction, the correspondence between each computing core and the initial weight values to complete the determination of the weight values; wherein, when the setting instruction is used to set the computing network to at least three-way computation, the data flow directions are a first flow direction flowing from the upper level to the lower level and a second flow direction flowing between cores at the same level, and the starting computing cores include the first computing core and a starting third computing core, the first computing core corresponding to the first flow direction and the starting third computing core corresponding to the second flow direction; the data processing device is specifically configured to: input a first feature value into the first computing core, and use the first computing core to send the first feature value to the target second computing core; based on the first flow direction and starting from the target second computing core, send the first feature value to the corresponding lower-level computing cores in sequence, until the first feature value has been sent to the second computing core at the end of the first flow direction; input a second feature value into the starting third computing core and, starting from the starting third computing core and based on the second flow direction, send the second feature value to the subsequent same-level computing cores in sequence, until the second feature value has been sent to the third computing core at the end of the second flow direction; the starting computing cores further include a starting second computing core, the second computing core corresponding to the second flow direction; the data processing device is specifically configured to: input the second feature value into the starting second computing core and, starting from the starting second computing core and based on the second flow direction, send the second feature value to the subsequent same-level computing cores in sequence, until the second feature value has been sent to the second computing core at the end of the second flow direction.

7. A data processing system, characterized by comprising a computing network that contains at least one starting computing core, the computing network including one first computing core, several second computing cores and several third computing cores; the starting computing core includes the first computing core, or includes the first computing core and the third computing core, or includes the first computing core, the second computing core and the third computing core; any second computing core corresponds to one upper-level computing core and two lower-level computing cores, the lower-level computing core being the second computing core or the third computing core; the third computing core has no lower-level computing core, and the first computing core is the upper-level computing core of a target second computing core; the data processing system is configured to obtain a setting instruction and set up the computing network according to the setting instruction, the setting instruction being used to set the data flow direction between the computing cores in the computing network; the data processing system is configured to obtain at least one feature value and input the at least one feature value into at least one starting computing core in the computing network; the data processing system is configured to transmit the feature value according to the data flow direction, starting from the starting computing core; the data processing system is configured to use each computing core to generate a calculation result based on the feature value and the corresponding weight value; the data processing system is configured to: determine the weight value corresponding to each computing core, and send the weight value to the corresponding computing core for storage; obtain initial weight values, and determine, based on the data flow direction, the correspondence between each computing core and the initial weight values to complete the determination of the weight values; wherein, when the setting instruction is used to set the computing network to at least three-way computation, the data flow directions are a first flow direction flowing from the upper level to the lower level and a second flow direction flowing between cores at the same level, and the starting computing cores include the first computing core and a starting third computing core, the first computing core corresponding to the first flow direction and the starting third computing core corresponding to the second flow direction; the data processing system is configured to: input a first feature value into the first computing core, and use the first computing core to send the first feature value to the target second computing core; based on the first flow direction and starting from the target second computing core, send the first feature value to the corresponding lower-level computing cores in sequence, until the first feature value has been sent to the second computing core at the end of the first flow direction; input a second feature value into the starting third computing core and, starting from the starting third computing core and based on the second flow direction, send the second feature value to the subsequent same-level computing cores in sequence, until the second feature value has been sent to the third computing core at the end of the second flow direction; the starting computing cores further include a starting second computing core, the second computing core corresponding to the second flow direction; the data processing system is configured to: input the second feature value into the starting second computing core and, starting from the starting second computing core and based on the second flow direction, send the second feature value to the subsequent same-level computing cores in sequence, until the second feature value has been sent to the second computing core at the end of the second flow direction.

8. An electronic device, characterized by comprising a memory and a processor, wherein: the memory is used to store a computer program; and the processor is used to execute the computer program to implement the data processing method according to any one of claims 1 to 5.

9. A computer-readable storage medium, characterized by being used to store a computer program, wherein, when the computer program is executed by a processor, the data processing method according to any one of claims 1 to 5 is implemented.
CN202111165135.XA 2021-09-30 2021-09-30 Data processing method, device and system, electronic equipment and storage medium Active CN114021708B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111165135.XA CN114021708B (en) 2021-09-30 2021-09-30 Data processing method, device and system, electronic equipment and storage medium
PCT/CN2022/090194 WO2023050807A1 (en) 2021-09-30 2022-04-29 Data processing method, apparatus, and system, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111165135.XA CN114021708B (en) 2021-09-30 2021-09-30 Data processing method, device and system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114021708A CN114021708A (en) 2022-02-08
CN114021708B true CN114021708B (en) 2023-08-01

Family

ID=80055496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111165135.XA Active CN114021708B (en) 2021-09-30 2021-09-30 Data processing method, device and system, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114021708B (en)
WO (1) WO2023050807A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114021708B (en) * 2021-09-30 2023-08-01 浪潮电子信息产业股份有限公司 Data processing method, device and system, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108427584A (en) * 2018-03-19 2018-08-21 清华大学 The configuration method of the chip and the chip with parallel computation core quickly started
CN111008243A (en) * 2019-11-21 2020-04-14 山东爱城市网信息技术有限公司 Block chain-based donation flow direction recording supervision method, device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7796527B2 (en) * 2006-04-13 2010-09-14 International Business Machines Corporation Computer hardware fault administration
US20080031243A1 (en) * 2006-08-01 2008-02-07 Gidon Gershinsky Migration of Message Topics over Multicast Streams and Groups
US20090040946A1 (en) * 2007-08-06 2009-02-12 Archer Charles J Executing an Allgather Operation on a Parallel Computer
CN110046704B (en) * 2019-04-09 2022-11-08 深圳鲲云信息科技有限公司 Deep network acceleration method, device, equipment and storage medium based on data stream
CN111752691B (en) * 2020-06-22 2023-11-28 深圳鲲云信息科技有限公司 Method, device, equipment and storage medium for sorting AI (advanced technology attachment) calculation graphs
CN114021708B (en) * 2021-09-30 2023-08-01 浪潮电子信息产业股份有限公司 Data processing method, device and system, electronic equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108427584A (en) * 2018-03-19 2018-08-21 清华大学 The configuration method of the chip and the chip with parallel computation core quickly started
CN111008243A (en) * 2019-11-21 2020-04-14 山东爱城市网信息技术有限公司 Block chain-based donation flow direction recording supervision method, device and storage medium

Also Published As

Publication number Publication date
WO2023050807A1 (en) 2023-04-06
CN114021708A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
KR102443546B1 (en) matrix multiplier
US20250147760A1 (en) Processor and control method for processor
CN110689138B (en) Operation method, device and related product
CN107454965B (en) Batch processing in neural network processors
TW202022644A (en) Operation device and operation method
CN109543816A (en) A kind of convolutional neural networks calculation method and system mediated based on weight
CN112214727A (en) computing accelerator
CN110780921A (en) Data processing method and device, storage medium and electronic device
CN108122030A (en) A kind of operation method of convolutional neural networks, device and server
KR102238600B1 (en) Scheduler computing device, data node of distributed computing system having the same, and method thereof
CN114021708B (en) Data processing method, device and system, electronic equipment and storage medium
CN113449842B (en) A distributed automatic differentiation method and related device
TW202020654A (en) Digital circuit with compressed carry
JP5798378B2 (en) Apparatus, processing method, and program
CN111723932A (en) Training methods and related products of neural network models
EP4052188B1 (en) Neural network instruction streaming
CN109685203B (en) Data processing method, device, computer system and storage medium
CN115016947B (en) Load distribution method, device, equipment and medium
CN111027688A (en) Neural network calculator generation method and device based on FPGA
JP5907607B2 (en) Processing arrangement method and program
JP5832311B2 (en) Reconfiguration device, process allocation method, and program
CN119005271B (en) Neural network model parallel optimization method and device based on operator partitioning
CN113919489B (en) Method and device for improving resource utilization rate of on-chip multiplier-adder of FPGA
JP7389176B2 (en) acceleration system
TWI884081B (en) Vector operation acceleration with convolution computation unit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant