WO2021016932A1 - Data processing method and apparatus, and computer-readable storage medium - Google Patents

Data processing method and apparatus, and computer-readable storage medium

Info

Publication number
WO2021016932A1
WO2021016932A1 · PCT/CN2019/098657 · CN2019098657W
Authority
WO
WIPO (PCT)
Prior art keywords
data
processing
neural network
data conversion
target
Prior art date
Application number
PCT/CN2019/098657
Other languages
French (fr)
Chinese (zh)
Inventor
余俊峰
周爱春
聂谷洪
张伟
万千
陈玉双
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/098657 priority Critical patent/WO2021016932A1/en
Priority to CN201980032385.0A priority patent/CN112166441A/en
Publication of WO2021016932A1 publication Critical patent/WO2021016932A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Definitions

  • This specification belongs to the field of artificial intelligence, and particularly relates to a data processing method, device and computer-readable storage medium.
  • the operation of performing data conversion on the parameters based on the specified data conversion algorithm may cause the parameters to exceed the data expression range, which in turn causes the parameters to be truncated; this lowers the processing accuracy of the neural network and degrades the effect of using the neural network to process the data to be processed.
  • This specification provides a data processing method, device, and computer-readable storage medium to solve the problem that the processing effect of the neural network on the data to be processed is reduced due to the introduction of a specified data conversion algorithm.
  • an embodiment of this specification provides a data processing method, which includes:
  • the initial neural network whose loss degree is within the preset range is determined as the target neural network.
  • the embodiments of this specification provide a data processing method, which includes:
  • the converted data is processed to obtain a processing result; wherein, the target neural network is generated using the method in the first aspect.
  • an embodiment of this specification provides a data processing device, which includes:
  • the first determining module is configured to determine the error degree of the initial neural network based on the target training data and the true value corresponding to the target training data;
  • An adjustment module configured to adjust parameters in the initial neural network based on the degree of error if the degree of error is not within a preset range;
  • the training module is used to continue training the initial neural network based on the target training data and the true value corresponding to the target training data after the data conversion processing is completed, until the loss degree is within the preset range.
  • an embodiment of this specification provides a data processing device, which includes:
  • the first processing module is used to perform fixed-point processing on the data to be processed to obtain fixed-point processing data
  • the embodiments of this specification provide a movable platform with limited computing power, and the movable platform is used to implement the steps of the data processing method in the first aspect or the second aspect.
  • an embodiment of this specification provides an image acquisition device including a processor, and the image acquisition device is configured to implement the steps of the data processing method in the first aspect or the second aspect.
  • Figure 1 is a data processing method provided by an embodiment of this specification
  • FIG. 4 is a flowchart of the steps of another data processing method provided by an embodiment of this specification.
  • FIG. 5 is a block diagram of a data processing device provided by an embodiment of this specification.
  • FIG. 8 is a block diagram of a computing processing device provided by an embodiment of this specification.
  • Fig. 1 is a data processing method provided by an embodiment of this specification. As shown in Fig. 1, the method may include:
  • Step 101 Determine the error degree of the initial neural network based on the target training data and the true value corresponding to the target training data.
  • the initial neural network can be a convolutional neural network, where a convolutional neural network is a deep feedforward artificial neural network mainly used in the field of image processing, and the artificial neurons of the convolutional neural network can respond to surrounding units within part of their coverage;
  • processing an image with a convolutional neural network internally involves a large number of convolution operations.
  • other parameter adjustment methods can also be used to adjust the parameters, for example, directly taking the difference between the parameter and a preset fixed value as the adjusted parameter, which is not limited in the embodiments of this specification.
  • Step 103 Perform data conversion processing on the adjusted parameters based on the designated data conversion algorithm.
  • since the parameters in the target neural network are adjusted continuously during the process of generating the target neural network, in this step the parameters can be pre-transformed during the generation process, so that even if a parameter is truncated, the accuracy of the final neural network model is not affected;
  • the parameters in the finally generated target neural network already reflect the effect of the data conversion processing. In this way, in the subsequent application process, there is no need to perform data conversion processing on the parameters, thereby avoiding the loss of accuracy caused by the truncation of parameters during application.
  • the inverse fixed-point processing refers to converting a fixed-point number back into a floating-point number according to an inverse fixed-point rule. Specifically, during the conversion, the decimal places of the fixed-point number are padded according to the number of decimal places the value had before it was converted to fixed point, so that the number of decimal places is the same as before the conversion, thereby realizing the conversion; the padded values can be generated by a random generation algorithm.
  • the floating-point number after the inverse fixed-point processing is used as the target training data, so that in the subsequent process of generating the target neural network, floating-point numbers can be used for training, and the expression range of floating-point numbers is relatively large.
  • Step 204 If the degree of error is not within the preset range, adjust the parameters in the initial neural network based on the degree of error.
  • the weight matrix obtained after the convolution processing can also be fixed-point processed, and then, after the fixed-point processing is completed, inverse fixed-point processing is performed on the convolved weight matrix.
  • the implementation of the fixed-point processing can refer to the description in the above step 201
  • the implementation of the inverse fixed-point processing can refer to the description in the above step 202, which is not repeated in the embodiments of this specification. Since the expression range of floating-point numbers is relatively large, in the embodiments of this specification, converting the parameters to fixed point and then back into floating-point numbers can reduce, to a certain extent, the probability of data overflow in each calculation operation during training.
  • the floating-point number obtained after the inverse fixed-point processing will have a precision loss.
  • the trained target neural network is used to process the real data to be processed.
  • the parameters may also be fixed-point processed, which brings a certain loss of data accuracy. Therefore, in the embodiments of this specification, performing fixed-point processing and inverse fixed-point processing on the parameters during the training phase simulates in advance the data loss caused by fixed-point processing, which in turn improves the processing accuracy when the trained target neural network is used to process the data to be processed in the subsequent application stage.
  • Sub-step (2): based on the inverse transformation matrix defined in the specified data conversion algorithm and the transpose of the inverse transformation matrix, transform the weight matrix obtained after the convolution processing.
  • A represents the inverse transformation matrix used to transform the weight matrix C obtained after the convolution processing
  • A^T represents the transpose of A
  • both A and A^T are preset matrices.
  • the specific content of these matrices can be predetermined based on the size of C, and the size of C may in turn be predetermined based on the size of the preset convolution kernel and the size of the weight matrix.
  • Step 206 After completing the data conversion process, continue training the initial neural network based on the target training data and the true value corresponding to the target training data until the loss degree is within the preset range.
  • Step 207 Determine the initial neural network whose loss degree is within the preset range as the target neural network.
  • the data processing method performs fixed-point processing and inverse fixed-point processing on the initial training data to generate target training data.
  • the data loss caused by fixed-point processing can be simulated in advance.
  • the processing accuracy when using the trained target neural network to process the data to be processed can be improved.
  • the error level of the initial neural network is determined.
  • the parameters in the initial neural network are adjusted based on the error level, and then, based on the specified data conversion algorithm, data conversion processing is performed on the adjusted parameters.
  • the data conversion processing of the parameters is simulated in advance during the generation of the target neural network, so that the parameters in the finally generated target neural network already reflect the effect of the data conversion processing; in the subsequent application process, only the data to be processed needs to undergo data conversion, and no data conversion needs to be performed on the parameters, which avoids the loss of accuracy caused by truncated parameters and ensures the processing effect of the neural network on the data to be processed while the specified data conversion algorithm is introduced.
  • Fig. 3 is a flow chart of the steps of a data processing method provided by an embodiment of this specification. As shown in Fig. 3, the method may include:
  • Step 301 Perform fixed-point processing on the data to be processed to obtain fixed-point processing data.
  • the target neural network is a convolutional neural network used to process images
  • the data to be processed is an image to be processed.
  • the value of each element is converted to a fixed-point number, and the fixed-point processing data is thereby obtained.
  • the element in the image matrix is the pixel in the image to be processed
  • the value of the element is the pixel value of the pixel.
  • the implementation of converting a floating-point number to a fixed-point number can refer to the content in the foregoing step 201, which is not limited in the embodiment of the present specification.
  • the terminal needs to provide a register with more digits to support the calculation operation.
  • the fixed-point processing data is obtained through the fixed-point processing of the data to be processed, so that subsequent calculations can be performed with fixed-point data; in this way, there is no need to provide registers with a large number of bits, and the cost of the terminal can be reduced.
  • the fixed-point number has a fixed number of digits
  • the amount of calculation is often smaller than that for a floating-point number whose decimal point can float. Therefore, with the fixed-point processing in this step, the method can be applied to more terminals with limited computing power.
  • Step 302 Perform data conversion processing on the fixed-point processing data based on a designated data conversion algorithm to obtain converted data.
  • the specified data conversion algorithm may be an algorithm that can improve the calculation effect.
  • the specified data conversion algorithm may be a data conversion algorithm based on the remainder theorem, or a data conversion algorithm based on the Lagrange interpolation theorem; both of these algorithms can perform linear decomposition and linear combination of the items involved in the calculation, replacing time-consuming operations, namely multiplications, with non-time-consuming operations, namely additions, thereby reducing algorithm time and improving processing efficiency.
  • Step 303 Based on the pre-training parameters in the target neural network, process the converted data to obtain a processing result.
  • Step 402 If the degree of error is not within a preset range, adjust the parameters in the initial neural network based on the degree of error.
  • Step 403 Perform data conversion processing on the adjusted parameters based on the designated data conversion algorithm.
  • Step 405 Determine the initial neural network whose loss degree is within the preset range as the target neural network.
  • Step 406 Perform fixed-point processing on the data to be processed to obtain fixed-point processing data.
  • Step 407 Perform data conversion processing on the fixed-point processing data based on the designated data conversion algorithm to obtain converted data.
  • Step 408 Based on the pre-training parameters in the target neural network, process the converted data to obtain a processing result.
  • this step can refer to the above step 203, which is not limited in the embodiment of this specification.
  • the adjustment module 502 is configured to adjust the parameters in the initial neural network based on the degree of error if the degree of error is not within the preset range.
  • the device 50 further includes:
  • the data to be processed is an image.
  • the audio output unit 703 may convert the audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into audio signals and output them as sounds. Moreover, the audio output unit 703 may also provide audio output related to a specific function performed by the terminal 700 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a data processing method and apparatus, and a computer-readable storage medium. The method comprises: determining an error degree of an initial neural network on the basis of target training data and a real value corresponding to the target training data; then if the error degree does not fall within a preset range, adjusting parameters in the initial neural network on the basis of the error degree; next, on the basis of a specified data conversion algorithm, carrying out data conversion on the adjusted parameters; after the data conversion is completed, continuing to train the initial neural network on the basis of the target training data and the real value corresponding to the target training data until a loss degree falls within a preset range; and finally, determining the initial neural network of which the loss degree falls within the preset range as a target neural network. Thus, in a subsequent application process, the data conversion processing does not need to be carried out on the parameters, such that the processing effect of the neural network on data to be processed can be guaranteed.

Description

Data processing method, device and computer-readable storage medium
Technical Field
This specification belongs to the field of artificial intelligence, and particularly relates to a data processing method, a data processing device, and a computer-readable storage medium.
Background
At present, in order to improve the computational efficiency of neural networks, when a neural network is used to process data to be processed, the data involved in the processing, for example the data to be processed and the parameters in the neural network, are often converted to fixed point. Further, in order to implement fixed-point processing, an expression range of the data is generally set in advance, and if the data exceeds this range, the data is truncated.
In the prior art, in order to further optimize the processing performed by the neural network, a data conversion algorithm is often introduced. Specifically, when the neural network is used to process the data to be processed, a specified data conversion algorithm is used to perform data conversion on the data to be processed and on the parameters, so as to optimize the calculation operations involved in the processing.
However, the operation of performing data conversion on the parameters based on the specified data conversion algorithm may cause the parameters to exceed the data expression range, which in turn causes the parameters to be truncated; this lowers the processing accuracy of the neural network and degrades the effect of using the neural network to process the data to be processed.
Summary of the Invention
This specification provides a data processing method, device, and computer-readable storage medium, in order to solve the problem that the processing effect of the neural network on the data to be processed is reduced by the introduction of a specified data conversion algorithm.
In order to solve the above technical problem, this specification is implemented as follows:
In a first aspect, an embodiment of this specification provides a data processing method, which includes:
determining an error degree of an initial neural network based on target training data and a true value corresponding to the target training data;
if the error degree is not within a preset range, adjusting parameters in the initial neural network based on the error degree;
performing data conversion processing on the adjusted parameters based on a specified data conversion algorithm;
after the data conversion processing is completed, continuing to train the initial neural network based on the target training data and the true value corresponding to the target training data, until the loss degree is within the preset range; and
determining the initial neural network whose loss degree is within the preset range as the target neural network.
In a second aspect, an embodiment of this specification provides a data processing method, which includes:
performing fixed-point processing on data to be processed to obtain fixed-point processed data;
performing data conversion processing on the fixed-point processed data based on a specified data conversion algorithm to obtain converted data; and
processing the converted data based on pre-trained parameters in a target neural network to obtain a processing result, wherein the target neural network is generated using the method in the first aspect.
In a third aspect, an embodiment of this specification provides a data processing device, which includes:
a first determining module, configured to determine an error degree of an initial neural network based on target training data and a true value corresponding to the target training data;
an adjustment module, configured to adjust parameters in the initial neural network based on the error degree if the error degree is not within a preset range;
a conversion module, configured to perform data conversion processing on the adjusted parameters based on a specified data conversion algorithm;
a training module, configured to, after the data conversion processing is completed, continue training the initial neural network based on the target training data and the true value corresponding to the target training data, until the loss degree is within the preset range; and
a second determining module, configured to determine the initial neural network whose loss degree is within the preset range as the target neural network.
In a fourth aspect, an embodiment of this specification provides a data processing device, which includes:
a first processing module, configured to perform fixed-point processing on data to be processed to obtain fixed-point processed data;
a conversion module, configured to perform data conversion processing on the fixed-point processed data based on a specified data conversion algorithm to obtain converted data; and
a second processing module, configured to process the converted data based on pre-trained parameters in a target neural network to obtain a processing result, wherein the target neural network is generated by the device in the third aspect.
In a fifth aspect, an embodiment of this specification provides a computer-readable storage medium storing a computer program, where the steps of the data processing method in the first aspect or the second aspect are implemented when the computer program is executed by a processor.
In a sixth aspect, an embodiment of this specification provides a movable platform with limited computing power, and the movable platform is used to implement the steps of the data processing method in the first aspect or the second aspect.
In a seventh aspect, an embodiment of this specification provides an image acquisition device including a processor, and the image acquisition device is configured to implement the steps of the data processing method in the first aspect or the second aspect.
In an eighth aspect, an embodiment of this specification provides an unmanned aerial vehicle, and the unmanned aerial vehicle is used to implement the steps of the data processing method in the first aspect or the second aspect.
In a ninth aspect, an embodiment of this specification provides a handheld stabilization gimbal, and the handheld stabilization gimbal is used to implement the steps of the data processing method in the first aspect or the second aspect.
In the embodiments of this specification, the error degree of the initial neural network can be determined based on the target training data and the true value corresponding to the target training data; then, if the error degree is not within the preset range, the parameters in the initial neural network are adjusted based on the error degree; next, based on the specified data conversion algorithm, data conversion processing is performed on the adjusted parameters; after the data conversion processing is completed, the initial neural network continues to be trained based on the target training data and the true value corresponding to the target training data until the loss degree is within the preset range; finally, the initial neural network whose loss degree is within the preset range is determined as the target neural network. In the embodiments of this specification, the data conversion processing of the parameters is simulated in advance during the generation of the target neural network, so that the parameters in the finally generated target neural network already reflect the effect of the data conversion processing. In this way, in the subsequent application process, only the data to be processed needs to undergo data conversion, and no data conversion needs to be performed on the parameters, which avoids the loss of accuracy caused by truncated parameters and ensures the processing effect of the neural network on the data to be processed while the specified data conversion algorithm is introduced.
Description of the Drawings
Figure 1 is a data processing method provided by an embodiment of this specification;
Figure 2 is another data processing method provided by an embodiment of this specification;
Figure 3 is a flowchart of the steps of a data processing method provided by an embodiment of this specification;
Figure 4 is a flowchart of the steps of another data processing method provided by an embodiment of this specification;
Figure 5 is a block diagram of a data processing device provided by an embodiment of this specification;
Figure 6 is a block diagram of a data processing device provided by an embodiment of this specification;
Figure 7 is a schematic diagram of the hardware structure of a terminal for implementing the embodiments of this specification;
Figure 8 is a block diagram of a computing processing device provided by an embodiment of this specification;
Figure 9 is a block diagram of a portable or fixed storage unit provided by an embodiment of this specification.
Detailed Description of the Embodiments
The technical solutions in the embodiments of this specification will be described clearly and completely below in conjunction with the drawings in the embodiments of this specification. Obviously, the described embodiments are only some of the embodiments of this specification, not all of them. Based on the embodiments in this specification, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of this specification.
Figure 1 is a data processing method provided by an embodiment of this specification. As shown in Figure 1, the method may include:
Step 101: Determine the error degree of the initial neural network based on the target training data and the true value corresponding to the target training data.
In the embodiments of this specification, the initial neural network can be built according to the data processing that needs to be performed; when the required data processing differs, the structure of the initial neural network can differ, and accordingly the target training data and the true value corresponding to the target training data can differ. As an example, if the initial neural network is a neural network used to enhance the sharpness of images, the target training data can be a sample image, and the true value corresponding to the target training data can be a real image that has the same image content as the sample image but a higher definition. If the initial neural network is a neural network used to classify images, the target training data can be a sample image, and the true value corresponding to the target training data can be the true category of the sample image. Further, the initial neural network can contain multiple layers; different layers can define different parameters, the parameters in each layer can be randomly set initial parameters, and each layer can perform a different operation. For example, the initial neural network can include an input layer, a feature extraction layer, a fully connected layer, an output layer, and so on. Further, the initial neural network can be a convolutional neural network, where a convolutional neural network is a deep feedforward artificial neural network mainly used in the field of image processing; its artificial neurons can respond to surrounding units within part of their coverage, and processing an image with it involves a large number of convolution operations.
Further, when the initial error degree is determined based on the target training data and the true value corresponding to the target training data, the target training data can first be input into the initial neural network; accordingly, the layers of the initial neural network process the target training data in turn and finally output a predicted value. As an example, taking a convolutional layer, this layer can perform convolution on the processing result of the target training data output by the previous layer; taking the output layer, this layer can output the final processing result of the target training data, that is, the predicted value.
Then, the error degree of the initial neural network can be calculated based on a preset loss function, the predicted value, and the true value corresponding to the target training data. Specifically, the predicted value and the true value corresponding to the target training data can be input into the preset loss function, a gradient operation is then performed on the loss function, and the gradient value of the loss function is calculated to obtain the error degree of the initial neural network. The gradient value can be calculated using the back-propagation algorithm, and the error degree can indicate the degree of deviation between the predicted value produced by the initial neural network and the true value corresponding to the target sample data.
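To make this step concrete, the sketch below assumes a mean-squared-error loss; the function name and the choice of loss are illustrative assumptions rather than part of the original disclosure. The gradient of the loss with respect to the prediction is the quantity that back-propagation then distributes over the network parameters.

```python
import numpy as np

def error_degree(prediction: np.ndarray, ground_truth: np.ndarray):
    """Illustrative sketch: mean-squared-error loss and its gradient.

    The scalar loss plays the role of the error degree, and the gradient is
    the seed that the back-propagation algorithm pushes through the layers.
    """
    loss = float(np.mean((prediction - ground_truth) ** 2))
    grad = 2.0 * (prediction - ground_truth) / prediction.size  # d(loss)/d(prediction)
    return loss, grad
```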
Step 102: If the error degree is not within the preset range, adjust the parameters in the initial neural network based on the error degree.
In the embodiments of this specification, the preset range can be set according to the actual application scenario and actual requirements, and the embodiments of this specification do not limit it. Further, if the error degree is within the preset range, the degree of deviation between the predicted value and the true value corresponding to the target sample data can be considered small enough; at this time, the processing capability of the initial neural network can be considered sufficient, and the initial neural network can process data correctly. Conversely, if the error degree is not within the preset range, the degree of deviation between the predicted value and the true value corresponding to the target sample data can be considered large; at this time, the processing capability of the initial neural network is not yet sufficient, and the initial neural network cannot yet process data correctly. Therefore, the parameters in the initial neural network can be adjusted based on the error degree. Further, the adjustment can be performed based on the stochastic gradient descent method: specifically, the product of the gradient value and a preset step size can be calculated first, and the difference between the parameter and this product is then calculated to obtain the adjusted parameter. In this way, adjusting the parameters based on stochastic gradient descent makes the error degree of the neural network decrease in the direction of fastest convergence, which can increase the convergence speed of the neural network model and thereby improve the efficiency of generating the neural network.
Of course, other parameter adjustment methods can also be used to adjust the parameters, for example, directly taking the difference between the parameter and a preset fixed value as the adjusted parameter, which is not limited in the embodiments of this specification.
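A minimal sketch of the stochastic-gradient-descent adjustment described above, assuming the per-parameter gradient has already been obtained by back-propagation; the function name and the fixed step size of 0.01 are illustrative assumptions.

```python
import numpy as np

def sgd_update(param: np.ndarray, grad: np.ndarray, step: float = 0.01) -> np.ndarray:
    """Adjusted parameter = parameter - step * gradient."""
    return param - step * grad
```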
Step 103: Perform data conversion processing on the adjusted parameters based on the specified data conversion algorithm.
In the embodiments of this specification, the specified data conversion algorithm can be an algorithm that needs to be introduced in the application process, and it can be an algorithm that improves the calculation effect. As an example, the specified data conversion algorithm can be a data conversion algorithm based on the remainder theorem, or a data conversion algorithm based on the Lagrange interpolation theorem, or a combination of a data conversion algorithm based on the remainder theorem and a data conversion algorithm based on the Lagrange interpolation theorem.
Further, since the parameters in the target neural network are adjusted continuously while the target neural network is being generated, in this step the parameters can undergo data conversion in advance during the generation process, so that even if a parameter is truncated, the accuracy of the final neural network model is not affected. At the same time, by performing the data conversion processing on the parameters in advance during the generation of the target neural network, the parameters in the finally generated target neural network already reflect the effect of the data conversion processing; in this way, in the subsequent application process there is no need to perform data conversion on the parameters, which avoids the loss of accuracy caused by parameter truncation during application.
Step 104: After the data conversion processing is completed, continue training the initial neural network based on the target training data and the true value corresponding to the target training data until the loss degree is within the preset range.
In the embodiments of this specification, since the parameters in the initial neural network have already been updated in the preceding steps, training of the initial neural network can be continued based on the target training data and the true value corresponding to the target training data, where the continued training can start again from determining the error degree of the initial neural network and re-execute the above processing flow. In this way, the parameters in the initial neural network are adjusted continuously during repeated training to improve the processing capability of the initial neural network, until, in a certain round of training, the loss degree of the initial neural network falls within the preset range; at this time, training can be stopped.
Step 105: Determine the initial neural network whose loss degree is within the preset range as the target neural network.
In the embodiments of this specification, the error degree of the initial neural network is reduced through continuous training. Further, if the loss degree is within the preset range, the processing capability of the initial neural network can be considered sufficient and the initial neural network can process data correctly; therefore, the initial neural network can be determined as the target neural network. Further, the generated target neural network can be deployed on a commonly used platform for application; for example, the platform can be an Advanced RISC Machines (ARM) processor, a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Graphics Processing Unit (GPU), and so on.
In summary, the data processing method provided by the embodiments of this specification can determine the error degree of the initial neural network based on the target training data and the true value corresponding to the target training data; then, if the error degree is not within the preset range, the parameters in the initial neural network are adjusted based on the error degree; next, based on the specified data conversion algorithm, data conversion processing is performed on the adjusted parameters; after the data conversion processing is completed, the initial neural network continues to be trained based on the target training data and the true value corresponding to the target training data until the loss degree is within the preset range; finally, the initial neural network whose loss degree is within the preset range is determined as the target neural network. In the embodiments of this specification, the data conversion processing of the parameters is simulated in advance during the generation of the target neural network, so that the parameters in the finally generated target neural network already reflect the effect of the data conversion processing. In this way, in the subsequent application process, only the data to be processed needs to undergo data conversion, and no data conversion needs to be performed on the parameters, which avoids the loss of accuracy caused by truncated parameters and ensures the processing effect of the neural network on the data to be processed while the specified data conversion algorithm is introduced.
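The sketch below strings steps 101 to 105 together into one loop. All of the callables (forward, loss_and_grads, convert_params) are assumed placeholders standing in for the network's forward pass, the loss-gradient computation of step 101, and the specified data conversion of step 103; the threshold used for the preset range is likewise an assumption.

```python
def train(initial_params, forward, loss_and_grads, convert_params,
          training_data, ground_truth, loss_threshold=1e-3, max_rounds=1000):
    """Illustrative flow of steps 101-105; every callable here is a placeholder.

    forward(params, data)              -> predicted value
    loss_and_grads(prediction, truth)  -> (loss degree, per-parameter gradients)
    convert_params(params)             -> parameters after the specified data conversion
    """
    params = list(initial_params)
    for _ in range(max_rounds):
        prediction = forward(params, training_data)              # step 101: run the network
        loss, grads = loss_and_grads(prediction, ground_truth)   # step 101: error degree
        if loss <= loss_threshold:                               # loss within the preset range
            break                                                # step 105: target network found
        params = [p - 0.01 * g for p, g in zip(params, grads)]   # step 102: adjust parameters
        params = convert_params(params)                          # step 103: data conversion
        # step 104: continue training with the converted parameters
    return params
```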
Figure 2 is another data processing method provided by an embodiment of this specification. As shown in Figure 2, the method may include:
Step 201: Perform fixed-point processing on the initial training data to obtain fixed-point training data.
In this step, the fixed-point processing refers to converting floating-point numbers into fixed-point representations according to a fixed-point rule. In this way, by performing fixed-point processing on the initial training data, the floating-point numbers in the initial training data can be converted into fixed-point numbers, where a floating-point number is a number whose decimal point position can float, so its integer and fractional digits can change, and a fixed-point number is a number whose integer and fractional digits are fixed.
Specifically, during the conversion, each floating-point number in the initial training data can first be estimated, and a scaling value is then determined by calibrating against the data expression range of the preset fixed-point format. The scaling value indicates the number of digits after the decimal point: the larger the scaling value, the higher the precision of the data, but the larger the required expression range and the higher the hardware requirements on the terminal; conversely, the smaller the scaling value, the lower the precision, but the smaller the required expression range and the lower the hardware requirements on the terminal. Accordingly, when determining the scaling value, a value as large as possible can be selected while ensuring that the preset data expression range is not exceeded, so as to guarantee data precision. Finally, the fractional digits of the floating-point number are truncated based on the scaling value. As an example, assuming the floating-point number is 3.1415926 and the scaling value is 4, the floating-point number can be converted to 3.1415; of course, the last retained fractional digit can also be rounded based on the truncated digits, in which case the floating-point number can be converted to 3.1416.
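The decimal-place truncation just described can be sketched as follows; it reproduces the worked example in the text (3.1415926 with a scaling value of 4), and the function name and the optional rounding flag are assumptions made for illustration.

```python
def to_fixed_point(x: float, scale: int, rounding: bool = False) -> float:
    """Keep `scale` decimal places of x, either by truncation or by rounding."""
    factor = 10 ** scale
    scaled = round(x * factor) if rounding else int(x * factor)  # int() truncates toward zero
    return scaled / factor

print(to_fixed_point(3.1415926, 4))                 # 3.1415 (truncated)
print(to_fixed_point(3.1415926, 4, rounding=True))  # 3.1416 (rounded)
```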
Step 202: Perform inverse fixed-point processing on the fixed-point training data to obtain the target training data.
In this step, the inverse fixed-point processing refers to converting a fixed-point number back into a floating-point number according to an inverse fixed-point rule. Specifically, during the conversion, the fractional digits of the fixed-point number can be padded according to the number of fractional digits the value had before it was converted to fixed point, so that the number of fractional digits is the same as before the conversion, thereby realizing the conversion; the padded values can be generated by a random generation algorithm. In this step, the floating-point numbers obtained after the inverse fixed-point processing are used as the target training data, so that floating-point numbers can be used for training in the subsequent process of generating the target neural network. Since the expression range of floating-point numbers is relatively large, the probability of data overflow in each calculation operation during training can be reduced to a certain extent. At the same time, limited by precision, the padded floating-point numbers carry a precision loss; in actual application scenarios, when the trained target neural network is used in the subsequent application stage to process the real data to be processed, the data to be processed is usually first converted to fixed point, which brings a certain loss of data precision. Therefore, in the embodiments of this specification, performing fixed-point processing and inverse fixed-point processing in the training stage simulates in advance the data loss caused by fixed-point processing, which improves the processing accuracy when the trained target neural network is used to process the data to be processed in the subsequent application stage.
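A sketch of the inverse fixed-point step under the description above: the decimal places lost in step 201 are padded with randomly generated digits so that the value again has as many decimal places as before the conversion. The function name and the way the original number of decimal places is passed in are assumptions.

```python
import random

def from_fixed_point(x: float, scale: int, original_places: int) -> float:
    """Pad the decimal places beyond `scale` with random digits, up to `original_places`."""
    pad_digits = max(original_places - scale, 0)
    pad = random.randint(0, 10 ** pad_digits - 1) / (10 ** original_places)  # random padding
    return x + pad

restored = from_fixed_point(3.1415, scale=4, original_places=7)  # e.g. 3.1415xyz
```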
Step 203: Determine the error degree of the initial neural network based on the target training data and the true value corresponding to the target training data.
Specifically, for the implementation of this step, reference can be made to the foregoing step 101, which is not repeated here in the embodiments of this specification.
Step 204: If the error degree is not within the preset range, adjust the parameters in the initial neural network based on the error degree.
Specifically, for the implementation of this step, reference can be made to the foregoing step 102, which is not repeated here in the embodiments of this specification.
Step 205: Perform data conversion processing on the adjusted parameters based on the specified data conversion algorithm.
In this step, taking the case where the specified data conversion algorithm is a data conversion algorithm based on the remainder theorem as an example, this algorithm can perform linear decomposition and linear combination of the items involved in the calculation, replacing time-consuming operations, namely multiplications, with non-time-consuming operations, namely additions, so as to reduce the algorithm time.
Specifically, this step can be implemented through the following sub-step (1) and sub-step (2):
Sub-step (1): Based on the transformation matrix defined in the specified data conversion algorithm and the transpose of the transformation matrix, perform convolution processing on the adjusted weight matrix.
In this step, assuming that the weight matrix after the convolution processing is C, C can be expressed as:
C = [G·g·G^T] ⊙ [B^T·d·B];
where ⊙ denotes element-wise multiplication of the arrays, g denotes the preset convolution kernel, G denotes the transformation matrix used to transform the convolution kernel, G^T denotes the transpose of G, and d denotes the elements of the weight matrix that take part in the convolution calculation. As an example, assuming the weight matrix is the matrix shown in the figure of the original publication (not reproduced here), d can be [d1, d2, d3, d4]. B denotes the transformation matrix used to transform the weight matrix, and B^T denotes the transpose of B. G, B and their respective transposes are all preset matrices, and their specific contents can be preset based on the size of the preset convolution kernel and the size of the weight matrix. Compared with performing the convolution directly, the calculation expressed by the above formula increases the number of additions and reduces the number of multiplications; since additions take little time while multiplications take more, this improves processing efficiency to a certain extent.
Further, after the convolution processing is performed, the convolved weight matrix can also be fixed-point processed, and then, after the fixed-point processing is completed, inverse fixed-point processing is performed on the convolved weight matrix. Specifically, for the implementation of the fixed-point processing, reference can be made to the description in the above step 201, and for the implementation of the inverse fixed-point processing, reference can be made to the description in the above step 202, which are not repeated here. Since the expression range of floating-point numbers is relatively large, converting the parameters to fixed point and then back to floating point can, to a certain extent, reduce the probability of data overflow in each calculation operation during training. At the same time, limited by precision, the floating-point numbers obtained after the inverse fixed-point processing carry a precision loss; in actual application scenarios, when the trained target neural network is used in the subsequent application stage to process the real data to be processed, the parameters may also be fixed-point processed, which brings a certain loss of data precision. Therefore, in the embodiments of this specification, performing fixed-point processing and inverse fixed-point processing on the parameters in the training stage simulates in advance the data loss caused by fixed-point processing, which improves the processing accuracy when the trained target neural network is used to process the data to be processed in the subsequent application stage.
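A simplified sketch of the fixed-point / inverse fixed-point round trip applied to the convolved weight matrix; the random padding of step 202 is omitted here for brevity, so the function only simulates the truncation loss, and its name and the decimal scaling value are assumptions.

```python
import numpy as np

def fake_quantize(weights: np.ndarray, scale: int = 4) -> np.ndarray:
    """Truncate each weight to `scale` decimal places and return floats again,
    simulating the precision loss of the fixed-point round trip during training."""
    factor = 10.0 ** scale
    return np.trunc(weights * factor) / factor
```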
Sub-step (2): Based on the inverse transformation matrix defined in the specified data conversion algorithm and the transpose of the inverse transformation matrix, transform the weight matrix obtained after the convolution processing.
Specifically, assuming that the weight matrix after the data conversion processing is completed is Y, Y can be expressed as: Y = A^T·C·A
where A denotes the inverse transformation matrix used to transform the convolved weight matrix C, A^T denotes the transpose of A, and both A and A^T are preset matrices. The specific contents of these matrices can be predetermined based on the size of C, and the size of C can in turn be predetermined based on the size of the preset convolution kernel and the size of the weight matrix.
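The two sub-steps can be made concrete with the widely used F(2×2, 3×3) Winograd transform matrices. The patent does not specify G, B, or A, so the matrices below are an assumption used only to illustrate the element-wise formulation C = [G·g·G^T] ⊙ [B^T·d·B] followed by Y = A^T·C·A.

```python
import numpy as np

# Assumed concrete choice: standard Winograd F(2x2, 3x3) transform matrices.
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])                  # kernel transform (4x3)
Bt = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)     # data transform B^T (4x4)
At = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)     # inverse transform A^T (2x4)

g = np.arange(9, dtype=float).reshape(3, 3)      # example 3x3 convolution kernel
d = np.arange(16, dtype=float).reshape(4, 4)     # example 4x4 tile of elements d1..d16

C = (G @ g @ G.T) * (Bt @ d @ Bt.T)              # C = [G g G^T] ⊙ [B^T d B]
Y = At @ C @ At.T                                # Y = A^T C A, the 2x2 output tile
# With these particular matrices, Y matches the 2x2 "valid" correlation of d with g.
```

The element-wise product replaces most of the multiplications of a direct convolution with additions hidden inside the transforms, which is the source of the efficiency gain referred to above.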
Step 206: after the data conversion processing is completed, continue to train the initial neural network based on the target training data and the true values corresponding to the target training data, until the degree of loss falls within the preset range.
Specifically, the implementation of this step may refer to the foregoing step 102, which will not be repeated here.
Step 207: determine the initial neural network whose degree of loss is within the preset range as the target neural network.
Specifically, the implementation of this step may refer to the foregoing step 102, which will not be repeated here.
In summary, the data processing method provided by the embodiments of this specification performs fixed-point processing and inverse fixed-point processing on the initial training data to generate target training data; in this way, the data loss caused by fixed-point processing can be simulated in advance, which improves the processing accuracy when the trained target neural network is used to process the data to be processed in the subsequent application stage. The degree of error of the initial neural network is then determined based on the target training data and the true values corresponding to the target training data; if the degree of error is not within the preset range, the parameters in the initial neural network are adjusted based on the degree of error; next, data conversion processing is performed on the adjusted parameters based on the specified data conversion algorithm; after the data conversion processing is completed, the initial neural network continues to be trained based on the target training data and the corresponding true values until the degree of loss falls within the preset range; finally, the initial neural network whose degree of loss is within the preset range is determined as the target neural network. In the embodiments of this specification, the data conversion processing of the parameters is simulated in advance during the generation of the target neural network, so that the parameters in the finally generated target neural network already reflect the effect of the data conversion processing. In the subsequent application process, only the data to be processed needs to undergo data conversion processing, and the parameters do not, which avoids the loss of accuracy caused by truncated parameters and ensures the processing effect of the neural network on the data to be processed while introducing the specified data conversion algorithm.
FIG. 3 is a flowchart of the steps of a data processing method provided by an embodiment of this specification. As shown in FIG. 3, the method may include:
Step 301: perform fixed-point processing on the data to be processed to obtain fixed-point processed data.
For example, taking the case where the target neural network is a convolutional neural network used to process images and the data to be processed is an image to be processed, in this step the value of each element in the image matrix corresponding to the image to be processed may be converted into a fixed-point number, thereby obtaining the fixed-point processed data. The elements of the image matrix are the pixels of the image to be processed, and the value of an element is the pixel value of that pixel. Specifically, the implementation of converting a floating-point number into a fixed-point number may refer to the content of the foregoing step 201, which is not limited in the embodiments of this specification.
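The following is a minimal sketch of this conversion for an image matrix, assuming 8-bit signed fixed-point values and a simple per-image scale; the bit width, scaling scheme, image size, and function name are illustrative assumptions, not requirements of this specification.

```python
import numpy as np

def image_to_fixed_point(img, num_bits=8):
    """Convert every element (pixel value) of the image matrix to a fixed-point
    integer; the scale is returned so later stages can interpret the values."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = float(np.max(np.abs(img)))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    fixed = np.clip(np.round(img / scale), -qmax - 1, qmax).astype(np.int8)
    return fixed, scale

img = np.random.rand(224, 224, 3).astype(np.float32)   # hypothetical image matrix
fixed_img, scale = image_to_fixed_point(img)
print(fixed_img.dtype, fixed_img.shape, scale)
```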
Since the integer and fractional bit widths of a fixed-point number are fixed, while those of a floating-point number can vary, the growth in the number of bits of a result computed with floating-point numbers is often larger than that of a result computed with fixed-point numbers, which requires the terminal to provide registers with more bits to support the calculation. In the embodiments of this specification, performing fixed-point processing on the data to be processed to obtain fixed-point processed data allows the subsequent computation to be carried out on fixed-point data, so registers with more bits are not needed and the cost of the terminal can be saved. At the same time, because the bit width of a fixed-point number is fixed, its computation cost is usually smaller than that of a floating-point number whose bit width can vary. Therefore, by first performing fixed-point processing on the data to be processed, this step makes the method applicable to more terminals with limited computing power.
Step 302: perform data conversion processing on the fixed-point processed data based on the specified data conversion algorithm to obtain converted data.
In the embodiments of this specification, the specified data conversion algorithm may be an algorithm that improves computational performance. For example, it may be a data conversion algorithm based on the remainder theorem, or a data conversion algorithm based on the Lagrange interpolation theorem. Both algorithms linearly decompose and recombine the terms involved in the computation, replacing the time-consuming operation, multiplication, with the less time-consuming operation, addition, thereby reducing the time complexity of the algorithm and improving processing efficiency. It should be noted that when the convolution kernel is large, the number of additions introduced by the data conversion algorithm based on the Lagrange interpolation theorem grows much faster than the kernel size, and the time spent on the additional additions may eventually exceed the time saved on multiplications. Therefore, the specified data conversion algorithm may be selected based on the size of the convolution kernel used, to ensure that the introduced algorithm actually improves processing efficiency. It should also be noted that performing data conversion processing on the fixed-point processed data may cause data overflow; therefore, the embodiments of this specification may provide more storage bits for the converted data, so as to avoid data overflow and ensure that subsequent processing can proceed normally.
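A small numerical sketch of the overflow point made above: when 8-bit fixed-point data passes through a data conversion transform, the intermediate values can exceed the 8-bit range, so the converted result is held in wider storage. The int8/int32 widths and the B T matrix values below are assumptions for illustration; this text only requires that more storage bits be provided for the converted data.

```python
import numpy as np

B_T = np.array([[1, 0, -1, 0],
                [0, 1, 1, 0],
                [0, -1, 1, 0],
                [0, 1, 0, -1]], dtype=np.int32)    # assumed data transform B^T

d = np.array([120, -128, 127, 90], dtype=np.int8)  # fixed-point processed data
v = B_T @ d.astype(np.int32)   # converted data kept in wider int32 storage
print(v)                       # [-7 -1 255 -218]; 255 and -218 do not fit in int8
```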
Step 303: process the converted data based on the pre-trained parameters in the target neural network to obtain a processing result.
In this step, the target neural network may be generated by the method shown in the foregoing neural network generation embodiments. Specifically, the converted data may be input into the target neural network, and a dot multiplication operation is then performed based on the pre-trained parameters in the target neural network and the converted data. Specifically, each layer of the target neural network may perform a dot multiplication between the parameters of that layer and the converted data and output the result to the next layer for further processing; finally, the result of the last layer of the target neural network may be taken as the dot multiplication result. Further, inverse data conversion processing may be performed on the result of the dot multiplication operation to obtain the processing result. For example, if the specified data conversion algorithm is a data conversion algorithm based on the remainder theorem, the inverse data conversion processing of the dot multiplication result may be performed based on the inverse transformation matrix defined in that algorithm and the transpose of the inverse transformation matrix; the specific conversion manner may refer to the relevant description in the foregoing step 205, which will not be repeated here.
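A hedged end-to-end sketch of this step for a single layer is given below: the converted data is multiplied element-wise (the "dot multiplication" above) with pre-trained parameters that were already transformed during generation, and the result is mapped back through an assumed inverse transform A T (.) A. The 4x4 tile size, the 2x4 A T, and the random placeholder values are illustrative assumptions, not the exact implementation of this specification.

```python
import numpy as np

A_T = np.array([[1.0, 1.0, 1.0, 0.0],
                [0.0, 1.0, -1.0, -1.0]])   # assumed 2x4 inverse transform A^T

def apply_layer(converted_data, pretrained_transformed_weight):
    """Element-wise ("dot") multiplication with the pre-trained, already
    transformed parameters, followed by the inverse data conversion."""
    elementwise = converted_data * pretrained_transformed_weight
    return A_T @ elementwise @ A_T.T

V = np.random.randn(4, 4)   # converted data from the previous step
U = np.random.randn(4, 4)   # pre-trained parameters transformed during generation
print(apply_layer(V, U))    # processing result contributed by this layer
```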
Further, when the prior art introduces the specified data conversion algorithm in the application stage, in order to prevent the parameters from exceeding the data expression range and being truncated after the specified data conversion algorithm is applied to them, more storage bits often have to be provided for the parameters. The terminal applying the target neural network then has to be equipped with registers with a larger number of storage bits, which increases the application cost. In the embodiments of this specification, since the parameters no longer need to undergo the specified data conversion processing in the application stage, there is no need to provide additional storage bits for them, and the application cost can therefore be reduced.
In summary, with the data processing method provided by the embodiments of this specification, fixed-point processing may be performed on the data to be processed to obtain fixed-point processed data; data conversion processing is then performed on the fixed-point processed data based on the specified data conversion algorithm to obtain converted data; the converted data is then input into the target neural network and processed based on the pre-trained parameters in the target neural network to obtain a processing result. Because the data conversion processing of the parameters was simulated in advance during the generation of the target neural network, the parameters in the finally generated target neural network already reflect the effect of the data conversion processing. Therefore, in the embodiments of this specification only the data to be processed needs to undergo data conversion processing, and the parameters do not, which avoids the loss of accuracy caused by truncated parameters and ensures the processing effect of the neural network on the data to be processed while introducing the specified data conversion algorithm.
FIG. 4 is a flowchart of the steps of another data processing method provided by an embodiment of this specification. As shown in FIG. 4, the method may include:
Step 401: determine the degree of error of the initial neural network based on the target training data and the true values corresponding to the target training data.
Specifically, the implementation of this step may refer to the foregoing step 101, which is not limited in the embodiments of this specification.
Step 402: if the degree of error is not within the preset range, adjust the parameters in the initial neural network based on the degree of error.
Specifically, the implementation of this step may refer to the foregoing step 102, which is not limited in the embodiments of this specification.
Step 403: perform data conversion processing on the adjusted parameters based on the specified data conversion algorithm.
Specifically, the implementation of this step may refer to the foregoing step 103, which is not limited in the embodiments of this specification.
Step 404: after the data conversion processing is completed, continue to train the initial neural network based on the target training data and the true values corresponding to the target training data until the degree of loss falls within the preset range.
Specifically, the implementation of this step may refer to the foregoing step 104, which is not limited in the embodiments of this specification.
Step 405: determine the initial neural network whose degree of loss is within the preset range as the target neural network.
Specifically, the implementation of this step may refer to the foregoing step 105, which is not limited in the embodiments of this specification.
Step 406: perform fixed-point processing on the data to be processed to obtain fixed-point processed data.
Specifically, the implementation of this step may refer to the foregoing step 201, which is not limited in the embodiments of this specification.
Step 407: perform data conversion processing on the fixed-point processed data based on the specified data conversion algorithm to obtain converted data.
Specifically, the implementation of this step may refer to the foregoing step 202, which is not limited in the embodiments of this specification.
Step 408: process the converted data based on the pre-trained parameters in the target neural network to obtain a processing result.
Specifically, the implementation of this step may refer to the foregoing step 203, which is not limited in the embodiments of this specification.
In summary, with the data processing method provided by the embodiments of this specification, the degree of error of the initial neural network may be determined based on the target training data and the true values corresponding to the target training data; if the degree of error is not within the preset range, the parameters in the initial neural network are adjusted based on the degree of error; data conversion processing is then performed on the adjusted parameters based on the specified data conversion algorithm; after the data conversion processing is completed, the initial neural network continues to be trained based on the target training data and the corresponding true values until the degree of loss falls within the preset range, and the initial neural network whose degree of loss is within the preset range is determined as the target neural network. Then, fixed-point processing may be performed on the data to be processed to obtain fixed-point processed data; data conversion processing is performed on the fixed-point processed data based on the specified data conversion algorithm to obtain converted data; the converted data is input into the target neural network and processed based on the pre-trained parameters in the target neural network to obtain a processing result. Because the data conversion processing of the parameters was simulated in advance during the generation of the target neural network, the parameters in the finally generated target neural network already reflect the effect of the data conversion processing. Therefore, in the embodiments of this specification only the data to be processed needs to undergo data conversion processing, and the parameters do not, which avoids the loss of accuracy caused by truncated parameters and ensures the processing effect of the neural network on the data to be processed while introducing the specified data conversion algorithm.
FIG. 5 is a block diagram of a data processing apparatus provided by an embodiment of this specification. As shown in FIG. 5, the data processing apparatus 50 may include:
a first determining module 501, configured to determine the degree of error of the initial neural network based on the target training data and the true values corresponding to the target training data;
an adjustment module 502, configured to adjust the parameters in the initial neural network based on the degree of error if the degree of error is not within the preset range;
a conversion module 503, configured to perform data conversion processing on the adjusted parameters based on the specified data conversion algorithm;
a training module 504, configured to, after the data conversion processing is completed, continue to train the initial neural network based on the target training data and the true values corresponding to the target training data until the degree of loss falls within the preset range;
a second determining module 505, configured to determine the initial neural network whose degree of loss is within the preset range as the target neural network.
In summary, the data processing apparatus provided by the embodiments of this specification may determine the degree of error of the initial neural network based on the target training data and the true values corresponding to the target training data; if the degree of error is not within the preset range, the parameters in the initial neural network are adjusted based on the degree of error; data conversion processing is then performed on the adjusted parameters based on the specified data conversion algorithm; after the data conversion processing is completed, the initial neural network continues to be trained based on the target training data and the corresponding true values until the degree of loss falls within the preset range; finally, the initial neural network whose degree of loss is within the preset range is determined as the target neural network. In the embodiments of this specification, the data conversion processing of the parameters is simulated in advance during the generation of the target neural network, so that the parameters in the finally generated target neural network already reflect the effect of the data conversion processing. In the subsequent application process, only the data to be processed needs to undergo data conversion processing, and the parameters do not, which avoids the loss of accuracy caused by truncated parameters and ensures the processing effect of the neural network on the data to be processed while introducing the specified data conversion algorithm.
Optionally, the apparatus 50 further includes:
a first processing module, configured to perform fixed-point processing on initial training data to obtain fixed-point training data;
a second processing module, configured to perform inverse fixed-point processing on the fixed-point training data to obtain the target training data.
Optionally, the specified data conversion algorithm is a data conversion algorithm based on the remainder theorem.
Optionally, the specified data conversion algorithm is a data conversion algorithm based on the Lagrange interpolation theorem.
Optionally, the specified data conversion algorithm is a data conversion algorithm based on the remainder theorem and a data conversion algorithm based on the Lagrange interpolation theorem.
Optionally, the parameters are a weight matrix;
the conversion module 503 is configured to:
perform convolution processing on the adjusted weight matrix based on the transformation matrix defined in the specified data conversion algorithm and the transpose of the transformation matrix;
transform the convolved weight matrix based on the inverse transformation matrix defined in the specified data conversion algorithm and the transpose of the inverse transformation matrix.
Optionally, the conversion module 503 is further configured to:
perform fixed-point processing on the convolved weight matrix;
after the fixed-point processing is completed, perform inverse fixed-point processing on the convolved weight matrix.
Optionally, the target neural network is a convolutional neural network, and the target training data is an image.
In summary, the data processing apparatus provided by the embodiments of this specification performs fixed-point processing and inverse fixed-point processing on the initial training data to generate target training data; in this way, the data loss caused by fixed-point processing can be simulated in advance, which improves the processing accuracy when the trained target neural network is used to process the data to be processed in the subsequent application stage. The degree of error of the initial neural network is then determined based on the target training data and the true values corresponding to the target training data; if the degree of error is not within the preset range, the parameters in the initial neural network are adjusted based on the degree of error; data conversion processing is then performed on the adjusted parameters based on the specified data conversion algorithm; after the data conversion processing is completed, the initial neural network continues to be trained based on the target training data and the corresponding true values until the degree of loss falls within the preset range; finally, the initial neural network whose degree of loss is within the preset range is determined as the target neural network. In the embodiments of this specification, the data conversion processing of the parameters is simulated in advance during the generation of the target neural network, so that the parameters in the finally generated target neural network already reflect the effect of the data conversion processing. In the subsequent application process, only the data to be processed needs to undergo data conversion processing, and the parameters do not, which avoids the loss of accuracy caused by truncated parameters and ensures the processing effect of the neural network on the data to be processed while introducing the specified data conversion algorithm.
FIG. 6 is a block diagram of a data processing apparatus provided by an embodiment of this specification. As shown in FIG. 6, the data processing apparatus 60 may include:
a first processing module 601, configured to perform fixed-point processing on the data to be processed to obtain fixed-point processed data;
a conversion module 602, configured to perform data conversion processing on the fixed-point processed data based on the specified data conversion algorithm to obtain converted data;
a second processing module 603, configured to process the converted data based on the pre-trained parameters in the target neural network to obtain a processing result, where the target neural network is generated by the data processing apparatus in the foregoing embodiments.
Optionally, the second processing module 603 is configured to:
perform a dot multiplication operation based on the pre-trained parameters and the converted data;
perform inverse data conversion processing on the result of the dot multiplication operation to obtain the processing result.
Optionally, the data to be processed is an image.
In summary, the data processing apparatus provided by the embodiments of this specification may perform fixed-point processing on the data to be processed to obtain fixed-point processed data; data conversion processing is then performed on the fixed-point processed data based on the specified data conversion algorithm to obtain converted data; the converted data is then input into the target neural network and processed based on the pre-trained parameters in the target neural network to obtain a processing result. Because the data conversion processing of the parameters was simulated in advance during the generation of the target neural network, the parameters in the finally generated target neural network already reflect the effect of the data conversion processing. Therefore, in the embodiments of this specification only the data to be processed needs to undergo data conversion processing, and the parameters do not, which avoids the loss of accuracy caused by truncated parameters and ensures the processing effect of the neural network on the data to be processed while introducing the specified data conversion algorithm.
FIG. 7 is a schematic diagram of the hardware structure of a terminal implementing the embodiments of this specification. The terminal 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and other components. Those skilled in the art will understand that the terminal structure shown in FIG. 7 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, combine certain components, or use a different arrangement of components. In the embodiments of this specification, terminals include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, pedometers, and the like.
It should be understood that, in the embodiments of this specification, the radio frequency unit 701 may be used to receive and send signals in the process of sending and receiving information or during a call. Specifically, after receiving downlink data from a base station, it passes the data to the processor 710 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with the network and other devices through a wireless communication system.
The terminal provides the user with wireless broadband Internet access through the network module 702, for example helping the user to send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702, or stored in the memory 709, into an audio signal and output it as sound. Moreover, the audio output unit 703 may also provide audio output related to a specific function performed by the terminal 700 (for example, a call signal reception sound or a message reception sound). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or another storage medium) or sent via the radio frequency unit 701 or the network module 702. The microphone 7042 can receive sound and process it into audio data. In a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 701 and output.
The terminal 700 further includes at least one sensor 705, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 7061 according to the ambient light, and the proximity sensor can turn off the display panel 7061 and/or the backlight when the terminal 700 is moved close to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for recognizing the terminal posture (for example switching between portrait and landscape, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (for example a pedometer or tapping). The sensor 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which will not be repeated here.
The display unit 706 is used to display information input by the user or information provided to the user. The display unit 706 may include a display panel 7061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the terminal. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 7071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 710, and receives and executes commands sent by the processor 710. In addition, the touch panel 7071 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 7071, the user input unit 707 may also include other input devices 7072. Specifically, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which will not be repeated here.
Further, the touch panel 7071 may cover the display panel 7061. When the touch panel 7071 detects a touch operation on or near it, it transmits the operation to the processor 710 to determine the type of the touch event, and the processor 710 then provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown as two separate components to implement the input and output functions of the terminal, in some embodiments the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the terminal, which is not specifically limited here.
The interface unit 708 is an interface through which an external device is connected to the terminal 700. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 708 may be used to receive input (for example, data information and power) from an external device and transmit the received input to one or more elements within the terminal 700, or may be used to transmit data between the terminal 700 and an external device.
The memory 709 may be used to store software programs and various data. The memory 709 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 709 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 710 is the control center of the terminal. It connects all parts of the entire terminal through various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 709 and calling the data stored in the memory 709, thereby monitoring the terminal as a whole. The processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 710.
The terminal 700 may further include a power supply 711 (such as a battery) that supplies power to the various components. Preferably, the power supply 711 may be logically connected to the processor 710 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
In addition, the terminal 700 includes some functional modules that are not shown, which will not be repeated here.
Optionally, an embodiment of this specification further provides a terminal, including a processor 710, a memory 709, and a computer program stored in the memory 709 and executable on the processor 710. When the computer program is executed by the processor 710, the processes of the foregoing data processing method embodiments are implemented, and the same technical effects can be achieved. To avoid repetition, details are not repeated here.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative effort.
The component embodiments of this specification may be implemented in hardware, in software modules running on one or more processors, or in a combination of them. Those skilled in the art should understand that a microprocessor or a digital signal processor may be used in practice to implement some or all of the functions of some or all of the components in the computing processing device according to the embodiments of this specification. This specification may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for executing part or all of the methods described herein. Such a program implementing this specification may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
Correspondingly, an embodiment of this specification further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the steps of either of the data processing methods described above are implemented.
Further, an embodiment of this specification also provides a movable platform with limited computing power, and the movable platform is used to implement the steps of the above data processing method. Further, an embodiment of this specification also provides an image acquisition apparatus, which includes a processor and is used to implement the steps of the above data processing method. Limited computing power means that, due to multiple concurrent tasks or hardware limitations, the bandwidth and/or computing resources the movable platform can allocate to data processing are limited. For example, the processor of the movable platform has a processing capability of 800M, while the computation requires more than 1G of computing resources. Based on the method provided in the embodiments of this specification, the amount of computation of the system can be greatly reduced, which makes the use of a platform with limited computing power possible. Further, an embodiment of this specification also provides an unmanned aerial vehicle, which is used to implement the steps of the above data processing method. The unmanned aerial vehicle includes a power unit, a flight control unit, and an image acquisition unit. The embodiments of this specification can process the image data acquired by the image acquisition unit, and the data processing method simplifies the computational complexity and the amount of computation of the image processing, thereby reducing the computational load of the processor of the unmanned aerial vehicle. The data processing result can be sent to the flight control unit to control the operation of the power unit.
Further, an embodiment of this specification also provides a handheld stabilizing gimbal, which is used to implement the steps of the above data processing method. The handheld stabilizing gimbal includes a control unit, a control motor, and a gimbal axis arm; the handheld stabilizing gimbal may also carry a camera or be connected to an external image acquisition apparatus through a communication interface. The embodiments of this specification can process the acquired image data, and the data processing method simplifies the computational complexity and the amount of computation of the image processing, thereby reducing the computational load of the handheld stabilizing gimbal. The data processing result can be sent to the control unit to control the operation of the motor.
Further, FIG. 8 is a block diagram of a computing processing device provided by an embodiment of this specification. As shown in FIG. 8, the figure shows a computing processing device that can implement the method according to this specification. The computing processing device conventionally includes a processor 810 and a computer program product or computer-readable medium in the form of a memory 820. The memory 820 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. The memory 820 has a storage space 830 for program code for executing any of the method steps of the above methods. For example, the storage space 830 for program code may include individual program codes respectively used to implement the various steps of the above methods. These program codes may be read from or written into one or more computer program products. These computer program products include program code carriers such as a hard disk, a compact disc (CD), a memory card, or a floppy disk. Such a computer program product is usually a portable or fixed storage unit as described with reference to FIG. 9. The storage unit may have storage segments, storage spaces, and the like arranged similarly to the memory 820 in the computing processing device of FIG. 8. The program code may, for example, be compressed in an appropriate form. Generally, the storage unit includes computer-readable code, that is, code that can be read by a processor such as the processor 810; when run by a computing processing device, the code causes the computing processing device to execute the steps of the methods described above.
The embodiments in this specification are described in a progressive manner. Each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar among the embodiments, reference may be made to one another.
Reference herein to "one embodiment", "an embodiment", or "one or more embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of this specification. In addition, note that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
In the description provided here, numerous specific details are set forth. However, it is understood that the embodiments of this specification may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. This specification can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, and the like does not indicate any order; these words may be interpreted as names.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of this specification, not to limit them. Although this specification has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features, and that these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this specification.

Claims (27)

  1. A data processing method, characterized in that the method comprises:
    determining a degree of error of an initial neural network based on target training data and true values corresponding to the target training data;
    if the degree of error is not within a preset range, adjusting parameters in the initial neural network based on the degree of error;
    performing data conversion processing on the adjusted parameters based on a specified data conversion algorithm;
    after the data conversion processing is completed, continuing to train the initial neural network based on the target training data and the true values corresponding to the target training data until the degree of loss is within the preset range;
    determining the initial neural network whose degree of loss is within the preset range as a target neural network.
  2. The method according to claim 1, characterized in that before the determining a degree of error of an initial neural network based on target training data and true values corresponding to the target training data, the method further comprises:
    performing fixed-point processing on initial training data to obtain fixed-point training data;
    performing inverse fixed-point processing on the fixed-point training data to obtain the target training data.
  3. The method according to claim 1, characterized in that the specified data conversion algorithm is a data conversion algorithm based on the remainder theorem.
  4. The method according to claim 1, characterized in that the specified data conversion algorithm is a data conversion algorithm based on the Lagrange interpolation theorem.
  5. The method according to claim 1, characterized in that the specified data conversion algorithm is a data conversion algorithm based on the remainder theorem and a data conversion algorithm based on the Lagrange interpolation theorem.
  6. The method according to claim 3, characterized in that the parameters are a weight matrix;
    the performing data conversion processing on the adjusted parameters based on a specified data conversion algorithm comprises:
    performing convolution processing on the adjusted weight matrix based on a transformation matrix defined in the specified data conversion algorithm and a transpose of the transformation matrix;
    transforming the convolved weight matrix based on an inverse transformation matrix defined in the specified data conversion algorithm and a transpose of the inverse transformation matrix.
  7. The method according to claim 6, characterized in that after the performing convolution processing on the adjusted weight matrix based on the transformation matrix defined in the specified data conversion algorithm and the transpose of the transformation matrix, and before the transforming the convolved weight matrix based on the inverse transformation matrix defined in the specified data conversion algorithm and the transpose of the inverse transformation matrix, the method further comprises:
    performing fixed-point processing on the convolved weight matrix;
    after the fixed-point processing is completed, performing inverse fixed-point processing on the convolved weight matrix.
  8. The method according to any one of claims 1 to 7, characterized in that the target neural network is a convolutional neural network, and the target training data is an image.
  9. A data processing method, characterized in that the method comprises:
    performing fixed-point processing on data to be processed to obtain fixed-point processed data;
    performing data conversion processing on the fixed-point processed data based on a specified data conversion algorithm to obtain converted data;
    processing the converted data based on pre-trained parameters in a target neural network to obtain a processing result, wherein the target neural network is generated by the method according to any one of claims 1 to 8.
  10. The method according to claim 9, wherein the processing the converted data based on the pre-trained parameters in the target neural network to obtain the processing result comprises:
    performing a dot multiplication operation based on the pre-trained parameters and the converted data;
    performing inverse data conversion processing on the result of the dot multiplication operation to obtain the processing result.
  11. The method according to claim 9 or 10, wherein the data to be processed is an image.
  12. A data processing apparatus, characterized in that the apparatus comprises:
    a first determining module, configured to determine an error degree of an initial neural network based on target training data and a true value corresponding to the target training data;
    an adjustment module, configured to adjust parameters in the initial neural network based on the error degree if the error degree is not within a preset range;
    a conversion module, configured to perform data conversion processing on the adjusted parameters based on a specified data conversion algorithm;
    a training module, configured to, after the data conversion processing is completed, continue to train the initial neural network based on the target training data and the true value corresponding to the target training data, until the error degree is within the preset range;
    a second determining module, configured to determine the initial neural network whose error degree is within the preset range as the target neural network.
  13. The apparatus according to claim 12, wherein the apparatus further comprises:
    a first processing module, configured to perform fixed-point processing on initial training data to obtain fixed-point training data;
    a second processing module, configured to perform inverse fixed-point processing on the fixed-point training data to obtain the target training data.
  14. The apparatus according to claim 12, wherein the specified data conversion algorithm is a data conversion algorithm based on the remainder theorem.
  15. The apparatus according to claim 12, wherein the specified data conversion algorithm is a data conversion algorithm based on the Lagrange interpolation theorem.
  16. The apparatus according to claim 12, wherein the specified data conversion algorithm comprises a data conversion algorithm based on the remainder theorem and a data conversion algorithm based on the Lagrange interpolation theorem.
  17. The apparatus according to claim 14, wherein the parameter is a weight matrix;
    the conversion module is configured to:
    perform convolution processing on the adjusted weight matrix based on a transformation matrix defined in the specified data conversion algorithm and a transpose of the transformation matrix; and
    convert the weight matrix after the convolution processing based on an inverse transformation matrix defined in the specified data conversion algorithm and a transpose of the inverse transformation matrix.
  18. The apparatus according to claim 17, wherein the conversion module is further configured to:
    perform fixed-point processing on the weight matrix after the convolution processing; and
    after the fixed-point processing is completed, perform inverse fixed-point processing on the weight matrix after the convolution processing.
  19. The apparatus according to any one of claims 12 to 18, wherein the target neural network is a convolutional neural network and the target training data is an image.
  20. A data processing apparatus, characterized in that the apparatus comprises:
    a first processing module, configured to perform fixed-point processing on data to be processed to obtain fixed-point processed data;
    a conversion module, configured to perform data conversion processing on the fixed-point processed data based on a specified data conversion algorithm to obtain converted data;
    a second processing module, configured to process the converted data based on pre-trained parameters in a target neural network to obtain a processing result, wherein the target neural network is generated by the apparatus according to any one of claims 12 to 19.
  21. The apparatus according to claim 20, wherein the second processing module is configured to:
    perform a dot multiplication operation based on the pre-trained parameters and the converted data; and
    perform inverse data conversion processing on the result of the dot multiplication operation to obtain the processing result.
  22. The apparatus according to claim 20 or 21, wherein the data to be processed is an image.
  23. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the data processing method according to any one of claims 1 to 11.
  24. A movable platform with limited computing power, characterized in that the movable platform is configured to implement the steps of the data processing method according to any one of claims 1 to 11.
  25. An image acquisition apparatus comprising a processor, characterized in that the image acquisition apparatus is configured to implement the steps of the data processing method according to any one of claims 1 to 11.
  26. An unmanned aerial vehicle, characterized in that the unmanned aerial vehicle is configured to implement the steps of the data processing method according to any one of claims 1 to 11.
  27. A handheld stabilization gimbal, characterized in that the handheld stabilization gimbal is configured to implement the steps of the data processing method according to any one of claims 1 to 11.
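
The claims above recite the processing flow only at a functional level. For illustration, the following minimal Python sketch traces the training flow of claims 1 and 12: determine the error degree, adjust the parameters when the error is outside the preset range, apply the specified data conversion processing to the adjusted parameters, and repeat until the error falls inside the range. The helper names (compute_error, adjust, convert) and the scalar-threshold reading of the "preset range" are assumptions introduced here for illustration and are not defined by the claims.

    def train_target_network(params, target_data, true_values,
                             compute_error, adjust, convert,
                             error_threshold=1e-3, max_iters=10000):
        # Hypothetical outline of claims 1 and 12; the claims do not fix these names.
        for _ in range(max_iters):
            error = compute_error(params, target_data, true_values)
            if error <= error_threshold:       # "error degree within the preset range"
                return params                  # this network is the target neural network
            params = adjust(params, error)     # e.g. one gradient-descent update on the parameters
            params = convert(params)           # specified data conversion processing (claims 3 to 7)
        return params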
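
Claims 2, 7, 13 and 18 pair fixed-point processing with inverse fixed-point processing, i.e. quantize and then dequantize so that later steps see the quantization effects. The claims do not fix a number format; the sketch below assumes a symmetric signed 8-bit scheme with a single per-tensor scale, purely as an example.

    import numpy as np

    def fixed_point(x, num_bits=8):
        # Quantize a float array to signed integers with one per-tensor scale (assumed format).
        qmax = 2 ** (num_bits - 1) - 1
        scale = max(float(np.max(np.abs(x))), 1e-12) / qmax
        q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
        return q, scale

    def inverse_fixed_point(q, scale):
        # Map the integers back to floats; the rounding error is retained.
        return q.astype(np.float32) * scale

Under this assumed format, passing initial training data through fixed_point and then inverse_fixed_point yields the "target training data" of claims 2 and 13.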
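
Claims 6, 7, 17 and 18 transform the adjusted weight matrix with a transformation matrix and its transpose, quantize in the transformed domain, and convert back with an inverse transformation matrix and its transpose. The claims only say the algorithm is based on the remainder theorem or the Lagrange interpolation theorem; the sketch below assumes the standard Winograd F(2x2, 3x3) filter transform G as one such algorithm and uses the Moore-Penrose pseudo-inverse of G as the "inverse transformation matrix". Both choices are assumptions made for illustration, not statements about the specification.

    import numpy as np

    # Winograd F(2x2, 3x3) filter-transform matrix (one possible remainder-theorem-based choice).
    G = np.array([[1.0,  0.0, 0.0],
                  [0.5,  0.5, 0.5],
                  [0.5, -0.5, 0.5],
                  [0.0,  0.0, 1.0]])
    G_inv = np.linalg.pinv(G)  # assumed "inverse transformation matrix"

    def convert_weight(g3x3):
        # Forward transform with G and its transpose ("convolution processing" in claims 6 and 17).
        u = G @ g3x3 @ G.T
        # Fixed-point then inverse fixed-point in the transformed domain (claims 7 and 18);
        # fixed_point / inverse_fixed_point are taken from the sketch above.
        q, scale = fixed_point(u)
        u_hat = inverse_fixed_point(q, scale)
        # Map back with the assumed inverse transform and its transpose.
        return G_inv @ u_hat @ G_inv.T

Without the quantization step this round trip returns the 3x3 weight exactly, because pinv(G) is a left inverse of G; with it, the stored weight absorbs the error introduced by the transformed-domain fixed-point representation.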
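
Claims 9, 10, 20 and 21 describe inference: fixed-point the data to be processed, apply the data conversion, take the dot (element-wise) multiplication with the pre-trained parameters, and apply the inverse data conversion to obtain the processing result. The sketch below again assumes the Winograd F(2x2, 3x3) input and output transforms as the specified algorithm, a 4x4 input tile, and pre-trained parameters already stored in the transformed domain (u = G g G^T); these are illustrative assumptions that go beyond the claims.

    import numpy as np

    # Winograd F(2x2, 3x3) input and output transforms (assumed instantiation).
    B_T = np.array([[1,  0, -1,  0],
                    [0,  1,  1,  0],
                    [0, -1,  1,  0],
                    [0,  1,  0, -1]], dtype=np.float32)
    A_T = np.array([[1, 1,  1,  0],
                    [0, 1, -1, -1]], dtype=np.float32)

    def process_tile(d4x4, u_pretrained):
        # Fixed-point then dequantize the input tile (claims 9 and 20); helpers from the sketch above.
        q, scale = fixed_point(d4x4)
        d_hat = inverse_fixed_point(q, scale)
        # Data conversion processing of the input tile.
        v = B_T @ d_hat @ B_T.T
        # Dot (element-wise) multiplication with the pre-trained, transformed parameters.
        m = u_pretrained * v
        # Inverse data conversion processing yields the 2x2 output tile (claims 10 and 21).
        return A_T @ m @ A_T.T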
PCT/CN2019/098657 2019-07-31 2019-07-31 Data processing method and apparatus, and computer-readable storage medium WO2021016932A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/098657 WO2021016932A1 (en) 2019-07-31 2019-07-31 Data processing method and apparatus, and computer-readable storage medium
CN201980032385.0A CN112166441A (en) 2019-07-31 2019-07-31 Data processing method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/098657 WO2021016932A1 (en) 2019-07-31 2019-07-31 Data processing method and apparatus, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021016932A1 true WO2021016932A1 (en) 2021-02-04

Family

ID=73859731

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/098657 WO2021016932A1 (en) 2019-07-31 2019-07-31 Data processing method and apparatus, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112166441A (en)
WO (1) WO2021016932A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115903599A (en) * 2022-11-24 2023-04-04 上海乐存信息科技有限公司 Manufacturing method and device based on MCU

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113673153A (en) * 2021-08-11 2021-11-19 追觅创新科技(苏州)有限公司 Method and device for determining electromagnetic torque of robot, storage medium and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485230A (en) * 2016-10-18 2017-03-08 中国科学院重庆绿色智能技术研究院 Based on the training of the Face datection model of neutral net, method for detecting human face and system
CN108009625A (en) * 2016-11-01 2018-05-08 北京深鉴科技有限公司 Method for trimming and device after artificial neural network fixed point
WO2018137357A1 (en) * 2017-01-24 2018-08-02 北京大学 Target detection performance optimization method
CN108763398A (en) * 2018-05-22 2018-11-06 腾讯科技(深圳)有限公司 Database configuration parameters processing method, device, computer equipment and storage medium
CN109800865A (en) * 2019-01-24 2019-05-24 北京市商汤科技开发有限公司 Neural network generation and image processing method and device, platform, electronic equipment
CN109800877A (en) * 2019-02-20 2019-05-24 腾讯科技(深圳)有限公司 Parameter regulation means, device and the equipment of neural network

Also Published As

Publication number Publication date
CN112166441A (en) 2021-01-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939703

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19939703

Country of ref document: EP

Kind code of ref document: A1