CN116629321B - Data processing method, voice processing device, medium and chip - Google Patents
- Publication number
- CN116629321B (application CN202310906292.4A)
- Authority
- CN
- China
- Prior art keywords
- matrix
- convolution
- feature
- feature matrix
- data
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/15—Correlation function computation including computation of convolution operations
- G06F17/153—Multidimensional correlation or convolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The present application provides a data processing method, a voice processing device, a medium and a chip, and relates to the field of computer technology. The data processing method comprises the following steps: acquiring a first feature matrix of first data after convolution processing; performing transposed convolution processing on the first feature matrix according to a target matrix address and a convolution parameter to obtain a second feature matrix, wherein the target matrix address is the physical address of a second matrix element in the second feature matrix; and determining second data according to the second feature matrix. The technical solution provided by the present application can improve the computation speed and efficiency of transposed convolution.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, a voice processing method, a device, a medium, and a chip.
Background
In deep learning, transposed convolution is the reverse process of convolution: given the convolution kernel size and the output size, it can restore the feature map to its size before convolution.
However, in current transposed convolution implementations, the input feature map typically has to undergo a filling (padding) operation and a zero-insertion operation before the multiply-add computation. This process contains many invalid multiply-add operations, which increases the amount of computation and reduces the computation speed and efficiency.
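For concreteness, the conventional procedure described above can be sketched as follows; this is an illustrative NumPy sketch under assumed names and 2-D shapes, not the implementation of any particular library or of this application:

```python
import numpy as np

def transposed_conv2d_naive(x, w, stride=1, padding=0):
    """Conventional transposed convolution: zero-insert, pad, then run a plain convolution.

    x: (H, W) input feature map; w: (K, K) convolution kernel.
    Illustrative sketch only -- parameter names and shapes are assumptions.
    """
    H, W = x.shape
    K = w.shape[0]
    # insert (stride - 1) zeros between neighbouring input elements
    z = np.zeros(((H - 1) * stride + 1, (W - 1) * stride + 1), dtype=x.dtype)
    z[::stride, ::stride] = x
    # pad (K - padding - 1) zeros on every side
    z = np.pad(z, K - padding - 1)
    # rotate the kernel by 180 degrees and run an ordinary stride-1 convolution
    w_rot = np.rot90(w, 2)
    H_out = (H - 1) * stride - 2 * padding + K
    W_out = (W - 1) * stride - 2 * padding + K
    y = np.zeros((H_out, W_out), dtype=x.dtype)
    for i in range(H_out):
        for j in range(W_out):
            y[i, j] = np.sum(z[i:i + K, j:j + K] * w_rot)  # many products here hit inserted zeros
    return y
```

Note how the inner products repeatedly touch inserted and padded zeros; this is exactly the invalid computation the present application seeks to remove.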
Disclosure of Invention
The present application aims to solve at least one of the technical problems existing in the prior art or related art.
To this end, a first aspect of the application is directed to a data processing method.
A second aspect of the present application is to provide an image processing method.
A third aspect of the present application is directed to a speech processing method.
A fourth aspect of the present application is directed to a data processing apparatus.
A fifth aspect of the present application is to provide an image processing apparatus.
A sixth aspect of the present application is directed to a speech processing apparatus.
A seventh aspect of the present application is directed to a readable storage medium.
An eighth aspect of the application is directed to a computer program product.
A ninth aspect of the present application is directed to a chip.
In view of this, according to one aspect of the present application, there is provided a data processing method including: acquiring a first feature matrix of the first data after convolution processing; performing transpose convolution processing on the first feature matrix according to the target matrix address and the convolution parameter to obtain a second feature matrix, wherein the target matrix address is a physical address of a second matrix element in the second feature matrix; and determining second data according to the second feature matrix.
The technical solution of the data processing method provided by the present application may be executed by an electronic device or by a data processing apparatus; the execution subject may be chosen according to actual use requirements and is not specifically limited here. For clarity, the data processing method provided by the present application is described below with the data processing apparatus as its execution subject.
The data processing method provided by the present application is used to perform a transposed convolution operation on convolved data.
Specifically, in the data processing method provided by the present application, when a transposed convolution operation is to be performed on first data that has undergone convolution processing, feature extraction is first performed on the first data to obtain a first feature matrix. The first feature matrix is then subjected to transposed convolution processing according to the set convolution parameters and the physical address of each second matrix element in the second feature matrix to be output, i.e. the target matrix address, to obtain the second feature matrix. Second data, i.e. the result of the transposed convolution of the first data, is then determined from the second feature matrix. In this way, during the transposed convolution of the convolved first data, the addresses of the elements of the first feature matrix that need to be processed are determined from the perspective of the data output end, according to the physical address of each second matrix element to be output and the set convolution parameters; that is, the matrix elements of the first feature matrix are looked up in reverse to perform the transposed convolution. The filling and zero-insertion operations of the traditional transposed convolution algorithm can therefore be eliminated, which avoids invalid multiplications by zero, reduces the amount of computation, and thus improves the computation speed and efficiency of the transposed convolution.
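A minimal sketch of this output-driven reverse lookup is given below. It assumes the standard index relation of transposed convolution (output index = input index × stride + kernel index − padding); the function and parameter names are illustrative assumptions rather than the claimed implementation:

```python
import numpy as np

def transposed_conv2d_gather(x, w, stride=1, padding=0):
    """For each output (second-matrix) element, look up in reverse the first-matrix and
    kernel elements that contribute to it, so no padding or zero insertion is ever done."""
    H, W = x.shape
    K = w.shape[0]
    H_out = (H - 1) * stride - 2 * padding + K
    W_out = (W - 1) * stride - 2 * padding + K
    y = np.zeros((H_out, W_out), dtype=x.dtype)
    for oy in range(H_out):
        for ox in range(W_out):
            acc = 0
            for ky in range(K):
                for kx in range(K):
                    iy, ry = divmod(oy + padding - ky, stride)  # reverse lookup of the input row
                    ix, rx = divmod(ox + padding - kx, stride)  # reverse lookup of the input column
                    if ry == 0 and rx == 0 and 0 <= iy < H and 0 <= ix < W:
                        acc += x[iy, ix] * w[ky, kx]            # only valid (non-zero) products
            y[oy, ox] = acc
    return y
```

On the same inputs this yields the same result as a conventional zero-insertion implementation, but it never multiplies by an inserted or padded zero.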
The above data processing method according to the present application may further have the following additional technical features:
in some embodiments, optionally, according to the target matrix address and the convolution parameter, performing transpose convolution on the first feature matrix to obtain a second feature matrix, including: determining a fourth matrix according to the first feature matrix and a third matrix, wherein the third matrix is a convolution kernel of a deep learning model corresponding to the first data; traversing the second feature matrix according to the convolution parameters; in the traversal process, a fourth matrix element in the fourth matrix is selected according to the target matrix address and the convolution parameter, and the second feature matrix is updated according to the fourth matrix element until the traversal is finished.
In this technical solution, a fourth matrix is determined from the first feature matrix and a third matrix, where the third matrix is the convolution kernel of the deep learning model corresponding to the first data. The second feature matrix is then traversed according to the set convolution parameters. During the traversal, the corresponding fourth matrix elements are selected from the fourth matrix according to the set convolution parameters and the physical address of each second matrix element to be output, i.e. the target matrix address, and the element values of the corresponding second matrix elements are updated with the selected fourth matrix elements. The second feature matrix is updated in this way until the traversal ends, yielding the transposed-convolved second feature matrix. The transposed convolution is thus split into two processing flows: the fourth matrix is first obtained from the third matrix and the first feature matrix, and the transposed convolution is then realized on the basis of the fourth matrix to obtain the processed second feature matrix. This increases the parallelism of the transposed convolution computation and thereby further improves its speed and efficiency.
In some embodiments, optionally, determining the fourth matrix according to the first feature matrix and the third matrix includes: converting the first feature matrix into a column matrix; converting the third matrix into a row matrix; a fourth matrix is determined from the product of the column matrix and the row matrix.
In this technical solution, the acquired first feature matrix is converted into a corresponding column matrix, and the convolution kernel of the deep learning model corresponding to the first data, i.e. the third matrix, is converted into a corresponding row matrix. The column matrix and the row matrix are then multiplied, and a multi-dimensional fourth matrix is obtained from their product. In this way, during the transposed convolution of the first feature matrix, the fourth matrix is obtained by multiplying the third matrix and the first feature matrix, and the multiplications between the third matrix and the corresponding elements of the first feature matrix can be performed in parallel. This increases the parallelism of the transposed convolution computation and further improves its speed and efficiency.
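A minimal sketch of this column-times-row construction, with an assumed row-major flattening order and example sizes (the patent does not fix either):

```python
import numpy as np

H, W, K = 3, 3, 3                                   # example sizes (assumptions)
x = np.arange(H * W, dtype=float).reshape(H, W)     # first feature matrix
w = np.arange(K * K, dtype=float).reshape(K, K)     # third matrix (convolution kernel)

x_col = x.reshape(-1, 1)        # first feature matrix converted into a column matrix, (H*W, 1)
w_row = w.reshape(1, -1)        # third matrix converted into a row matrix, (1, K*K)
fourth = x_col @ w_row          # fourth matrix, (H*W, K*K): every product x[i] * w[j], computable in parallel
```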
In some embodiments, optionally, the convolution parameters include a stride parameter and a filling parameter. Traversing the second feature matrix according to the convolution parameters includes: traversing the second feature matrix with a target frame according to the filling parameter. Selecting a fourth matrix element from the fourth matrix according to the target matrix address and the convolution parameters during the traversal includes: with the target frame at each traversal position, selecting a plurality of second matrix elements within the target frame according to the stride parameter; selecting, according to the physical address of each of the plurality of second matrix elements, the first matrix element in the first feature matrix and the third matrix element in the third matrix corresponding to that second matrix element; and selecting the fourth matrix element corresponding to each second matrix element based on the first matrix element and the third matrix element corresponding to that second matrix element.
In this technical solution, the convolution parameters may include a filling parameter and a stride parameter, and the second feature matrix is traversed with the target frame according to the set filling parameter. During this traversal, each time the target frame moves to a traversal position, the second matrix elements framed by the target frame are further traversed according to the set stride parameter, so that at each traversal position a plurality of second matrix elements to be updated are selected within the current target frame according to the stride parameter. On this basis, according to the physical address of each of the selected second matrix elements, the corresponding first matrix element is selected in reverse from the first feature matrix and the corresponding third matrix element is selected in reverse from the third matrix. After the third matrix element and the first matrix element corresponding to each second matrix element have been selected, the fourth matrix element used to update the element value of that second matrix element is selected from the fourth matrix according to those two elements.
In this way, during the transposed convolution of the first feature matrix, the second feature matrix is traversed from the perspective of the output end, and the matrix elements of the third matrix and of the first feature matrix that take part in the computation are selected in reverse according to the physical address of each second matrix element to be output and the set convolution parameters, from which the fourth matrix elements used to update the second feature matrix are then selected. On the one hand, the filling and zero-insertion operations of the traditional transposed convolution algorithm can be eliminated, which avoids invalid multiplications by zero, reduces the amount of computation, and improves the computation speed and efficiency of the transposed convolution. On the other hand, the fourth matrix is determined first and the second feature matrix is then updated from its elements, which increases the parallelism of the transposed convolution computation and further improves its speed and efficiency.
In some embodiments, optionally, updating the second feature matrix according to the fourth matrix element includes: setting the initial value of a second matrix element in the second feature matrix to be zero; in the traversal process, the element values of the fourth matrix element corresponding to each second matrix element are accumulated, and the element values of the corresponding second matrix elements are updated according to the accumulated values.
In this technical solution, the initial value of every second matrix element in the second feature matrix is set to zero. While the second feature matrix is traversed with the target frame, and the second matrix elements framed by the target frame are further traversed according to the set stride parameter, each time a second matrix element is visited the element value of the fourth matrix element corresponding to it is added to its current value, so that the second matrix element is updated; this continues until the traversal of the second feature matrix ends and the updated second feature matrix is obtained. The transposed convolution is thus split into two processing flows: a fourth matrix covering all products between the first matrix elements of the first feature matrix and the third matrix elements of the third matrix is first obtained by multiplication, and the final transposed convolution result is then obtained by addition over the fourth matrix elements. Separating the multiply-add operations of the transposed convolution algorithm into a multiplication stage and an addition stage in this way increases the parallelism of the computation and further improves the speed and efficiency of the transposed convolution.
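Putting the two flows together, the technique described above might be sketched as follows; the indexing convention and all names are assumptions:

```python
import numpy as np

def transposed_conv2d_two_phase(x, w, stride=1, padding=0):
    """Phase 1: one outer product yields the 'fourth matrix' of all x*w products.
    Phase 2: the products are accumulated into the zero-initialised second feature matrix."""
    H, W = x.shape
    K = w.shape[0]
    fourth = x.reshape(-1, 1) * w.reshape(1, -1)          # phase 1: pure multiplications
    H_out = (H - 1) * stride - 2 * padding + K
    W_out = (W - 1) * stride - 2 * padding + K
    y = np.zeros((H_out, W_out), dtype=fourth.dtype)      # initial values are zero
    for n in range(H * W):
        iy, ix = divmod(n, W)                             # first-matrix element behind row n
        for m in range(K * K):
            ky, kx = divmod(m, K)                         # third-matrix element behind column m
            oy, ox = iy * stride + ky - padding, ix * stride + kx - padding
            if 0 <= oy < H_out and 0 <= ox < W_out:
                y[oy, ox] += fourth[n, m]                 # phase 2: pure additions
    return y
```

Phase 1 consists of mutually independent multiplications and phase 2 of additions only, which is the multiply/add separation described above.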
In some embodiments, optionally, according to the target matrix address and the convolution parameter, performing transpose convolution on the first feature matrix to obtain a second feature matrix, including: determining a second feature matrix based on the target operator according to the target matrix address, the convolution parameter and the first feature matrix; wherein the target operator comprises at least one of: transpose convolution operator, neural network operator.
In the technical scheme, in the process of performing transpose convolution processing on the first feature matrix according to the target matrix address and the convolution parameters to obtain the second feature matrix, specifically, based on the target operator, the first feature matrix is processed according to the target matrix address and the convolution parameters to obtain the second feature matrix. In this way, based on the target operator, from the angle of the data output end, according to the physical address of each second matrix element in the second feature matrix to be output and the set convolution parameters, matrix elements in the first feature matrix are reversely searched for transposition convolution, so that the calculation flow is simplified, and the operator reasoning speed is accelerated.
The target operator is used for indicating the mapping relation between the second feature matrix and the first feature matrix.
In the practical application process, the target operator may specifically include a transpose convolution operator, a neural network operator, and the like, which is not limited herein.
According to a second aspect of the present application, there is provided an image processing method comprising: acquiring first image data; according to the data processing method in any of the first aspect, the first image data is processed to obtain the second image data; wherein the resolution of the second image data is greater than the resolution of the first image data.
According to the image processing method provided by the application, the first image data to be processed is obtained in the image processing process, and the obtained first image data is processed by the data processing method in any technical scheme of the first aspect, so that the resolution of the first image data is improved, and the second image data with higher resolution is obtained. The image processing method provided by the application comprises the data processing method in any one of the above first aspect, so the image processing method provided by the second aspect of the application has all the advantages of the data processing method in any one of the above first aspect, and is not repeated here.
According to a third aspect of the present application, there is provided a speech processing method comprising: acquiring first voice data; carrying out convolution processing on the first voice data to obtain second voice data; according to the data processing method in any one of the first aspect, the second voice data is processed to obtain third voice data; wherein the noise intensity of the third voice data is smaller than the noise intensity of the first voice data.
In the speech processing method provided by the present application, first voice data to be processed is acquired, and convolution processing is performed on it to carry out feature extraction and downsampling, which reduces its noise intensity and yields denoised second voice data whose data size is smaller than that of the first voice data. On this basis, the data processing method of any of the technical solutions of the first aspect is used to perform transposed convolution processing on the denoised second voice data so as to upsample it, restoring the original data size and obtaining third voice data whose noise intensity is lower than that of the first voice data and whose data size is the same as that of the first voice data. Since the speech processing method provided by the present application includes the data processing method of any of the above solutions of the first aspect, the speech processing method provided by the third aspect of the present application has all of its advantages, which are not repeated here.
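The length bookkeeping behind "restoring the original data size" can be checked with the standard 1-D size formulas; the concrete kernel size, stride, padding and frame length below are illustrative assumptions, not values from this application:

```python
def conv_out_len(L, k, s, p):
    """Length after an ordinary 1-D convolution (floor division)."""
    return (L + 2 * p - k) // s + 1

def tconv_out_len(L, k, s, p):
    """Length after a 1-D transposed convolution (no output padding)."""
    return (L - 1) * s - 2 * p + k

L, k, s, p = 16000, 8, 4, 2            # e.g. a one-second frame at 16 kHz (assumed values)
down = conv_out_len(L, k, s, p)        # downsampled / denoised length after the convolution stage
up = tconv_out_len(down, k, s, p)      # length restored by the transposed convolution stage
assert up == L                         # holds here because (L + 2p - k) is divisible by s
```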
According to a fourth aspect of the present application, there is provided a data processing apparatus comprising: the acquisition unit is used for acquiring a first feature matrix of the first data after convolution processing; the processing unit is used for performing transposition convolution processing on the first feature matrix according to the target matrix address and the convolution parameter to obtain a second feature matrix, wherein the target matrix address is a physical address of a second matrix element in the second feature matrix; and the processing unit is also used for determining second data according to the second characteristic matrix.
The data processing device is used for performing transposition convolution operation on the convolved data.
When a transposed convolution operation is to be performed on first data that has undergone convolution processing, the acquisition unit first performs feature extraction on the first data to obtain a first feature matrix. The processing unit then performs transposed convolution processing on the acquired first feature matrix according to the set convolution parameters and the physical address of each second matrix element in the second feature matrix to be output, i.e. the target matrix address, to obtain the transposed-convolved second feature matrix, and determines second data, i.e. the result of the transposed convolution of the first data, from it. In this way, during the transposed convolution of the convolved first data, the addresses of the elements of the first feature matrix to be processed are determined from the perspective of the data output end, according to the physical address of each second matrix element to be output and the set convolution parameters; that is, the matrix elements of the first feature matrix are looked up in reverse to perform the transposed convolution. The filling and zero-insertion operations of the traditional transposed convolution algorithm can therefore be eliminated, which avoids invalid multiplications by zero in the transposed convolution computation, reduces the amount of computation, and improves the computation speed and efficiency of the transposed convolution.
According to a fifth aspect of the present application, there is provided an image processing apparatus comprising: an acquisition unit configured to acquire first image data; a processing unit, configured to process the first image data according to the data processing method in any one of the first aspect of the present application, to obtain second image data; wherein the resolution of the second image data is greater than the resolution of the first image data.
According to the image processing device provided by the application, in the process of processing an image, the first image data to be processed is acquired through the acquisition unit, and then the acquired first image data is processed through the processing unit according to the data processing method in any technical scheme of the first aspect, so that the resolution of the first image data is improved, and the second image data with higher resolution is obtained. The image processing device provided by the application can realize the steps of the data processing method in any one of the first aspect, so the image processing device provided by the fifth aspect of the application has all the beneficial effects of the data processing method in any one of the first aspect, and is not repeated here.
According to a sixth aspect of the present application, there is provided a speech processing apparatus comprising: an acquisition unit configured to acquire first voice data; the processing unit is used for carrying out convolution processing on the first voice data to obtain second voice data; the processing unit is further configured to process the second voice data according to the data processing method in any one of the first aspect of the present application, so as to obtain third voice data; wherein the noise intensity of the third voice data is smaller than the noise intensity of the first voice data.
According to the voice processing device provided by the application, in the process of processing voice data, the first voice data to be processed is obtained through the obtaining unit, and the first voice data is subjected to convolution processing through the processing unit so as to perform feature extraction and downsampling on the first voice data, so that the noise intensity of the first voice data is reduced, and the second voice data after noise reduction is obtained. Wherein the data size of the second voice data is smaller than the data size of the first voice data. On the basis, the processing unit performs transposed convolution processing on the noise-reduced second voice data by using the data processing method in any one of the first aspect, so as to up-sample the second voice data, thereby restoring the second voice data to the original data size, and obtaining third voice data with noise intensity smaller than that of the first voice data and data size identical to that of the first voice data. The voice processing device provided by the application can realize the data processing method in any one of the above first aspect, so the voice processing device provided by the sixth aspect of the application has all the beneficial effects of the data processing method in any one of the above first aspect, and is not repeated here.
According to a seventh aspect of the present application, there is provided a readable storage medium having stored thereon a program or instructions which when executed by a processor, implements a data processing method as in any of the above-described aspects, or which when executed by a processor, implements an image processing method as in the above-described aspects, or which when executed by a processor, implements a speech processing method as in the above-described aspects. Therefore, the readable storage medium according to the seventh aspect of the present application has all the advantages of the data processing method according to any one of the first aspect of the present application, or the readable storage medium according to the seventh aspect of the present application has all the advantages of the image processing method according to the second aspect of the present application, or the readable storage medium according to the seventh aspect of the present application has all the advantages of the voice processing method according to the third aspect of the present application, which will not be repeated here.
According to an eighth aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a data processing method as in any of the above-described technical aspects, or which, when executed by a processor, implements an image processing method as in the above-described technical aspects, or which, when executed by a processor, implements a speech processing method as in the above-described technical aspects. Therefore, the computer program product according to the eighth aspect of the present application has all the advantages of the data processing method according to any one of the first aspect of the present application, or the computer program product according to the eighth aspect of the present application has all the advantages of the image processing method according to the second aspect of the present application, or the computer program product according to the eighth aspect of the present application has all the advantages of the voice processing method according to the third aspect of the present application, which is not repeated herein.
According to a ninth aspect of the present application, there is provided a chip comprising a program or instructions for implementing the steps of the data processing method as in any of the above-described technical aspects when the chip is running, or for implementing the steps of the image processing method as in the above-described technical aspects when the chip is running, or for implementing the steps of the speech processing method as in the above-described technical aspects when the chip is running. Therefore, the chip according to the ninth aspect of the present application has all the advantages of the data processing method according to any one of the first aspect of the present application, or the chip according to the ninth aspect of the present application has all the advantages of the image processing method according to the second aspect of the present application, or the chip according to the ninth aspect of the present application has all the advantages of the voice processing method according to the third aspect of the present application, which are not repeated herein.
Additional aspects and advantages of the application will be set forth in part in the description which follows, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flowchart of a data processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a speech processing method according to an embodiment of the present application;
FIG. 3 is a first schematic diagram of a data processing method according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of a data processing method according to an embodiment of the present application;
FIG. 5 is a third schematic diagram of a data processing method according to an embodiment of the present application;
FIG. 6 is a fourth schematic diagram of a data processing method according to an embodiment of the present application;
FIG. 7 is a fifth schematic diagram of a data processing method according to an embodiment of the present application;
FIG. 8 is a sixth schematic diagram of a data processing method according to an embodiment of the present application;
FIG. 9 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 10 is a flowchart of a speech processing method according to an embodiment of the present application;
FIG. 11 is a structural block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 12 is a structural block diagram of a speech processing apparatus according to an embodiment of the present application;
FIG. 13 is a first structural block diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 14 is a second structural block diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 15 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced in other ways than those described herein, and the scope of the application is therefore not limited to the specific embodiments disclosed below.
The data processing method, the voice processing method, the device, the medium and the chip provided by the embodiment of the application are described in detail below with reference to fig. 1 to 15 through specific embodiments and application scenarios thereof.
In one embodiment of the present application, as shown in fig. 1, the data processing method may specifically include the following steps 102 to 106:
Step 102, acquiring a first feature matrix of first data after convolution processing;
Step 104, performing transposed convolution processing on the first feature matrix according to the target matrix address and the convolution parameter to obtain a second feature matrix;
Step 106, determining second data according to the second feature matrix;
the target matrix address is the physical address of the second matrix element in the second feature matrix.
The data processing method provided by the present application is used to perform a transposed convolution operation on convolved data. In deep learning, transposed convolution is the reverse process of convolution: given the convolution kernel size and the output size, the data size before convolution can be recovered.
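For reference, the standard size relation of transposed convolution (assuming no output padding; this formula is background knowledge rather than a quotation from this application) is

$$o = (i - 1)\,s - 2p + k,$$

where $i$ is the input (first feature matrix) size, $k$ the kernel (third matrix) size, $s$ the stride parameter and $p$ the filling parameter; for the worked example later in this description ($i = 3$, $k = 3$, $s = 1$, $p = 0$) this gives $o = 5$.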
Specifically, in the data processing method provided by the present application, when a transposed convolution operation is performed on the first data after convolution processing, feature extraction is performed on the first data to obtain a first feature matrix. Transposed convolution processing is performed on the acquired first feature matrix according to the set convolution parameters and the physical address of each second matrix element in the second feature matrix to be output, i.e. the target matrix address, to obtain the transposed-convolved second feature matrix. Second data, i.e. the result of performing the transposed convolution operation on the first data, is then determined from the obtained second feature matrix. In this way, during the transposed convolution of the convolved first data, the addresses of the elements to be processed in the first feature matrix are determined from the perspective of the data output end, according to the physical address of each second matrix element to be output and the set convolution parameters. That is, the matrix elements of the first feature matrix are looked up in reverse to perform the transposed convolution, so the filling and zero-insertion operations of the traditional transposed convolution algorithm can be eliminated, invalid multiplications by zero are avoided, the amount of computation is reduced, and the computation speed and efficiency of the transposed convolution are improved.
In some embodiments of the present application, optionally, the step 104 may specifically include the following steps 104a to 104c:
Step 104a, determining a fourth matrix according to the third matrix and the first feature matrix;
Step 104b, traversing the second feature matrix according to the convolution parameters;
Step 104c, during the traversal, selecting a fourth matrix element from the fourth matrix according to the convolution parameter and the target matrix address, and updating the second feature matrix according to the fourth matrix element until the traversal ends;
wherein the third matrix is a convolution kernel of the deep learning model corresponding to the first data.
In the above embodiment, the acquired first feature matrix is subjected to transposed convolution processing according to the set convolution parameters and the target matrix address to obtain the transposed-convolved second feature matrix. In this process, specifically, the fourth matrix may be determined from the first feature matrix and the third matrix, which is the convolution kernel of the deep learning model corresponding to the first data. The second feature matrix is then traversed according to the set convolution parameters, and during the traversal the corresponding fourth matrix elements are selected from the fourth matrix according to the set convolution parameters and the physical address of each second matrix element to be output, i.e. the target matrix address. The element values of the corresponding second matrix elements are updated with the selected fourth matrix elements, so that the second feature matrix is updated until the traversal ends and the transposed-convolved second feature matrix is obtained. The transposed convolution is thus split into two processing flows: the fourth matrix is obtained from the third matrix and the first feature matrix, and the transposed convolution is then realized on the basis of the fourth matrix to obtain the processed second feature matrix. This increases the parallelism of the transposed convolution computation and further improves its speed and efficiency.
In some embodiments of the present application, optionally, the step 104a may specifically include the following steps 104a1 to 104a3:
Step 104a1, converting the first feature matrix into a column matrix;
Step 104a2, converting the third matrix into a row matrix;
Step 104a3, determining a fourth matrix from the product of the column matrix and the row matrix.
In the above embodiment, the fourth matrix is determined based on the third matrix and the first feature matrix. Specifically, the obtained first feature matrix is subjected to conversion processing to obtain a corresponding column matrix, and a convolution kernel of a deep learning model corresponding to the first data, namely, a third matrix is subjected to conversion processing to obtain a corresponding row matrix. Further, multiplication calculation is carried out on the row matrix and the column matrix obtained after conversion processing, and a multi-dimensional fourth matrix is obtained based on the product of the column matrix and the row matrix. In this way, in the process of performing transpose convolution processing on the first feature matrix, the fourth matrix is obtained by performing multiplication computation on the third matrix and the first feature matrix, and in this process, the multiplication computation between the third matrix and the corresponding matrix elements in the first feature matrix can be implemented in parallel. Therefore, the calculation parallelism in the transpose convolution calculation process is improved, and the calculation speed and the calculation efficiency of the transpose convolution are further improved.
In some embodiments of the application, the convolution parameters described above optionally include a stride parameter and a fill parameter. On this basis, the step 104b may specifically include the following step 104b1, where during the traversal, the step of selecting the fourth matrix element in the fourth matrix according to the convolution parameter and the target matrix address may specifically include the following steps 108 to 112:
Step 104b1, traversing the second feature matrix through the target frame according to the filling parameter;
Step 108, selecting a plurality of second matrix elements in the target frame according to the stride parameter when the target frame is at each traversal position;
Step 110, selecting a third matrix element and a first matrix element corresponding to each second matrix element according to the physical address of each of the plurality of second matrix elements;
Step 112, selecting a fourth matrix element corresponding to each second matrix element according to the third matrix element and the first matrix element corresponding to that second matrix element;
the third matrix element is a matrix element in the third matrix, and the first matrix element is a matrix element in the first feature matrix.
In the above embodiment, the convolution parameters may specifically include a filling parameter and a stride parameter. The filling parameter indicates how many elements each edge of the first feature matrix is expanded outwards in the traditional transposed convolution algorithm. For example, with a filling parameter of 2, each edge of the first feature matrix is extended outwards by 2 element values in the traditional algorithm, i.e. 2 rings of zeros are filled around the first feature matrix. The stride parameter indicates the positional difference, after the zero-insertion operation of the traditional transposed convolution algorithm, between two matrix elements that were originally adjacent in the first feature matrix. For example, with a stride parameter of 2, after zero insertion the positional difference between two originally adjacent matrix elements of the first feature matrix is 2, i.e. one zero is inserted between every two adjacent matrix elements. The stride parameter can also be read as the step length with which the matrix elements of the first feature matrix are processed in the transposed convolution computation; for example, with a stride parameter of 2, the step length is 2, i.e. the matrix elements of the first feature matrix are processed at an interval of one element.
On the basis, in the above embodiment, in the process of performing the traversal operation on the second feature matrix according to the set convolution parameters, specifically, performing the traversal operation on the second feature matrix through the target frame according to the set filling parameters. The size of the target frame is related to the size of the third matrix, the size of the matrix of the second feature matrix and the filling parameter. Specifically, the size of the target frame satisfies: the number of movable positions of the target frame on the second feature matrix is the same as the number of third matrix elements in the third matrix in traversing the second feature matrix by moving the target frame on the second feature matrix. That is, in the process of performing the traversal operation on the second feature matrix by the target frame, the change of the traversal position of the target frame on the second feature matrix may indicate the change of the third matrix element for performing the transpose convolution calculation, that is, the multiply-add calculation, in the third matrix.
Further, while the second feature matrix is traversed with the target frame, each time the target frame moves to a traversal position, the second matrix elements framed by the target frame are further traversed according to the set stride parameter, so that at each position a plurality of second matrix elements to be updated are selected within the current target frame according to the stride parameter. A change of the traversal position within the target frame indicates a change in the first matrix element of the first feature matrix used in the transposed convolution (multiply-add) computation. On this basis, during the traversal of the second feature matrix, the first matrix element corresponding to each of the selected second matrix elements is selected in reverse from the first feature matrix, and the corresponding third matrix element is selected in reverse from the third matrix, according to the physical address of that second matrix element. After the third matrix element and the first matrix element corresponding to each second matrix element have been selected, the fourth matrix element used to update the element value of that second matrix element is selected from the fourth matrix according to those two elements.
In this way, during the transposed convolution of the first feature matrix, the second feature matrix is traversed from the perspective of the output end, and the matrix elements of the third matrix and of the first feature matrix that take part in the computation are selected in reverse according to the physical address of each second matrix element to be output and the set convolution parameters, from which the fourth matrix elements used to update the second feature matrix are then selected. On the one hand, the filling and zero-insertion operations of the traditional transposed convolution algorithm can be eliminated, which avoids invalid multiplications by zero, reduces the amount of computation, and improves the computation speed and efficiency of the transposed convolution. On the other hand, the fourth matrix is determined first and the second feature matrix is then updated from its elements, which increases the parallelism of the transposed convolution computation and further improves its speed and efficiency.
In some embodiments of the present application, the step of updating the second feature matrix according to the fourth matrix element may include the following steps 114 and 116:
Step 114, setting the initial value of the second matrix element in the second feature matrix to be zero;
Step 116, in the traversal process, accumulating the element values of the fourth matrix element corresponding to each second matrix element, and updating the element values of the corresponding second matrix elements according to the accumulated values.
In the above embodiment, the element values of the corresponding second matrix elements in the second feature matrix are updated according to the selected fourth matrix elements, so that the second feature matrix is updated. Specifically, the initial value of every second matrix element in the second feature matrix is set to zero, the second feature matrix is traversed with the target frame, and the second matrix elements framed by the target frame are further traversed according to the set stride parameter. Each time a second matrix element is visited, the element value of the fourth matrix element corresponding to it is added to its current value, so that it is updated; this continues until the traversal of the second feature matrix ends and the updated second feature matrix is obtained. The transposed convolution of the first feature matrix is thus split into two processing flows: a fourth matrix covering all products between the first matrix elements of the first feature matrix and the third matrix elements of the third matrix is first obtained by multiplication, and the final transposed convolution result is then obtained by addition over the fourth matrix elements. Separating the multiply-add operations of the transposed convolution algorithm into a multiplication stage and an addition stage in this way increases the parallelism of the computation and further improves the speed and efficiency of the transposed convolution.
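A vectorised variant of the earlier two-phase sketch makes the same split explicit: one broadcast multiply produces every product, and an unbuffered scatter-add accumulates them at the target matrix addresses. As before, the shapes and indexing convention are assumptions:

```python
import numpy as np

def transposed_conv2d_scatter_vec(x, w, stride=1, padding=0):
    """Phase 1: one broadcast multiply produces every x*w product at once.
    Phase 2: np.add.at scatter-adds the products into the zero-initialised output."""
    H, W = x.shape
    K = w.shape[0]
    H_out = (H - 1) * stride - 2 * padding + K
    W_out = (W - 1) * stride - 2 * padding + K
    prod = x.reshape(H, W, 1, 1) * w.reshape(1, 1, K, K)        # (H, W, K, K) products
    iy, ix, ky, kx = np.meshgrid(np.arange(H), np.arange(W),
                                 np.arange(K), np.arange(K), indexing="ij")
    oy = iy * stride + ky - padding                             # target matrix addresses (rows)
    ox = ix * stride + kx - padding                             # target matrix addresses (columns)
    valid = (oy >= 0) & (oy < H_out) & (ox >= 0) & (ox < W_out)
    y = np.zeros((H_out, W_out), dtype=prod.dtype)
    np.add.at(y, (oy[valid], ox[valid]), prod[valid])           # pure accumulation
    return y
```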
In some embodiments of the present application, optionally, the step 104 may specifically include the following step 104d:
Step 104d, determining a second feature matrix based on the target operator according to the target matrix address, the convolution parameter and the first feature matrix;
wherein the target operator comprises at least one of: transpose convolution operator, neural network operator.
In the above embodiment, according to the target matrix address and the convolution parameter, the transpose convolution processing is performed on the first feature matrix to obtain the second feature matrix. Specifically, based on a target operator, the first feature matrix is processed according to the target matrix address and the convolution parameter, and a second feature matrix is obtained. In this way, based on the target operator, from the angle of the data output end, according to the physical address of each second matrix element in the second feature matrix to be output and the set convolution parameters, matrix elements in the first feature matrix are reversely searched for transposition convolution, so that the calculation flow is simplified, and the operator reasoning speed is accelerated.
The target operator is used for indicating the mapping relation between the second feature matrix and the first feature matrix.
In the practical application process, the target operator may specifically include a transpose convolution operator, a neural network operator, and the like, which is not limited herein.
In summary, in the data processing method provided by the application, in the process of performing the transposed convolution operation on the first data after the convolution processing, from the perspective of the data output end, according to the physical address of each second matrix element in the second feature matrix to be output and the set convolution parameters, the element address to be processed in the first feature matrix is determined, that is, the matrix elements in the first feature matrix are reversely searched for performing the transposed convolution, so that the filling operation and the zero insertion operation in the traditional transposed convolution algorithm can be removed, thereby avoiding the invalid zero multiplication calculation in the process of the transposed convolution calculation, reducing the calculation amount in the process of the transposed convolution calculation, and further improving the calculation speed and the calculation efficiency of the transposed convolution. In addition, the data processing method provided by the application separates the multiplication and addition operation in the transposed convolution algorithm into the multiplication operation and the addition operation, and improves the calculation parallelism in the transposed convolution calculation process, thereby further improving the calculation speed and the calculation efficiency of the transposed convolution. Therefore, by the data processing method, the calculated amount of the transposed convolution operator and the neural network operator can be reduced, and the calculation flow is simplified, so that the operator reasoning speed is accelerated, the model performance of the neural network model such as a deep learning model when deployed on an edge equipment platform is improved, and the overall efficiency of the neural network model is improved.
The difference between the transposed convolution algorithm in the data processing method provided by the present application and the conventional transposed convolution algorithm is described below by taking the example that the first feature matrix and the third matrix are both 3×3 matrices, that is, the dimension k of the third matrix is 3, the stride parameter s is 1, and the filling parameter p is 0.
Specifically, as shown in fig. 3, 4 and 5, in the conventional transpose convolution algorithm, the input first feature matrix 302 needs to be filled: k−p−1=2 rings of zeros are filled around the first feature matrix 302 to obtain a fifth matrix 310, and the third matrix 306 is rotated by 180° (central symmetry) to obtain a sixth matrix 312. On this basis, the transposed-convolved second feature matrix 304 is obtained by multiplying and adding the sixth matrix 312 with the matrix elements in the fifth matrix 310 in sequence.
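As a reference point, the conventional flow described above can be sketched as follows. This is a generic NumPy illustration of the standard algorithm (zero insertion according to the stride parameter s, filling k−p−1 rings of zeros, rotating the kernel by 180°, then an ordinary stride-1 convolution); the function and variable names are the editor's own and are not taken from the patent.

```python
import numpy as np

def transposed_conv_conventional(x, kernel, s=1, p=0):
    """Conventional transposed convolution: insert s-1 zeros between input
    elements, fill k-p-1 rings of zeros around the result (the fifth matrix),
    rotate the kernel by 180 degrees (the sixth matrix), then run an ordinary
    stride-1 convolution."""
    k = kernel.shape[0]
    h, w = x.shape
    dilated = np.zeros(((h - 1) * s + 1, (w - 1) * s + 1), dtype=x.dtype)
    dilated[::s, ::s] = x                      # zero insertion
    padded = np.pad(dilated, k - p - 1)        # filling
    rotated = kernel[::-1, ::-1]               # centrally symmetric rotation
    oh, ow = padded.shape[0] - k + 1, padded.shape[1] - k + 1
    out = np.zeros((oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * rotated)
    return out
```

With the 3×3 example above (k=3, s=1, p=0), this sketch produces a 5×5 second feature matrix, consistent with the sizes implied by those parameters.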
In the transpose convolution algorithm in the data processing method provided by the present application, as shown in fig. 6 and 7, in the process of performing transpose convolution calculation on the first feature matrix 302, the second feature matrix 304 is traversed through the target frame 314. With the target frame 314 at each traversing position, the second matrix elements selected by the target frame 314 are traversed; according to the physical address of each second matrix element, the first matrix element corresponding to that second matrix element is reversely selected from the first feature matrix 302, and the third matrix element corresponding to it is reversely selected from the third matrix 306. The element value of the corresponding second matrix element is then updated by performing multiply-add calculation on the selected first matrix element and third matrix element, until the traversal of the second feature matrix 304 ends, thereby obtaining the final transpose convolution result.
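A minimal output-driven sketch of this reverse lookup, written from the perspective of the data output end: for every element of the second feature matrix, the contributing first-matrix and third-matrix elements are located directly from the stride parameter s and the filling parameter p, so no filling or zero-insertion is performed. The index relation used here (input index = (output index + p − kernel index) / s, kept only when the division is exact and in range) is a standard formulation assumed by the editor, not a formula quoted from the patent.

```python
import numpy as np

def transposed_conv_reverse_lookup(x, kernel, s=1, p=0):
    """Output-driven transposed convolution: reversely look up, for each second
    matrix element, the first matrix elements and third matrix elements that
    contribute to it, skipping all products that would involve inserted zeros."""
    k = kernel.shape[0]
    h, w = x.shape
    oh = (h - 1) * s + k - 2 * p
    ow = (w - 1) * s + k - 2 * p
    out = np.zeros((oh, ow), dtype=x.dtype)
    for oy in range(oh):
        for ox in range(ow):
            acc = 0
            for ky in range(k):
                for kx in range(k):
                    iy, ry = divmod(oy + p - ky, s)
                    ix, rx = divmod(ox + p - kx, s)
                    if ry == 0 and rx == 0 and 0 <= iy < h and 0 <= ix < w:
                        acc += x[iy, ix] * kernel[ky, kx]  # only non-zero products
            out[oy, ox] = acc
    return out
```

Under these assumptions, for the same inputs this should return the same second feature matrix as the conventional sketch above, while never multiplying by a padded or inserted zero.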
Further, in the data processing method provided by the application, the multiply-add operation in the transposed convolution algorithm is separated into a multiply operation and an add operation. As shown in fig. 8, the first feature matrix 302 is converted into a column matrix 316, and the third matrix 306 is converted into a row matrix 318, and the fourth matrix 308 covering all multiplication results between the first matrix element and the third matrix element is obtained by multiplying the converted row matrix 318 and column matrix 316. On this basis, after the third matrix element and the first matrix element corresponding to the second matrix element are reversely selected according to the physical address of the second matrix element, the corresponding fourth matrix element is selected from the fourth matrix 308 according to the third matrix element and the first matrix element to perform the accumulating operation.
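The separation of the multiply-add operation can be sketched as follows: the column form of the first feature matrix is multiplied by the row form of the third matrix in one step (this multiplication is fully parallelisable and yields the fourth matrix), and the second feature matrix is then produced purely by accumulation. The index arithmetic is the same assumption as in the previous sketch; the names column/row/fourth follow the text, and everything else is illustrative.

```python
import numpy as np

def transposed_conv_split(x, kernel, s=1, p=0):
    """Two-stage transposed convolution: (1) a multiplication stage producing the
    fourth matrix from the column matrix and the row matrix, (2) an addition stage
    accumulating selected fourth matrix elements into the second feature matrix."""
    k = kernel.shape[0]
    h, w = x.shape
    column = x.reshape(-1, 1)          # first feature matrix -> column matrix
    row = kernel.reshape(1, -1)        # third matrix -> row matrix
    fourth = column @ row              # all first x third products, in parallel
    oh = (h - 1) * s + k - 2 * p
    ow = (w - 1) * s + k - 2 * p
    second = np.zeros((oh, ow), dtype=fourth.dtype)
    for idx in range(h * w):
        iy, ix = divmod(idx, w)
        for q in range(k * k):
            ky, kx = divmod(q, k)
            oy, ox = iy * s - p + ky, ix * s - p + kx
            if 0 <= oy < oh and 0 <= ox < ow:
                second[oy, ox] += fourth[idx, q]   # addition (accumulation) stage
    return second
```

For the same x, kernel, s and p, this should agree with the reverse-lookup sketch above.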
It can be seen that, for the same second matrix element in the second feature matrix 304, such as the second matrix element "1" in the second feature matrix 304, the conventional transposed convolution algorithm obtains it through the calculation 0×9+0×8+0×7+0×6+0×5+0×4+0×3+0×2+1×1=1, whereas the transposed convolution algorithm in the data processing method according to the present application obtains it through only the calculation 1×1=1. Compared with the traditional transposition convolution algorithm, the transposition convolution algorithm in the data processing method provided by the application avoids invalid zero multiplication calculation in the transposition convolution calculation process, the calculated amount in the transposition convolution calculation process can be reduced to 18-36% of the original calculated amount, and the calculation speed and the calculation efficiency of the transposition convolution are improved.
In an embodiment of the present application, an image processing method is further provided, as shown in fig. 9, where the image processing method may specifically include the following steps 202 and 204:
step 202, acquiring first image data;
step 204, processing the first image data according to the data processing method to obtain second image data;
wherein the resolution of the second image data is greater than the resolution of the first image data.
According to the image processing method provided by the application, in the process of processing an image, first image data to be processed is obtained, and the obtained first image data is processed by the data processing method in any embodiment of the first aspect, so that the resolution of the first image data is improved, and second image data with higher resolution is obtained. The first image data is the first data in any of the above embodiments, and the second image data is the second data in any of the above embodiments.
It will be appreciated that in the use of neural networks, up-sampling is often required to refine a coarse feature map, for example to improve the resolution of a low-resolution picture in the image domain. Transposed convolution is a mainstay of modern segmentation and super-resolution algorithms, providing versatile upsampling of abstract representations. The technical scheme provided by the application can realize rapid super-resolution image reconstruction.
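For orientation, the spatial size produced by a transposed convolution follows the usual relation n_out = (n_in − 1)·s + k − 2p when no output padding is used. The kernel size, stride and padding values in the sketch below are illustrative choices by the editor, not parameters taken from the patent.

```python
def transposed_conv_output_size(n_in, k, s, p):
    """Spatial size produced by a transposed convolution (no output padding)."""
    return (n_in - 1) * s + k - 2 * p

# Example: doubling the resolution of a 32x32 feature map with k=4, s=2, p=1.
assert transposed_conv_output_size(32, k=4, s=2, p=1) == 64
```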
The image processing method provided by the present application includes the data processing method in any embodiment of the first aspect, so the image processing method provided by the second aspect of the present application has all the advantages of the data processing method in any embodiment of the first aspect, and is not described herein.
In an embodiment of the present application, a speech processing method is also provided, as shown in fig. 10, where the speech processing method may specifically include the following steps 402 to 406:
step 402, acquiring first voice data;
step 404, performing convolution processing on the first voice data to obtain second voice data;
step 406, processing the second voice data according to the data processing method to obtain third voice data;
wherein the noise intensity of the third voice data is smaller than the noise intensity of the first voice data.
According to the voice processing method, in the process of processing voice data, first voice data to be processed are obtained, convolution processing is conducted on the first voice data, feature extraction and downsampling are conducted on the first voice data, and therefore noise intensity of the first voice data is reduced, and noise-reduced second voice data are obtained. Wherein the data size of the second voice data is smaller than the data size of the first voice data. On the basis, through the data processing method in any embodiment of the first aspect, transposed convolution processing is performed on the second voice data after noise reduction, so as to up-sample the second voice data, thereby restoring the second voice data to the original data size, and obtaining third voice data with noise intensity smaller than that of the first voice data and the data size identical to that of the first voice data. The second voice data is the first data in any one of the above embodiments, and the third voice data is the second data in any one of the above embodiments.
The voice processing method provided by the present application includes the data processing method in any embodiment of the first aspect, so the voice processing method provided by the third aspect of the present application has all the advantages of the data processing method in any embodiment of the first aspect, and is not described herein.
Illustratively, as shown in fig. 2, convolution and transpose convolution algorithms are applied to a speech noise reduction model, the input of which is audio data, and the output of which is also audio data. In the first half of the speech noise reduction model, feature extraction and downsampling are performed on input audio data through convolution processing, so that the size of the audio data is reduced, and in the second half of the speech noise reduction model, upsampling is performed on the convolved audio data through transposed convolution processing, so that the convolved audio data is restored to the original data size, and noise-reduced audio data is obtained.
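The shape bookkeeping of such an encoder-decoder noise reduction model can be sketched with the standard length formulas for convolution and transposed convolution. The kernel size, stride and padding values (k=4, s=2, p=1) and the one-second, 16 kHz input length are illustrative assumptions by the editor, not values taken from the patent.

```python
def conv1d_output_len(n_in, k, s, p):
    """Length after an ordinary 1-D convolution (feature extraction / downsampling)."""
    return (n_in + 2 * p - k) // s + 1

def tconv1d_output_len(n_in, k, s, p):
    """Length after a 1-D transposed convolution (upsampling back to the original size)."""
    return (n_in - 1) * s + k - 2 * p

n_samples = 16000                                    # e.g. one second of 16 kHz audio
down = conv1d_output_len(n_samples, k=4, s=2, p=1)   # first half of the model
up = tconv1d_output_len(down, k=4, s=2, p=1)         # second half of the model
assert down == 8000 and up == 16000                  # original data size is restored
```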
In one embodiment of the application, a data processing apparatus is also presented. As shown in fig. 13, fig. 13 shows a block diagram of a data processing apparatus 900 according to an embodiment of the present application. The data processing apparatus 900 may specifically include an acquisition unit 902 and a processing unit 904 as follows:
An acquiring unit 902, configured to acquire a first feature matrix of the first data after the convolution processing;
the processing unit 904 is configured to perform transpose convolution processing on the first feature matrix according to the convolution parameter and the target matrix address, to obtain a second feature matrix;
the processing unit 904 is further configured to determine second data according to the second feature matrix;
the target matrix address is the physical address of the second matrix element in the second feature matrix.
The data processing apparatus 900 provided in the embodiment of the present application is configured to perform a transpose convolution operation on data after convolution processing.
Specifically, in the process of performing a transpose convolution operation on first data after convolution processing, the data processing apparatus 900 provided by the present application includes an obtaining unit 902 and a processing unit 904. The first data after convolution processing is first subjected to feature extraction by the obtaining unit 902 to obtain a first feature matrix; then, according to the set convolution parameter and the physical address, i.e., the target matrix address, of each second matrix element in the second feature matrix to be output, the processing unit 904 performs the transpose convolution processing on the obtained first feature matrix to obtain a second feature matrix after the transpose convolution processing, and then determines second data according to the obtained second feature matrix, where the second data is obtained after the transpose convolution operation is performed on the first data. In this way, in the process of performing the transpose convolution operation on the first data after the convolution processing, from the perspective of the data output end, according to the physical address of each second matrix element in the second feature matrix to be output and the set convolution parameters, the element addresses to be processed in the first feature matrix are determined, that is, the matrix elements in the first feature matrix are reversely searched for performing the transpose convolution. The filling operation and the zero insertion operation in the traditional transpose convolution algorithm can thus be removed, so that invalid zero multiplication calculation in the transpose convolution calculation process can be avoided, the calculated amount in the transpose convolution calculation process is reduced, and the calculation speed and the calculation efficiency of the transpose convolution are improved.
In some embodiments of the present application, the processing unit 904 is optionally specifically configured to: determining a fourth matrix according to the third matrix and the first feature matrix; traversing the second feature matrix according to the convolution parameters; in the traversal process, a fourth matrix element in a fourth matrix is selected according to the convolution parameter and the target matrix address, and a second feature matrix is updated according to the fourth matrix element until the traversal is finished; wherein the third matrix is a convolution kernel of the deep learning model corresponding to the first data.
In the above embodiment, in the process that the processing unit 904 performs the transpose convolution processing on the obtained first feature matrix according to the set convolution parameter and the target matrix address, so as to obtain a transposed convolution processed second feature matrix, specifically, the processing unit 904 determines a fourth matrix according to the third matrix and the first feature matrix, which are convolution kernels of the deep learning model corresponding to the first data, and then performs the traversal operation on the second feature matrix according to the set convolution parameter, and in the process that the traversal operation is performed on the second feature matrix, selects a corresponding fourth matrix element from the fourth matrix according to the set convolution parameter and the target matrix address, which is a physical address of each second matrix element in the second feature matrix to be output, and then updates the element value of the corresponding second matrix element in the second feature matrix according to the selected fourth matrix element, so as to update the second feature matrix until the traversal of the second feature matrix is completed, so as to obtain the transposed convolution processed second feature matrix. In this way, in the process of performing the transpose convolution processing on the first feature matrix, the transpose convolution processing is divided into two processing flows, the fourth matrix is obtained through the third matrix and the first feature matrix, and then the transpose convolution is realized based on the fourth matrix, so that the processed second feature matrix is obtained, and the calculation parallelism in the transpose convolution calculation process can be improved, so that the calculation speed and the calculation efficiency of the transpose convolution are further improved.
In some embodiments of the present application, the processing unit 904 is optionally specifically configured to: converting the first feature matrix into a column matrix; converting the third matrix into a row matrix; a fourth matrix is determined from the product of the column matrix and the row matrix.
In the above embodiment, in the process of determining the fourth matrix by the processing unit 904 according to the third matrix and the first feature matrix, specifically, the processing unit 904 performs a conversion process on the obtained first feature matrix to obtain a corresponding column matrix, and performs a conversion process on the third matrix, which is a convolution kernel of the deep learning model corresponding to the first data, to obtain a corresponding row matrix. Further, the processing unit 904 performs multiplication calculation on the row matrix and the column matrix obtained after the conversion processing, and obtains a multi-dimensional fourth matrix based on the product of the column matrix and the row matrix. In this way, in the process of performing transpose convolution processing on the first feature matrix, the fourth matrix is obtained by multiplying the third matrix and the first feature matrix, and in the process, the multiplying computation between the third matrix and the corresponding matrix elements in the first feature matrix can be implemented in parallel, so that the computation parallelism in the transpose convolution computation process is improved, and the computation speed and the computation efficiency of the transpose convolution are further improved.
In some embodiments of the present application, optionally, the convolution parameters include a stride parameter and a fill parameter, and the processing unit 904 is specifically configured to: traversing the second feature matrix through the target frame according to the filling parameters; selecting a plurality of second matrix elements in the target frame according to the stride parameter with the target frame at each traversal position; selecting a third matrix element and a first matrix element corresponding to each second matrix element according to the physical address of each second matrix element in the plurality of second matrix elements; selecting a fourth matrix element corresponding to each second matrix element according to the third matrix element corresponding to each second matrix element and the first matrix element; the third matrix element is a matrix element in the third matrix, and the first matrix element is a matrix element in the first feature matrix.
In the above embodiment, the convolution parameters may specifically include a filling parameter and a stride parameter. The filling parameter may be used to indicate the number of elements by which the matrix edge of the first feature matrix expands outwards in the conventional transpose convolution algorithm; for example, if the filling parameter is 2, in the conventional transposed convolution algorithm each matrix edge of the first feature matrix needs to expand outwards by 2 element values, that is, 2 rings of zeros are filled around the first feature matrix. Further, the stride parameter may be used to indicate the position difference between two originally adjacent matrix elements in the first feature matrix after the zero insertion operation is performed on the first feature matrix in the conventional transposed convolution algorithm; for example, if the stride parameter is 2, after the zero insertion operation is performed on the first feature matrix, the position difference between two originally adjacent matrix elements in the first feature matrix is 2, that is, a 0 is inserted between every two adjacent matrix elements in the first feature matrix. The stride parameter may further be used to indicate the number of steps of movement for performing processing calculation on matrix elements in the first feature matrix in the transposed convolution calculation; for example, if the stride parameter is 2, in the process of performing transposed convolution calculation on the first feature matrix the number of steps of movement is 2, that is, matrix elements in the first feature matrix are processed at an interval of one element.
On this basis, in the above embodiment, in the process of performing the traversing operation on the second feature matrix by the processing unit 904 according to the set convolution parameter, specifically, the processing unit 904 performs the traversing operation on the second feature matrix through the target frame according to the set filling parameter. The size of the target frame is related to the size of the third matrix, the size of the second feature matrix, and the filling parameter; specifically, the size of the target frame satisfies the condition that, in traversing the second feature matrix by moving the target frame on the second feature matrix, the number of movable positions of the target frame on the second feature matrix is the same as the number of third matrix elements in the third matrix. That is, in the process of performing the traversal operation on the second feature matrix by the target frame, the change of the traversal position of the target frame on the second feature matrix may indicate the change of the third matrix element used for performing the transpose convolution calculation, that is, the multiply-add calculation, in the third matrix.
Further, during the traversing operation of the second feature matrix by the target frame, each time the target frame moves to a traversing position, the processing unit 904 continues the traversing operation on the second matrix elements selected by the target frame according to the set stride parameter, so as to select, from the current target frame, a plurality of second matrix elements to be updated. The change of the traversing position in the process of continuing the traversing operation on the second matrix elements selected by the target frame may indicate the change of the first matrix element used for performing the transpose convolution calculation, that is, the multiply-add calculation, in the first feature matrix. On this basis, in the process of traversing the second feature matrix, the processing unit 904 reversely selects, according to the physical address of each second matrix element in the selected plurality of second matrix elements, the first matrix element corresponding to each second matrix element from the first feature matrix and the third matrix element corresponding to each second matrix element from the third matrix. Further, after the processing unit 904 selects the third matrix element and the first matrix element corresponding to each second matrix element, a fourth matrix element for updating the element value of each second matrix element is selected from the fourth matrix according to the third matrix element and the first matrix element corresponding to each second matrix element.
In this way, in the process of performing transpose convolution calculation on the first feature matrix by the processing unit 904, the second feature matrix is traversed from the perspective of the data output end, and according to the physical address of each second matrix element in the second feature matrix to be output and the set convolution parameters, the matrix elements participating in the calculation in the third matrix and the first feature matrix are reversely selected, so that the fourth matrix elements for updating the second feature matrix are selected. On the one hand, the filling operation and the zero inserting operation in the traditional transposition convolution algorithm can be removed, so that invalid zero multiplying calculation in the transposition convolution calculation process can be avoided, the calculated amount in the transposition convolution calculation process is reduced, and the calculation speed and the calculation efficiency of transposition convolution are improved; on the other hand, the fourth matrix is determined first, and then the second feature matrix is updated through the fourth matrix elements in the fourth matrix, so that the calculation parallelism in the transpose convolution calculation process can be improved, and the calculation speed and the calculation efficiency of the transpose convolution are further improved.
In some embodiments of the present application, the processing unit 904 is optionally specifically configured to: setting the initial value of a second matrix element in the second feature matrix to be zero; in the traversal process, the element values of the fourth matrix element corresponding to each second matrix element are accumulated, and the element values of the corresponding second matrix elements are updated according to the accumulated values.
In the above embodiment, in the process of updating the second feature matrix by the processing unit 904 according to the selected fourth matrix element, specifically, the processing unit 904 sets the initial value of each second matrix element in the second feature matrix to zero, and in the process of traversing the second feature matrix through the target frame and continuing traversing the selected second matrix element in the target frame according to the set stride parameter, each time the second feature matrix is traversed to one second matrix element, the element value of the fourth matrix element corresponding to the second matrix element is added to the element value of the second matrix element, so as to update the element value of the second matrix element until the second feature matrix is traversed, and the updated second feature matrix is obtained. In this way, in the process of performing the transpose convolution processing on the first feature matrix by the processing unit 904, the transpose convolution processing is divided into two processing flows, and a fourth matrix is obtained by multiplication, where the fourth matrix covers all multiplication results between the first matrix element in the first feature matrix and the third matrix element in the third matrix, and then the final transpose convolution result is obtained by adding the fourth matrix element in the fourth matrix. In this way, the multiplication and addition operation in the transposed convolution algorithm is separated into the multiplication operation and the addition operation, so that the calculation parallelism in the transposed convolution calculation process can be improved, and the calculation speed and the calculation efficiency of the transposed convolution are further improved.
In one embodiment of the present application, an image processing apparatus is also presented. As shown in fig. 11, fig. 11 shows a block diagram of the image processing apparatus 500 of the embodiment of the present application. The image processing apparatus 500 may specifically include an acquisition unit 502 and a processing unit 504 as follows:
an acquisition unit 502 for acquiring first image data;
a processing unit 504, configured to process the first image data according to the data processing method in any of the above first aspects to obtain second image data;
wherein the resolution of the second image data is greater than the resolution of the first image data.
According to the image processing device 500 provided by the application, in the process of processing an image, the acquiring unit 502 acquires first image data to be processed, and the processing unit 504 processes the acquired first image data according to the data processing method in any embodiment of the first aspect, so as to improve the resolution of the first image data and obtain second image data with higher resolution. The image processing apparatus 500 provided by the present application can implement the steps of the data processing method in any embodiment of the first aspect, so that the image processing apparatus 500 provided by the fifth aspect of the present application has all the advantages of the data processing method in any embodiment of the first aspect, and is not described herein.
In one embodiment of the present application, a speech processing apparatus is also presented. As shown in fig. 12, fig. 12 shows a block diagram of a voice processing apparatus 600 according to an embodiment of the present application. The speech processing device 600 may specifically include an acquisition unit 602 and a processing unit 604 as follows:
an acquisition unit 602, configured to acquire first voice data;
a processing unit 604, configured to perform convolution processing on the first voice data to obtain second voice data;
the processing unit 604 is further configured to process the second voice data according to the data processing method in any one of the above first aspect, so as to obtain third voice data;
wherein the noise intensity of the third voice data is smaller than the noise intensity of the first voice data.
According to the voice processing device 600 provided by the application, in the process of processing voice data, the first voice data to be processed is acquired through the acquisition unit 602, and the first voice data is subjected to convolution processing through the processing unit 604, so that the feature extraction and downsampling are performed on the first voice data, and the noise intensity of the first voice data is reduced, and the noise-reduced second voice data is obtained. Wherein the data size of the second voice data is smaller than the data size of the first voice data. On the basis, the processing unit 604 performs a transpose convolution process on the noise-reduced second voice data by using the data processing method in any one of the embodiments of the first aspect, so as to up-sample the second voice data, thereby restoring the second voice data to the original data size, and obtaining the third voice data with the noise intensity smaller than that of the first voice data and the data size identical to that of the first voice data. The voice processing apparatus 600 provided by the present application can implement the data processing method in any embodiment of the first aspect, so the voice processing apparatus 600 provided by the sixth aspect of the present application has all the advantages of the data processing method in any embodiment of the first aspect, and is not described herein.
In one embodiment of the application, another data processing apparatus is also presented. As shown in fig. 14, fig. 14 shows a block diagram of a data processing apparatus 1000 according to an embodiment of the present application. Wherein the data processing apparatus 1000 comprises:
a memory 1002, on which a program or instructions are stored on the memory 1002;
processor 1004, the steps of the data processing method in any of the embodiments described above are implemented when the processor 1004 executes the above-described programs or instructions.
The data processing apparatus 1000 provided in this embodiment includes a memory 1002 and a processor 1004, and when the program or the instructions in the memory 1002 are executed by the processor 1004, the steps of the data processing method in any of the foregoing embodiments are implemented, so that the data processing apparatus 1000 has all the advantages of the data processing method in any of the foregoing embodiments, which are not described herein again.
It can be understood that, in the process of performing calculation based on the transpose convolution operator, the padding and zero-insertion operations enlarge the feature map and therefore increase the calculation amount. The application provides a forward-inference acceleration scheme for the transposed convolution operator, aiming at improving the calculation parallelism of the operator and further improving the overall efficiency.
The technical scheme provided by the application converts the transposed convolution calculation into a vector-matrix multiplication operation, followed by an addition stage that restores the converted matrix into the output array. In this way, a large number of invalid multiply-by-zero calculations are avoided, efficient parallel calculation can be performed in the vector-matrix multiplication stage, and the overall efficiency is improved. In addition, the technical scheme provided by the application is based on the inverse view of the transposed convolution operation: the corresponding data of the input matrix is reversely looked up from the data of the output matrix to carry out floating-point multiply-add calculation, so that invalid multiply-by-zero calculations can be avoided. That is, no padding or zero-insertion operations need to be performed on the input matrix; instead, the data addresses of the input matrix are determined directly from the padding parameters and stride parameters, and the output matrix is calculated from them. In actual service, the technical scheme provided by the application reduces the time consumption of a transposed convolution operator in a voice noise reduction model from 4.565 ms to 0.55 ms, and improves the speed by 356.5%.
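As a concrete illustration of determining input data addresses directly from the padding and stride parameters, the following sketch enumerates, for a single output position, the (input index, kernel index) pairs that contribute. The index relation is the editor's assumed formulation of the inverse lookup, not a formula quoted from the patent.

```python
def input_addresses(o, k, s, p, n_in):
    """List the (input_index, kernel_index) pairs contributing to output position o,
    determined directly from the filling parameter p and the stride parameter s."""
    pairs = []
    for q in range(k):
        num = o + p - q
        if num % s == 0 and 0 <= num // s < n_in:
            pairs.append((num // s, q))
    return pairs

# With k=3, s=1, p=0 and a 3-element input, the corner output position needs only
# one product (cf. the 1x1=1 example above), instead of nine multiply-adds.
assert input_addresses(0, k=3, s=1, p=0, n_in=3) == [(0, 0)]
```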
The technical scheme provided by the application can be applied to different edge-side systems such as Linux/RTOS/Android/iOS, and provides instruction-level acceleration for different edge platforms such as ARMv7/v8 and DSP. The technical scheme of the application has the characteristics of lightweight deployment, strong universality, ease of use, high-performance inference and the like, addresses the low-resource bottleneck of intelligent devices, greatly shortens the AI model deployment period, and reaches the industry-leading level in the field of edge-side AI deployment. In addition, the technical scheme provided by the application can be applied to a self-developed chip, for example, the industry's first three-in-one chip, FL119, supporting voice, connectivity and display. The related achievements have enabled the mass-production deployment of smart home appliances such as voice refrigerators, air conditioners and robots, improving both the intelligence and the working efficiency of these smart household appliances.
In particular, the memory 1002 and the processor 1004 may be connected by a bus or other means. The processor 1004 may include one or more processing units, and the processor 1004 may be a central processing unit (Central Processing Unit, CPU), a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA), or the like.
In one embodiment of the application, an electronic device is also presented. As shown in fig. 15, fig. 15 shows a block diagram of an electronic device 1100 according to an embodiment of the present application. The electronic device 1100 includes the data processing apparatus 1000 in the above embodiment. Therefore, the electronic device 1100 has all the technical effects of the data processing apparatus 1000 in the above embodiment, and will not be described herein.
In one embodiment of the present application, a readable storage medium is also presented. On which a program or an instruction is stored which, when executed by a processor, implements the steps of the data processing method in any of the above embodiments, or which, when executed by a processor, implements the steps of the image processing method in the above embodiments, or which, when executed by a processor, implements the steps of the speech processing method in the above embodiments.
The readable storage medium according to the embodiment of the present application may implement the steps of the data processing method according to any of the above embodiments when the stored program or instructions are executed by a processor. Therefore, the readable storage medium has all the advantages of the data processing method in any of the above embodiments, or the image processing method in the above embodiments, or the voice processing method in the above embodiments, and will not be described in detail.
In particular, the above-described readable storage medium may include any medium capable of storing or transmitting information. Examples of readable storage media include electronic circuitry, semiconductor memory devices, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), flash memory, erasable ROM (EROM), magnetic tape, floppy disk, optical disk, hard disk, fiber optic media, radio frequency (RF) links, optical data storage devices, and the like. The code segments may be downloaded via computer networks such as the Internet or an intranet.
In an embodiment of the application, a computer program product is also proposed, which comprises a computer program which, when being executed by a processor, implements a data processing method as in any of the embodiments described above, or which, when being executed by a processor, implements an image processing method as in the embodiments described above, or which, when being executed by a processor, implements a speech processing method as in the embodiments described above. Therefore, the computer program product according to the present application has all the advantages of the data processing method in any of the above embodiments, or the computer program product according to the present application has all the advantages of the image processing method in the above embodiments, or the computer program product according to the present application has all the advantages of the voice processing method in the above embodiments, which are not described herein.
In an embodiment of the present application, there is also provided a chip including a program or instructions for implementing the steps of the data processing method in any of the above embodiments when the chip is running, or for implementing the steps of the image processing method in the above embodiments when the chip is running, or for implementing the steps of the voice processing method in the above embodiments when the chip is running. Therefore, the chip provided by the present application has all the advantages of the data processing method in any of the above embodiments, or the chip provided by the present application has all the advantages of the image processing method in the above embodiments, or the chip provided by the present application has all the advantages of the voice processing method in the above embodiments, which are not described herein.
In an actual application process, when the voice noise reduction model is deployed based on the chip provided by the application, the calculation time of a transposed convolution operator in the voice noise reduction model can be reduced from 4.565 ms to 0.55 ms, and the inference speed of the operator is increased by 356.5%.
In the description of the present specification, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance unless explicitly specified and limited otherwise; the terms "coupled," "mounted," "secured," and the like are to be construed broadly, and may be fixedly coupled, detachably coupled, or integrally connected, for example; can be directly connected or indirectly connected through an intermediate medium. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
In the description of the present specification, the terms "one embodiment," "some embodiments," "particular embodiments," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In addition, the technical solutions of the embodiments of the present application may be combined with each other, but it is necessary to be based on the fact that those skilled in the art can implement the technical solutions, and when the technical solutions are contradictory or cannot be implemented, the combination of the technical solutions should be considered as not existing, and not falling within the scope of protection claimed by the present application.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (14)
1. An image processing method, comprising:
acquiring first image data;
acquiring a first feature matrix of the convolved first image data;
performing transpose convolution processing on the first feature matrix according to a target matrix address and convolution parameters to obtain a second feature matrix, wherein the target matrix address is a physical address of a second matrix element in the second feature matrix;
determining second image data according to the second feature matrix;
Wherein the resolution of the second image data is greater than the resolution of the first image data;
and performing transpose convolution processing on the first feature matrix according to the target matrix address and the convolution parameter to obtain a second feature matrix, wherein the transpose convolution processing comprises the following steps:
determining a fourth matrix according to the first feature matrix and a third matrix, wherein the third matrix is a convolution kernel of a deep learning model corresponding to the first image data;
traversing the second feature matrix according to the convolution parameters;
in the traversal process, a fourth matrix element in the fourth matrix is selected according to the target matrix address and the convolution parameter, and the second feature matrix is updated according to the fourth matrix element until the traversal is finished.
2. The image processing method according to claim 1, wherein the determining a fourth matrix from the first feature matrix and the third matrix includes:
converting the first feature matrix into a column matrix;
converting the third matrix into a row matrix;
the fourth matrix is determined from the product of the column matrix and the row matrix.
3. The image processing method according to claim 1 or 2, wherein the convolution parameters include a stride parameter and a filling parameter, and the traversing the second feature matrix according to the convolution parameters includes:
Traversing the second feature matrix through a target frame according to the filling parameters;
in the traversal process, according to the target matrix address and the convolution parameter, selecting a fourth matrix element in the fourth matrix includes:
selecting a plurality of second matrix elements in the target frame according to the stride parameter with the target frame at each traversal position;
selecting a first matrix element in the first feature matrix and a third matrix element in the third matrix corresponding to each second matrix element according to the physical address of each second matrix element in the plurality of second matrix elements;
and selecting the fourth matrix element corresponding to each second matrix element according to the first matrix element and the third matrix element corresponding to each second matrix element.
4. The image processing method according to claim 3, wherein the updating the second feature matrix based on the fourth matrix element includes:
setting the initial value of the second matrix element in the second feature matrix to be zero;
and accumulating the element values of the fourth matrix element corresponding to each second matrix element in the traversal process, and updating the element values of the corresponding second matrix elements according to the accumulated value.
5. The image processing method according to claim 1, wherein the performing transpose convolution on the first feature matrix according to the target matrix address and the convolution parameter to obtain a second feature matrix includes:
determining the second feature matrix based on a target operator according to the target matrix address, the convolution parameter and the first feature matrix;
wherein the target operator comprises at least one of: transpose convolution operator, neural network operator.
6. A method of speech processing, comprising:
acquiring first voice data;
performing convolution processing on the first voice data to obtain second voice data;
acquiring a first feature matrix of the second voice data after convolution processing;
performing transpose convolution processing on the first feature matrix according to a target matrix address and convolution parameters to obtain a second feature matrix, wherein the target matrix address is a physical address of a second matrix element in the second feature matrix;
determining third voice data according to the second feature matrix;
wherein the noise intensity of the third voice data is smaller than the noise intensity of the first voice data;
And performing transpose convolution processing on the first feature matrix according to the target matrix address and the convolution parameter to obtain a second feature matrix, wherein the transpose convolution processing comprises the following steps:
determining a fourth matrix according to the first feature matrix and a third matrix, wherein the third matrix is a convolution kernel of a deep learning model corresponding to the second voice data;
traversing the second feature matrix according to the convolution parameters;
in the traversal process, a fourth matrix element in the fourth matrix is selected according to the target matrix address and the convolution parameter, and the second feature matrix is updated according to the fourth matrix element until the traversal is finished.
7. The method of claim 6, wherein determining a fourth matrix from the first feature matrix and the third matrix comprises:
converting the first feature matrix into a column matrix;
converting the third matrix into a row matrix;
the fourth matrix is determined from the product of the column matrix and the row matrix.
8. The method according to claim 6 or 7, wherein the convolution parameters include a stride parameter and a filling parameter, and traversing the second feature matrix according to the convolution parameters includes:
Traversing the second feature matrix through a target frame according to the filling parameters;
in the traversal process, according to the target matrix address and the convolution parameter, selecting a fourth matrix element in the fourth matrix includes:
selecting a plurality of second matrix elements in the target frame according to the stride parameter with the target frame at each traversal position;
selecting a first matrix element in the first feature matrix and a third matrix element in the third matrix corresponding to each second matrix element according to the physical address of each second matrix element in the plurality of second matrix elements;
and selecting the fourth matrix element corresponding to each second matrix element according to the first matrix element and the third matrix element corresponding to each second matrix element.
9. The method according to claim 8, wherein the updating the second feature matrix according to the fourth matrix element includes:
setting the initial value of the second matrix element in the second feature matrix to be zero;
and accumulating the element values of the fourth matrix element corresponding to each second matrix element in the traversal process, and updating the element values of the corresponding second matrix elements according to the accumulated value.
10. The method for processing speech according to claim 6, wherein the performing transpose convolution on the first feature matrix according to the target matrix address and the convolution parameter to obtain a second feature matrix includes:
determining the second feature matrix based on a target operator according to the target matrix address, the convolution parameter and the first feature matrix;
wherein the target operator comprises at least one of: transpose convolution operator, neural network operator.
11. An image processing apparatus, comprising:
an acquisition unit configured to acquire first image data;
a processing unit for processing the first image data according to the method of any one of claims 1 to 5, resulting in second image data;
wherein the resolution of the second image data is greater than the resolution of the first image data.
12. A speech processing apparatus, comprising:
an acquisition unit configured to acquire first voice data;
the processing unit is used for carrying out convolution processing on the first voice data to obtain second voice data;
the processing unit is further configured to process the second voice data according to the method of any one of claims 6 to 10, to obtain third voice data;
Wherein the noise intensity of the third voice data is smaller than the noise intensity of the first voice data.
13. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, realizes the steps of the image processing method according to any one of claims 1 to 5, or which, when executed by a processor, realizes the steps of the speech processing method according to any one of claims 6 to 10.
14. A chip comprising a program or instructions for implementing the steps of the image processing method according to any one of claims 1 to 5 when the chip is running or for implementing the steps of the speech processing method according to any one of claims 6 to 10 when the chip is running.
Priority Applications (1)
Application Number: CN202310906292.4A (CN116629321B) | Priority Date: 2023-07-24 | Filing Date: 2023-07-24 | Title: Data processing method, voice processing device, medium and chip
Publications (2)
Publication Number | Publication Date
CN116629321A | 2023-08-22
CN116629321B | 2023-10-03
Family
ID=87590629
Legal Events
Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant