CN113902928A - Image feature extraction method and device and electronic equipment - Google Patents
Image feature extraction method and device and electronic equipment
- Publication number
- CN113902928A (application CN202010643501.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- conversion
- point
- matrix
- calculation
- Prior art date
- Legal status: Pending
Classifications
- G06F17/15—Correlation function computation including computation of convolution operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06F7/57—Arithmetic logic units [ALU], i.e. arrangements or devices for performing two or more of the operations covered by groups G06F7/483 – G06F7/556 or for performing logical operations
Abstract
The invention provides an image feature extraction method, an image feature extraction device and electronic equipment, relating to the technical field of image processing. The method comprises the following steps: acquiring an image to be processed and a preset convolution kernel, where the image to be processed comprises an original image or a feature map and the data types of the image to be processed and the convolution kernel are fixed-point types; and performing convolution processing on the image to be processed based on the convolution kernel to obtain a convolution processing result for representing the image features of the image to be processed, where the data type of the convolution processing result is a fixed-point type and the convolution processing involves floating-point calculation and conversion between fixed point and floating point. The invention can effectively improve the precision and speed of feature extraction.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image feature extraction method and apparatus, and an electronic device.
Background
Deep learning methods are widely applied in the field of Computer Vision (CV), where convolution, as an important operator of deep learning methods, is mainly used for image feature extraction. To accelerate the calculation, quantization is usually adopted in the convolution processing, that is, the data involved in the convolution processing is quantized into fixed-point values, and the subsequent data processing then uses integer calculation. The inventor's research found that when most existing processors perform feature extraction in this manner, the feature extraction loses precision and remains slow.
Disclosure of Invention
In view of the above, the present invention provides an image feature extraction method, an image feature extraction device, and an electronic device, which can effectively improve the accuracy and speed of feature extraction.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides an image feature extraction method, including: acquiring an image to be processed and a preset convolution kernel; the image to be processed comprises an original image or a feature map, and the data types of the image to be processed and the convolution kernel are fixed-point types; performing convolution processing on the image to be processed based on the convolution kernel to obtain a convolution processing result for representing the image features of the image to be processed; wherein the data type of the convolution processing result is a fixed-point type; the convolution processing involves floating-point calculation and conversion between fixed point and floating point.
Further, the step of performing convolution processing on the image to be processed based on the convolution kernel to obtain a convolution processing result for characterizing the image features of the image to be processed includes: performing input conversion on the image to be processed by adopting a preset input conversion matrix to obtain an input conversion result; performing weight conversion on the convolution kernel by adopting a preset weight conversion matrix to obtain a weight conversion result; performing batch matrix multiplication on the input conversion result and the weight conversion result to obtain a batch matrix multiplication result; and performing output conversion on the batch matrix multiplication result by adopting a preset output conversion matrix to obtain a convolution processing result for representing the image characteristics of the image to be processed.
Further, the data type of the input conversion matrix is a fixed point type or a floating point type; the input conversion calculation mode is fixed point calculation or floating point calculation; the data type of the weight conversion matrix is a fixed point type or a floating point type; the calculation mode of the weight conversion is fixed-point calculation or floating-point calculation; the calculation mode of the batch matrix multiplication is fixed-point calculation or floating-point calculation; the data type of the output conversion matrix is a fixed point type or a floating point type; the calculation mode of the output conversion is fixed-point calculation or floating-point calculation; and the data type of at least one of the input conversion matrix, the weight conversion matrix and the output conversion matrix is a floating point type, and/or the calculation mode of at least one of the input conversion, the weight conversion, the batch matrix multiplication and the output conversion is floating point calculation.
Further, the calculation mode of the batch matrix multiplication is floating-point calculation.
Further, the fixed-point types of the image to be processed, the convolution kernel and the convolution processing result are all int8; and if a target matrix whose data type is a fixed-point type exists among the input conversion matrix, the weight conversion matrix and the output conversion matrix, the fixed-point type of the target matrix is int32.
Further, the values in the input conversion matrix, the weight conversion matrix and the output conversion matrix are all of the form M/2^n, wherein M is an integer and n is a natural number.
Further, the value range of n is [0,3 ].
In a second aspect, an embodiment of the present invention further provides an apparatus for extracting image features, including: an acquisition module, configured to acquire an image to be processed and a preset convolution kernel, where the image to be processed comprises an original image or a feature map, and the data types of the image to be processed and the convolution kernel are fixed-point types; and a convolution processing module, configured to perform convolution processing on the image to be processed based on the convolution kernel to obtain a convolution processing result for representing the image features of the image to be processed; wherein the data type of the convolution processing result is a fixed-point type; the convolution processing involves floating-point calculation and conversion between fixed point and floating point.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a storage device; the storage device stores a computer program which, when executed by the processor, performs the method according to any one of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method according to any one of the above-mentioned first aspect.
The embodiment of the invention provides an image feature extraction method, an image feature extraction device and electronic equipment. The method flexibly performs conversion between fixed point and floating point during the convolution processing and adopts floating-point calculation, at which most processors excel. Because the floating-point calculation speed of most processors is far higher than their integer calculation speed, this can fully exert the optimal computing performance of the processor and improve the convolution processing speed, that is, effectively improve the feature extraction speed. In addition, the fixed-point integer calculation adopted for convolution processing in the prior art may cause precision loss, whereas a calculation mode based on floating-point calculation can effectively avoid such loss; compared with the prior art, the floating-point convolution processing adopted in this embodiment can therefore effectively improve the precision of the convolution processing, that is, further improve the feature extraction precision. In summary, the manner provided by the embodiment can effectively improve the speed and accuracy of feature extraction.
Additional features and advantages of embodiments of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of embodiments of the invention as set forth above.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart illustrating an image feature extraction method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a convolution process provided by an embodiment of the present invention;
Fig. 4 is a diagram illustrating a specific convolution process provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram illustrating an apparatus for extracting image features according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, not all, embodiments of the present invention.
At present, convolution computation performance has a great influence on the computational complexity of deep learning methods, and this complexity directly determines whether a deep learning method can run on common equipment and be used commercially. To accelerate convolution, the data involved in the convolution calculation (such as the feature map and/or the convolution kernel) can be quantized, and the convolution calculation can be accelerated with the im2col algorithm or the int8 Winograd algorithm. Taking the int8 Winograd algorithm as an example, the data types of the feature map and the convolution kernel involved are both int8 (8-bit fixed-point integer). The convolution processing usually involves converting the feature map and the convolution kernel, the data obtained during these conversions is also of integer types such as int16 or int32, the finally obtained convolution result is converted back to the int8 type, and all of these conversions use integer calculation.

In addition, since the conversion process carries an overflow risk, the bit width of the convolution kernel generally cannot be set to the full 8 bits; the overflow risk is reduced as much as possible by setting a lower bit width (such as 6 bits), but reducing the bit width lowers the accuracy of the convolution processing and thus affects its precision. Moreover, most processors, such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit) or an ARM (Advanced RISC Machines) processor, are better at floating-point calculation and slower at integer calculation; some of them even process floating-point calculation at more than twice the speed of integer calculation. Integer calculation therefore cannot exert the optimal computing performance of the processor, which affects the speed of the convolution processing to some extent and ultimately the speed of image feature extraction.

In order to improve on the above problems, embodiments of the present invention provide an image feature extraction method, an image feature extraction device and an electronic device. The technique is applicable to situations where image features need to be extracted by convolution operations, such as image recognition. The embodiments of the present invention are described in detail below.
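For illustration only (not part of the original disclosure), the following minimal Python sketch shows the kind of symmetric int8 quantization and inverse quantization commonly used to accelerate convolution; the scale values and function names are assumptions.

```python
import numpy as np

def quantize_int8(x, scale):
    # floating point -> 8-bit fixed-point integer: q = round(x / scale), clipped to the int8 range
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    # 8-bit fixed-point integer -> floating point: x is approximately q * scale
    return q.astype(np.float32) * scale

feature_map = np.random.randn(8, 8).astype(np.float32)   # example input feature map
kernel = np.random.randn(3, 3).astype(np.float32)        # example 3x3 convolution kernel
fm_q = quantize_int8(feature_map, scale=0.05)             # assumed per-tensor scale
k_q = quantize_int8(kernel, scale=0.02)                   # assumed per-tensor scale
```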
Example one:
First, an example electronic device 100 for implementing the image feature extraction method and apparatus according to an embodiment of the present invention is described with reference to fig. 1.
As shown in fig. 1, an electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are only exemplary and not limiting, and the electronic device may have some of the components shown in fig. 1 and may also have other components and structures not shown in fig. 1, as desired.
The processor 102 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA) or a Programmable Logic Array (PLA). The processor 102 may be one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU) or other forms of processing units having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
Exemplary electronic devices for implementing the image feature extraction method and apparatus according to the embodiments of the present invention may be implemented as smart terminals, such as a smart phone, a tablet computer, a computer, etc., in which floating-point computing power is stronger than fixed-point computing power.
Example two:
referring to a flow diagram of an image feature extraction method shown in fig. 2, the method is applied to the electronic device, and specifically may be executed by a processor in the electronic device, and the method mainly includes the following steps S202 to S204:
Step S202, acquiring an image to be processed and a preset convolution kernel.
The image to be processed comprises an original image or a feature map, and the data types of the image to be processed and the convolution kernel are fixed-point types. The original image may be an initial image obtained by shooting with an image acquisition device, network downloading, local storage or manual uploading, for example an RGB image, and the feature map may be a next-layer feature map obtained by performing a convolution operation on the initial image or on an intermediate feature map. The fixed-point type may include the int8 type, the int16 type, the int32 type, etc.; the number of bits is not limited here. Taking the int8 type as an example, int8 refers to an 8-bit fixed-point integer type. The convolution kernel (i.e., the weight) is the operator used to perform convolution processing on the image to be processed.
Step S204, performing convolution processing on the image to be processed based on the convolution kernel to obtain a convolution processing result for representing the image features of the image to be processed. The convolution processing result may also be referred to as an output feature map.
The data type of the convolution processing result is a fixed-point type, and the convolution processing involves floating-point calculation and conversion between fixed point and floating point. A floating-point (i.e., floating-point type) value usually consists of a significand and an exponent with certain numbers of bits, and the floating-point type may include the FP16 type, the FP32 type, etc. In an embodiment, the image to be processed and the convolution kernel may each be converted to obtain their respective conversion results, the conversion result of the image to be processed and the conversion result of the convolution kernel may be subjected to batch matrix multiplication to obtain a batch matrix multiplication result, and the batch matrix multiplication result may be converted to obtain a fixed-point convolution processing result. At least one conversion step in the convolution processing adopts floating-point calculation, and conversion between fixed point and floating point may also be involved around the floating-point calculation: if data required by a floating-point calculation is of a fixed-point type, it is first converted to floating point before the floating-point calculation is performed, and if the obtained convolution processing result is of a floating-point type, it is converted to a fixed-point type.
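As a hedged sketch of the conversion pattern just described (convert fixed point to floating point before a floating-point calculation, and convert the floating-point result back to fixed point afterwards), the helper below is an illustrative assumption rather than the patent's implementation.

```python
import numpy as np

def run_in_floating_point(q_in, in_scale, out_scale, float_op):
    """Apply float_op with floating-point calculation to fixed-point (int8) input."""
    x = q_in.astype(np.float32) * in_scale               # fixed point -> floating point
    y = float_op(x)                                       # floating-point calculation
    q_out = np.clip(np.round(y / out_scale), -128, 127)   # floating point -> fixed point
    return q_out.astype(np.int8)

# Usage: double the values of an int8 tile entirely in floating point.
tile = np.random.randint(-64, 64, size=(4, 4), dtype=np.int8)
out = run_in_floating_point(tile, in_scale=0.1, out_scale=0.2, float_op=lambda x: 2.0 * x)
```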
According to the above image feature extraction method, conversion between fixed point and floating point is performed flexibly during the convolution processing, and floating-point calculation, at which most processors excel, is adopted; since the floating-point calculation speed of most processors is far higher than their integer calculation speed, this can exert the optimal computing performance of the processor and improve the convolution processing speed, that is, the feature extraction speed. In addition, the fixed-point integer calculation adopted for convolution processing in the prior art may cause precision loss, whereas a calculation mode based on floating-point calculation can effectively avoid such loss; compared with the prior art, the floating-point convolution processing adopted in this embodiment can therefore effectively improve the precision of the convolution processing, that is, further improve the feature extraction precision. In summary, the manner provided by the embodiment can effectively improve the speed and accuracy of feature extraction.
To facilitate understanding of the step S204, an embodiment of the present invention provides a specific implementation manner of performing convolution processing on an image to be processed based on a convolution kernel to obtain a convolution processing result for characterizing an image feature of the image to be processed, referring to a schematic diagram of a calculation process of a convolution processing result shown in fig. 3, where the calculation process includes the following steps 1 to 4:
step 1, performing input conversion on an image to be processed by adopting a preset input conversion matrix to obtain an input conversion result. The fixed-point type of the image to be processed is int8, the input conversion matrix may be understood as a coefficient matrix for performing input conversion on the image to be processed, the data type of the input conversion matrix is a fixed-point type or a floating-point type, and the calculation mode of the input conversion is fixed-point calculation or floating-point calculation. In one embodiment, assuming that the data type of the input conversion matrix is a floating point type and the calculation method of the input conversion is a floating point calculation, the transpose B of the input conversion matrix B can be calculated by using the calculation method of the floating point calculationTProduct of the image d to be processed and the input transformation matrix B, the transposition B of the input transformation matrix BTInputting the conversion matrix B into the image processing system, wherein the data type of the conversion matrix B is a floating point type, the data type of the image d to be processed is a fixed point type, and multiplying the product BTdB is determined as the input conversion result whose data type is the floating point type. Due to the above process, the image to be processed and the inputThe data types of the conversion matrixes are different, in order to implement floating point calculation on the images to be processed of different data types and the input conversion matrix, inverse quantization processing needs to be performed on the images to be processed of the fixed point type to convert the data types of the images to be processed from the fixed point type to the floating point type, and then floating point calculation is performed on the images to be processed of the same floating point type and the input conversion matrix, for example, the data type of the images to be processed is int8, and in order to convert the images to be processed into the FP32 type, the inverse quantization processing may be: FP32 int8/2(3). In addition, the embodiment of the invention exemplarily provides a transpose B of an input conversion matrix BTAs follows:
and 2, performing weight conversion on the convolution kernel by adopting a preset weight conversion matrix to obtain a weight conversion result. The fixed-point type of the convolution kernel is int8, the weight conversion matrix may be understood as a matrix coefficient for performing weight conversion on the convolution kernel, the data type of the weight conversion matrix is a fixed-point type or a floating-point type, and the calculation mode of the weight conversion is fixed-point calculation or floating-point calculation. In one embodiment, assuming that the data type of the weight transformation matrix is a floating-point type and the calculation method of the weight transformation is floating-point calculation, the calculation method of the floating-point calculation may be used to calculate the weight transformation matrix G, the convolution kernel G, and the transpose G of the weight transformation matrix GTWherein the transpose G of the weight conversion matrix GTThe data type of the sum weight conversion matrix G is a floating point type, the data type of the convolution kernel is a fixed point type, and the product GgG is obtainedTAnd determining the weight conversion result, wherein the data type of the weight conversion result is a floating point type. In the above process, since the convolution kernels and the weight conversion matrices have different data types, in order to perform floating-point calculation on the convolution kernels and the weight conversion matrices of different data types, inverse quantization processing needs to be performed on the convolution kernels of fixed-point type to convert the data types of the convolution kernels into the fixed-point types, and then floating is performed on the convolution kernels and the weight conversion matrices of the same floating-point typeAnd (4) point calculation. In addition, an embodiment of the present invention exemplarily provides a weight conversion matrix G as follows:
and 3, carrying out batch matrix multiplication on the input conversion result and the weight conversion result to obtain a batch matrix multiplication result. The calculation mode of the batch matrix multiplication is fixed-point calculation or floating-point calculation, the batch matrix multiplication is also the whole of the matrix multiplication of a plurality of batches, and the batch matrix multiplication result can be expressed as: [ GgGT] ⊙[BTdB]. In one embodiment, in consideration of the large amount of computation involved in performing the batch matrix multiplication, it is preferable that the embodiment of the present invention may use floating-point computation for the batch matrix multiplication, and may use floating-point computation or fixed-point computation for the input conversion, the weight conversion, and the output conversion, and the data types of the input conversion matrix, the weight conversion matrix, and the output conversion matrix may be floating-point type or fixed-point type.
Step 4, performing output conversion on the batch matrix multiplication result by adopting a preset output conversion matrix to obtain a convolution processing result (also called an output result) for representing the image features of the image to be processed. The output conversion matrix may be understood as a coefficient matrix for performing output conversion on the batch matrix multiplication result; its data type is a fixed-point type or a floating-point type, and the calculation mode of the output conversion is fixed-point calculation or floating-point calculation. In this embodiment, the fixed-point type of the finally obtained convolution processing result is int8. In one embodiment, assuming that the data type of the output conversion matrix is a floating-point type and the calculation mode of the output conversion is floating-point calculation, the product of the transpose A^T of the output conversion matrix A, the batch matrix multiplication result [G g G^T] ⊙ [B^T d B] and the output conversion matrix A can be computed using floating-point calculation, where the data types of A^T and A are floating-point types and the data type of the batch matrix multiplication result is a fixed-point type or a floating-point type; the product A^T {[G g G^T] ⊙ [B^T d B]} A is determined as the convolution processing result Y. In addition, the embodiment of the invention provides an exemplary transpose A^T of the output conversion matrix A.
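A hedged sketch of the output conversion step follows, again with the standard Winograd F(2x2, 3x3) matrix A^T as an assumed stand-in and an assumed output quantization scale.

```python
import numpy as np

# Transpose A^T of an assumed output conversion matrix A (standard Winograd F(2x2, 3x3)).
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=np.float32)

def output_transform(m_tile, out_scale=0.1):
    """Floating-point output conversion A^T m A of one 4x4 batch multiplication result,
    followed by conversion of the 2x2 floating-point output tile to int8."""
    y = A_T @ m_tile @ A_T.T                               # A^T m A (floating point)
    return np.clip(np.round(y / out_scale), -128, 127).astype(np.int8)

m_tile = np.random.randn(4, 4).astype(np.float32)           # one batch multiplication result
y_int8 = output_transform(m_tile)                            # int8 convolution processing result (2x2)
```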
in view of the fact that in the prior art, when data is converted by using a conversion matrix, a conversion result containing an infinite number of bits may be obtained, and a part of the number of bits in the infinite number of bits generally needs to be discarded during data processing, thereby causing a loss of precision of the conversion result, in one embodiment, the values in the input conversion matrix, the weight conversion matrix and the output conversion matrix set in the present embodiment may be expressed asWherein M is an integer and n is a natural number. The embodiment of the invention sets the value in the conversion matrix to beThe decimal part of the numerical value can be made intoThe method has the advantages that infinite decimals do not appear in the conversion result when the conversion matrix is used for converting data, for example, when the input conversion matrix, the feature diagram to be processed and the transpose of the input conversion matrix are subjected to multiplication, infinite decimals do not exist in the input conversion result, and similarly, infinite decimals do not exist in the weight conversion result, no matter whether the input conversion result or the weight conversion result is of a fixed-point type or a floating-point type, numerical value precision is not influenced due to the limitation of numerical value digits, precision loss is avoided, and batch matrix multiplication is carried outIt also makes the data easier to process. In summary, by setting the values in the input conversion matrix, the weight conversion matrix and the output conversion matrix toThe difficulty of data processing can be effectively reduced, and the precision loss in the data processing process can be effectively reduced. In practical applications, in order to further facilitate data processing, it is preferable that n has a value range of [0,3]]。
Further, steps 1 to 4 above must jointly satisfy the following: the data type of at least one of the input conversion matrix, the weight conversion matrix and the output conversion matrix is a floating-point type, and/or the calculation mode of at least one of the input conversion, the weight conversion, the batch matrix multiplication and the output conversion is floating-point calculation. For example, the calculation mode of the batch matrix multiplication may be floating-point calculation while the calculation modes of the input conversion, the weight conversion and the output conversion may be floating-point or fixed-point calculation and the data types of the input conversion matrix, the weight conversion matrix and the output conversion matrix may be floating-point or fixed-point types; or the data type of the input conversion matrix may be a floating-point type and the calculation mode of the input conversion may be floating-point calculation, while the data types of the weight conversion matrix and the output conversion matrix may be floating-point or fixed-point types and the calculation modes of the weight conversion, the batch matrix multiplication and the output conversion may be floating-point or fixed-point calculation. In practical applications, the conversion steps that use floating-point calculation, or the matrices whose data type is a floating-point type, may be selected according to actual requirements. It should be noted, however, that at least part of the convolution processing involves floating-point calculation and conversion between floating point and fixed point, so that the advantage of floating-point calculation is used to exert the optimal computing performance of the processor and improve the convolution processing speed.
Based on the above manner of obtaining the convolution processing result, and with reference to fig. 3, the embodiment of the present invention provides the following exemplary calculation processes of the convolution processing result:
in the first mode, fixed-point calculation (also called shaping calculation) is adopted for input conversion and weight conversion, floating-point calculation is adopted for batch matrix multiplication and output conversion, and the data types of the input conversion matrix, the weight conversion matrix and the output conversion matrix are all floating-point types. In one embodiment, the data types of the image to be processed (Feature Map) and the convolution kernel (Weight) are all int8 types, the image to be processed is input-converted by using a floating-point-type input conversion matrix (the calculation method adopted is fixed-point calculation) to obtain a floating-point-type input conversion result, the convolution kernel is Weight-converted by using a floating-point-type Weight conversion matrix (also called Weight conversion matrix) (the calculation method adopted is fixed-point calculation) to obtain a floating-point-type Weight conversion result (also called Weight conversion result), then the input conversion result and the Weight conversion result are subjected to batch matrix multiplication (the calculation method adopted is floating-point calculation) to obtain a floating-point-type batch matrix multiplication result, and finally the batch matrix multiplication result is output-converted by using the floating-point-type output conversion matrix (the calculation method adopted is floating-point calculation), and converting the obtained floating point type data into int8 type to obtain a convolution processing result.
Mode two: fixed-point calculation is adopted for the input conversion and the weight conversion, floating-point calculation is adopted for the batch matrix multiplication and the output conversion, the data types of the input conversion matrix and the weight conversion matrix are fixed-point types, and the data type of the output conversion matrix is a floating-point type. In one embodiment, the data types of the image to be processed and the convolution kernel are both int8. The image to be processed is input-converted with the fixed-point input conversion matrix (the calculation mode adopted is fixed-point calculation) to obtain a fixed-point input conversion result, and the convolution kernel is weight-converted with the fixed-point weight conversion matrix (the calculation mode adopted is fixed-point calculation) to obtain a fixed-point weight conversion result. The input conversion result and the weight conversion result are then subjected to batch matrix multiplication (the calculation mode adopted is floating-point calculation) to obtain a floating-point batch matrix multiplication result. Finally, the batch matrix multiplication result is output-converted with the floating-point output conversion matrix (the calculation mode adopted is floating-point calculation), and the obtained floating-point data is converted into the int8 type to obtain the convolution processing result.
Mode three: fixed-point calculation is adopted for the input conversion, the weight conversion and the output conversion, floating-point calculation is adopted for the batch matrix multiplication, and the data types of the input conversion matrix, the weight conversion matrix and the output conversion matrix are fixed-point types. In one embodiment, the data types of the image to be processed and the convolution kernel are both int8. The image to be processed is input-converted with the fixed-point input conversion matrix (the calculation mode adopted is fixed-point calculation) to obtain a fixed-point input conversion result, and the convolution kernel is weight-converted with the fixed-point weight conversion matrix (the calculation mode adopted is fixed-point calculation) to obtain a fixed-point weight conversion result. The input conversion result and the weight conversion result are then subjected to batch matrix multiplication (the calculation mode adopted is floating-point calculation) to obtain a floating-point batch matrix multiplication result. Finally, the batch matrix multiplication result is converted from the floating-point type to a fixed-point type and is output-converted with the fixed-point output conversion matrix (the calculation mode adopted is fixed-point calculation), which directly yields a convolution processing result of the int8 type.
Mode four: the calculation mode of the batch matrix multiplication is floating-point calculation; one or more of the input conversion, the weight conversion and the output conversion use fixed-point calculation, and/or one or more of the input conversion matrix, the weight conversion matrix and the output conversion matrix are of fixed-point types. For example, the calculation modes of the input conversion, the weight conversion and the batch matrix multiplication are floating-point calculation, the calculation mode of the output conversion is fixed-point calculation, and the data types of the input conversion matrix, the weight conversion matrix and the output conversion matrix are all fixed-point types; the image to be processed and the convolution kernel are then processed according to the method shown in steps 1 to 4 above to obtain a convolution processing result of the int8 type.
In one embodiment, if the data types of the input conversion matrix and the image to be processed are both int8 and the input conversion adopts fixed-point calculation, the data type of the corresponding input conversion result is int16; if the data types of the weight conversion matrix and the convolution kernel are both int8 and the weight conversion adopts fixed-point calculation, the data type of the corresponding weight conversion result is int16. If the data types of the input conversion result and the weight conversion result are int16 and the batch matrix multiplication adopts floating-point calculation, the input conversion result and the weight conversion result need to be converted from the fixed-point type to the floating-point type before the batch matrix multiplication, and the batch matrix multiplication result can then be converted from the floating-point type to a fixed-point type whose data type is int32. If the data type of the batch matrix multiplication result is int32, the data type of the output conversion matrix is int8 and the output conversion adopts fixed-point calculation, the data type of the convolution processing result is int8. The above embodiment only illustrates the case where the calculation mode of the batch matrix multiplication is floating-point calculation while the remaining calculation modes are fixed-point calculation and the remaining data types are fixed-point types; in practical applications, part of the data types may be replaced by floating-point types and part of the calculation modes may be replaced by floating-point calculation according to requirements, which is not described again here.
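The data-type flow described above can be sketched as follows; the matrices, scales and the final right-shift are placeholders chosen only to show the int8 -> int16 -> floating point -> int32 -> int8 transitions.

```python
import numpy as np

B_T = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], dtype=np.int8)
A_T = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], dtype=np.int8)
d = np.random.randint(-128, 128, size=(4, 4), dtype=np.int8)       # int8 image tile

# int8 x int8 input conversion done in fixed-point (integer) arithmetic -> int16 result
V_int16 = B_T.astype(np.int16) @ d.astype(np.int16) @ B_T.T.astype(np.int16)

# int16 -> floating point for the batch matrix multiplication
V_f = V_int16.astype(np.float32)
U_f = np.random.randn(4, 4).astype(np.float32)                      # stand-in weight conversion result
M_f = V_f * U_f                                                      # floating-point batch multiplication

# floating point -> int32 before the fixed-point output conversion
M_int32 = np.round(M_f).astype(np.int32)
Y_int32 = A_T.astype(np.int32) @ M_int32 @ A_T.T.astype(np.int32)    # fixed-point output conversion
Y_int8 = np.clip(Y_int32 >> 4, -128, 127).astype(np.int8)            # rescale to int8 (assumed shift)
```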
In one embodiment, if there is a target matrix whose data type is a fixed-point type among the input conversion matrix, the weight conversion matrix and the output conversion matrix, the fixed-point type of the target matrix is int32. For example, if the weight conversion matrix and the output conversion matrix are both of floating-point types and the input conversion matrix is of a fixed-point type, the input conversion matrix is the target matrix and its data type is int32. In the prior art, the data types of the input conversion matrix, the weight conversion matrix and the output conversion matrix are all int8; compared with the prior art, the embodiment of the invention can prevent overflow in the convolution processing to a certain extent by setting the fixed-point type of the target matrix to int32.
It should be noted that modes one to four above only list exemplary calculation processes of the convolution processing result and do not cover all possible calculation processes. In addition, the method can be understood as an improvement on the Winograd algorithm: in the existing Winograd algorithm, the types of all data involved are fixed-point types, and the input conversion, the weight conversion, the batch matrix multiplication and the output conversion all use integer calculation, so that the precision of feature extraction is lost and the speed of feature extraction is slow. On this basis, the embodiment of the invention optimizes the Winograd algorithm, and the speed and precision of feature extraction are effectively improved by a floating-point-based calculation mode. Moreover, since the convolution processing result (output feature map) finally obtained in the embodiment of the present invention is still of a fixed-point type and only conversion between fixed point and floating point plus floating-point calculation is adopted in the convolution processing, the embodiment still belongs to quantized convolution operation and still retains the advantages of a small network model size and low processing resource occupation of quantized convolution.
In summary, the embodiments of the present invention have at least the following features:
(1) By setting the data type of one or more of the input conversion matrix, the weight conversion matrix and the output conversion matrix to a floating-point type, the conversion matrices gain a wider data bit width: for example, the bit width of the floating-point type is 32 bits while the bit width of the int16 fixed-point type is 16 bits, that is, the floating-point bit width can be 2 times the fixed-point bit width. The floating-point conversion matrices can therefore better prevent overflow during the convolution processing and effectively alleviate the problem of low accuracy in the conversion process.
(2) Floating-point calculation is used as the calculation mode of the calculation process (including one or more of the input conversion, the weight conversion, the batch matrix multiplication and the output conversion), and the calculation process is optimized to fully exert the computing peak of the processor, so that the convolution processing is executed at the processor's optimal computing performance and the feature extraction speed can be effectively improved.
(3) The embodiment of the invention carefully designs the input conversion matrix, the weight conversion matrix and the output conversion matrix so that the fractional parts of the coefficients are kept minimal, which avoids, to a certain degree, precision loss of floating-point data during the calculation and thereby guarantees the precision of the convolution processing.
Example three:
based on the foregoing embodiment, this embodiment provides a specific example of an extraction method applying the foregoing image features, and referring to a specific convolution processing schematic diagram shown in fig. 4, the embodiment of the present invention is described by taking an example that the input conversion, the weight conversion, the batch matrix multiplication, and the output conversion all adopt floating point calculation, and the data types of the input conversion matrix, the weight conversion matrix, and the output conversion matrix are all floating point types.
In a specific embodiment, the data types of the image to be processed and the convolution kernel are both fixed-point types, namely int8. The image to be processed is input-converted with the floating-point input conversion matrix (the calculation mode adopted is floating-point calculation) to obtain a floating-point input conversion result, and the convolution kernel is weight-converted with the floating-point weight conversion matrix (the calculation mode adopted is floating-point calculation) to obtain a floating-point weight conversion result. The input conversion result and the weight conversion result are then subjected to batch matrix multiplication using floating-point calculation to obtain a floating-point batch matrix multiplication result. Finally, the batch matrix multiplication result is output-converted with the floating-point output conversion matrix (the calculation mode adopted is floating-point calculation) to obtain output data of the int8 type.
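For illustration, an end-to-end sketch of this all-floating-point scheme is given below; the standard Winograd F(2x2, 3x3) matrices and all quantization scales are assumptions (the patent's own matrices appear only in its figures), and a direct correlation is used as a sanity check.

```python
import numpy as np

B_T = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], dtype=np.float32)
G   = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], dtype=np.float32)
A_T = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], dtype=np.float32)

def winograd_conv_int8(d_q, g_q, d_scale, g_scale, y_scale):
    d = d_q.astype(np.float32) * d_scale            # input: fixed point -> floating point
    g = g_q.astype(np.float32) * g_scale            # weight: fixed point -> floating point
    V = B_T @ d @ B_T.T                             # input conversion (floating point)
    U = G @ g @ G.T                                 # weight conversion (floating point)
    M = U * V                                       # batch matrix multiplication (floating point)
    Y = A_T @ M @ A_T.T                             # output conversion (floating point)
    return np.clip(np.round(Y / y_scale), -128, 127).astype(np.int8)   # -> int8 output

d_q = np.random.randint(-10, 10, size=(4, 4), dtype=np.int8)   # int8 4x4 input tile
g_q = np.random.randint(-10, 10, size=(3, 3), dtype=np.int8)   # int8 3x3 convolution kernel
y_q = winograd_conv_int8(d_q, g_q, d_scale=0.1, g_scale=0.1, y_scale=0.1)

# Sanity check: the floating-point Winograd result equals a direct 2x2 "valid" correlation.
d_f, g_f = d_q * 0.1, g_q * 0.1
ref = np.array([[np.sum(d_f[i:i+3, j:j+3] * g_f) for j in range(2)] for i in range(2)])
win = A_T @ ((G @ g_f.astype(np.float32) @ G.T) * (B_T @ d_f.astype(np.float32) @ B_T.T)) @ A_T.T
assert np.allclose(ref, win, atol=1e-4)
```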
According to the image feature extraction method provided by the embodiment of the invention, by using floating-point input, weight and output conversion matrices and by using floating-point calculation for the input conversion, the weight conversion, the batch matrix multiplication and the output conversion, processors whose floating-point computing capability is stronger than their integer computing capability can better exert their computing performance, so that the speed and precision of feature extraction are effectively improved.
Example four:
as to the image feature extraction method provided in the second embodiment, an embodiment of the present invention provides an image feature extraction device, and referring to a schematic structural diagram of an image feature extraction device shown in fig. 5, the device includes the following modules:
an obtaining module 502, configured to obtain an image to be processed and a preset convolution kernel; the image to be processed comprises an original image or a characteristic diagram, and the data types of the image to be processed and the convolution kernel are fixed point types.
A convolution processing module 504, configured to perform convolution processing on the image to be processed based on the convolution kernel to obtain a convolution processing result used for representing an image feature of the image to be processed; wherein, the data type of the convolution processing result is a fixed point type; the convolution process involves floating point calculations and conversion between fixed point and floating point.
The image feature extraction device provided by the embodiment of the invention flexibly performs conversion between fixed point and floating point during the convolution processing and adopts floating-point calculation, at which most processors excel; since the floating-point calculation speed of most existing processors is far higher than their integer calculation speed, this can effectively exert the optimal computing performance of the processor and improve the convolution processing speed, that is, effectively improve the feature extraction speed. In addition, the fixed-point integer calculation adopted for convolution processing in the prior art may cause precision loss, whereas a calculation mode based on floating-point calculation can effectively avoid such loss; compared with the prior art, the floating-point convolution processing adopted in this embodiment can therefore effectively improve the precision of the convolution processing, that is, further improve the feature extraction precision. In summary, the manner provided by the embodiment can effectively improve the speed and accuracy of feature extraction.
In an embodiment, the convolution processing module 504 is further configured to: performing input conversion on an image to be processed by adopting a preset input conversion matrix to obtain an input conversion result; performing weight conversion on the convolution kernel by adopting a preset weight conversion matrix to obtain a weight conversion result; carrying out batch matrix multiplication on the input conversion result and the weight conversion result to obtain a batch matrix multiplication result; and adopting a preset output conversion matrix to perform output conversion on the batch matrix multiplication result to obtain a convolution processing result for representing the image characteristics of the image to be processed.
In one embodiment, the data type of the input conversion matrix is a fixed-point type or a floating-point type; the input conversion calculation mode is fixed point calculation or floating point calculation; the data type of the weight conversion matrix is a fixed point type or a floating point type; the calculation mode of the weight conversion is fixed-point calculation or floating-point calculation; the calculation mode of the batch matrix multiplication is fixed-point calculation or floating-point calculation; outputting the data type of the conversion matrix to be a fixed point type or a floating point type; the calculation mode of the output conversion is fixed-point calculation or floating-point calculation; and the data type of at least one of the input conversion matrix, the weight conversion matrix and the output conversion matrix is a floating point type, and/or the calculation mode of at least one of the input conversion, the weight conversion, the batch matrix multiplication and the output conversion is floating point calculation.
In one embodiment, the calculation mode of the batch matrix multiplication is floating-point calculation.
In one embodiment, the fixed-point types of the image to be processed, the convolution kernel and the convolution processing result are all int8; if there is a target matrix whose data type is a fixed-point type among the input conversion matrix, the weight conversion matrix and the output conversion matrix, the fixed-point type of the target matrix is int32.
In one embodiment, the values in the input conversion matrix, the weight conversion matrix and the output conversion matrix are all of the form M/2^n, wherein M is an integer and n is a natural number.
In one embodiment, n has a value in the range of [0,3 ].
The device provided by the embodiment has the same implementation principle and technical effect as the foregoing embodiment, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiment for the portion of the embodiment of the device that is not mentioned.
Example five:
the method and apparatus for extracting image features and the computer program product of the electronic device provided in the embodiments of the present invention include a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementation may refer to the method embodiments, and will not be described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of their technical features; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and they should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (10)
1. An image feature extraction method is characterized by comprising the following steps:
acquiring an image to be processed and a preset convolution kernel; the image to be processed comprises an original image or a feature map, and the data types of the image to be processed and the convolution kernel are fixed-point types;
performing convolution processing on the image to be processed based on the convolution kernel to obtain a convolution processing result for representing the image characteristics of the image to be processed; wherein the data type of the convolution processing result is a fixed-point type, and the convolution process involves floating-point calculation and conversion between fixed point and floating point.
2. The method according to claim 1, wherein the step of performing convolution processing on the image to be processed based on the convolution kernel to obtain a convolution processing result for characterizing an image feature of the image to be processed comprises:
performing input conversion on the image to be processed by adopting a preset input conversion matrix to obtain an input conversion result;
performing weight conversion on the convolution kernel by adopting a preset weight conversion matrix to obtain a weight conversion result;
performing batch matrix multiplication on the input conversion result and the weight conversion result to obtain a batch matrix multiplication result;
and performing output conversion on the batch matrix multiplication result by adopting a preset output conversion matrix to obtain a convolution processing result for representing the image characteristics of the image to be processed.
3. The method of claim 2, wherein the data type of the input conversion matrix is a fixed-point type or a floating-point type; the calculation mode of the input conversion is fixed-point calculation or floating-point calculation; the data type of the weight conversion matrix is a fixed-point type or a floating-point type; the calculation mode of the weight conversion is fixed-point calculation or floating-point calculation; the calculation mode of the batch matrix multiplication is fixed-point calculation or floating-point calculation; the data type of the output conversion matrix is a fixed-point type or a floating-point type; the calculation mode of the output conversion is fixed-point calculation or floating-point calculation;
and,
the data type of at least one of the input conversion matrix, the weight conversion matrix and the output conversion matrix is a floating point type, and/or the calculation mode of at least one of the input conversion, the weight conversion, the batch matrix multiplication and the output conversion is floating point calculation.
4. The method of claim 3, wherein the calculation mode of the batch matrix multiplication is floating-point calculation.
5. The method according to claim 3, wherein the fixed-point types of the image to be processed, the convolution kernel and the convolution processing result are all int8;
and if a target matrix whose data type is a fixed-point type exists among the input conversion matrix, the weight conversion matrix and the output conversion matrix, the fixed-point type of the target matrix is int32.
7. The method of claim 6, wherein n has a value in the range of [0, 3].
8. An image feature extraction device, comprising:
the acquisition module is used for acquiring an image to be processed and a preset convolution kernel; the image to be processed comprises an original image or a feature map, and the data types of the image to be processed and the convolution kernel are fixed-point types;
the convolution processing module is used for performing convolution processing on the image to be processed based on the convolution kernel to obtain a convolution processing result used for representing the image characteristics of the image to be processed; wherein the data type of the convolution processing result is a fixed-point type, and the convolution process involves floating-point calculation and conversion between fixed point and floating point.
9. An electronic device, comprising: a processor and a storage device;
the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of the claims 1 to 7.
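For orientation only, the following is a hedged sketch of how the process of claims 1-4 might look for a single 4×4 tile and a single 3×3 kernel, assuming Winograd F(2×2, 3×3) conversion matrices and simple symmetric int8 quantization; the function name and scale values are assumptions for the example, not part of the claims.

```python
import numpy as np

# Assumed Winograd F(2x2, 3x3) conversion matrices (illustrative only).
B_T = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], dtype=np.float32)
G   = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], dtype=np.float32)
A_T = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], dtype=np.float32)

def convolve_tile(tile_int8, kernel_int8, x_scale, w_scale, y_scale):
    """Convolve a 4x4 int8 input tile with a 3x3 int8 kernel into a 2x2 int8 output tile.

    The fixed-point operands are converted to floating point, the input, weight and
    output conversions and the element-wise multiplication run in floating point,
    and the result is converted back to fixed point (int8).
    """
    d = tile_int8.astype(np.float32) * x_scale    # fixed point -> floating point
    g = kernel_int8.astype(np.float32) * w_scale

    V = B_T @ d @ B_T.T       # input conversion
    U = G @ g @ G.T           # weight conversion
    M = U * V                 # for a single tile and channel, the batch matrix
                              # multiplication reduces to an element-wise product
    y = A_T @ M @ A_T.T       # output conversion (2x2 result)

    return np.clip(np.round(y / y_scale), -128, 127).astype(np.int8)  # floating point -> fixed point

# Hypothetical usage with arbitrary quantization scales.
tile   = np.random.randint(-128, 128, size=(4, 4), dtype=np.int8)
kernel = np.random.randint(-128, 128, size=(3, 3), dtype=np.int8)
print(convolve_tile(tile, kernel, x_scale=0.05, w_scale=0.02, y_scale=0.1))
```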
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010643501.7A CN113902928A (en) | 2020-07-06 | 2020-07-06 | Image feature extraction method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113902928A (en) | 2022-01-07 |
Family
ID=79186845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010643501.7A Pending CN113902928A (en) | 2020-07-06 | 2020-07-06 | Image feature extraction method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113902928A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116129249A (en) * | 2023-04-04 | 2023-05-16 | 上海燧原科技有限公司 | Image processing method, device, electronic equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107562694A (en) * | 2017-08-23 | 2018-01-09 | 维沃移动通信有限公司 | A kind of data processing method and mobile terminal |
CN107704921A (en) * | 2017-10-19 | 2018-02-16 | 北京智芯原动科技有限公司 | The algorithm optimization method and device of convolutional neural networks based on Neon instructions |
CN108229648A (en) * | 2017-08-31 | 2018-06-29 | 深圳市商汤科技有限公司 | Convolutional calculation method and apparatus, electronic equipment, computer storage media |
CN109002881A (en) * | 2018-06-28 | 2018-12-14 | 郑州云海信息技术有限公司 | The fixed point calculation method and device of deep neural network based on FPGA |
CN109063825A (en) * | 2018-08-01 | 2018-12-21 | 清华大学 | Convolutional neural networks accelerator |
CN109740740A (en) * | 2019-01-03 | 2019-05-10 | 厦门美图之家科技有限公司 | The fixed point accelerating method and device of convolutional calculation |
US20190354568A1 (en) * | 2018-05-15 | 2019-11-21 | Apple Inc. | Low precision convolution operations |
CN111126558A (en) * | 2018-10-31 | 2020-05-08 | 北京嘉楠捷思信息技术有限公司 | Convolution neural network calculation acceleration method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110880038B (en) | System for accelerating convolution calculation based on FPGA and convolution neural network | |
CN110929865B (en) | Network quantification method, service processing method and related product | |
CN111401550A (en) | Neural network model quantification method and device and electronic equipment | |
US10491239B1 (en) | Large-scale computations using an adaptive numerical format | |
CN110610237A (en) | Quantitative training method and device of model and storage medium | |
CN108229648B (en) | Convolution calculation method, device, equipment and medium for matching data bit width in memory | |
CN110663048A (en) | Execution method, execution device, learning method, learning device, and program for deep neural network | |
CN109284761B (en) | Image feature extraction method, device and equipment and readable storage medium | |
CN108363559B (en) | Multiplication processing method, device and computer readable medium for neural network | |
CN111240746B (en) | Floating point data inverse quantization and quantization method and equipment | |
CN111105017A (en) | Neural network quantization method and device and electronic equipment | |
WO2022168604A1 (en) | Softmax function approximation calculation device, approximation calculation method, and approximation calculation program | |
CN114978189A (en) | Data coding method and related equipment | |
CN110503182A (en) | Network layer operation method and device in deep neural network | |
WO2021081854A1 (en) | Convolution operation circuit and convolution operation method | |
CN113902928A (en) | Image feature extraction method and device and electronic equipment | |
US20200134434A1 (en) | Arithmetic processing device, learning program, and learning method | |
CN113869517A (en) | Inference method based on deep learning model | |
CN112686365A (en) | Method and device for operating neural network model and computer equipment | |
CN115147283A (en) | Image reconstruction method, device, equipment and medium | |
CN113313253A (en) | Neural network compression method, data processing device and computer equipment | |
CN113986194A (en) | Neural network approximate multiplier implementation method and device based on preprocessing | |
CN113255576B (en) | Face recognition method and device | |
JP7506276B2 (en) | Implementations and methods for processing neural networks in semiconductor hardware - Patents.com | |
WO2024212952A1 (en) | Computing apparatus and method, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||