CN112269595A - Image processing method, image processing device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112269595A
CN112269595A (application CN202011172285.9A)
Authority
CN
China
Prior art keywords
data
neural network
integer
network model
floating
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN202011172285.9A
Other languages
Chinese (zh)
Other versions
CN112269595B
Inventor
李国齐
杨玉宽
裴京
施路平
Current Assignee (the listed assignee may be inaccurate)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Tsinghua University
Priority: CN202011172285.9A
Publication of CN112269595A
Application granted
Publication of CN112269595B
Legal status: Active
Anticipated expiration: not listed

Classifications

    • G06F 9/28: Arrangements for program control; enhancement of operational speed, e.g. by using several microcontrol devices operating in parallel
    • G06F 9/30007: Arrangements for executing specific machine instructions to perform operations on data operands
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 1/20: General purpose image data processing; processor architectures; processor configuration, e.g. pipelining


Abstract

The present disclosure relates to the field of image processing technologies, and in particular to an image processing method and apparatus, a computer device, and a storage medium. The method comprises: acquiring a target image to be processed; acquiring a neural network model, wherein the neural network model discretizes floating-point data in different data ranges in different discretization modes to obtain corresponding integer data with a preset bit width; and invoking the neural network model to process the target image and output image feature data. According to the embodiments of the invention, the bit width of the integer data is effectively controlled during the conversion of floating-point data to integers, and the representable range of the integer data is expanded; that is, low-bit-width, high-precision quantization is achieved. At the same time, the speed and resource advantages of fixed-point arithmetic accelerate computation, improving image processing efficiency and the quality of subsequent image feature extraction.

Description

Image processing method, image processing device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a computer device, and a storage medium.
Background
Image processing algorithms based on neural networks usually require a deep neural network model to extract image features in order to complete final tasks such as classification, detection, and recognition. This process inevitably introduces large-scale floating-point vector or matrix multiply-add operations, and given the constraints on computation speed and on compute and storage resources, the floating-point operations involved in running a neural network image processing algorithm usually need to be accelerated.
Converting floating-point values to integers (quantization) is an effective means of accelerating the computation of a neural network image processing algorithm and reducing its consumption of computing resources. Floating-point numbers (Float64/32) in the neural network model are converted into integers (Int32/8), and floating-point arithmetic is converted into integer arithmetic, which greatly reduces the data storage space required by the algorithm and accelerates computation.
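As a rough illustration of the storage savings described above (this example is not part of the patent text), converting a weight tensor from 32-bit floats to 8-bit integers cuts its memory footprint by a factor of four:

```python
import numpy as np

# A 1000x1000 weight matrix, stored as float32 and as int8.
w_float = np.zeros((1000, 1000), dtype=np.float32)
w_int8 = np.zeros((1000, 1000), dtype=np.int8)

print(w_float.nbytes)  # 4000000 bytes
print(w_int8.nbytes)   # 1000000 bytes, a 4x reduction
```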
In the related art, a discretization method that normalizes to the nearest fixed-point number is generally adopted. The computational precision of this method is poor, which degrades subsequent image feature extraction, and no reasonable and effective image processing method has been available.
Disclosure of Invention
In view of the above, the present disclosure provides an image processing method, an image processing apparatus, a computer device, and a storage medium. The technical solution is as follows:
according to an aspect of the present disclosure, there is provided an image processing method, the method including:
acquiring a target image to be processed;
acquiring a neural network model, wherein the neural network model is used for discretizing floating point data in different data ranges in different discretization modes to obtain corresponding integer data with preset bit width;
and invoking the neural network model to process the target image and outputting image feature data.
In one possible implementation, the floating-point data includes at least one of a weight value, a neuron value, a batch normalization layer value, an activation function value, a feedback error value, a neuron gradient value, and a weight update value.
In another possible implementation manner, the method further includes:
acquiring floating point type data in the neural network model in the process of calling the neural network model to process the target image;
when the absolute value of the floating-point data is larger than or equal to a preset threshold value, converting the floating-point data into integer data with a preset bit width by adopting a first discretization mode;
and when the absolute value of the floating-point data is smaller than a preset threshold value, converting the floating-point data into integer data with preset bit width by adopting a second discretization mode, wherein the second discretization mode is different from the first discretization mode.
In another possible implementation manner, the preset threshold is a target maximum absolute value, or a value determined based on the target maximum absolute value, or a custom threshold, and the target maximum absolute value is an estimated maximum absolute value of the floating-point type data.
In another possible implementation manner, the method further includes:
and storing an identification bit and a data value corresponding to the integer data, wherein the identification bit is used for indicating whether the integer data is greater than or equal to the preset threshold value.
In another possible implementation manner, the method further includes:
in the data reading process, when the identification bit is used for indicating that the integer data is greater than or equal to the preset threshold value, converting the read data value in a first data conversion mode to obtain the integer data;
and when the identification bit is used for indicating that the integer data is smaller than the preset threshold value, converting the read data value by adopting a second data conversion mode to obtain the integer data, wherein the second data conversion mode is different from the first data conversion mode.
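A minimal sketch of the flag-bit scheme above, under the assumption that large-magnitude values are stored in a coarser (right-shifted) form so they fit the same bit width; the shift-based conversion and all names here are illustrative, not the patent's actual encoding:

```python
def store_with_flag(value, threshold, shift=4):
    """Return (flag, stored). flag=1 marks |value| >= threshold; such
    values are right-shifted before storage (lossy, coarser step).
    Values below the threshold are stored directly (exact)."""
    if abs(value) >= threshold:
        return 1, value >> shift
    return 0, value

def read_with_flag(flag, stored, shift=4):
    """Invert store_with_flag: shifted values are scaled back up."""
    return stored << shift if flag == 1 else stored

flag, stored = store_with_flag(200, threshold=128)
print(flag, stored)                  # 1 12
print(read_with_flag(flag, stored))  # 192 (coarse reconstruction)
print(read_with_flag(*store_with_flag(57, threshold=128)))  # 57 (exact)
```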
According to another aspect of the present disclosure, there is provided an image processing apparatus including:
the first acquisition module is used for acquiring a target image to be processed;
the second acquisition module is used for acquiring a neural network model, and the neural network model is used for discretizing floating point type data in different data ranges in different discretization modes to obtain corresponding integer data with preset bit width;
and the calling module is used for invoking the neural network model to process the target image and outputting image feature data.
According to another aspect of the present disclosure, there is provided a computer device including: a processor; a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a target image to be processed;
acquiring a neural network model, wherein the neural network model is used for discretizing floating point data in different data ranges in different discretization modes to obtain corresponding integer data with preset bit width;
and invoking the neural network model to process the target image and outputting image feature data.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
The method comprises: acquiring a target image to be processed; acquiring a neural network model, wherein the neural network model discretizes floating-point data in different data ranges in different discretization modes to obtain corresponding integer data with a preset bit width; and invoking the neural network model to process the target image and output image feature data. The bit width of the integer data can be effectively controlled during the conversion of floating-point data to integers, and the representable range of the integer data is expanded; that is, low-bit-width, high-precision quantization is achieved. At the same time, the speed and resource advantages of fixed-point arithmetic accelerate computation, improving image processing efficiency and the quality of subsequent image feature extraction.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a schematic structural diagram of a computer device provided by an exemplary embodiment of the present disclosure;
FIG. 2 shows a flow chart of an image processing method provided by an exemplary embodiment of the present disclosure;
FIG. 3 shows a flow chart of an image processing method provided by another exemplary embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating data information of integer data related to an image processing method according to another exemplary embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image processing apparatus provided in an exemplary embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating a terminal in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating a server in accordance with an example embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
With the arrival of the big data era and the growth of computing power, deep learning has advanced rapidly. In the field of image processing, neural-network-based algorithms have achieved unprecedented performance compared with earlier methods that extract image features by hand. However, the performance of neural network image processing algorithms tends to improve only as the number of parameters and the computational cost grow, and the resulting demands on computing resources and computation speed severely limit their application on resource-constrained mobile devices. Neural network discretization converts the floating-point parameters of a conventional neural network into integer parameters, and floating-point operations into fixed-point operations, which effectively reduces the storage space required by the network parameters and increases the computation speed of the network.
Existing discretization methods preserve the distribution characteristics of the original data well when the data of the neural network image processing algorithm is uniformly distributed and the discretization bit width is high. However, when the data range is large (for example, the absolute values of the floating-point data span a wide range) and the distribution is extremely nonuniform (for example, floating-point values with small absolute values make up the majority), the discretization methods in the related art round a large proportion of the small-magnitude floating-point data to 0 during normalization and integer conversion. This loses information in the integer data and creates a large gap between the data distributions before and after conversion, which in turn causes a severe loss of precision in subsequent computation and can even cause the model's results to collapse.
In contrast to the conventional method of normalizing to the nearest fixed-point number, the embodiments of the present disclosure provide an image processing method and apparatus, a computer device, and a storage medium that effectively reduce the bit width of the data after integer conversion while enlarging the representable range of the integer data; that is, they achieve low-bit-width, high-precision quantization. At the same time, the speed and resource advantages of fixed-point arithmetic accelerate computation, improving image processing efficiency and the quality of subsequent image feature extraction.
First, an application scenario to which the present disclosure relates will be described.
Referring to fig. 1, a schematic structural diagram of a computer device according to an exemplary embodiment of the present disclosure is shown.
The computer device may be a terminal or a server. Terminals include tablet computers, laptop computers, desktop computers, and the like. The server may be a single server, a server cluster composed of multiple servers, or a cloud computing service center.
The computer device is installed with an image processing program, which is an application program for performing feature extraction on an input target image to obtain image feature data.
As shown in fig. 1, the computer device includes a processor 10, a memory 20, and a communication interface 30. Those skilled in the art will appreciate that the configuration shown in FIG. 1 is not intended to be limiting of the computer device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 10 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by operating or executing software programs and/or modules stored in the memory 20 and calling data stored in the memory 20, thereby performing overall control of the computer device. The processor 10 may be implemented by a CPU or a Graphics Processing Unit (GPU).
The memory 20 may be used to store software programs and modules. The processor 10 executes various functional applications and data processing by executing software programs and modules stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system 21, a first obtaining module 22, a second obtaining module 23, a calling module 24, and an application 25 (such as neural network training, etc.) required by at least one function, and the like; the storage data area may store data created according to use of the computer device, and the like. The Memory 20 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk. Accordingly, the memory 20 may also include a memory controller to provide the processor 10 access to the memory 20.
The processor 10 executes the following function by running the first obtaining module 22: acquiring a target image to be processed. The processor 10 executes the following function by running the second obtaining module 23: acquiring a neural network model, wherein the neural network model discretizes floating-point data in different data ranges in different discretization modes to obtain corresponding integer data with a preset bit width. The processor 10 executes the following function by running the calling module 24: invoking the neural network model to process the target image and output image feature data. Optionally, during training, fine-tuning, or direct discretization, the neural network model discretizes the floating-point data in different data ranges in different discretization modes to obtain the corresponding integer data with the preset bit width.
In the following, several exemplary embodiments are adopted to describe the image processing method provided by the embodiments of the present disclosure.
Referring to fig. 2, a flowchart of an image processing method according to an exemplary embodiment of the present disclosure is shown, which is illustrated in the computer device shown in fig. 1. The method comprises the following steps.
Step 201, a target image to be processed is acquired.
The computer device acquires an input image, which is a target image to be processed.
Step 202, obtaining a neural network model, where the neural network model is used to discretize floating point data in different data ranges in different discretization manners to obtain integer data of a corresponding preset bit width.
The computer device obtains a stored neural network model. The neural network model discretizes floating point data in different data ranges in the neural network model in different discretization modes in the training, fine tuning or direct discretization process to obtain corresponding integer data with preset bit width.
Optionally, the neural network model is configured to discretize the floating-point data in the discretization mode corresponding to the data range in which the floating-point data lies, to obtain corresponding integer data with a preset bit width. Different data ranges correspond to different discretization modes.
Optionally, the neural network model is configured to discretize floating point type data in a first data range by using a first discretization method to obtain corresponding integer data with a preset bit width, discretize floating point type data in a second data range by using a second discretization method to obtain corresponding integer data with a preset bit width, where an intersection does not exist between the second data range and the first data range, and the first discretization method is different from the second discretization method. Illustratively, the floating-point type data in the first data range is floating-point type data with an absolute value greater than or equal to a preset threshold, and the floating-point type data in the second data range is floating-point type data with an absolute value less than the preset threshold.
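To make the two-range scheme concrete, here is a small sketch (not the patent's actual formulas; the step sizes are assumptions) in which values at or above the threshold are quantized with a coarse step and values below it with a fine step, so that small-magnitude data is not all rounded to zero:

```python
import numpy as np

def dual_range_quantize(x, threshold, k=8):
    """Discretize float array x to k-bit integers using two modes.
    The coarse/fine step sizes below are illustrative assumptions."""
    qmax = 2 ** (k - 1) - 1          # e.g. 127 for k=8
    fine = threshold / qmax          # fine step for small values
    coarse = threshold / 8           # coarse step for large values
    big = np.abs(x) >= threshold
    q = np.where(big,
                 np.round(x / coarse),
                 np.round(x / fine))
    return np.clip(q, -qmax, qmax).astype(np.int8), big

q, big = dual_range_quantize(np.array([0.01, 0.5]), threshold=0.25)
print(q.tolist())    # [5, 16]
print(big.tolist())  # [False, True]
```

Note that without the fine step, 0.01 would round straight to 0 at this bit width; the second mode is what preserves the small-magnitude values.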
Optionally, the floating point type data is a floating point number in the neural network model, also referred to as a floating point number tensor. Illustratively, the floating-point data includes at least one of a weight value, a neuron value, a batch normalization layer value, an activation function value, a feedback error value, a neuron gradient value, and a weight update value.
Illustratively, floating-point data may also include floating-point numbers during other large-scale tensor (e.g., matrix, 3-dimensional tensor, etc.) operations.
Optionally, the neural network model is a neural network image processing model, and includes any one of an artificial neural network, a recurrent neural network, a long short-term memory network, a deep reinforcement learning network, a graph neural network, and a tensor neural network in a neural network image processing algorithm. The embodiments of the present disclosure do not limit the type of the neural network model.
Optionally, the transformed integer data is integer data with a preset bit width, and the preset bit width is smaller than a preset bit width threshold, for example, the preset bit width is 8 bits or 16 bits. The embodiment of the present disclosure does not limit the value of the preset bit width.
Step 203, the neural network model is invoked to process the target image, and image feature data is output.
The computer device invokes the neural network model to process the target image and outputs image feature data. That is, the computer device inputs the target image into the neural network model, which outputs the image feature data. The image feature data indicates the image features of the target image.
Optionally, in the process of calling the neural network model to process the target image, acquiring floating point type data in the neural network model; and converting the floating point data into integer data with preset bit width by adopting a discretization mode corresponding to the data range according to the data range where the floating point data is located. The discretization modes corresponding to different data ranges are different.
Optionally, when the absolute value of the floating-point data is greater than or equal to a preset threshold, converting the floating-point data into integer data with a preset bit width by adopting a first discretization mode; and when the absolute value of the floating-point data is smaller than the preset threshold, converting the floating-point data into integer data with preset bit width by adopting a second discretization mode, wherein the second discretization mode is different from the first discretization mode.
Optionally, for any of a plurality of convolutional layers of the neural network, the computer device obtains floating point type data for the convolutional layer. When the absolute value of the floating-point data is larger than or equal to a preset threshold value, converting the floating-point data into integer data with a preset bit width by adopting a first discretization mode; and when the absolute value of the floating-point data is smaller than the preset threshold value, converting the floating-point data into integer data with preset bit width by adopting a second discretization mode. And obtaining output data of the convolutional layer according to the floating point type data of the convolutional layer and the input data of the target image.
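The per-layer flow above can be sketched as follows, with a matrix product plus ReLU standing in for the convolution and a caller-supplied `quantize` callable playing the role of the dual-mode discretizer (all names here are illustrative assumptions):

```python
import numpy as np

def forward(layer_weights, x, threshold, k, quantize):
    """For each layer: convert its float weights to integer data with
    the dual-mode discretizer, then compute the layer's output from
    the quantized weights and the layer's input data."""
    for w in layer_weights:
        qw, _ = quantize(w, threshold, k)          # per-layer quantization
        x = np.maximum(x @ qw.astype(np.float32), 0.0)  # stand-in for conv + ReLU
    return x
```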
Optionally, the preset threshold is the target maximum absolute value, a value determined based on the target maximum absolute value, or a custom threshold, where the target maximum absolute value is the estimated maximum absolute value of the floating-point data.
Illustratively, the target maximum absolute value is the maximum absolute value of the floating-point data as estimated from a target fixed-point number, the target fixed-point number being the fixed-point number closest to the actual maximum absolute value of the floating-point data.
For example, the preset threshold is a value proportional to the target maximum absolute value.
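One plausible reading of "the fixed-point number closest to the absolute value maximum" is the nearest power of two; under that assumption (which is an interpretation, not stated in the source), the threshold could be computed as:

```python
import math

def target_abs_max(abs_max):
    """Estimate the data's absolute maximum as the power of two
    nearest to it (interpreting the 'target fixed point number'
    as a power of two; this is an assumption)."""
    return 2.0 ** round(math.log2(abs_max))

def preset_threshold(abs_max, ratio=0.5):
    # A threshold proportional to the target maximum absolute value.
    return ratio * target_abs_max(abs_max)

print(target_abs_max(3.0))    # 4.0 (log2(3) ~ 1.58 rounds to 2)
print(preset_threshold(3.0))  # 2.0
```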
In summary, the embodiments of the present disclosure acquire a target image to be processed; acquire a neural network model that discretizes floating-point data in different data ranges in different discretization modes to obtain corresponding integer data with a preset bit width; and invoke the neural network model to process the target image and output image feature data. The bit width of the integer data can be effectively controlled during the conversion of floating-point data to integers, and the representable range of the integer data is expanded; that is, low-bit-width, high-precision quantization is achieved. Meanwhile, the speed and resource advantages of fixed-point arithmetic accelerate computation, improving image processing efficiency and the quality of subsequent image feature extraction. The image processing method provided by the embodiments of the present disclosure can discretize floating-point data whose distribution is relatively uniform, and is especially effective for floating-point data whose distribution is nonuniform or for which higher precision is required.
In terms of applications, the image processing method provided by the embodiments of the present disclosure, on one hand, effectively addresses the precision and convergence problems of neural network data representation in current neural network image processing, and can be applied to the quantization of various floating-point data such as weight values, neuron values, batch normalization layer values, activation function values, back-propagation error values, neuron gradient values, and weight update values, thereby improving network performance. On another hand, compared with current image processing methods, the embodiments of the present disclosure can be combined with today's mainstream deep learning accelerator chips: converting floating-point multiply-add operations into integer multiply-add operations can greatly reduce chip area and energy consumption, and strongly supports chip architecture designs for deep learning inference and on-line training. Furthermore, the substantial computation acceleration and resource savings brought by the embodiments of the present disclosure can move current neural network image processing models from expensive server clusters to portable embedded devices such as mobile phones and smart terminals. Finally, the method is highly general and can be widely applied to models in neural network image processing algorithms such as artificial neural networks, recurrent neural networks, long short-term memory networks, deep reinforcement learning networks, graph neural networks, and tensor neural networks.
In one possible implementation, during the invocation of the neural network model to process the target image, the main handling of floating-point data in the model comprises a floating-point data discretization method, storage of discretized integer data with an identification bit, and reading of discretized integer data with an identification bit, thereby achieving computation acceleration and data storage compression for neural network image processing. The image processing method includes, but is not limited to, the following steps, as shown in fig. 3:
step 301, in the process of calling the neural network model to process the target image, the computer device discretizes the floating point data in different data ranges in the neural network model in different discretization modes to obtain corresponding integer data with preset bit width.
Optionally, in the process of calling the neural network model to process the target image, acquiring floating point type data in the neural network model; when the absolute value of the floating-point data is larger than or equal to a preset threshold value, converting the floating-point data into integer data with a preset bit width by adopting a first discretization mode; and when the absolute value of the floating-point data is smaller than the preset threshold, converting the floating-point data into integer data with preset bit width by adopting a second discretization mode, wherein the second discretization mode is different from the first discretization mode.
Optionally, the computer device obtains the corresponding integer data of the preset bit width through a discretization formula (the formula itself appears only as images in the source and is not reproduced), wherein x is the floating-point data in the neural network model; Sc(x, k) is the preset threshold, a value determined based on the target maximum absolute value; R(x) is the target maximum absolute value, that is, the estimated maximum absolute value of the floating-point data; round(·) is a rounding function; clip(·) is a clipping function used to control the data representation range; and k is the preset bit width after discretization.
In the process of converting floating-point data into integer data with a preset bit width, the rounding function adopted by the computer device may be a round-to-nearest function, such as round(·); an upward rounding (ceiling) function, such as ceil(·); or a downward rounding (floor) function, such as floor(·). The embodiments of the present disclosure do not limit this.
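As a non-authoritative sketch, the two-mode discretization of step 301 can be written in Python roughly as follows. The function name, the use of round-to-nearest, and the grid spacings (step Sc in the coarse mode, step Sc/2^(k-1) in the fine mode) are assumptions inferred from the surrounding description, not the patent's verbatim formula:

```python
def discretize(x: float, sc: float, k: int) -> tuple[int, int]:
    """Two-mode discretization sketch.

    Returns (flag, q): flag = 1 when |x| >= sc (coarse mode, step sc),
    flag = 0 otherwise (fine mode, step sc / 2**(k-1)).
    """
    qmax = 2 ** (k - 1) - 1                    # largest k-bit signed level
    if abs(x) >= sc:
        flag, q = 1, round(x / sc)             # coarse grid: multiples of sc
    else:
        flag, q = 0, round(x * 2 ** (k - 1) / sc)  # fine grid inside (-sc, sc)
    q = max(-qmax, min(qmax, q))               # clip(.) controls the range
    return flag, q
```

A call such as discretize(0.3, 1.0, 8) falls in the fine mode and keeps small values away from zero, while discretize(3.2, 1.0, 8) uses the coarse grid.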
By introducing a preset threshold through a threshold function, different discretization modes are adopted for the floating-point data in the neural network image processing algorithm: the original discretization precision is kept for floating-point data greater than or equal to the preset threshold, while the discretization precision is increased for floating-point data smaller than the preset threshold, thereby enlarging the representation range of the data.
In an illustrative example, take k-bit discretization as an example. In the neural network image processing algorithm, for floating-point data x in the neural network model, the related-art discretization represents the data over a single range and discretizes all data falling in a small interval around zero to 0 (the exact range formulas appear only as images in the source and are not reproduced). This loses the information carried by that part of the data; when the floating-point data x is widely distributed or higher precision is required, the discretized integer data therefore introduces large calculation errors.
The embodiments of the present disclosure adopt different discretization modes for floating-point data in different data ranges by introducing a preset threshold. When the absolute value of the floating-point data in the neural network image processing algorithm is greater than or equal to the preset threshold, the floating-point data is discretized onto the integer levels {-2^(k-1)+1, -2^(k-1)+2, ..., -1, 0, 1, ..., 2^(k-1)-1}; when the absolute value of the floating-point data is smaller than the preset threshold, the floating-point data is discretized onto finer-grained fixed points lying within the threshold interval. With this discretization mode, only data in a much smaller interval around zero is discretized to 0 than in the related art, so the method provided by the embodiments of the present disclosure has higher discretization precision and a smaller zeroing range. Under the k-bit discretization condition, the method can achieve the same discretized data precision as traditional (2k-1)-bit discretization, and better preserves data precision for floating-point data with uneven distribution or higher precision requirements.
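The precision claim in the example above can be checked with a small quantize-then-dequantize experiment. The concrete scheme below (coarse step Sc, fine step Sc/2^(k-1)) is an assumed reading of the formulas, which appear only as images in the source:

```python
def dequant_one_mode(x: float, sc: float, k: int) -> float:
    """Conventional k-bit discretization with a single step sc."""
    qmax = 2 ** (k - 1) - 1
    q = max(-qmax, min(qmax, round(x / sc)))
    return q * sc

def dequant_two_mode(x: float, sc: float, k: int) -> float:
    """Threshold-based two-mode discretization, then read back."""
    qmax = 2 ** (k - 1) - 1
    if abs(x) >= sc:
        q = max(-qmax, min(qmax, round(x / sc)))
        return q * sc                          # coarse grid
    q = max(-qmax, min(qmax, round(x * 2 ** (k - 1) / sc)))
    return q * sc / 2 ** (k - 1)               # fine grid

sc, k = 1.0, 8
x = 0.004                                      # small value, e.g. a gradient
coarse = dequant_one_mode(x, sc, k)            # collapses to 0.0
fine = dequant_two_mode(x, sc, k)              # survives at fine resolution
```

Under these assumptions the single-mode scheme zeroes the small value while the two-mode scheme represents it with step sc/128.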
Step 302, the computer device stores an identification bit and a data value corresponding to the integer data, where the identification bit is used to indicate whether the integer data is greater than or equal to a preset threshold.
To cooperate with the parameter discretization method of the neural network image processing algorithm, the embodiment of the present disclosure provides a novel data storage mode: on the basis of traditional binary data storage, an identification bit is introduced into the data storage to indicate whether the integer data is greater than or equal to the preset threshold, so that the discretized data can be read back correctly.
Optionally, the computer device stores data information of the integer data after converting the floating-point data into integer data with a preset bit width.
Optionally, the computer device stores therein data information corresponding to each of the plurality of the integer data.
The data information of the integer data comprises an identification bit and a data value corresponding to the integer data, wherein the identification bit is used for indicating whether the integer data is larger than or equal to a preset threshold value.
Optionally, when the value of the identification bit is a first value, it indicates that the integer data is greater than or equal to the preset threshold; when the value of the identification bit is a second value, it indicates that the integer data is smaller than the preset threshold. For example, the first value is 1 and the second value is 0; for another example, the first value is 0 and the second value is 1. The embodiments of the present disclosure do not limit the setting of the identification bit, provided the setting remains consistent between the storage and reading processes.
Optionally, the data information of the integer data includes an identification bit, a positive/negative flag bit, and a data value. The identification bit is the Flag bit, the positive/negative flag bit is the Sign bit, and the data bit is the Data bit. The identification bit may be stored at a designated location, for example before the positive/negative flag bit, or between the positive/negative flag bit and the data bit.
in an illustrative example, as shown in fig. 4, the data information of the integer data sequentially includes an identification bit, positive and negative flag bits, and a data bit, i.e., the identification bit precedes the positive and negative flag bits. The value of the identification bit is 0 and is used for indicating that the integer data is greater than or equal to a preset threshold value; the positive and negative flag bit has a value of 1 and is used for indicating that the integer data is a positive number; the data bits comprise a binary value of k-1 bits.
The storage position of the identification bit is not limited in the embodiment of the disclosure, and only needs to be kept consistent in the storage and reading processes.
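For concreteness, one possible bit layout matching the order described for fig. 4 (identification bit, then sign bit, then k-1 data bits) can be sketched as follows. Packing into a single Python integer, and the convention that a sign bit of 1 marks a positive number, are illustrative assumptions:

```python
def pack(flag: int, value: int, k: int) -> int:
    """Pack one discretized value into k+1 bits: [flag | sign | k-1 data bits].
    Assumes |value| < 2**(k-1); sign bit 1 marks a non-negative number."""
    sign = 1 if value >= 0 else 0
    mag = abs(value)
    assert mag < 2 ** (k - 1)
    return (flag << k) | (sign << (k - 1)) | mag

def unpack(word: int, k: int) -> tuple[int, int]:
    """Inverse of pack: recover (flag, signed value) from the stored word."""
    flag = (word >> k) & 1
    sign = (word >> (k - 1)) & 1
    mag = word & (2 ** (k - 1) - 1)
    return flag, mag if sign else -mag
```

The identification bit here sits in the highest position, consistent with the statement that it precedes the positive/negative flag bit; any other designated location works as long as storage and reading agree.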
Optionally, the data information of the integer data further includes a preset threshold corresponding to the integer data.
Step 303, in the data reading process, the computer device transforms the read data value according to the identification bit to obtain corresponding integer data.
Optionally, in the data reading process, the computer device transforms the read data value in the data transformation manner corresponding to the identification bit of that data value, so as to obtain the corresponding integer data. That is, the computer device transforms data values with different identification bits in different data transformation manners to obtain the respective corresponding integer data.
Optionally, in the data reading process, when the identification bit is used to indicate that the integer data is greater than or equal to the preset threshold, the read data value is transformed by adopting a first data transformation manner to obtain the integer data. And when the identification bit is used for indicating that the integer data is smaller than the preset threshold value, converting the read data value by adopting a second data conversion mode to obtain the integer data, wherein the second data conversion mode is different from the first data conversion mode.
In one possible implementation, the first data transformation manner is used to indicate that the read data value is multiplied by the preset threshold to obtain the integer data. The second data transformation manner is used to indicate that the read data value is divided by a first numerical value and the quotient is multiplied by the preset threshold to obtain the integer data, where the first numerical value is 2 to the power of (k-1) and k is the preset bit width.
Illustratively, the computer device reads the discretized integer data x_q by the following formula:
x_q = Sc · x_read, when the identification bit indicates that the data is greater than or equal to the preset threshold;
x_q = Sc · x_read / 2^(k-1), when the identification bit indicates that the data is smaller than the preset threshold;
where Sc is the preset threshold, x_read is the data value obtained by reading the positive/negative flag bit and the data bits as a conventional binary number, and k is the preset bit width after discretization.
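A direct transcription of this read-back rule into Python might look as follows; the convention that a truthy identification bit marks the at-or-above-threshold branch is illustrative:

```python
def read_value(flag: int, x_read: int, sc: float, k: int) -> float:
    """Recover the discretized value x_q from its stored representation.

    flag   -- identification bit (assumed: 1 means |x| was >= Sc)
    x_read -- signed integer read from the sign bit and data bits
    sc     -- preset threshold Sc
    k      -- preset bit width after discretization
    """
    if flag:
        return x_read * sc                 # first transformation mode
    return x_read / 2 ** (k - 1) * sc      # second transformation mode
```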
In summary, the image processing method provided by the embodiments of the present disclosure realizes computation acceleration and resource reduction in image processing by improving both the integerization of floating-point data and the data storage representation in neural network image processing. The method involves two processes: integer representation of floating-point data in the neural network image processing algorithm, and storage representation of the integer data. The integer transformation converts floating-point data into integer data of a determined bit width by introducing a specific integerization function; the storage representation reduces the data bit width and enlarges the data representation range by introducing an identification bit. For floating-point data with uneven distribution or higher precision requirements, the embodiments of the present disclosure can, at extremely low storage and computation cost, greatly improve the calculation speed and precision of large-scale floating-point operations in neural network image processing algorithms, and improve the distribution similarity between the integer data and the original floating-point data. The embodiments of the present disclosure can be widely applied to parameter integerization of artificial intelligence algorithms in neural network image processing, such as artificial neural networks, long short-term memory neural networks, reinforcement learning, graph neural networks, and tensor neural networks, thereby reducing the storage and computation resource consumption of the algorithms, reducing chip area and energy consumption, and increasing chip computation speed.
Based on the reduction of computing resources and the improvement of computing speed brought by the embodiment of the disclosure, related applications of the neural network image processing algorithm, such as image classification, identification, tracking and the like, can be transferred from a server GPU computing cluster with high computing cost and huge energy consumption to an embedded intelligent terminal, and the application range of related applications related to the neural network image processing algorithm is expanded.
The following are embodiments of the apparatus of the embodiments of the present disclosure, and for portions of the embodiments of the apparatus not described in detail, reference may be made to technical details disclosed in the above-mentioned method embodiments.
Referring to fig. 5, a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the disclosure is shown. The image processing apparatus may be implemented as all or a part of a computer device by software, hardware, or a combination of both. The device includes: a first obtaining module 510, a second obtaining module 520, and a calling module 530.
A first obtaining module 510, configured to obtain a target image to be processed;
a second obtaining module 520, configured to obtain a neural network model, where the neural network model is configured to discretize floating point type data in different data ranges in different discretization manners to obtain corresponding integer type data with a preset bit width;
and the calling module 530 is used for calling the neural network model to process the target image and outputting the processed target image to obtain image characteristic data.
In one possible implementation, the floating-point data includes at least one of a weight value, a neuron value, a batch normalization layer value, an activation function value, a feedback error value, a neuron gradient value, and a weight update value.
In another possible implementation manner, the apparatus further includes: a processing module; the processing module is configured to:
acquiring floating point type data in a neural network model in the process of calling the neural network model to process a target image;
when the absolute value of the floating-point data is larger than or equal to a preset threshold value, converting the floating-point data into integer data with a preset bit width by adopting a first discretization mode;
and when the absolute value of the floating-point data is smaller than the preset threshold, converting the floating-point data into integer data with preset bit width by adopting a second discretization mode, wherein the second discretization mode is different from the first discretization mode.
In another possible implementation manner, the preset threshold is a target maximum absolute value, or a value determined based on the target maximum absolute value, or a custom threshold, and the target maximum absolute value is an absolute value maximum of the estimated floating-point type data.
In another possible implementation manner, the apparatus further includes: a storage module; the storage module is used for storing an identification bit and a data value corresponding to the integer data, wherein the identification bit is used for indicating whether the integer data is larger than or equal to a preset threshold value.
In another possible implementation manner, the apparatus further includes: a reading module; the reading module is used for:
in the data reading process, when the identification bit is used for indicating that the integer data is greater than or equal to a preset threshold value, converting the read data value in a first data conversion mode to obtain integer data;
and when the identification bit is used for indicating that the integer data is smaller than the preset threshold value, converting the read data value by adopting a second data conversion mode to obtain the integer data, wherein the second data conversion mode is different from the first data conversion mode.
It should be noted that, when the apparatus provided in the foregoing embodiment implements its functions, only the division into the above functional modules is illustrated; in practical applications, the above functions may be allocated to different functional modules according to actual needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present disclosure further provides a computer device, where the computer device includes: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: the steps executed by the computer device in the method embodiments are realized.
Optionally, the computer device is a terminal or a server.
The disclosed embodiments also provide a non-transitory computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the methods in the various method embodiments described above.
Fig. 6 is a block diagram illustrating a terminal 600 according to an example embodiment. For example, the terminal 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so forth.
Referring to fig. 6, terminal 600 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communication component 616.
The processing component 602 generally controls overall operation of the terminal 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the terminal 600. Examples of such data include instructions for any application or method operating on terminal 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply component 606 provides power to the various components of terminal 600. The power components 606 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal 600.
The multimedia component 608 comprises a screen providing an output interface between the terminal 600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the terminal 600 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the terminal 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing various aspects of status assessment for the terminal 600. For example, sensor component 614 can detect an open/closed state of terminal 600, relative positioning of components, such as a display and keypad of terminal 600, change in position of terminal 600 or a component of terminal 600, presence or absence of user contact with terminal 600, orientation or acceleration/deceleration of terminal 600, and temperature change of terminal 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the terminal 600 and other devices in a wired or wireless manner. The terminal 600 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 604, is also provided that includes computer program instructions executable by the processor 620 of the terminal 600 to perform the above-described methods.
Fig. 7 is a block diagram illustrating a server 700 in accordance with an example embodiment. Referring to fig. 7, server 700 includes a processing component 722 that further includes one or more processors and memory resources, represented by memory 732, for storing instructions, such as applications, that are executable by processing component 722. The application programs stored in memory 732 may include one or more modules that each correspond to a set of instructions. Further, the processing component 722 is configured to execute instructions to perform the above-described methods.
The server 700 may also include a power component 726 configured to perform power management of the server 700, a wired or wireless network interface 750 configured to connect the server 700 to a network, and an input/output (I/O) interface 758. The server 700 may operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 732, is also provided that includes computer program instructions executable by the processing component 722 of the server 700 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA) can execute the computer-readable program instructions and implement aspects of the present disclosure by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

1. An image processing method, characterized in that the method comprises:
acquiring a target image to be processed;
acquiring a neural network model, wherein the neural network model is used for discretizing floating-point data in different data ranges in different discretization modes to obtain corresponding integer data with a preset bit width;
and calling the neural network model to process the target image and output image feature data.
2. The method of claim 1, wherein the floating-point data comprises at least one of weight values, neuron values, batch normalization layer values, activation function values, feedback error values, neuron gradient values, and weight update values.
3. The method of claim 1, further comprising:
acquiring floating-point data in the neural network model in the process of calling the neural network model to process the target image;
when the absolute value of the floating-point data is greater than or equal to a preset threshold value, converting the floating-point data into integer data with a preset bit width in a first discretization mode;
and when the absolute value of the floating-point data is smaller than the preset threshold value, converting the floating-point data into integer data with the preset bit width in a second discretization mode, wherein the second discretization mode is different from the first discretization mode.
4. The method according to claim 3, wherein the preset threshold value is a target maximum absolute value, a value determined based on the target maximum absolute value, or a custom threshold value, the target maximum absolute value being an estimated maximum absolute value of the floating-point data.
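The threshold-gated discretization of claims 3 and 4 can be sketched as follows. The claims deliberately leave the two discretization modes open, so the coarse/fine scaling below, and the names `discretize` and `target_max`, are illustrative assumptions rather than the patented method:

```python
def discretize(x: float, bit_width: int = 8, threshold: float = 1.0,
               target_max: float = 8.0) -> int:
    """Map a float to a signed integer of `bit_width` bits, choosing the
    discretization mode by comparing |x| against `threshold` (claim 3).

    The two modes are illustrative only -- the claims leave them open:
      - mode 1 (|x| >= threshold): coarse scale covering [-target_max, target_max]
      - mode 2 (|x| <  threshold): fine scale covering [-threshold, threshold]
    `target_max` plays the role of claim 4's target maximum absolute value.
    """
    qmax = (1 << (bit_width - 1)) - 1      # e.g. 127 for an 8-bit width
    if abs(x) >= threshold:
        scale = target_max / qmax          # first discretization mode
    else:
        scale = threshold / qmax           # second discretization mode
    q = round(x / scale)
    return max(-qmax - 1, min(qmax, q))    # clamp to the preset bit width
```

Values inside the threshold are quantized with a finer step than values beyond it, which is one plausible reason for using different modes per data range.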
5. The method of any of claims 1 to 4, further comprising:
and storing an identification bit and a data value corresponding to the integer data, wherein the identification bit indicates whether the integer data is greater than or equal to the preset threshold value.
6. The method of claim 5, further comprising:
in the data reading process, when the identification bit indicates that the integer data is greater than or equal to the preset threshold value, converting the read data value in a first data conversion mode to obtain the integer data;
and when the identification bit indicates that the integer data is smaller than the preset threshold value, converting the read data value in a second data conversion mode to obtain the integer data, wherein the second data conversion mode is different from the first data conversion mode.
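The storage and read-back scheme of claims 5 and 6 can be sketched as a simple bit layout. The layout (flag in the most significant bit) and the two conversion modes (a coarse shift versus a direct read) are assumptions for illustration; the claims do not fix them:

```python
def pack(value: int, is_large: bool, bit_width: int = 8) -> int:
    """Store an identification bit alongside the data value (claim 5).
    Illustrative layout: MSB = identification bit, low bits = data value."""
    mask = (1 << (bit_width - 1)) - 1
    return (int(is_large) << (bit_width - 1)) | (value & mask)

def unpack(word: int, bit_width: int = 8) -> int:
    """Read back the integer data, selecting the conversion mode from the
    identification bit (claim 6). The two modes here -- a coarse left shift
    versus a direct read -- are assumed, not specified by the claims."""
    mask = (1 << (bit_width - 1)) - 1
    flag = (word >> (bit_width - 1)) & 1
    value = word & mask
    if flag:                # first data conversion mode (above-threshold data)
        return value << 3   # e.g. restore a coarser scale
    return value            # second data conversion mode (below-threshold data)
```

For example, `unpack(pack(5, True))` applies the coarse conversion while `unpack(pack(5, False))` returns the value unchanged, so one stored word width serves both data ranges.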
7. An image processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring a target image to be processed;
the second acquisition module is used for acquiring a neural network model, wherein the neural network model is used for discretizing floating-point data in different data ranges in different discretization modes to obtain corresponding integer data with a preset bit width;
and the calling module is used for calling the neural network model to process the target image and output image feature data.
8. A computer device, characterized in that the computer device comprises: a processor; a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a target image to be processed;
acquiring a neural network model, wherein the neural network model is used for discretizing floating-point data in different data ranges in different discretization modes to obtain corresponding integer data with a preset bit width;
and calling the neural network model to process the target image and output image feature data.
9. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 1 to 6.
CN202011172285.9A 2020-10-28 2020-10-28 Image processing method, device, computer equipment and storage medium Active CN112269595B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011172285.9A CN112269595B (en) 2020-10-28 2020-10-28 Image processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112269595A true CN112269595A (en) 2021-01-26
CN112269595B CN112269595B (en) 2024-09-24

Family

ID=74344412

Country Status (1)

Country Link
CN (1) CN112269595B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990440A (en) * 2021-04-02 2021-06-18 安谋科技(中国)有限公司 Data quantization method for neural network model, readable medium, and electronic device
CN113884504A (en) * 2021-08-24 2022-01-04 湖南云眼智能装备有限公司 Capacitor appearance detection control method and device
WO2023151285A1 (en) * 2022-02-08 2023-08-17 广州小鹏自动驾驶科技有限公司 Image recognition method and apparatus, electronic device, and storage medium
CN117056296A (en) * 2023-02-15 2023-11-14 中科南京智能技术研究院 Face data acquisition and storage method and related equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171326A (en) * 2017-12-22 2018-06-15 清华大学 Data processing method, device, chip, equipment and the storage medium of neural network
US20190122100A1 (en) * 2017-10-19 2019-04-25 Samsung Electronics Co., Ltd. Method and apparatus with neural network parameter quantization
CN110559012A (en) * 2019-10-21 2019-12-13 江苏鹿得医疗电子股份有限公司 Electronic stethoscope, control method thereof and control method of medical equipment
CN110929838A (en) * 2018-09-19 2020-03-27 杭州海康威视数字技术股份有限公司 Bit width localization method, device, terminal and storage medium in neural network
CN111176853A (en) * 2020-02-19 2020-05-19 珠海市杰理科技股份有限公司 Data quantization method and device, computer equipment and storage medium
CN111401550A (en) * 2020-03-10 2020-07-10 北京迈格威科技有限公司 Neural network model quantification method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN112269595B (en) Image processing method, device, computer equipment and storage medium
CN110390394B (en) Batch normalization data processing method and device, electronic equipment and storage medium
CN110781957B (en) Image processing method and device, electronic equipment and storage medium
US11556761B2 (en) Method and device for compressing a neural network model for machine translation and storage medium
US20210312289A1 (en) Data processing method and apparatus, and storage medium
CN111259967B (en) Image classification and neural network training method, device, equipment and storage medium
CN109165738B (en) Neural network model optimization method and device, electronic device and storage medium
CN112668707B (en) Operation method, device and related product
CN110188865B (en) Information processing method and device, electronic equipment and storage medium
CN113361540A (en) Image processing method and device, electronic equipment and storage medium
CN113095486A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112001364A (en) Image recognition method and device, electronic equipment and storage medium
CN111985635A (en) Method, device and medium for accelerating neural network inference processing
CN113781518B (en) Neural network structure searching method and device, electronic equipment and storage medium
CN111582432B (en) Network parameter processing method and device
CN114861828A (en) Neural network training method and device, and audio processing method and device
CN111694571B (en) Compiling method and device
CN115098262B (en) Multi-neural network task processing method and device
CN115100492B (en) Yolov3 network training and PCB surface defect detection method and device
CN112749709A (en) Image processing method and device, electronic equipment and storage medium
US11966451B2 (en) Method for optimizing deep learning operator, device and storage medium
CN112734015B (en) Network generation method and device, electronic equipment and storage medium
CN114648649A (en) Face matching method and device, electronic equipment and storage medium
CN110209851B (en) Model training method and device, electronic equipment and storage medium
CN113159275A (en) Network training method, image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant