CN115797228A - Image processing device, method, chip, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115797228A
CN115797228A (application CN202310044515.0A)
Authority
CN
China
Prior art keywords
image data
data
image
memory computing
memory
Legal status
Granted
Application number
CN202310044515.0A
Other languages
Chinese (zh)
Other versions
CN115797228B (en)
Inventor
姜宇奇 (Jiang Yuqi)
Current Assignee
Shenzhen Jiutian Ruixin Technology Co ltd
Original Assignee
Shenzhen Jiutian Ruixin Technology Co ltd
Application filed by Shenzhen Jiutian Ruixin Technology Co ltd
Priority to CN202310044515.0A
Publication of CN115797228A
Application granted
Publication of CN115797228B
Status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image processing apparatus, an image processing method, a chip, an electronic device and a storage medium. The image processing apparatus comprises: a pre-processing unit, configured to pre-process original image data so as to convert it into image data in a preset format and send the converted data to an in-memory computing module; the in-memory computing module, which comprises at least one in-memory computing unit, each in-memory computing unit comprising a neural network, and which is configured to receive the image data in the preset format sent by the pre-processing unit, perform accelerated computation on the image data distributed to the neural network, and send the accelerated image data to a post-processing unit; and the post-processing unit, configured to perform image data processing on the received accelerated image data to obtain target image data. The apparatus and method can solve the problems of high power consumption and low image restoration efficiency that arise in the prior art when image restoration is performed by means of a neural network.

Description

Image processing device, method, chip, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer and image processing technologies, and in particular, to an image processing apparatus, an image processing method, a chip, an electronic device, and a storage medium.
Background
An image signal processor (ISP) in the related art provides a series of image processing algorithms for processing the RAW image data output by an image sensor, including black level compensation, lens shading correction, demosaicing, noise reduction and other algorithms, so as to convert the sensor output from the RAW domain into RGB-domain or YUV-domain image data and supply the processed data to the back end for further processing.
However, during research and practice on the prior art, the inventors of the present application found that a conventional image signal processor based on traditional algorithms can restore images adequately under ordinary imaging conditions, but in specific imaging scenes, for example scenes with no light, dim light or high exposure, the image restored by traditional algorithms contains more noise and has poor restoration quality, which cannot meet the requirements of high-quality applications. Moreover, image processing in the prior art suffers from high power consumption and low image restoration efficiency.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an image processing apparatus, an image processing method, a chip, an electronic device, and a storage medium, which can effectively solve the above technical problems.
In order to solve the above technical problem, the present application provides an image processing apparatus, including a pre-processing unit, a memory computing module and a post-processing unit, where the memory computing module is connected to the pre-processing unit and the post-processing unit respectively;
the preprocessing unit is used for preprocessing the acquired original image data to convert the original image data into image data in a preset format and sending the image data in the preset format to the memory computing module;
the memory computing module comprises at least one memory computing unit, the memory computing unit comprises a neural network, and the memory computing unit is used for receiving the image data in the preset format sent by the pre-processing unit, carrying out accelerated computing on the image data in the preset format distributed to the neural network, and sending the image data after accelerated computing to the post-processing unit;
and the post-processing unit is used for receiving the image data after the accelerated computation sent by the memory computing module and carrying out image data processing on the image data after the accelerated computation to obtain target image data.
Optionally, the memory computing module is further configured to allocate the received image data in the preset format, and allocate the image data in the preset format to a plurality of memory computing units for accelerated computing.
Optionally, the allocating the received image data in the preset format, and allocating the image data in the preset format to a plurality of memory computing units for accelerated computing includes:
distributing the image data in the preset format to corresponding memory computing units according to a preset execution sequence;
and performing convolution, pooling, activation and/or scaling on the image data in the preset format distributed to the neural network to obtain image data after accelerated calculation.
Optionally, the allocating the image data in the preset format to the corresponding memory computing unit according to a preset execution order includes:
acquiring a preset execution sequence of the neural network corresponding to each memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operator of the neural network, arrangement of weights corresponding to each layer of operator in a memory, and data path configuration in an image processing device;
and distributing the image data in the preset format to the neural network corresponding to each memory computing unit based on the preset execution sequence.
Optionally, the convolving, pooling, activating, and/or scaling the image data of the preset format allocated to the neural network further includes:
and respectively sending the image data obtained after convolution, the pooled image data, the activated image data and the zoomed image data to a post-processing unit and/or a memory.
Optionally, the image processing apparatus further includes a controller and a memory, the controller is respectively connected to the pre-processing unit, the in-memory calculation module, the post-processing unit and the memory, and the memory is respectively connected to the pre-processing unit, the in-memory calculation module, the post-processing unit and the controller;
the controller is used for controlling the data flow direction of the image processing device through a control bus and carrying out parameter configuration on the pre-processing unit, the memory computing module, the post-processing unit and the memory;
the memory is configured to store the acquired original image data and configuration data of the image processing apparatus, where the configuration data includes configuration parameters of the controller, configuration parameters of the in-memory computing module, first intermediate data of the in-memory computing module, and second intermediate data of the post-processing unit.
Optionally, the preprocessing unit is further configured to obtain original statistical data corresponding to the original image data; the controller is further configured to dynamically adjust configuration parameters of the memory computing module according to the obtained original statistical data.
Optionally, the dynamically adjusting the configuration parameters of the memory computing module according to the obtained original statistical data includes:
determining configuration parameters of a previous frame of a neural network in a memory computing module based on original statistical data of previous frame original image data, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
taking the configuration parameter of the previous frame as an initial configuration parameter of the neural network at the current frame;
comparing the original statistical data of the previous frame of original image data with the original statistical data of the current frame of original image data to obtain a comparison result;
and adjusting the initial configuration parameters of the neural network in the current frame based on the comparison result.
Optionally, the image preprocessing the acquired original image data to convert the original image data into image data in a preset format includes:
carrying out brightness statistics and chromaticity statistics on the global or local interested region of the original image data to obtain corresponding original statistical data; and/or
And carrying out black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
Optionally, the performing image data processing on the image data after the accelerated computation to obtain target image data includes:
and carrying out inverse normalization, fixed-point processing and data truncation processing on the image data subjected to accelerated calculation through the post-processing unit to obtain target image data.
Correspondingly, the application also provides an image processing method, which comprises the following steps:
the method comprises the steps of carrying out image preprocessing on acquired original image data so as to convert the original image data into image data in a preset format, and sending the image data in the preset format to an in-memory computing module;
receiving the image data in the preset format, performing accelerated calculation on the image data in the preset format distributed to the neural network, and sending the image data subjected to accelerated calculation to the post-processing unit;
and receiving the image data after the accelerated calculation, and carrying out image data processing on the image data after the accelerated calculation to obtain target image data.
Optionally, the image processing method further includes:
and distributing the received image data in the preset format, and distributing the image data in the preset format to a plurality of memory computing units for accelerated computing.
Optionally, the allocating the received image data in the preset format, and allocating the image data in the preset format to a plurality of memory computing units for accelerated computing, includes:
distributing the image data in the preset format to corresponding memory computing units according to a preset execution sequence;
and performing convolution, pooling, activation and/or scaling on the image data in the preset format distributed to the neural network to obtain image data after accelerated calculation.
Optionally, the allocating the image data in the preset format to the corresponding memory computing unit according to a preset execution order includes:
acquiring a preset execution sequence of the neural network corresponding to each in-memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operator of the neural network, arrangement of weights corresponding to each layer of operator in a memory, and data path configuration in an image processing device;
and distributing the image data in the preset format to the neural network corresponding to each memory computing unit based on the preset execution sequence.
Optionally, after the convolving, pooling, activating and/or scaling the image data in the preset format allocated to the neural network to obtain the image data after accelerated computation, the method further includes:
and respectively sending the image data obtained after convolution, the pooled image data, the activated image data and the zoomed image data to a target position for storage.
Optionally, the image processing method further comprises:
controlling the data flow direction of the image processing device through the controller according to the control bus, and configuring parameters of the pre-processing unit, the memory computing module, the post-processing unit and the memory;
storing the acquired original image data and configuration data of the image processing device, wherein the configuration data comprises configuration parameters of a controller, configuration parameters of the in-memory computing module, first intermediate data of the in-memory computing module and second intermediate data of the post-processing unit.
Optionally, the image processing method further includes:
acquiring original statistical data corresponding to the original image data through the preprocessing unit;
and dynamically adjusting the configuration parameters of the memory computing module according to the acquired original statistical data through the controller.
Optionally, the dynamically adjusting configuration parameters according to the obtained original statistical data includes:
determining configuration parameters of a previous frame of a neural network in a memory computing module based on original statistical data of previous frame original image data, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
taking the configuration parameter of the previous frame as an initial configuration parameter of the neural network at the current frame;
comparing the original statistical data of the previous frame of original image data with the original statistical data of the current frame of original image data to obtain a comparison result;
and adjusting the initial configuration parameters of the neural network in the current frame based on the comparison result.
Optionally, the image preprocessing the acquired original image data to convert the original image data into image data in a preset format includes:
carrying out brightness statistics and chromaticity statistics on the global or local interested region of the original image data to obtain corresponding original statistical data; and/or
And carrying out black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
Optionally, the performing image data processing on the image data after the accelerated computation to obtain target image data includes:
and carrying out inverse normalization, fixed-point processing and data truncation processing on the image data after the accelerated calculation to obtain target image data.
The application also provides a chip comprising the image processing device.
The application also provides an electronic device comprising the image processing device.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method described above.
The embodiment of the invention has the following beneficial effects:
As described above, the present application provides an image processing apparatus, an image processing method, a chip, an electronic device and a storage medium. The image processing apparatus includes a pre-processing unit, an in-memory computing module and a post-processing unit, the in-memory computing module being connected to the pre-processing unit and the post-processing unit respectively. The pre-processing unit is used for performing image pre-processing on the acquired original image data so as to convert it into image data in a preset format and to send the image data in the preset format to the in-memory computing module. The in-memory computing module includes at least one in-memory computing unit, each in-memory computing unit includes a neural network, and the in-memory computing unit is used for receiving the image data in the preset format sent by the pre-processing unit, performing accelerated computation on the image data distributed to the neural network, and sending the accelerated image data to the post-processing unit. The post-processing unit is used for receiving the accelerated image data sent by the in-memory computing module and performing image data processing on it to obtain the target image data. According to the embodiments of the application, the original image data is first pre-processed and converted into image data that conforms to the format required by the in-memory computing module, which provides better-quality image data for the subsequent stages and improves image restoration quality and efficiency; the image data then undergoes accelerated neural-network computation by the in-memory computing units and finally image data processing, so that the neural network improves the image restoration quality while the problems of high power consumption and low image restoration efficiency of prior-art neural-network image restoration are solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a first implementation of an image processing method according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of a first implementation of step S21 according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of step S211 according to an embodiment of the present application;
Fig. 5 is a schematic flowchart of a second implementation of step S21 according to an embodiment of the present application;
Fig. 6 is a schematic flowchart of a second implementation of an image processing method according to an embodiment of the present application;
Fig. 7 is a schematic flowchart of step S5 according to an embodiment of the present application;
Fig. 8 is a schematic flowchart of step S1 according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. Specific embodiments of the present application have been shown by way of example in the drawings and will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, similarly named components, features, or elements in different embodiments of the application may have the same meaning or different meanings; the specific meaning should be determined by the interpretation in the specific embodiment or further in combination with the context of that embodiment.
It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or," "and/or," and "including at least one of the following," as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of A, B and C," "A, B or C," and "A, B and/or C" each mean any of the following: A; B; C; A and B; A and C; B and C; A and B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be noted that step numbers such as S1 and S2 are used herein for the purpose of more clearly and briefly describing the corresponding contents, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S2 first and then S1 in the specific implementation, but these should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to indicate elements are adopted only to facilitate the description of the present application and have no particular meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
First, application scenarios to which the present application can be applied are introduced, such as driving-recording systems or driver monitoring systems in the field of automatic driving, and eye trackers and image acquisition devices in the AR or VR field.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus may specifically include a pre-processing unit 10, a memory computing module 20 and a post-processing unit 30, wherein the memory computing module 20 is connected to the pre-processing unit 10 and the post-processing unit 30 respectively;
the preprocessing unit 10 is configured to perform image preprocessing on the acquired original image data to convert the original image data into image data in a preset format, and send the image data in the preset format to the memory computing module.
Specifically, the pre-processing unit 10 acquires original image data (for example, an image in RAW format) input by an image sensor or another image processing chip, and performs image pre-processing on the original image data so as to convert it into image data in the format required by in-memory computing (Computing-In-Memory, CIM). After the original image data is converted into image data in the preset format, it is sent to the in-memory computing module 20 for processing.
The memory computing module 20 includes at least one memory computing unit, where the memory computing unit includes a neural network, and the memory computing unit is configured to receive the image data in the preset format sent by the pre-processing unit, perform accelerated computation on the image data in the preset format allocated to the neural network, and send the image data after accelerated computation to the post-processing unit.
Specifically, the in-memory computing module 20 in this embodiment includes at least one in-memory computing unit, and each in-memory computing unit corresponds to a neural network. After receiving the image data, the in-memory computing module 20 distributes it to the corresponding in-memory computing units according to a preset execution order for accelerated computation; each in-memory computing unit receives all or part of the pre-processed image data sent by the pre-processing unit 10. Because multiple in-memory computing units in the module perform the accelerated neural-network computation, the efficiency and quality of image processing are improved and the power consumption of image processing is significantly reduced.
It should be noted that the neural network is essentially a model, and an in-memory computing unit may hold a complete neural network model or only part of one.
And the post-processing unit 30 is configured to receive the image data after the accelerated computation sent by the memory computing module, and perform image data processing on the image data after the accelerated computation to obtain target image data.
Specifically, the post-processing unit 30 receives the accelerated image data sent by the in-memory computing module 20 and performs a series of image data processing on it, including but not limited to inverse normalization, fixed-point conversion and data truncation, to finally obtain the target image data and complete the image restoration.
It can be seen that, in the image processing apparatus provided in this embodiment, the pre-processing unit 10 first performs image pre-processing on the original image data and converts it into image data conforming to the format required by the in-memory computing module, which provides better-quality image data for the subsequent stages and helps to improve image restoration quality and efficiency. The image data is then accelerated by the neural networks of the in-memory computing units in the in-memory computing module 20 and finally processed by the post-processing unit 30. In this way the neural network improves the image restoration quality, while the problems of high power consumption and low image restoration efficiency encountered when restoring images with a neural network in the prior art are alleviated.
Optionally, in some embodiments, the in-memory computing module 20 is further configured to distribute the received image data in the preset format, and distribute the image data in the preset format to a plurality of in-memory computing units for accelerated computing.
Optionally, in some embodiments, the allocating the received image data in the preset format and allocating the image data in the preset format to a plurality of memory computing units for accelerated computing may specifically include:
distributing the image data in a preset format to corresponding memory computing units according to a preset execution sequence;
and performing convolution, pooling, activation and/or scaling on the image data in the preset format distributed into the neural network to obtain the image data after accelerated calculation.
Specifically, after the in-memory computing module 20 acquires the image data meeting the format requirement, it distributes the image data in the preset format to the corresponding in-memory computing units according to the preset execution order. After each in-memory computing unit receives its share of the image data, the data is subjected to computations such as convolution, pooling, activation and/or scaling in the neural network corresponding to that unit, yielding the accelerated image data. Distributing the neural-network computation over multiple in-memory computing units further increases the speed of image data processing.
Optionally, in some embodiments, the allocating the image data in the preset format to the corresponding memory computing unit according to the preset execution order may specifically include:
acquiring a preset execution sequence of the neural network corresponding to each in-memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operator of the neural network, arrangement of weights corresponding to each layer of operator in the in-memory and data path configuration in the image processing device;
and distributing the image data in the preset format to the neural networks corresponding to the memory computing units based on the preset execution sequence.
Specifically, the in-memory computing module 20 first obtains the preset execution order of the neural network in each in-memory computing unit. The preset execution order includes the execution instruction corresponding to each layer operator of each neural network, the arrangement of the weights corresponding to each layer operator in the memory, and the data path configuration of each component in the image processing apparatus; the memory here refers to the part of the memory 50 used to temporarily store data and to exchange data with external storage. The in-memory computing module 20 then distributes the acquired image data in the preset format to each in-memory computing unit according to the preset execution order, and each in-memory computing unit distributes the data to the layer operators of its neural network, so that the image data requiring accelerated computation is quickly distributed to the layers of the neural networks for accelerated computation.
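The following sketch illustrates, purely by way of example, how such a preset execution order might be represented and used to dispatch data; the LayerStep/CimUnit names, fields and destination labels are assumptions introduced for this illustration and are not part of the disclosed apparatus.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class LayerStep:
    """One assumed entry of a preset execution order for an in-memory computing unit."""
    op: str             # execution instruction for this layer operator, e.g. "conv" or "pool"
    weight_offset: int  # where this operator's weights are arranged in the unit's memory
    dest: str           # data path: "next_layer", "post_processing" or "memory"

@dataclass
class CimUnit:
    steps: List[LayerStep]  # preset execution order of this unit's neural network

def dispatch(image: np.ndarray, units: List[CimUnit]) -> None:
    """Trace how preformatted image data would be routed layer by layer; the real
    operators would run inside the CIM arrays rather than on the host."""
    for idx, unit in enumerate(units):
        print(f"unit {idx}: receives tile of shape {image.shape}")
        for step in unit.steps:
            print(f"unit {idx}: run {step.op} (weights @ {step.weight_offset:#x}) -> {step.dest}")

# Example: one unit that convolves, then pools and forwards the result to post-processing.
unit = CimUnit(steps=[
    LayerStep(op="conv", weight_offset=0x0000, dest="next_layer"),
    LayerStep(op="pool", weight_offset=0x2000, dest="post_processing"),
])
dispatch(np.zeros((64, 64), dtype=np.uint8), [unit])
```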
Optionally, the convolving, pooling, activating and/or scaling the image data in the preset format allocated to the neural network may specifically include:
the convolved image data, the pooled image data, the activated image data, and the scaled image data are sent to the post-processing unit 30 and/or the memory 50, respectively.
Specifically, after performing accelerated computations such as convolution, pooling, activation and/or scaling on the image data assigned to its neural network, the in-memory computing unit sends the image data obtained after each accelerated computation to the post-processing unit 30 and/or the memory 50. When the data is sent to the memory 50, the results of each accelerated computation are stored in the memory according to the preset execution order, so that the data read back later from the memory 50 is complete and ordered.
In a specific embodiment, the in-memory computing units 200 of the in-memory computing module 20 perform accelerated processing, based on a neural network, on the image data that has undergone image pre-processing. After the in-memory computing module 20 receives the pre-processed image data sent by the pre-processing unit 10, it distributes the data to the in-memory computing units 200 for accelerated computation through the neural network, including but not limited to convolution, pooling, activation and scaling, so as to convert the RAW image into an int8/int16/int32 RGB image. As shown in Fig. 1, the in-memory computing module 20 further includes vector processing units 201; after the module distributes the image data to the in-memory computing units 200, the vector processing units 201 perform convolution, pooling, activation and/or scaling on the image data to obtain the accelerated image data. The image data processed by different in-memory computing units may be of the same type or of different types. For example, a first in-memory computing unit may perform convolution acceleration on the image data, while a second in-memory computing unit may pool the data convolved by the first unit, may read image data from the memory for pooling, or may itself perform convolution; that is, the in-memory computing units may process image data independently or may be connected by a data flow relationship. The image data processed by the in-memory computing units may be the data sent by the pre-processing unit 10 or data read by the in-memory computing module 20 from the memory 50. After the accelerated processing, the in-memory computing module 20 may send the accelerated image data to the post-processing unit 30 or store it in the memory 50 as intermediate data. It should be noted that the in-memory computing module 20 in this embodiment may specifically be an in-memory computing neural network accelerator.
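As a self-contained numeric illustration of the accelerated operators named above (convolution, pooling, activation and scaling), the following sketch applies them to a small tile on the host; the 3 x 3 kernel, 2 x 2 pooling window and uint8 output are arbitrary example choices, not values fixed by the disclosure.

```python
import numpy as np

def conv2d_3x3(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Plain 3 x 3 valid convolution standing in for what a CIM array would compute."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

def max_pool_2x2(x: np.ndarray) -> np.ndarray:
    """2 x 2 max pooling."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def relu(x: np.ndarray) -> np.ndarray:
    """Activation."""
    return np.maximum(x, 0.0)

# A preformatted (normalized) tile such as the pre-processing unit would deliver.
tile = np.random.default_rng(0).random((8, 8), dtype=np.float32)
kernel = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)  # arbitrary smoothing kernel

feat = relu(max_pool_2x2(conv2d_3x3(tile, kernel)))      # convolution -> pooling -> activation
scaled = np.clip(feat * 255.0, 0, 255).astype(np.uint8)  # scaling back to an 8-bit range
print(scaled)
```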
Optionally, as shown in fig. 1, the image processing apparatus may further include a controller 40 and a memory 50, where the controller 40 is connected to the pre-processing unit 10, the memory computing module 20, the post-processing unit 30, and the memory 50, respectively, and the memory 50 is connected to the pre-processing unit 10, the memory computing module 20, the post-processing unit 30, and the controller 40, respectively;
the controller 40 is used for controlling the data flow of the image processing device through the control bus and carrying out parameter configuration on the pre-processing unit 10, the memory computing module 20, the post-processing unit 30 and the memory 50;
specifically, for the controller 40, which is respectively connected to the pre-processing unit 10, the in-memory computing module 20, the post-processing unit 30 and the memory 50 of the image processing apparatus, the controller 40 does not participate in any data stream processing, and only controls the data flow direction of the whole image processing apparatus through the control bus, for example, how the image data flows from the pre-processing unit 10 to the in-memory computing module 20 and then to the post-processing unit 30; how to allocate the image output by the preprocessing unit 10 to each memory computing unit 200 in the memory computing module 20, and so on; the controller 40 is also used to configure the configuration parameters of the components of the image processing apparatus, including but not limited to the physical configuration parameters of the pre-processing unit 10, the in-memory computing module 20, the post-processing unit 30, and the memory 50, and the configuration parameters of the in-memory computing unit and the neural network in the in-memory computing module 20.
And a memory 50 for storing the acquired raw image data and configuration data of the image processing apparatus, the configuration data including configuration parameters of the controller 40, configuration parameters of the in-memory calculation module 20, first intermediate data of the in-memory calculation module 20, and second intermediate data of the post-processing unit 30.
Specifically, the memory 50 is connected to the pre-processing unit 10, the in-memory computing module 20, the post-processing unit 30 and the controller 40. It stores the original image data (such as RAW-format image data input by an image sensor or another image processing chip), the configuration parameters of each component of the image processing apparatus, the configuration information of the controller 40, the weight information of the in-memory computing module 20, the first intermediate data of the in-memory computing module 20 and the second intermediate data of the post-processing unit 30. The first intermediate data is the cache data produced when the in-memory computing units perform accelerated neural-network computation on the image data; the second intermediate data is the cache data produced when the post-processing unit 30 processes the accelerated image data. The weight information of the in-memory computing module 20 is used to determine the configuration information for image processing in different luminance scenes.
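A toy lookup can illustrate the idea that the stored weight information selects the configuration used in different luminance scenes; the scene names, file names and thresholds below are assumptions for illustration only, not values from the disclosure.

```python
# Assumed mapping from luminance scene to a weight set and gain stored in the memory.
SCENE_WEIGHTS = {
    "dark":   {"weight_set": "w_dark.bin",   "gain": 4.0},
    "normal": {"weight_set": "w_normal.bin", "gain": 1.0},
    "bright": {"weight_set": "w_bright.bin", "gain": 0.5},
}

def select_scene(mean_luma: float) -> str:
    """Pick a luminance scene from a frame's mean luminance (thresholds are illustrative)."""
    if mean_luma < 0.15:
        return "dark"
    if mean_luma > 0.75:
        return "bright"
    return "normal"

print(SCENE_WEIGHTS[select_scene(0.1)])  # a dim frame selects the "dark" configuration
```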
Preferably, the memory 50 in this embodiment may be selected as an on-chip memory, which has advantages of fast reading speed and low power consumption compared to an off-chip memory.
Optionally, in some embodiments, the preprocessing unit 10 is further configured to obtain original statistical data corresponding to the original image data; and the controller 40 is further configured to dynamically adjust configuration parameters of the memory computing module according to the acquired original statistical data.
Specifically, in this embodiment, the pre-processing unit 10 is further configured to obtain the original statistical data of the original image data, and the controller 40 is further configured to obtain the original statistical data sent by the pre-processing unit 10 and to dynamically adjust the configuration parameters of the in-memory computing module 20, including those of its neural network, according to these statistics, so as to adapt to image restoration in different imaging scenes.
Optionally, in some embodiments, the dynamically adjusting the configuration parameter of the memory computing module according to the obtained original statistical data specifically may include:
determining configuration parameters of a previous frame of a neural network in a memory computing module based on original statistical data of previous frame original image data, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
taking the configuration parameter of the previous frame as the initial configuration parameter of the neural network when the current frame is present;
comparing the original statistical data of the previous frame of original image data with the original statistical data of the current frame of original image data to obtain a comparison result;
and adjusting the initial configuration parameters of the neural network in the current frame based on the comparison result.
Specifically, the controller 40 dynamically adjusts the configuration parameters of the in-memory computing module 20 according to the original statistical data as follows. First, the original statistical data of the previous frame of original image data (such as a RAW-format image) is obtained, including the original luminance statistics and the original chrominance statistics, and the configuration parameters of the neural network of the in-memory computing module 20 for that frame are determined from these statistics; the configuration parameters include a weight value, a bias value, a quantization value and a gain value, where the gain value depends on the image information of each frame. The configuration parameters of that frame are retained and used as the initial configuration parameters of the neural network when the current frame of image data is processed. The original statistical data of the previous frame is then compared with the original statistical data of the current frame to obtain a comparison result; if the comparison result exceeds a preset threshold, the initial configuration parameters of the neural network for the current frame are dynamically adjusted. In this way the adaptability of the neural network is exploited: under different scene conditions the configuration parameters are dynamically adjusted through statistics such as exposure and color, image noise reduction and enhancement are performed simultaneously, the problems of excessive image noise and poor restoration quality are avoided, and the requirements of high-quality image applications are met.
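A minimal sketch of this frame-to-frame adjustment is given below, assuming the comparison is a simple absolute difference of mean luminance/chrominance statistics against a threshold; the update applied to the gain is a placeholder rather than the disclosed algorithm.

```python
from dataclasses import dataclass

@dataclass
class NetConfig:
    weight_scale: float   # weight value
    bias: float           # bias value
    quant_step: float     # quantization value
    gain: float           # gain value (depends on each frame's image content)

def adjust_config(prev_cfg: NetConfig,
                  prev_stats: dict,
                  cur_stats: dict,
                  threshold: float = 0.1) -> NetConfig:
    """Start from the previous frame's configuration and adjust it only when the
    statistics of the current frame differ from those of the previous frame by
    more than the threshold (assumed comparison rule)."""
    cfg = NetConfig(**vars(prev_cfg))            # previous-frame config as the initial config
    luma_diff = abs(cur_stats["mean_luma"] - prev_stats["mean_luma"])
    chroma_diff = abs(cur_stats["mean_chroma"] - prev_stats["mean_chroma"])
    if max(luma_diff, chroma_diff) > threshold:  # comparison result exceeds preset threshold
        # Placeholder update: scale the gain with the change in mean luminance.
        cfg.gain *= cur_stats["mean_luma"] / max(prev_stats["mean_luma"], 1e-6)
    return cfg

prev = NetConfig(weight_scale=1.0, bias=0.0, quant_step=1 / 128, gain=2.0)
new = adjust_config(prev, {"mean_luma": 0.40, "mean_chroma": 0.20},
                          {"mean_luma": 0.55, "mean_chroma": 0.21})
print(new)
```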
Optionally, in some embodiments, the performing image preprocessing on the acquired original image data to convert the original image data into image data in a preset format may specifically include:
carrying out brightness statistics and chromaticity statistics on the global or local interested area of the original image data to obtain corresponding original statistical data; and/or
And carrying out black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
Specifically, the pre-processing unit 10 first obtains the original image data input by the image sensor or another image processing chip and performs luminance statistics and chrominance statistics on it: for example, the RAW image is counted at 10/12/14/16 bit, and statistics such as the global mean and histogram of the RAW image, or statistics over local 9 x 9 regions and ROI regions (regions of interest), are computed to obtain the original statistical data of the original image data, including but not limited to original luminance statistics and original chrominance statistics. Black level compensation is then applied to the original image data (the RAW image) to remove the black level. Next, a nonlinear transformation is applied, mainly to decompress the 10-14 bit compressed HDR data, processing the RAW image point by point, for example decompressing 10-bit data into 16-bit data. The original image data is then normalized, processing the RAW image globally, for example converting 10/12/14/16 bit image data into int8 or uint8 image data for the subsequent in-memory computing module 20. After this image pre-processing, the image data in the format required by the in-memory computing module 20 is obtained and sent by the pre-processing unit 10 to the in-memory computing module 20, so that the module does not need to convert the data format again when accelerating the computation, which improves the efficiency of the accelerated computation.
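The pre-processing chain described above (statistics, black level compensation, nonlinear decompression and normalization) can be sketched as follows; the 10-bit input, the black level of 64 and the square decompression curve are assumed example values, not parameters specified by the disclosure.

```python
import numpy as np

BLACK_LEVEL = 64          # assumed black level for a 10-bit RAW sensor
RAW_BITS = 10

def preprocess(raw: np.ndarray):
    """Sketch of the pre-processing unit: statistics, black level compensation,
    nonlinear transform (decompression toward 16 bit) and normalization."""
    # Brightness/chrominance statistics on the full frame (ROI statistics would be similar).
    stats = {"mean": float(raw.mean()), "hist": np.histogram(raw, bins=64)[0]}

    # Black level compensation.
    x = np.clip(raw.astype(np.int32) - BLACK_LEVEL, 0, None)

    # Nonlinear transform: expand the 10-bit compressed data into a 16-bit range
    # (a square curve stands in for whatever decompression curve the HDR pipeline uses).
    x = ((x / (2**RAW_BITS - 1.0)) ** 2 * (2**16 - 1)).astype(np.uint16)

    # Normalization into the 8-bit format assumed for the in-memory computing module.
    cim_input = (x >> 8).astype(np.uint8)
    return cim_input, stats

raw_frame = np.random.default_rng(1).integers(0, 2**RAW_BITS, size=(16, 16), dtype=np.uint16)
cim_input, stats = preprocess(raw_frame)
print(cim_input.dtype, stats["mean"])
```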
Optionally, in some embodiments, the image data processing is performed on the image data after the accelerated computation to obtain the target image data, which may specifically include:
the post-processing unit 30 performs inverse normalization, fixed-point processing, and data truncation processing on the image data after the accelerated computation, to obtain target image data.
Specifically, the post-processing unit 30 receives the accelerated image data sent by the in-memory computing module 20 and performs image data processing on it, including but not limited to: inverse normalization, i.e. a point-by-point inverse of the normalization applied by the pre-processing unit 10, producing wide-bit-width image data; fixed-point conversion, which fixes FP16 image data into int16 image data; and data truncation, which clips supersaturated or negative pixel data and converts each int8/int16/int32 pixel of the image into a uint8 pixel. After this image data processing the target image data is obtained and the image restoration is completed.
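A minimal sketch of the post-processing chain (inverse normalization, fixed-point conversion and data truncation) is shown below; the scale factor and the concrete numbers are illustrative, while the FP16/int16/uint8 types mirror the data types mentioned above.

```python
import numpy as np

def postprocess(accelerated: np.ndarray, scale: float = 255.0) -> np.ndarray:
    """Sketch of the post-processing unit for one channel of accelerated output."""
    # Inverse normalization: undo the point-by-point normalization applied before the CIM stage.
    x = accelerated.astype(np.float32) * scale

    # Fixed-point conversion: FP16-style data fixed into int16.
    x_int16 = np.round(x).astype(np.int16)

    # Data truncation: clip supersaturated or negative pixels and emit 8-bit pixels.
    return np.clip(x_int16, 0, 255).astype(np.uint8)

accel = np.array([[-0.1, 0.5], [0.99, 1.2]], dtype=np.float16)  # example accelerated output
print(postprocess(accel))
```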
The embodiment of the application also provides a chip comprising the image processing device.
An embodiment of the present application further provides an electronic device, which includes the image processing apparatus as described above.
As shown in fig. 2, an embodiment of the present application further provides an image processing method, which can be executed in an image processing apparatus, and specifically includes the following steps:
s1, image preprocessing is carried out on the obtained original image data so as to convert the original image data into image data in a preset format, and the image data in the preset format is sent to an in-memory computing module.
Specifically, in step S1, original image data (for example, an image in RAW format) input by an image sensor or another image processing chip is acquired, and image pre-processing is performed on it so as to convert it into image data in the format required by in-memory computing (CIM). Pre-processing the original image data provides better-quality image data for the subsequent steps, which helps to improve the image restoration quality.
And S2, receiving the image data in the preset format, performing accelerated calculation on the image data in the preset format distributed to the neural network, and sending the image data subjected to accelerated calculation to a post-processing unit.
Specifically, for step S2, after receiving the image data, the received image data is assigned to the corresponding neural network according to the preset execution sequence for performing the accelerated computation, and the image data received by the neural network is subjected to the accelerated computation, so that the efficiency and quality of image processing are improved, and the power consumption of the image processing can be significantly reduced.
And S3, receiving the image data after the accelerated calculation, and performing image data processing on the image data after the accelerated calculation to obtain target image data.
Specifically, in step S3, a series of image data processing including, but not limited to, inverse normalization, fixed-point processing, data truncation, and the like is performed on the image data after the acceleration processing, and finally, target image data is obtained, and image restoration is completed.
Therefore, the image processing method provided in the embodiment of the application first performs image pre-processing on the original image data and converts it into image data conforming to the format required by the in-memory computing module, which provides better-quality image data for the subsequent steps and helps to improve the quality and efficiency of image restoration. The image data then undergoes accelerated neural-network computation by the in-memory computing units and finally image data processing, so that the neural network improves the image restoration quality while the problems of high power consumption and low image restoration efficiency of prior-art neural-network image restoration are solved.
Optionally, in some embodiments, the image processing method further includes:
and S21, distributing the received image data in the preset format, and distributing the image data in the preset format to a plurality of memory computing units for accelerated computing.
Optionally, as shown in fig. 3, in some embodiments, step S21 may specifically include:
s211, distributing image data in a preset format to corresponding memory computing units according to a preset execution sequence;
s212, performing convolution, pooling, activation and/or scaling on the image data in the preset format distributed to the neural network to obtain image data after accelerated calculation.
Specifically, in step S21, after the image data meeting the format requirement is obtained, the image data in the preset format is allocated to the corresponding memory computing units according to the preset execution sequence, and after each memory computing unit receives the image data allocated by the memory computing module 20, the image data in the neural network corresponding to the memory computing unit is simultaneously subjected to calculations such as convolution, pooling, activation and/or scaling, so as to obtain the image data after accelerated calculation, thereby achieving an effect of accelerated calculation on the image data.
Optionally, as shown in fig. 4, in some embodiments, step S211 may specifically include:
s2111, acquiring a preset execution sequence of the neural network corresponding to each memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operator of the neural network, arrangement of weights corresponding to each layer of operator in a memory and data path configuration in the image processing device;
s2112, distributing the image data in the preset format to the neural network corresponding to each memory computing unit based on the preset execution sequence.
Specifically, for step S211, a preset execution sequence of the neural networks in each memory computing unit is first obtained, where the preset execution sequence includes an execution command corresponding to each layer of operator of each neural network, an arrangement of weights corresponding to each layer of operator in a memory of the memory, and a data path configuration of each component in the image processing apparatus; according to a preset execution sequence, the acquired image data in the preset format is distributed to each memory computing unit, and the memory computing units distribute the image data to the operators of all layers in the corresponding neural networks, so that the image data needing accelerated calculation is quickly distributed to a plurality of neural networks for accelerated calculation.
Optionally, as shown in fig. 5, in some embodiments, after step S212, the method further includes:
and S213, respectively sending the image data obtained after convolution, the image data after pooling, the image data after activation and the image data after zooming to a target position for storage.
Specifically, for step S213, the image data obtained after each accelerated computation is stored at a target location, for example the storage area in the memory 50, according to the preset execution order, so that the data read back later from the memory 50 is complete and ordered.
Optionally, in some embodiments, the image processing method further includes:
controlling the data flow direction of the image processing device through the controller according to the control bus, and configuring parameters of the pre-processing unit, the memory computing module, the post-processing unit and the memory;
and storing the acquired original image data and configuration data of the image processing device, wherein the configuration data comprises configuration parameters of the controller, configuration parameters of the in-memory computing module, first intermediate data of the in-memory computing module and second intermediate data of the post-processing unit.
Specifically, the controller 40 does not participate in any data flow processing, and controls the data flow of the entire image processing apparatus only through the control bus, for example, how the image data flows from the front processing unit 10 to the memory computing module 20 and then to the back processing unit 30; how to allocate the image output by the preprocessing unit 10 to each memory computing unit 200 in the memory computing module 20, and so on; the controller 40 is also used to configure the configuration parameters of the components of the image processing apparatus, including but not limited to the physical configuration parameters of the pre-processing unit 10, the in-memory computing module 20, the post-processing unit 30, and the memory 50, and the configuration parameters of the in-memory computing unit and the neural network in the in-memory computing module 20.
Specifically, the memory 50 stores the original image data (such as RAW-format image data input by an image sensor or another image processing chip), the configuration parameters of each component of the image processing apparatus, the configuration information of the controller 40, the weight information of the in-memory computing module 20, the first intermediate data of the in-memory computing module 20 and the second intermediate data of the post-processing unit 30. The first intermediate data is the cache data produced when the in-memory computing units perform accelerated neural-network computation on the image data; the second intermediate data is the cache data produced when the post-processing unit 30 processes the accelerated image data. The weight information of the in-memory computing module 20 is used to determine the configuration information for image processing in different luminance scenes.
Optionally, as shown in fig. 6, in some embodiments, the image processing method may further include:
s4, acquiring original statistical data corresponding to the original image data through a preprocessing unit;
and S5, dynamically adjusting configuration parameters of the memory computing module according to the acquired original statistical data through the controller.
Optionally, as shown in fig. 7, in some embodiments, step S5 may specifically include:
S51, determining configuration parameters of a previous frame of a neural network in the memory computing module based on original statistical data of previous frame original image data, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
S52, taking the configuration parameters of the previous frame as initial configuration parameters of the neural network in the current frame;
S53, comparing the original statistical data of the previous frame of original image data with the original statistical data of the current frame of original image data to obtain a comparison result;
and S54, adjusting the initial configuration parameters of the neural network in the current frame based on the comparison result.
Specifically, for step S5, original statistical data of the previous frame of original image data (for example, an image in RAW format) is first obtained, where the original statistical data includes an original luminance statistical value and an original chrominance statistical value, and the current configuration parameters of the neural network are determined according to the original statistical data, where the configuration parameters include a weight value, a bias value, a quantization value and a gain value, the gain value depending on the image information of each frame. The configuration parameters of that frame are retained and used as the initial configuration parameters of the neural network when the current frame of image data is processed; the original statistical data of the previous frame of original image data is compared with the original statistical data of the current frame of original image data to obtain a corresponding comparison result, and if the comparison result exceeds a preset threshold value, the initial configuration parameters of the neural network for processing the current frame of image data are dynamically adjusted. In this way, the adaptability of the neural network is exploited: the configuration parameters are dynamically adjusted according to statistical parameters such as exposure and color under different scene conditions, image noise reduction and enhancement are performed at the same time, the problems of excessive image noise and poor restoration quality are avoided, and the requirements of high-quality image applications are met.
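A minimal sketch of this frame-to-frame adjustment, assuming global mean luminance as the compared statistic, a relative-change threshold, and a simple gain update; the function names, the chosen statistic and the update rule are illustrative assumptions rather than the patent's scheme.

```python
import numpy as np

def raw_statistics(raw):
    """Global luminance statistic of a RAW frame (simplified stand-in)."""
    return {"luma_mean": float(raw.mean())}

def adjust_parameters(prev_params, prev_stats, curr_stats, threshold=0.1):
    """Start from the previous frame's parameters; adjust only if the scene changed enough."""
    params = dict(prev_params)                       # previous frame's values as the initial guess
    prev_mean = max(prev_stats["luma_mean"], 1e-6)
    delta = abs(curr_stats["luma_mean"] - prev_mean) / prev_mean
    if delta > threshold:                            # comparison result exceeds the preset threshold
        # Darker current scene -> raise gain; brighter -> lower it (illustrative rule).
        params["gain"] = prev_params["gain"] * prev_mean / max(curr_stats["luma_mean"], 1e-6)
    return params

# Usage with two synthetic 12-bit frames, the second one darker.
rng = np.random.default_rng(0)
prev = rng.integers(0, 4096, (64, 64))
curr = (prev * 0.6).astype(np.int64)
new_params = adjust_parameters({"gain": 1.0, "bias": 0, "quant": 8},
                               raw_statistics(prev), raw_statistics(curr))
```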
Optionally, as shown in fig. 8, in some embodiments, step S1 may specifically include:
S11, carrying out brightness statistics and chromaticity statistics on the whole image or on local regions of interest of the original image data to obtain corresponding original statistical data; and/or
And S12, carrying out black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
Specifically, for step S1, RAW image data input by an image sensor or another image processing chip is first obtained, and luminance statistics and chrominance statistics are performed on it; for example, statistics are computed on the RAW image at 10/12/14/16-bit depth, over the global mean value and histogram of the RAW image, or over local 9 × 9 blocks and ROI (region of interest) areas, so as to obtain the original statistical data of the original image data, including but not limited to original luminance statistics and original chrominance statistics. Then, black level compensation is performed on the original image data (the RAW image) to obtain a RAW image with the black level removed; next, nonlinear transformation is performed on the original image data, mainly to decompress 10–14-bit HDR-compressed data, processing the RAW image point by point, for example decompressing 10-bit data into 16 bits; then, normalization is performed on the original image data as a global operation on the RAW image, for example converting 10/12/14/16-bit image data into int8 or uint8 image data for the subsequent accelerated computation. After this image preprocessing, the image data is already in the format required by the subsequent accelerated computation, so the data format does not need to be converted again when the image data undergoes accelerated computation, which improves the efficiency of the accelerated computation.
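For illustration, the sketch below runs a RAW frame through the three operations just described: black level compensation, a nonlinear expansion from 10-bit toward 16-bit, and normalization to uint8. The square-law expansion curve, the black level of 64 and the function name are placeholders; the real apparatus would use its own decompression curve and sensor parameters.

```python
import numpy as np

def preprocess_raw(raw, black_level=64, in_bits=10, out_bits=16):
    """Illustrative preprocessing chain: black level, nonlinear expansion, normalization."""
    # 1. Black level compensation: subtract the sensor's black level floor.
    x = np.clip(raw.astype(np.int32) - black_level, 0, None)

    # 2. Nonlinear transformation: expand compressed 10-bit HDR data toward 16 bits.
    #    A square-law curve stands in here for the real decompression curve.
    max_in, max_out = (1 << in_bits) - 1, (1 << out_bits) - 1
    x = ((x / max_in) ** 2 * max_out).astype(np.uint16)

    # 3. Normalization: map to uint8 so the accelerator receives its preset format.
    return (x / max_out * 255.0).astype(np.uint8)

rng = np.random.default_rng(1)
raw10 = rng.integers(0, 1024, (32, 32))    # a synthetic 10-bit RAW frame
img_u8 = preprocess_raw(raw10)             # uint8, ready for accelerated computation
```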
Optionally, in some embodiments, step S3 may specifically include:
and carrying out inverse normalization, fixed point processing and data truncation processing on the image data subjected to accelerated calculation to obtain target image data.
Specifically, for step S3, image data processing is mainly performed on the image data after the accelerated computation, including but not limited to: inverse normalization of the image data, i.e. the point-by-point inverse of the normalization performed by the preprocessing unit 10, applied to the wide-bit-width image data; fixed-point conversion of the image data, converting FP16 image data into int16 image data; and data truncation of the image data, clipping pixel values that are oversaturated or negative and converting each int8/int16/int32 pixel in the image into a uint8 pixel. After this image data processing, the target image data is obtained and image restoration is completed.
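A minimal sketch of such a post-processing chain, assuming the accelerator outputs normalized FP16 values; the scale factor, the shift used for truncation and the function name are illustrative assumptions rather than the patent's fixed scheme.

```python
import numpy as np

def postprocess(acc_out_fp16, scale=32767.0):
    """Illustrative post-processing: inverse normalization, fixed-point, truncation."""
    # 1. Inverse normalization: undo the point-wise normalization done before acceleration.
    x = acc_out_fp16.astype(np.float32) * scale

    # 2. Fixed-point conversion: re-express the FP16-derived values as int16.
    x = np.rint(x).astype(np.int16)

    # 3. Data truncation: clip oversaturated or negative pixels and narrow to uint8.
    return np.clip(x.astype(np.int32) >> 7, 0, 255).astype(np.uint8)

acc_out = np.random.default_rng(2).random((32, 32)).astype(np.float16)
target = postprocess(acc_out)              # restored image data as uint8
```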
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements an image processing method including the steps of: performing image preprocessing on acquired original image data so as to convert the original image data into image data in a preset format, and sending the image data in the preset format to an in-memory computing module; receiving the image data in the preset format, performing accelerated computation on the image data in the preset format distributed to the neural network, and sending the image data after the accelerated computation to a post-processing unit; and receiving the image data after the accelerated computation, and performing image data processing on the image data after the accelerated computation to obtain target image data.
In the executed image processing method, the original image data is first preprocessed and converted into image data in the format required by the in-memory computing module, so that higher-quality image data is provided for subsequent processing and the quality and efficiency of image restoration are improved; the image data is then accelerated by the neural networks in the plurality of memory computing units, and finally the image data is post-processed, so that while the neural networks improve the image restoration quality, the problems of high power consumption and low image restoration efficiency when image restoration is performed with a neural network in the prior art are solved.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of the embodiment of the application can be combined, divided and deleted according to actual needs.
In the present application, the same or similar term concepts, technical solutions and/or application scenario descriptions will be generally described only in detail at the first occurrence, and when the description is repeated later, the detailed description will not be repeated in general for brevity, and when understanding the technical solutions and the like of the present application, reference may be made to the related detailed description before the description for the same or similar term concepts, technical solutions and/or application scenario descriptions and the like which are not described in detail later.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
The technical features of the technical solution of the present application may be arbitrarily combined, and for brevity of description, all possible combinations of the technical features in the embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, the scope of the present application should be considered as being described in the present application.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). Computer-readable storage media can be any available media that can be accessed by a computer or a data storage device, such as a server, data center, etc., that includes one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a storage disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all the equivalent structures or equivalent processes that can be directly or indirectly applied to other related technical fields by using the contents of the specification and the drawings of the present application are also included in the scope of the present application.

Claims (23)

1. An image processing device is characterized by comprising a preprocessing unit, an in-memory computing module and a post-processing unit, wherein the in-memory computing module is respectively connected with the preprocessing unit and the post-processing unit;
the preprocessing unit is used for preprocessing the acquired original image data to convert the original image data into image data in a preset format and sending the image data in the preset format to the memory computing module;
the in-memory computing module comprises at least one in-memory computing unit, the in-memory computing unit comprises a neural network, and the in-memory computing unit is used for receiving the image data in the preset format sent by the pre-processing unit, carrying out accelerated computation on the image data in the preset format distributed to the neural network, and sending the image data after accelerated computation to the post-processing unit;
and the post-processing unit is used for receiving the image data after the accelerated computation sent by the memory computing module and carrying out image data processing on the image data after the accelerated computation to obtain target image data.
2. The image processing apparatus according to claim 1, wherein the in-memory computing module is further configured to allocate the received image data in the preset format, and allocate the image data in the preset format to a plurality of in-memory computing units for accelerated computing.
3. The image processing apparatus according to claim 2, wherein said allocating the received image data in the preset format to allocate the image data in the preset format to a plurality of memory computing units for accelerated computing comprises:
distributing the image data in the preset format to corresponding memory computing units according to a preset execution sequence;
and performing convolution, pooling, activation and/or scaling on the image data in the preset format distributed to the neural network to obtain image data after accelerated calculation.
4. The image processing apparatus according to claim 3, wherein said allocating the image data of the preset format to the corresponding in-memory computing units in a preset execution order comprises:
acquiring a preset execution sequence of the neural network corresponding to each memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operator of the neural network, arrangement of weights corresponding to each layer of operator in a memory, and data path configuration in an image processing device;
and distributing the image data in the preset format to the neural network corresponding to each memory computing unit based on the preset execution sequence.
5. The image processing apparatus according to claim 3, wherein the convolving, pooling, activating, and/or scaling the image data of the preset format assigned to the neural network further comprises:
and respectively sending the image data obtained after convolution, the pooled image data, the activated image data and the scaled image data to a post-processing unit and/or a memory.
6. The image processing apparatus according to claim 1, further comprising a controller and a memory, wherein the controller is connected to the preprocessing unit, the in-memory computing module, the post-processing unit, and the memory, respectively, and the memory is connected to the preprocessing unit, the in-memory computing module, the post-processing unit, and the controller, respectively;
the controller is used for controlling the data flow direction of the image processing device through a control bus and carrying out parameter configuration on the pre-processing unit, the memory computing module, the post-processing unit and the memory;
the memory is configured to store the acquired original image data and configuration data of the image processing apparatus, where the configuration data includes configuration parameters of the controller, configuration parameters of the in-memory computing module, first intermediate data of the in-memory computing module, and second intermediate data of the post-processing unit.
7. The image processing apparatus according to claim 6, wherein the preprocessing unit is further configured to obtain original statistical data corresponding to the original image data; the controller is further configured to dynamically adjust configuration parameters of the memory computing module according to the acquired original statistical data.
8. The image processing apparatus according to claim 7, wherein the dynamically adjusting the configuration parameters of the in-memory computing module according to the obtained raw statistical data comprises:
determining configuration parameters of a previous frame of a neural network in a memory computing module based on original statistical data of the previous frame of original image data, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
taking the configuration parameter of the previous frame as an initial configuration parameter of the neural network at the current frame;
comparing the original statistical data of the previous frame of original image data with the original statistical data of the current frame of original image data to obtain a comparison result;
and adjusting the initial configuration parameters of the neural network in the current frame based on the comparison result.
9. The image processing apparatus according to claim 1, wherein said image preprocessing the acquired raw image data to convert the raw image data into image data of a preset format comprises:
carrying out brightness statistics and chromaticity statistics on the global or local interested region of the original image data to obtain corresponding original statistical data; and/or
And carrying out black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
10. The image processing apparatus according to claim 1, wherein said performing image data processing on the image data after the accelerated computation to obtain target image data comprises:
and carrying out inverse normalization, fixed-point processing and data truncation processing on the image data subjected to accelerated calculation through the post-processing unit to obtain target image data.
11. An image processing method, characterized by comprising the steps of:
the method comprises the steps of carrying out image preprocessing on acquired original image data so as to convert the original image data into image data in a preset format, and sending the image data in the preset format to an in-memory computing module;
receiving the image data in the preset format, performing accelerated calculation on the image data in the preset format distributed to the neural network, and sending the image data subjected to accelerated calculation to a post-processing unit;
and receiving the image data after the accelerated calculation, and carrying out image data processing on the image data after the accelerated calculation to obtain target image data.
12. The image processing method according to claim 11, further comprising:
and distributing the received image data in the preset format, and distributing the image data in the preset format to a plurality of memory computing units for accelerated computing.
13. The method according to claim 12, wherein said allocating the received image data in the preset format to allocate the image data in the preset format to a plurality of memory computing units for accelerated computing comprises:
distributing the image data in the preset format to corresponding memory computing units according to a preset execution sequence;
and performing convolution, pooling, activation and/or scaling on the image data in the preset format distributed to the neural network to obtain image data after accelerated calculation.
14. The image processing method according to claim 13, wherein said allocating the image data of the preset format to the corresponding memory computing units in a preset execution order comprises:
acquiring a preset execution sequence of the neural network corresponding to each in-memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operator of the neural network, arrangement of weights corresponding to each layer of operator in a memory, and data path configuration in an image processing device;
and distributing the image data in the preset format to the neural network corresponding to each memory computing unit based on the preset execution sequence.
15. The method according to claim 13, wherein after the convolving, pooling, activating and/or scaling the image data of the preset format allocated to the neural network to obtain the image data after accelerated computation, the method further comprises:
and respectively sending the image data obtained after convolution, the pooled image data, the activated image data and the scaled image data to a target position for storage.
16. The image processing method according to claim 11, further comprising:
controlling the data flow direction of the image processing device through the controller according to the control bus, and performing parameter configuration on the pre-processing unit, the memory computing module, the post-processing unit and the memory;
storing the acquired original image data and configuration data of the image processing device, wherein the configuration data comprises configuration parameters of a controller, configuration parameters of the in-memory computing module, first intermediate data of the in-memory computing module and second intermediate data of the post-processing unit.
17. The image processing method according to claim 16, further comprising:
acquiring original statistical data corresponding to the original image data through the preprocessing unit;
and dynamically adjusting the configuration parameters of the memory computing module according to the acquired original statistical data through the controller.
18. The image processing method of claim 17, wherein the dynamically adjusting configuration parameters according to the obtained raw statistical data comprises:
determining configuration parameters of a previous frame of a neural network in a memory computing module based on original statistical data of previous frame original image data, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
taking the configuration parameter of the previous frame as an initial configuration parameter of the neural network at the current frame;
comparing the original statistical data of the previous frame of original image data with the original statistical data of the current frame of original image data to obtain a comparison result;
and adjusting the initial configuration parameters of the neural network in the current frame based on the comparison result.
19. The image processing method according to claim 11, wherein the image preprocessing the acquired raw image data to convert the raw image data into image data in a preset format comprises:
carrying out brightness statistics and chromaticity statistics on the global or local interested area of the original image data to obtain corresponding original statistical data; and/or
And carrying out black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
20. The image processing method according to claim 11, wherein the performing image data processing on the image data after the accelerated computation to obtain target image data comprises:
and carrying out inverse normalization, fixed-point processing and data truncation processing on the image data after the accelerated calculation to obtain target image data.
21. A chip comprising an image processing apparatus according to any one of claims 1 to 10.
22. An electronic device characterized by comprising the image processing apparatus according to any one of claims 1 to 10.
23. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 11 to 20.