CN115861086A - Image processing method, image processing device, processor, electronic device and storage medium - Google Patents


Info

Publication number
CN115861086A
Authority
CN
China
Prior art keywords
image
intermediate image
deep learning
learning model
rule
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211257269.9A
Other languages
Chinese (zh)
Inventor
朱夏宁
张钟宣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Zhixinke Microelectronics Technology Co ltd
Original Assignee
Hangzhou Zhixinke Microelectronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Zhixinke Microelectronics Technology Co ltd
Priority to CN202211257269.9A
Publication of CN115861086A

Landscapes

  • Image Processing (AREA)

Abstract

An embodiment of the present application provides an image processing method, an image processing apparatus, a processor, an electronic device, and a storage medium. The method includes: performing data compensation on an original image according to a preset compensation rule to obtain a first intermediate image; adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image; denoising the second intermediate image based on a first deep learning model to obtain a third intermediate image; performing enhancement processing on the third intermediate image based on a second deep learning model to obtain a fourth intermediate image; performing image processing on the fourth intermediate image to obtain a fifth intermediate image; and performing correction processing on the fifth intermediate image according to a preset correction rule to correct the deviation generated by the first deep learning model and the second deep learning model, so as to obtain a target image. The technical scheme provided by the embodiment of the application achieves a better image processing effect.

Description

Image processing method, image processing device, processor, electronic device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a processor, an electronic device, and a storage medium.
Background
With the development of society and the advancement of technology, ever higher quality is demanded of images (photos or videos) across industries and fields, especially images captured under dim light or high dynamic illumination. To improve image quality, the prior art generally processes the original image based on an Image Signal Processor (ISP) and corresponding statistical algorithms. However, the computational power of ISP chips and the performance of traditional statistical algorithms are approaching their limits. As deep learning has matured, it has gradually been applied to the field of image processing.
However, the image processing effect of prior-art deep learning methods remains unsatisfactory. In addition, deep learning often requires substantial computational power, and prior-art processor architectures cannot meet the computational requirements of high-quality deep learning.
Disclosure of Invention
In view of this, the present application provides an image processing method, an image processing apparatus, a processor, an electronic device, and a storage medium, to address the problems that prior-art deep learning methods yield a poor image processing effect and that existing processor architectures cannot meet the computational requirements of high-quality deep learning.
In a first aspect, an embodiment of the present application provides an image processing method, including:
performing data compensation on an original image according to a preset compensation rule to obtain a first intermediate image, wherein the original image is an image acquired by a first image sensor, and the compensation rule is matched with the defect characteristics of the first image sensor;
adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image;
denoising the second intermediate image based on a first deep learning model to obtain a third intermediate image;
based on a second deep learning model, performing enhancement processing on the third intermediate image to enhance details of the third intermediate image and obtain a fourth intermediate image;
performing image processing on the fourth intermediate image to obtain a fifth intermediate image;
and according to a preset correction rule, performing correction processing on the fifth intermediate image to correct the deviation generated by the first deep learning model and the second deep learning model, so as to obtain a target image.
In a possible implementation manner, the adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image includes:
collecting statistics of the luminance averages directly related to the exposure of the first image sensor, the luminance averages directly related to the exposure of the first image sensor including the average luminance of the first intermediate image and the average ambient luminance;
determining a brightness adjustment parameter for the first intermediate image based on a brightness average directly related to the first image sensor exposure;
and adjusting the brightness of the first intermediate image according to the brightness adjustment parameter of the first intermediate image to obtain a second intermediate image.
In a possible implementation manner, the performing enhancement processing on the third intermediate image based on the second deep learning model to enhance details of the third intermediate image to obtain a fourth intermediate image includes:
and inputting the second intermediate image and the third intermediate image into the second deep learning model, and outputting the fourth intermediate image.
In one possible implementation, the first deep learning model is a deep learning model obtained by training based on a noise model associated with the first image sensor.
In a possible implementation manner, the training samples of the second deep learning model include a first type of image and a second type of image, the first type of image is an image acquired by the first image sensor, the second type of image is an image acquired by the second image sensor, and the performance of the second image sensor is better than that of the first image sensor.
In one possible implementation, the proportion of the second type of image in the training sample is less than or equal to 20%.
In one possible implementation, the first deep learning model and/or the second deep learning model is a deep learning model of an encoding-decoding structure.
In a possible implementation manner, the performing enhancement processing on the third intermediate image based on the second deep learning model to enhance details of the third intermediate image to obtain a fourth intermediate image includes:
and performing HDR enhancement processing on the third intermediate image based on a second deep learning model to enhance the details of the third intermediate image to obtain a fourth intermediate image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the sensing processor is used for performing data compensation on an original image according to a preset compensation rule to obtain a first intermediate image, wherein the original image is an image acquired by a first image sensor, and the compensation rule is matched with the defect characteristic of the first image sensor; adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image;
the image preprocessor, which adopts a computing-in-memory (CIM) architecture, is used for denoising the second intermediate image based on a first deep learning model to obtain a third intermediate image; and performing enhancement processing on the third intermediate image based on a second deep learning model to enhance details of the third intermediate image and obtain a fourth intermediate image;
the image processor is used for carrying out image processing on the fourth intermediate image to obtain a fifth intermediate image;
and the image post processor is used for correcting the fifth intermediate image according to a preset correction rule so as to correct the deviation generated by the first deep learning model and the second deep learning model and obtain a target image.
In a third aspect, an embodiment of the present application provides an image processor, including:
the sensing processing module is used for performing data compensation on an original image according to a preset compensation rule to obtain a first intermediate image, wherein the original image is an image acquired by a first image sensor, and the compensation rule is matched with the defect characteristics of the first image sensor; adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image;
the image preprocessing module adopts a memory computation CIM framework, and is used for denoising the second intermediate image based on a first deep learning model so as to eliminate noise appearing after brightness adjustment and obtain a third intermediate image; based on a second deep learning model, performing enhancement processing on the third intermediate image to enhance details of the third intermediate image and obtain a fourth intermediate image;
the image processing module is used for carrying out image processing on the fourth intermediate image to obtain a fifth intermediate image;
and the image post-processing module is used for correcting the fifth intermediate image according to a preset correction rule so as to correct the deviation generated by the first deep learning model and the second deep learning model and obtain a target image.
In a fourth aspect, an embodiment of the present application provides an electronic device, including the image processing apparatus according to the second aspect or the image processor according to the third aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium includes a stored program, and when the program runs, a device on which the computer-readable storage medium is located is controlled to execute the method in any one of the first aspects.
By adopting the scheme provided by the embodiment of the application, the method has the following advantages:
1. the compensation rule only aims at one specific image sensor, and different image sensors have different compensation rules, so that the compensation effect of the original data is improved;
2. the image is subjected to denoising and enhancement in sequence based on deep learning, and the details of the image can be enhanced while the noise in the image is removed;
3. the first deep learning model is trained only aiming at the noise model of a specific image sensor, and different image sensors correspond to different first deep learning models, so that the denoising effect of original data is improved;
4. the deviation generated in the deep learning process is compensated through post-correction, which reduces the requirements on the Image Signal Processor (ISP) and improves the final image processing effect;
5. without changing the traditional image processor, a sensing processor and an image preprocessor are added before it and an image postprocessor is added after it; since the traditional image processor requires no adjustment, the scheme is easier to implement in engineering;
6. deep learning is implemented on in-memory computing, which can meet its computational power requirements.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without inventive labor.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a neural network according to an embodiment of the present application;
fig. 3 is a block diagram of an image processing apparatus according to an embodiment of the present application;
fig. 4 is a block diagram of an image processor according to an embodiment of the present application.
Detailed Description
For better understanding of the technical solutions of the present application, the following detailed descriptions of the embodiments of the present application are provided with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein is merely a relationship that describes an associated object, meaning that three relationships may exist, e.g., A and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Referring to fig. 1, a schematic flow chart of an image processing method according to an embodiment of the present application is shown. As shown in fig. 1, it mainly includes the following steps.
Step S101: and according to a preset compensation rule, performing data compensation on the original image to obtain a first intermediate image.
In the embodiment of the application, the original image is an image acquired by an image sensor. To distinguish it from other image sensors mentioned below, the image sensor that acquires the original image is referred to as the "first image sensor". In other application scenarios, the image sensor may also be called a camera or a video camera, which the embodiments of the present application do not specifically limit. In addition, for convenience of explanation, the original image after data compensation is referred to as the "first intermediate image" in the embodiment of the present application.
In practical applications, the image sensor may have certain defects, which cause corresponding defects (e.g., dead pixels) in the original image it acquires. To obtain a better image processing effect, after the original image is obtained, data compensation is performed on it according to a preset compensation rule, that is, the defects in the original image are eliminated. It should be noted that different image sensors (e.g., image sensors from different manufacturers, or different models from the same manufacturer) may have different defect characteristics. In the embodiment of the present application, to obtain a better compensation effect, the compensation rule is designed for a specific image sensor (i.e., the first image sensor), so that the compensation rule matches the defect characteristics of the first image sensor. In other words, the compensation rule in the embodiment of the present application targets only one image sensor, and different image sensors have different compensation rules.
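As a rough illustration of sensor-specific defect compensation, the sketch below replaces known dead pixels with a local median. The defect map `DEAD_PIXELS` and the 3x3-median rule are assumptions for illustration; a real compensation rule would come from per-sensor calibration data, and the patent does not specify its exact form.

```python
import numpy as np

# Hypothetical per-sensor defect map (assumption): coordinates of dead pixels
# identified during calibration of one specific image sensor.
DEAD_PIXELS = [(1, 2), (3, 3)]

def compensate(raw: np.ndarray, dead_pixels=DEAD_PIXELS) -> np.ndarray:
    """Replace each defective pixel with the median of its 3x3 neighborhood."""
    out = raw.astype(np.float32).copy()
    h, w = out.shape
    for r, c in dead_pixels:
        r0, r1 = max(r - 1, 0), min(r + 2, h)
        c0, c1 = max(c - 1, 0), min(c + 2, w)
        patch = raw[r0:r1, c0:c1].astype(np.float32)
        # Exclude the dead pixel itself from the neighborhood statistic.
        center = (r - r0) * (c1 - c0) + (c - c0)
        neighbors = np.delete(patch.ravel(), center)
        out[r, c] = np.median(neighbors)
    return out
```

Because the defect map is fixed per sensor, swapping sensors means swapping only the map (and, in practice, the rule fitted to that sensor).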
In addition, the original image related to the embodiment of the present application may be a full-color image acquired by the first image sensor in a dark light or high dynamic lighting scene, and of course, may also be an image acquired by the first image sensor in another scene, which is not limited in this application.
Step S102: and adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image.
It can be appreciated that the overall brightness of images acquired in low-light or high-dynamic lighting scenes is generally low, and feeding an image with low overall brightness into the neural network models for denoising and enhancement yields a poor image processing effect. Based on this, the embodiment of the application adjusts the brightness of the first intermediate image according to a preset brightness adjustment rule, so as to meet the requirements of subsequent deep learning. For convenience of explanation, the first intermediate image after brightness adjustment is referred to as the "second intermediate image" in the embodiment of the present application.
In a specific implementation, first, the luminance averages directly related to the exposure of the first image sensor are collected, including the average luminance of the first intermediate image and the average ambient luminance. Then, a brightness adjustment parameter for the first intermediate image is determined from these exposure-related luminance averages. Finally, the brightness of the first intermediate image is adjusted according to this parameter to obtain the second intermediate image, so that the second intermediate image reaches the target brightness required by subsequent deep learning. Illustratively, for an image captured in a dim or high-dynamic lighting scene, the brightness adjustment raises the overall brightness of the first intermediate image into the target brightness range. The above example covers the case of insufficient brightness. In other application scenarios, if the captured image already falls in the target brightness range, no further brightness enhancement is required; alternatively, if the captured image is too bright due to overexposure or similar causes, the same adjustment method is used to reduce the brightness into the target brightness range.
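A minimal sketch of this brightness-adjustment step follows. The blending weights of the two statistics and the target mean are illustrative assumptions, not values from the patent; the point is that a gain derived from exposure-related averages brightens underexposed frames and darkens overexposed ones.

```python
import numpy as np

TARGET_MEAN = 0.45  # assumed target mean luminance, normalized to [0, 1]

def adjust_brightness(img: np.ndarray, ambient_mean: float,
                      target: float = TARGET_MEAN) -> np.ndarray:
    """Scale the image so its brightness lands in the target range."""
    image_mean = float(img.mean())
    # Blend the two exposure-related statistics (the 0.7/0.3 weighting is
    # an assumption standing in for the patent's unspecified rule).
    ref = 0.7 * image_mean + 0.3 * ambient_mean
    gain = target / max(ref, 1e-6)
    # gain > 1 brightens underexposed frames; gain < 1 darkens overexposed ones.
    return np.clip(img * gain, 0.0, 1.0)
```

For a uniformly dark frame the gain exceeds 1 and the mean rises to the target; for an overexposed frame the gain falls below 1 and the mean drops to the same target.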
Step S103: and denoising the second intermediate image based on the first deep learning model to obtain a third intermediate image.
Typically, images captured by an image sensor in dim light or high-dynamic lighting scenes exhibit significant noise due to underexposure, i.e., "dim-light noise". Dim-light noise is nonlinear, a mixture of multiple nonlinear noise sources, and traditional noise reduction methods sacrifice image sharpness. To improve the denoising effect, the embodiment of the application relies on deep learning: the second intermediate image is denoised by a trained first deep learning model, so that the noise is reduced without losing image detail. For convenience of explanation, the denoised second intermediate image is referred to as the "third intermediate image" in the embodiments of the present application.
In practical applications, different image sensors (e.g., image sensors from different manufacturers, or different models from the same manufacturer) may have different noise models, so the noise introduced into images captured by different image sensors differs. In addition, a deep learning model needs to target a specific problem to achieve a good effect.
Based on this, the first deep learning model is trained on the noise model associated with the first image sensor, so that the trained model achieves a better denoising effect on images acquired by that sensor. In a specific implementation, first, raw data from the first image sensor under different exposure conditions is acquired and analyzed to obtain the noise model associated with the first image sensor; then, noise is randomly added to the training samples of the first deep learning model through this noise model; finally, the first deep learning model is trained on the noise-added training samples. In other words, the first deep learning model in the embodiment of the present application targets only one image sensor, and different image sensors have different first deep learning models. It can be understood that, since the first deep learning model is trained on the noise model of a specific image sensor (the first image sensor), it achieves a better denoising effect on images collected by that sensor.
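The training-data preparation described above can be sketched as follows. The heteroscedastic Gaussian noise model (signal-dependent "shot" term plus constant "read" term) is an assumption standing in for the calibrated per-sensor noise model; the wiring is what matters: the noise model corrupts clean samples, and (noisy, clean) pairs form the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

def sensor_noise(clean: np.ndarray, shot_gain: float = 0.01,
                 read_sigma: float = 0.02) -> np.ndarray:
    """Apply a noise model assumed to be fitted to one specific sensor."""
    # Signal-dependent component (stronger where the signal is brighter).
    shot = rng.normal(0.0, np.sqrt(np.maximum(clean, 0.0) * shot_gain))
    # Signal-independent read noise.
    read = rng.normal(0.0, read_sigma, size=clean.shape)
    return np.clip(clean + shot + read, 0.0, 1.0)

def make_training_pairs(clean_images):
    # Each pair: network input is the noisy frame, target is the clean frame.
    return [(sensor_noise(img), img) for img in clean_images]
```

Swapping in a different sensor means re-fitting `shot_gain` and `read_sigma` (or a richer model) from that sensor's raw data, which is why each sensor gets its own first deep learning model.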
Step S104: and performing enhancement processing on the third intermediate image based on the second deep learning model to enhance the details of the third intermediate image and obtain a fourth intermediate image.
In the embodiment of the application, denoising and enhancement are combined. In the above steps, based on the first deep learning model, denoising processing is performed on the second intermediate image, and after a third intermediate image is obtained, detail enhancement is performed on the third intermediate image based on the second deep learning model. Because the details of the image are not blurred in the denoising process based on the deep learning in the steps, the details of the image are further enhanced after the noise is removed, and a better image processing effect can be obtained. In some possible implementations, the enhancement process is a High Dynamic Range (HDR) enhancement process. Of course, other algorithms may be adopted by those skilled in the art to perform detail enhancement of the image, and the embodiment of the present application is not limited to this. For convenience of explanation, the third intermediate image after the enhancement processing is referred to as a "fourth intermediate image" in the embodiment of the present application.
It can be understood that, in general, since the second deep learning model processes images acquired by the first image sensor, image data acquired by the first image sensor would normally be used to train it. However, to improve the image processing effect of the second deep learning model, the inventors overcame the technical prejudice in the field and added image data collected by a better-performing image sensor to the training of the second deep learning model. Specifically, the training samples of the second deep learning model comprise a first type of image and a second type of image: the first type is acquired by the first image sensor, the second type by a second image sensor whose performance is better than that of the first. During training, the second-type images acquired by the better-performing second image sensor serve as the learning target: from the first-type images acquired by the poorer-performing first image sensor, the model learns the characteristics of the second-type images, so that images acquired by the first image sensor can still finally achieve a good effect. In a specific implementation, the proportion of second-type images in the training samples should not be too high. In one possible implementation, the second type of image makes up less than or equal to 20% of the training samples, for example 8%, 10%, or 12%, which the embodiments of this application do not specifically limit.
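A small sketch of assembling such a mixed training set follows. The 10% default uses one of the example ratios from the text; the sampling scheme itself (how many second-type images to draw and how to shuffle) is an assumption for illustration.

```python
import random

def build_training_set(first_type, second_type, second_ratio=0.10, seed=0):
    """Mix the two image types so second-type images stay a small fraction."""
    rng = random.Random(seed)
    # Number of second-type images such that they make up `second_ratio`
    # of the final training set: n2 / (n1 + n2) = ratio.
    n_second = min(round(len(first_type) * second_ratio / (1 - second_ratio)),
                   len(second_type))
    sample = list(first_type) + rng.sample(list(second_type), n_second)
    rng.shuffle(sample)
    return sample
```

With 90 first-type images and a 10% ratio, 10 second-type images are drawn, yielding a 100-image set in which the higher-quality sensor contributes exactly 10%.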
It can be understood that, in the process of denoising the second intermediate image in the above step, there may be misjudgment on the noise of the original data, resulting in removing part of the original data, and further possibly affecting the effect after enhancement in this step. Based on this, the embodiment of the present application performs enhancement processing with reference to both the second intermediate image (the image before denoising) and the third intermediate image (the image after denoising).
Referring to fig. 2, a schematic structural diagram of the dual neural network provided in an embodiment of the present application is shown. As shown in fig. 2, in the embodiment of the present application, the brightness-adjusted second intermediate image is input into the first deep learning model for denoising, producing the denoised third intermediate image; then the second intermediate image and the third intermediate image are input into the second deep learning model together, producing the detail-enhanced fourth intermediate image. Because the enhancement stage simultaneously references the second intermediate image (before denoising) and the third intermediate image (after denoising), a better image processing effect can be obtained.
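The data flow of fig. 2 can be sketched as below. Both `denoise_model` and `enhance_model` are toy stand-ins for the trained first and second deep learning models (assumptions, not real networks); the point is the wiring: the enhancement stage receives both the pre-denoising and post-denoising images, so detail that the denoiser removed by mistake can be recovered.

```python
import numpy as np

def denoise_model(img2: np.ndarray) -> np.ndarray:
    # Stand-in for the first deep learning model: a simple 3x3 box blur.
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(img2, 1, mode="edge")
    h, w = img2.shape
    return sum(pad[i:i + h, j:j + w] * k[i, j]
               for i in range(3) for j in range(3))

def enhance_model(img2: np.ndarray, img3: np.ndarray) -> np.ndarray:
    # Stand-in for the second model: restore a fraction of the residual
    # between the pre- and post-denoising images as "detail".
    return np.clip(img3 + 0.5 * (img2 - img3), 0.0, 1.0)

def pipeline_stage(img2: np.ndarray) -> np.ndarray:
    img3 = denoise_model(img2)           # third intermediate image
    return enhance_model(img2, img3)     # fourth intermediate image
```

In the real system both stand-ins would be trained networks, but the two-input structure of the enhancement stage is the same.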
In a specific implementation, the first deep learning model and/or the second deep learning model is a deep learning model with an encoding-decoding structure, for example a Unet structure. Because a deep learning model with an encoding-decoding structure decodes down to each pixel during image processing, achieving pixel-level processing, it can obtain a better image processing effect.
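A toy sketch of the encoding-decoding idea, under the assumption of average-pool encoding, nearest-neighbor decoding, and a Unet-style skip connection: real models use learned convolutions, but this shows why the structure can emit a value for every pixel.

```python
import numpy as np

def encode(x: np.ndarray) -> np.ndarray:
    # 2x2 average pooling halves the spatial resolution (even dims assumed).
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def decode(z: np.ndarray) -> np.ndarray:
    # Nearest-neighbor upsampling restores the original resolution.
    return z.repeat(2, axis=0).repeat(2, axis=1)

def encoder_decoder(x: np.ndarray) -> np.ndarray:
    z = encode(x)
    y = decode(z)
    # Skip connection: fuse the decoded coarse features with the
    # full-resolution input, so the output is defined pixel-by-pixel.
    return 0.5 * (y + x)
```

The skip connection is what lets the decoder recover per-pixel detail lost in the downsampling path, which is the property the text attributes to encoding-decoding structures.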
Step S105: and carrying out image processing on the fourth intermediate image to obtain a fifth intermediate image.
After the fourth intermediate image is obtained, it may be subjected to conventional image processing by an Image Signal Processor (ISP), which this embodiment of the present application does not specifically limit.
Step S106: and according to a preset correction rule, performing correction processing on the fifth intermediate image to correct the deviation generated by the first deep learning model and the second deep learning model so as to obtain a target image.
In a specific implementation, the image processed by the first deep learning model and the second deep learning model may deviate somewhat from the original image. In the embodiment of the application, this deviation is analyzed, and a correction rule is determined accordingly. In the final stage, the fifth intermediate image is corrected according to the correction rule to remove the deviation and obtain the final target image.
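The post-correction step above can be sketched as follows, under the assumption that the systematic deviation introduced by the two models is well approximated by a global gain and offset fitted offline against reference images; the patent does not specify the form of the correction rule, so the linear model here is illustrative only.

```python
import numpy as np

def fit_correction(model_outputs: np.ndarray, references: np.ndarray):
    """Fit gain/offset offline so that gain * output + offset ~= reference."""
    x, y = model_outputs.ravel(), references.ravel()
    gain, offset = np.polyfit(x, y, 1)  # least-squares linear fit
    return gain, offset

def apply_correction(img5: np.ndarray, gain: float, offset: float) -> np.ndarray:
    # Run-time correction of the fifth intermediate image.
    return np.clip(gain * img5 + offset, 0.0, 1.0)
```

If the models systematically darken and shift the image (say by gain 0.8 and offset 0.05), the fitted rule inverts that transform and the corrected output matches the reference.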
By adopting the method provided by the embodiment of the application, the following advantages are achieved:
1. the compensation rule only aims at one specific image sensor, and different image sensors have different compensation rules, so that the compensation effect of the original data is improved;
2. the image is subjected to denoising and enhancement in sequence based on deep learning, and the details of the image can be enhanced while the noise in the image is removed;
3. the first deep learning model is trained only aiming at the noise model of a specific image sensor, and different image sensors correspond to different first deep learning models, so that the denoising effect of original data is improved;
4. and the deviation generated in the deep learning process is compensated through post compensation, the requirement on an ISP is reduced, and the final image processing effect is improved.
It can be appreciated that deep learning often requires high computational power, which prior-art processor architectures cannot provide for high-quality deep learning. In view of this problem, the deep learning in the embodiment of the present application may be implemented on Computing In Memory (CIM). For ease of understanding, the basic principle of CIM is briefly described below.
In-memory computing, as the name implies, embeds the computing unit into the memory. In general, a computer built on the von Neumann architecture comprises a storage unit and a computing unit: to perform an operation, data must first be stored in main memory, and instructions are then fetched from main memory in sequence and executed one by one, so data migrates frequently between the processor and the memory. If the memory's transfer speed cannot keep up with the CPU's performance, computing capability is limited; this is the so-called "memory wall". For example, suppose the CPU takes 1 ns to process an instruction, but the memory takes 10 ns to read and transmit it; the CPU's processing speed is then severely constrained. In addition, a single read or write of data in memory can consume hundreds of times more energy than a single computation on that data, which is the "power consumption wall".
Ideally, by embedding the computing unit into the memory, in-memory computing eliminates the high energy consumption and limited speed of data transfer between the storage unit and the computing unit, effectively addressing the von Neumann bottleneck. Today, with the development of artificial intelligence technology, AI is widely applied across fields. Neural network algorithms represented by deep learning must efficiently process massive unstructured data such as text, video, images, and speech, which forces hardware under the von Neumann architecture to read and write memory frequently. Because such computing tasks feature large amounts of parallel computation and many parameters, deep learning places high demands on parallelism, low latency, and bandwidth, making it well suited to implementation on in-memory computing.
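As a software analogy of the principle just described, the minimal sketch below models a CIM macro: the weight matrix resides in the "memory array", and the matrix-vector multiply is performed where the weights live, so only activations and results cross the bus. The class and method names are hypothetical; real CIM hardware performs the multiply-accumulate with analog or digital circuits inside the memory itself.

```python
import numpy as np

class CIMMacro:
    """Toy model of a compute-in-memory macro (assumed interface)."""

    def __init__(self, weights: np.ndarray):
        # Weights are written once into the "memory array" and stay there.
        self.array = weights

    def mvm(self, x: np.ndarray) -> np.ndarray:
        # The multiply-accumulate happens "in place" at the array;
        # only the input activations and the result cross the bus.
        return self.array @ x
```

In a von Neumann design, every inference would re-fetch the weights from memory; here the weights never move, which is why neural-network layers (dominated by such matrix-vector products) map well onto CIM.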
Referring to fig. 3, a block diagram of an image processing apparatus according to an embodiment of the present disclosure is shown. As shown in fig. 3, the image processing apparatus includes four processors: a sensing processor, an image preprocessor, an image processor, and an image post-processor. The image processor can be a conventional ISP chip; in this embodiment, the sensing processor and image preprocessor are added in front of the ISP chip, and the image post-processor behind it, without modifying the ISP chip itself. Because the ISP chip does not need to be adjusted, this scheme is easier to realize in engineering.
Specifically, the sensing processor is connected to the input interface and is configured to receive an original image from the input interface and perform data compensation on it according to a preset compensation rule to obtain a first intermediate image, where the original image is an image acquired by the first image sensor and the compensation rule matches the defect characteristics of the first image sensor; and to adjust the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image. The image preprocessor is configured to denoise the second intermediate image based on a first deep learning model to obtain a third intermediate image, and to enhance the third intermediate image based on a second deep learning model, enhancing its details, to obtain a fourth intermediate image. To meet the computing-power requirements of deep learning, the image preprocessor adopts a computing-in-memory (CIM) architecture.
That is, the sensing processor performs preliminary processing on the raw data acquired by the image sensor, while the image preprocessor applies deep learning, exploiting the advantages of in-memory computing, to the data the sensing processor has already processed. In this embodiment, these two stages are placed on separate processors for two reasons. On the one hand, the preliminary processing of raw data requires little computation; performing it on the CIM-architecture image preprocessor would waste that processor's hardware resources. On the other hand, the preliminary processing usually follows a relatively fixed computational model tied to the image sensor, so the sensing processor needs adjustment only when the image sensor changes. Conversely, if the sensing processor and image preprocessor were integrated, any change of image sensor would make adjusting the combined processor relatively complicated, lengthening the development period and reducing efficiency.
The image processor is configured to perform image processing on the fourth intermediate image to obtain a fifth intermediate image. The image post-processor is configured to correct the fifth intermediate image according to a preset correction rule, so as to correct the deviation introduced by the first and second deep learning models, obtain the target image, and output it through the output interface. The image post-processor is provided separately because the image processor is relatively mature and functionally complex; integrating the post-processing function into it would disturb its original architecture and be difficult to implement.
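The four-processor flow above can be sketched as a simple sequential pipeline. Every stage body below is a trivial placeholder, because the patent does not disclose the concrete compensation, brightness, denoising, enhancement, or correction rules; only the stage ordering and the named intermediate images come from the text.

```python
import numpy as np

# Placeholder stand-ins for the four stages; the real rules and deep
# learning models are not specified, so each stage is illustrative only.
def sensing_stage(raw):             # compensation + brightness adjustment
    return np.clip(raw * 1.5, 0.0, 1.0)      # -> first/second intermediate

def preprocess_stage(img):          # CIM stage: denoise + enhance details
    return np.clip(img, 0.0, 1.0)            # -> third/fourth intermediate

def isp_stage(img):                 # conventional ISP processing
    return img                               # -> fifth intermediate

def postprocess_stage(img):         # correct model-induced deviation
    return np.clip(img - 0.01, 0.0, 1.0)     # -> target image

def pipeline(raw):
    return postprocess_stage(isp_stage(preprocess_stage(sensing_stage(raw))))

target = pipeline(np.full((2, 2), 0.4))
```

The point of the sketch is the separation of concerns: each stage can be swapped (for example, for a new image sensor only `sensing_stage` changes) without touching the others, which mirrors the engineering argument made above.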
In summary, in the embodiment of the present application, the sensing processor performs data compensation and brightness enhancement on the original image; the image preprocessor denoises and enhances the brightness-adjusted image; the image processor performs conventional image processing on the enhanced image; and the image post-processor corrects the remaining deviation. With the processors cooperating, each completing its own processing task, a better processing effect can be achieved, and engineering implementation is straightforward. In addition, because the image preprocessor adopts an in-memory computing architecture, the computing-power requirements of deep learning can be met.
For details of this embodiment, reference may be made to the description of the method embodiments; for brevity, they are not repeated here.
Referring to fig. 4, a block diagram of an image processor according to an embodiment of the present application is provided. As shown in fig. 4, the image processor includes a sensing processing module, an image preprocessing module, an image processing module, and an image post-processing module.
The sensing processing module is used for performing data compensation on an original image according to a preset compensation rule to obtain a first intermediate image, wherein the original image is an image acquired by a first image sensor, and the compensation rule is matched with the defect characteristics of the first image sensor; and adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image.
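The brightness adjustment rule is only described abstractly (the method claims break it into computing luminance averages, deriving an adjustment parameter, and applying it). As a toy illustration, the module might derive a gain that pulls the image mean toward a target; the gain formula and the ambient-brightness threshold below are assumptions for illustration, not the patent's rule.

```python
import numpy as np

# Toy brightness-adjustment rule: measure the first intermediate image's
# mean luminance, derive a gain toward a target mean, and apply it.
# The target value and the darker-scene boost factor are invented here.
def adjust_brightness(first_intermediate, ambient_mean, target=0.5):
    image_mean = first_intermediate.mean()
    boost = 1.2 if ambient_mean <= 0.2 else 1.0   # assumed dark-scene lift
    gain = target / max(image_mean, 1e-6) * boost
    return np.clip(first_intermediate * gain, 0.0, 1.0)

second_intermediate = adjust_brightness(np.full((2, 2), 0.25), ambient_mean=0.1)
```

This matches the claimed structure (statistics, then a parameter, then application) while leaving the actual mapping from luminance averages to the adjustment parameter open.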
The image preprocessing module adopts a computing-in-memory (CIM) architecture and is configured to denoise the second intermediate image based on a first deep learning model, eliminating noise that appears after brightness adjustment, to obtain a third intermediate image; and to enhance the third intermediate image based on a second deep learning model, enhancing its details, to obtain a fourth intermediate image.
And the image processing module is used for carrying out image processing on the fourth intermediate image to obtain a fifth intermediate image.
And the image post-processing module is used for correcting the fifth intermediate image according to a preset correction rule so as to correct the deviation generated by the first deep learning model and the second deep learning model and obtain a target image.
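The correction rule is not specified. One plausible reading is a fixed affine correction that undoes a systematic bias the learned models introduce, with the bias measured offline on calibration data. The constants and the affine form below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical correction rule: assume the two deep learning models
# introduce a known systematic gain/offset bias, measured offline on a
# calibration set; the post-processing step simply inverts that bias.
MODEL_GAIN, MODEL_OFFSET = 1.05, 0.02   # assumed calibration constants

def correct_deviation(fifth_intermediate):
    return np.clip((fifth_intermediate - MODEL_OFFSET) / MODEL_GAIN, 0.0, 1.0)

target_image = correct_deviation(np.array([0.02, 0.545, 1.0]))
```

Keeping this inversion outside the learned stages is consistent with the design argument above: the correction can be recalibrated without retraining either deep learning model.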
It should be noted that this embodiment implements the functions of the image processing apparatus shown in fig. 3 on a single processor, which, for convenience of implementation, is divided into four main functional modules: the sensing processing module corresponds to the sensing processor in fig. 3; the image preprocessing module to the image preprocessor; the image processing module to the image processor; and the image post-processing module to the image post-processor. For further details, reference may be made to the description of the embodiments above; for brevity, they are not repeated here.
Corresponding to the above embodiments, the present application also provides an electronic device including the above image processing apparatus or image processor. In specific implementation, the electronic device may be a mobile phone, a camera, a tablet computer, a virtual reality device, a vehicle-mounted device, and the like, and the embodiment of the present application does not limit the specific product form.
Corresponding to the above embodiments, the present application further provides a computer-readable storage medium, where the computer-readable storage medium may store a program, and when the program runs, the apparatus in which the computer-readable storage medium is located may be controlled to perform some or all of the steps in the above method embodiments. In a specific implementation, the computer-readable storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), or the like.
Corresponding to the above embodiments, the present application also provides a computer program product, which contains executable instructions, and when the executable instructions are executed on a computer, the computer is caused to execute some or all of the steps in the above method embodiments.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the relationship between associated objects and covers three cases: for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone, where A and B may each be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, and c" may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of electronic hardware and computer software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. An image processing method, comprising:
performing data compensation on an original image according to a preset compensation rule to obtain a first intermediate image, wherein the original image is an image acquired by a first image sensor, and the compensation rule is matched with the defect characteristics of the first image sensor;
adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image;
denoising the second intermediate image based on a first deep learning model to obtain a third intermediate image;
based on a second deep learning model, performing enhancement processing on the third intermediate image to enhance details of the third intermediate image and obtain a fourth intermediate image;
performing image processing on the fourth intermediate image to obtain a fifth intermediate image;
and according to a preset correction rule, performing correction processing on the fifth intermediate image to correct the deviation generated by the first deep learning model and the second deep learning model, and obtaining a target image.
2. The method according to claim 1, wherein the adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image comprises:
calculating luminance averages directly related to the exposure of the first image sensor, the luminance averages comprising an average luminance of the first intermediate image and an average ambient luminance;
determining a brightness adjustment parameter for the first intermediate image based on a brightness average directly related to the first image sensor exposure;
and adjusting the brightness of the first intermediate image according to the brightness adjustment parameter of the first intermediate image to obtain a second intermediate image.
3. The method according to claim 1, wherein the enhancing the third intermediate image based on the second deep learning model to enhance details of the third intermediate image to obtain a fourth intermediate image comprises:
and inputting the second intermediate image and the third intermediate image into the second deep learning model, and outputting the fourth intermediate image.
4. The method of claim 1, wherein the first deep learning model is a deep learning model trained based on a noise model associated with the first image sensor.
5. The method of claim 1, wherein the training samples of the second deep learning model comprise a first type of image and a second type of image, the first type of image being an image captured by the first image sensor, the second type of image being an image captured by a second image sensor, the second image sensor having better performance than the first image sensor.
6. The method of claim 5, wherein the second type of image is less than or equal to 20% of the training sample.
7. The method of claim 1, wherein the first deep learning model and/or the second deep learning model is a deep learning model of a coding-decoding structure.
8. The method according to claim 1, wherein the enhancing the third intermediate image based on the second deep learning model to enhance details of the third intermediate image to obtain a fourth intermediate image comprises:
and performing HDR enhancement processing on the third intermediate image based on a second deep learning model to enhance the details of the third intermediate image to obtain a fourth intermediate image.
9. An image processing apparatus characterized by comprising:
the sensing processor is used for carrying out data compensation on an original image according to a preset compensation rule to obtain a first intermediate image, wherein the original image is an image acquired by a first image sensor, and the compensation rule is matched with the defect characteristics of the first image sensor; adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image;
the image preprocessor adopts a computing-in-memory (CIM) architecture and is configured to denoise the second intermediate image based on a first deep learning model to obtain a third intermediate image; and to perform enhancement processing on the third intermediate image based on a second deep learning model to enhance details of the third intermediate image and obtain a fourth intermediate image;
the image processor is used for carrying out image processing on the fourth intermediate image to obtain a fifth intermediate image;
and the image post processor is used for correcting the fifth intermediate image according to a preset correction rule so as to correct the deviation generated by the first deep learning model and the second deep learning model and obtain a target image.
10. An image processor, comprising:
the sensing processing module is used for performing data compensation on an original image according to a preset compensation rule to obtain a first intermediate image, wherein the original image is an image acquired by a first image sensor, and the compensation rule is matched with the defect characteristics of the first image sensor; adjusting the brightness of the first intermediate image according to a preset brightness adjustment rule to obtain a second intermediate image;
the image preprocessing module is used for denoising the second intermediate image based on a first deep learning model so as to eliminate noise generated after brightness adjustment and obtain a third intermediate image; based on a second deep learning model, performing enhancement processing on the third intermediate image to enhance details of the third intermediate image and obtain a fourth intermediate image;
the image processing module is used for carrying out image processing on the fourth intermediate image to obtain a fifth intermediate image;
and the image post-processing module is used for correcting the fifth intermediate image according to a preset correction rule so as to correct the deviation generated by the first deep learning model and the second deep learning model and obtain a target image.
11. An electronic device, characterized in that it comprises the image processing apparatus of claim 9 or the image processor of claim 10.
12. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method of any one of claims 1 to 8.
CN202211257269.9A 2022-10-14 2022-10-14 Image processing method, image processing device, processor, electronic device and storage medium Pending CN115861086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211257269.9A CN115861086A (en) 2022-10-14 2022-10-14 Image processing method, image processing device, processor, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN115861086A true CN115861086A (en) 2023-03-28



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination