WO2020215263A1 - Image processing method and apparatus

Image processing method and apparatus

Info

Publication number
WO2020215263A1
Authority
WO
WIPO (PCT)
Prior art keywords
processing
image data
raw data
data
image
Application number
PCT/CN2019/084158
Other languages
English (en)
French (fr)
Inventor
郑成林
李蒙
胡慧
陈海
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to PCT/CN2019/084158 (WO2020215263A1)
Priority to CN201980088470.9A (CN113287147A)
Publication of WO2020215263A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems

Description

  • This application relates to the field of computer processing technology, and in particular to an image processing method and device.
  • The image signal processor (ISP) is an important part of camera equipment.
  • When an image is captured, the camera device can obtain a Bayer image of the target scene through the lens, and the Bayer image can be converted from analog to digital to obtain a digital image signal (i.e., RAW data).
  • The RAW data is then passed through the ISP for a series of computational optimizations, such as noise reduction, color adjustment, brightness adjustment, and exposure adjustment, to finally generate the target image shown on the display.
  • At present, image processing is usually implemented with deep learning, which achieves computational optimization mainly through various artificial-neural-network methods and is increasingly widely applied.
  • In the prior art, multiple processing steps such as noise reduction, color adjustment, and brightness adjustment are usually executed serially, for example through a single neural network that implements multiple processing steps executed in series, or through multiple serial neural networks, each implementing a different processing step, to process the RAW data into the target image.
  • However, in the above approaches that process images through one neural network or multiple serial neural networks, the parameters of each neural network are fixed, so the effect of the resulting target image is also fixed and cannot be adjusted when the effect is poor or when switching between different image effects is needed.
  • In addition, when multiple serial neural networks are used, the input data of a later neural network depends entirely on the output data of the previous neural network; this strong dependence between successive networks increases the training difficulty of the later networks.
  • The embodiments of the present application provide an image processing method and device, which are used to improve the flexibility, adjustability, and processing accuracy of image processing, and at the same time improve the clarity of the processed image.
  • In a first aspect, an image processing method is provided. The method includes: preprocessing original RAW data to obtain first RAW data and second RAW data, where the resolution of the image corresponding to the second RAW data is smaller than the resolution of the image corresponding to the first RAW data; performing local pixel processing on the first RAW data to obtain first image data, and performing global pixel processing on the second RAW data to obtain second image data; and generating target image data from the first image data and the second image data.
  • In the above technical solution, the local pixel processing step and the global pixel processing step are independent of each other and executed in parallel, and the parameters of the local pixel processing and the global pixel processing can be adjusted as needed, which improves the flexibility and adjustability of image processing.
  • At the same time, the locally processed image data and the globally processed image data do not depend on each other, so the original image information of the image to be processed is retained to the greatest extent, the processing accuracy of the image is improved, and the definition of the finally generated image is improved.
  • In a possible implementation of the first aspect, the first image data includes linear RGB image data, and performing local pixel processing on the first RAW data to obtain the first image data includes: performing at least one of noise reduction processing or demosaicing processing on local pixels of the first RAW data to obtain the linear RGB image data.
  • In the above possible implementation, noise reduction or demosaicing is performed on local pixels of the image to be processed, which can improve the definition of the target image.
  • In a possible implementation, the first image data further includes a first brightness ratio matrix, and performing local pixel processing on the first RAW data to obtain the first image data further includes: performing brightness processing on local pixels of the first RAW data to obtain the first brightness ratio matrix.
  • In a possible implementation, the second image data includes a color conversion matrix, and performing global pixel processing on the second RAW data to obtain the second image data includes: performing color processing on global pixels of the second RAW data to obtain the color conversion matrix.
  • In a possible implementation, the second image data further includes a second brightness ratio matrix, and performing global pixel processing on the second RAW data to obtain the second image data includes: performing brightness processing on global pixels of the second RAW data to obtain the second brightness ratio matrix.
  • In a possible implementation, generating the target image data from the first image data and the second image data includes: generating the target image data from one of the first brightness ratio matrix and the second brightness ratio matrix, the linear RGB image data, and the color conversion matrix.
  • In the above possible implementation, the linear RGB image data, the color conversion matrix, the first brightness ratio matrix, and the second brightness ratio matrix are all generated from the original image data, which retains the original image information to the greatest extent; since they do not depend on one another, the processing accuracy of the image and the sharpness of the finally generated image can be improved.
  • In a possible implementation, preprocessing the original image data to obtain the first RAW data and the second RAW data includes: performing black level correction, normalization, and channel splitting on the original image data to obtain the first RAW data; and performing down-sampling, black level correction, normalization, and channel splitting on the original image data to obtain the second RAW data.
  • The above possible implementation preprocesses the original image data along two different paths, which facilitates the subsequent parallel execution of global pixel processing and local pixel processing, improves the flexibility of image processing, and also improves the definition of the finally generated image.
  • In a second aspect, an image processing system is provided, which includes: a preprocessing circuit, configured to preprocess original RAW data to obtain first RAW data and second RAW data, where the resolution of the image corresponding to the second RAW data is smaller than the resolution of the image corresponding to the first RAW data; a local pixel processing network, configured to receive the first RAW data output by the preprocessing circuit and perform local pixel processing on the first RAW data to obtain first image data; a global pixel processing network, configured to receive the second RAW data output by the preprocessing circuit and perform global pixel processing on the second RAW data to obtain second image data; and an image synthesis circuit, configured to receive the first image data output by the local pixel processing network and the second image data output by the global pixel processing network, and to generate target image data from the first image data and the second image data.
  • In a possible implementation of the second aspect, the first image data is linear RGB image data, the local pixel processing network includes a primary processing network, and the primary processing network is specifically configured to: receive the first RAW data output by the preprocessing circuit, and perform at least one of noise reduction processing or demosaicing processing on local pixels of the first RAW data to obtain the linear RGB image data.
  • In a possible implementation, the first image data further includes a first brightness ratio matrix, the local pixel processing network further includes a first brightness processing network, and the first brightness processing network is specifically configured to: receive the first RAW data output by the preprocessing circuit, and perform brightness processing on local pixels of the first RAW data to obtain the first brightness ratio matrix.
  • In a possible implementation, the second image data includes a color conversion matrix, the global pixel processing network includes a color processing network, and the color processing network is specifically configured to: receive the second RAW data output by the preprocessing circuit, and perform color processing on global pixels of the second RAW data to obtain the color conversion matrix.
  • In a possible implementation, the second image data further includes a second brightness ratio matrix, the global pixel processing network further includes a second brightness processing network, and the second brightness processing network is specifically configured to: receive the second RAW data output by the preprocessing circuit, and perform brightness processing on global pixels of the second RAW data to obtain the second brightness ratio matrix.
  • In a possible implementation, the image synthesis circuit is specifically configured to: receive one of the first brightness ratio matrix output by the first brightness processing network and the second brightness ratio matrix output by the second brightness processing network, the linear RGB image data output by the primary processing network, and the color conversion matrix output by the color processing network, and generate the target image data from one of the first brightness ratio matrix and the second brightness ratio matrix, the linear RGB image data, and the color conversion matrix.
  • In a possible implementation, the preprocessing circuit is specifically configured to: perform black level correction, normalization, and channel splitting on the original image data to obtain the first RAW data; and perform down-sampling, black level correction, normalization, and channel splitting on the original image data to obtain the second RAW data.
  • In a third aspect, an image processing device is provided, which includes: a preprocessing unit, configured to preprocess original RAW data to obtain first RAW data and second RAW data, where the resolution of the image corresponding to the second RAW data is smaller than the resolution of the image corresponding to the first RAW data; a pixel processing unit, configured to perform local pixel processing on the first RAW data to obtain first image data, and to perform global pixel processing on the second RAW data to obtain second image data; and an image synthesis unit, configured to generate target image data from the first image data and the second image data.
  • In a possible implementation of the third aspect, the first image data is linear RGB image data, and the pixel processing unit is specifically configured to: perform at least one of noise reduction processing or demosaicing processing on local pixels of the first RAW data to obtain the linear RGB image data.
  • In a possible implementation, the first image data further includes a first brightness ratio matrix, and the pixel processing unit is further configured to: perform brightness processing on local pixels of the first RAW data to obtain the first brightness ratio matrix.
  • In a possible implementation, the second image data includes a color conversion matrix, and the pixel processing unit is further configured to: perform color processing on global pixels of the second RAW data to obtain the color conversion matrix.
  • In a possible implementation, the second image data further includes a second brightness ratio matrix, and the pixel processing unit is further configured to: perform brightness processing on global pixels of the second RAW data to obtain the second brightness ratio matrix.
  • In a possible implementation, the image synthesis unit is specifically configured to generate the target image data from one of the first brightness ratio matrix and the second brightness ratio matrix, the linear RGB image data, and the color conversion matrix.
  • In a possible implementation, the preprocessing unit is specifically configured to: perform black level correction, normalization, and channel splitting on the original image data to obtain the first RAW data; and perform down-sampling, black level correction, normalization, and channel splitting on the original image data to obtain the second RAW data.
  • In a fourth aspect, an image processing device is provided, which includes a memory and a processor coupled to the memory. The memory stores instructions and data, and when the processor runs the instructions in the memory, the device executes the image processing method provided in the first aspect or any one of the possible implementations of the first aspect.
  • In a fifth aspect, a computer storage medium is provided. The computer-readable storage medium stores instructions, and when the instructions run on a computer, the computer executes the image processing method provided in the first aspect or any one of the possible implementations of the first aspect.
  • In a sixth aspect, a computer program product is provided. When the computer program product runs on a computer, the computer executes the image processing method provided in the first aspect or any one of the possible implementations of the first aspect.
  • It can be understood that any of the image processing apparatuses, systems, readable storage media, and computer program products provided above is used to execute the corresponding method provided above; therefore, for the beneficial effects it can achieve, reference may be made to the beneficial effects of the corresponding method, which are not repeated here.
  • FIG. 1 is a schematic structural diagram of an image processing device provided by an embodiment of the application.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of this application.
  • FIG. 3 is a schematic diagram of a data format of original RAW data provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of a preprocessing process for original RAW data provided by an embodiment of the application
  • FIG. 5 is a schematic diagram of a processing process of first image data provided by an embodiment of this application.
  • FIG. 6 is a schematic diagram of a brightness processing process provided by an embodiment of this application.
  • FIG. 7 is a schematic structural diagram of an image processing system provided by an embodiment of this application.
  • FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the application.
  • In this application, "at least one" refers to one or more, and "multiple" refers to two or more.
  • "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural.
  • "At least one of the following items" or a similar expression refers to any combination of these items, including any combination of single items or plural items.
  • For example, at least one of a, b, or c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can each be single or multiple.
  • The character "/" generally indicates an "or" relationship between the associated objects.
  • In addition, in the embodiments of this application, words such as "first" and "second" do not limit the number or the execution order.
  • FIG. 1 is a schematic structural diagram of an image processing device provided by an embodiment of the application. The image processing device may be a mobile phone, a tablet computer, a computer, a notebook computer, a video camera, a camera, a wearable device, a vehicle-mounted device, or another terminal device.
  • For ease of description, the above-mentioned devices are collectively referred to as image processing devices in this application. In the embodiments of this application, the image processing device being a mobile phone is taken as an example for description.
  • The mobile phone includes: a memory 101, a processor 102, a sensor component 103, a multimedia component 104, an audio component 105, and a power supply component 106.
  • The memory 101 can be used to store data, software programs, and modules, and mainly includes a program storage area and a data storage area. The program storage area can store an operating system and at least one application program required for a function, such as a sound playback function or an image playback function; the data storage area can store data created according to the use of the mobile phone, such as audio data, image data, and a phone book.
  • In addition, the mobile phone may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • The processor 102 is the control center of the mobile phone. It uses various interfaces and lines to connect the parts of the entire device, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 101 and calling the data stored in the memory 101, thereby monitoring the mobile phone as a whole.
  • In some feasible embodiments, the processor 102 may have a single-processor or multi-processor structure and may be a single-threaded or multi-threaded processor; it may include a central processing unit, a general-purpose processor, a digital signal processor, a microcontroller, or a microprocessor.
  • In addition, the processor 102 may further include other hardware circuits or accelerators, such as an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, which can implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the disclosure of this application.
  • The processor 102 may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a digital signal processor and a microprocessor.
  • The sensor component 103 includes one or more sensors, which are used to provide status assessments of various aspects of the mobile phone.
  • The sensor component 103 may include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, which is used to detect the distance between an external object and the mobile phone, or is used in imaging applications, that is, as a component of the camera.
  • The sensor component 103 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor; through the sensor component 103, the acceleration/deceleration, orientation, and open/closed state of the mobile phone, the relative positioning of its components, or the temperature change of the mobile phone can be detected.
  • the multimedia component 104 provides a screen with an output interface between the mobile phone and the user.
  • the screen may be a touch panel, and when the screen is a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 104 further includes at least one camera.
  • the multimedia component 104 includes a front camera and/or a rear camera. When the mobile phone is in an operating mode, such as shooting mode or video mode, the front camera and/or the rear camera can receive external multimedia data.
  • Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 105 may provide an audio interface between the user and the mobile phone.
  • the audio component 105 may include an audio circuit, a speaker, and a microphone.
  • The audio circuit can convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which the audio circuit receives and converts into audio data, and the audio data is then output, for example to be sent to another mobile phone, or output to the processor 102 for further processing.
  • the power supply component 106 is used to provide power for various components of the mobile phone.
  • the power supply component 106 may include a power management system, one or more power supplies, and other components related to the generation, management, and distribution of power by the mobile phone.
  • Although not shown, the mobile phone may also include a wireless fidelity (WiFi) module, a Bluetooth module, and the like, which are not described in detail in the embodiments of the present application.
  • Those skilled in the art can understand that the structure shown in FIG. 1 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the application. The method may be executed by the image processing device shown in FIG. 1. Referring to FIG. 2, the method may include the following steps.
  • S201: Preprocess the original RAW data to obtain first RAW data and second RAW data, where the resolution of the image corresponding to the second RAW data is smaller than the resolution of the image corresponding to the first RAW data.
  • the original RAW data may also be referred to as original image data, and may specifically be Bayer image data corresponding to the target scene, or RAW data in Bayer format obtained after the Bayer image data is converted from analog to digital.
  • The Bayer image can be obtained by the sensor component 103 in the image processing device shown in FIG. 1, and the processor 102 can convert the Bayer image through a series of analog-to-digital conversions to obtain a digital image signal, that is, RAW data in Bayer format. Here, RAW data refers to unprocessed image data.
  • As shown in FIG. 3, the original RAW data may include multiple pixel array units, and one pixel array unit may include two green (G) pixels, one blue (B) pixel, and one red (R) pixel; the four pixels in the dashed frame in FIG. 3 form one pixel array unit. H represents the height of the original RAW data, and W represents its width.
  • Each pixel of the original RAW data has only one color, namely red, green, or blue, and each pixel has one pixel value.
  • Further, preprocessing the original image data may specifically include: performing black level correction, normalization, and channel splitting on the original RAW data to obtain the first RAW data; and performing down-sampling, black level correction, normalization, and channel splitting on the original RAW data to obtain the second RAW data.
  • The black level correction processing may refer to the process of restoring the pixel value range of the pixels in the original RAW data to the standard pixel value range; for example, the pixel values in the original RAW data range from 5 to 255, while the standard pixel value range is 0 to 255. When the image data collected by the sensor undergoes analog-to-digital conversion, the conversion usually cannot provide high enough accuracy to convert very small voltage values, so a fixed offset is added before the conversion so that the lowest input level is not zero; for example, if the fixed offset is 5, the resulting pixel value range is 5 to 255. Therefore, when processing the original RAW data, the pixel values can be restored so that the minimum of the pixel value range is adjusted back to zero; this adjustment is the black level correction processing.
  • The normalization process may refer to converting the pixel value range of the pixels in the image data from 0 to 255 into the range 0 to 1.
  • the normalization process can reduce the amount of subsequent calculations, thereby improving the calculation efficiency of image data processing, and it is also convenient for subsequent image processing calculations to adopt floating-point data calculations.
  • Channel splitting may refer to the process of splitting image data into several single-pixel channels, for example, splitting the original RAW data in pixel-array-unit form (the data format shown in FIG. 3) into the four single-pixel channels R, G, B, and G, so that each single-pixel channel can subsequently be processed separately.
  • Down-sampling processing may refer to a process of re-sampling original image data to generate new image data according to a certain sampling coefficient.
  • the down-sampling processing may adopt down-sampling methods such as bilinear interpolation and bicubic interpolation.
  • For example, if the down-sampling coefficient is k, then in the original RAW data one pixel is taken every k points in each row and each column to re-sample and generate the new image data.
  • FIG. 4 is a schematic diagram of the process of preprocessing the original RAW data.
  • The process of obtaining the first RAW data can be: performing black level correction and normalization on the original RAW data and outputting RAW floating-point data, whose pixel values are fractional values between 0 and 1; the output RAW floating-point data is then channel-split to obtain the first RAW data. The first RAW data can include image data of the four channels R, G, B, and G, the pixel value of each pixel ranges from 0 to 1, and the image corresponding to each channel has height H/2 and width W/2.
  • The process of obtaining the second RAW data can be: first performing 8 times down-sampling on the original RAW data and outputting down-sampled RAW data; then performing black level correction and normalization and outputting down-sampled RAW floating-point data; and finally channel-splitting the down-sampled RAW floating-point data and outputting the second RAW data. The second RAW data can also include image data of the four channels R, G, B, and G, the pixel value of each pixel ranges from 0 to 1, and the image corresponding to each channel has height H/8 and width W/8. A sketch of this two-branch preprocessing is given below.
  • S202: Perform local pixel processing on the first RAW data to obtain first image data.
  • Here, local pixel processing may refer to processing local pixels in the first RAW data, and local pixel processing can be used to change the image characteristics of a specific area in the image.
  • Local pixel processing can usually include noise reduction processing, demosaicing processing, or local brightness processing.
  • Specifically, performing local pixel processing on the first RAW data to obtain the first image data may include: performing noise reduction processing on local pixels of the first RAW data; or performing demosaicing processing on local pixels of the first RAW data; or performing both noise reduction processing and demosaicing processing on local pixels of the first RAW data.
  • The processed image data may be linear RGB image data.
  • Noise reduction processing may specifically refer to processing that eliminates or reduces noise in image data. Because the original RAW data is usually disturbed by the imaging device and by external environmental noise during digitization and data transmission, the original RAW data, as well as the preprocessed first RAW data and second RAW data, usually contains noise and therefore needs noise reduction.
  • Demosaicing can be regarded as a kind of color reconstruction processing whose purpose is to reconstruct a full-color image from input image data with incomplete color sampling, that is, to reconstruct the complete RGB three-primary-color data of each pixel.
  • RGB image data can also be called three primary color image data.
  • Linear RGB image data is an image in which the change of pixel color can be represented by the linear change of pixel value data.
  • FIG. 5 shows the process of acquiring the first image data.
  • RAW data is input and, because the input data generally contains noise, an additional channel for adjusting the noise level can also be added so that the degree of noise reduction is adjustable; a series of local pixel processing operations is then performed (for example, two-dimensional convolution operations and rectified linear unit (ReLU) operations), and the output is 3-channel noise-free linear RGB data.
  • The embodiments of this application do not limit the specific process of local pixel processing; general noise reduction and demosaicing methods are applicable to this application.
  • By adjusting the operation parameters and operation structure of the processing, different degrees of noise reduction and demosaicing can be achieved, so the parameters of local pixel processing are adjustable, which improves the flexibility and processing accuracy of image processing.
  • It should be noted that the noise reduction processing in the embodiments of the present application can be implemented through a corresponding neural network. When the neural network used for noise reduction is trained, noise can be added to the training samples by adjusting the value of the noise-level channel, where the value of this channel indicates how much noise is added; in this way a correspondence between the trained neural network and the noise level is established, so that when a noisy image is processed, noise reduction can be performed according to the value of the noise-level channel (see the sketch below).
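  • As an illustrative sketch only (not the network defined by this application), the following PyTorch code shows one possible shape for such a local pixel processing network: the packed 4-channel first RAW data is concatenated with a constant noise-level channel, passed through a few convolution and ReLU operations, and upsampled by pixel shuffling to 3-channel linear RGB at the full H x W resolution. The layer count, channel widths, pixel-shuffle upsampling, and the name of the noise-level input are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class DenoiseDemosaicNet(nn.Module):
    """Toy local pixel processing network: packed 4-channel RAW plus a
    constant noise-level map in, 3-channel linear RGB out (2x upsampled
    by pixel shuffle so the output is H x W)."""
    def __init__(self, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4 + 1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3 * 4, 3, padding=1),
            nn.PixelShuffle(2),            # (3*4, H/2, W/2) -> (3, H, W)
        )

    def forward(self, first_raw: torch.Tensor, noise_level: float) -> torch.Tensor:
        n, _, h, w = first_raw.shape
        sigma = first_raw.new_full((n, 1, h, w), noise_level)  # adjustable noise channel
        return self.body(torch.cat([first_raw, sigma], dim=1))

net = DenoiseDemosaicNet()
first_raw = torch.rand(1, 4, 512, 512)         # preprocessed first RAW data
linear_rgb = net(first_raw, noise_level=0.05)  # (1, 3, 1024, 1024)
```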
  • performing local pixel processing on the first RAW data to obtain the first image data may specifically include: performing brightness processing on the local pixels of the first RAW data to obtain the first brightness ratio matrix.
  • the local brightness processing is mainly to perform brightness information processing on the local pixels of the image data to achieve the purpose of changing the local brightness of the image.
  • The channel-split, preprocessed first RAW data is subjected to local brightness processing to obtain a first brightness ratio matrix, which may also be referred to as a first brightness enhancement ratio matrix and is used to represent the brightness enhancement information of local pixels.
  • the local brightness processing may extract brightness information for each pixel or a 3 ⁇ 3, 5 ⁇ 5 pixel block to obtain a local brightness improvement ratio.
  • FIG. 6 is a schematic diagram of the brightness processing.
  • The input is the four-channel first RAW data, and the pixel values of the R, G, B, and G channels can be treated as four brightness matrices, each of height H/2 and width W/2. The four brightness matrices are processed through a series of operations (for example, combination operations, convolution operations, and excitation function operations) to obtain the first brightness ratio matrix, whose height can be H and width can be W, as illustrated in the sketch below.
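  • The following PyTorch sketch illustrates, under assumed layer choices, a first brightness processing network of this kind: it maps the four H/2 x W/2 brightness matrices to a positive H x W brightness ratio matrix. The specific layers, the pixel-shuffle upsampling, and the Softplus used to keep the ratio positive are hypothetical choices, not requirements of this application.

```python
import torch
import torch.nn as nn

class LocalBrightnessNet(nn.Module):
    """Toy first brightness processing network: 4-channel first RAW data in,
    an H x W brightness enhancement ratio matrix out."""
    def __init__(self, width: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1 * 4, 3, padding=1),
            nn.PixelShuffle(2),          # (4, H/2, W/2) -> (1, H, W)
            nn.Softplus(),               # keep the ratio positive
        )

    def forward(self, first_raw: torch.Tensor) -> torch.Tensor:
        return self.body(first_raw)

ratio_r = LocalBrightnessNet()(torch.rand(1, 4, 512, 512))   # (1, 1, 1024, 1024)
```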
  • performing local pixel processing on the first RAW data to obtain the first image data may also include: performing color enhancement or detail enhancement processing on the local pixels of the first RAW data.
  • color enhancement refers to a technology that uses various methods and means to perform color synthesis or color display to highlight the differences between different objects and improve the image display effect.
  • Detail enhancement refers to the process of adjusting the details of the image, such as brightness, contrast, sharpness, etc., through different methods and means, which are not specifically described in the embodiments of the present application.
  • S203: Perform global pixel processing on the second RAW data to obtain second image data.
  • It should be noted that S202 and S203 may be performed in no particular order; in the embodiments of this application, S202 and S203 being executed in parallel is taken as an example for description.
  • global pixel processing refers to processing all pixels of the image data, and global pixel processing can be used to adjust certain image characteristics of the overall image, such as the color, contrast, and exposure of the overall image.
  • performing global pixel processing on the second RAW data to obtain the second image data may specifically include: performing color processing on the global pixels of the second RAW data to obtain a color conversion matrix.
  • The color processing may be color correction (CC) processing, which, based on optical theory, accurately restores the overall color of the image to the true color of the shooting scene as perceived by the human eye. The output color conversion matrix can be used to identify the color information of each pixel.
  • For example, the second RAW data includes down-sampled image data of the four channels R, G, B, and G, and the pixel values of the four channels can be treated as four color matrices, each of height H/8 and width W/8. These four color matrices are processed through a series of operations (such as combination operations, convolution operations, and excitation function operations) to obtain the color conversion matrix, whose height can be H and width can be W.
  • performing global pixel processing on the second RAW data to obtain the second image data may also include: performing automatic white balance, automatic focus and other processing on the global pixels of the second RAW data.
  • The automatic white balance processing may include restoring pixels that have a color cast due to lighting or other reasons to their original color through color restoration and toning.
  • Autofocus processing can be the processing of adjusting the sharpness of the key positions of the image, and the focus position can be made the clearest place of the entire image through autofocus.
  • the global pixel processing may also include processing in other different ways, which is not specifically limited in the embodiment of the present application.
  • performing global pixel processing on the second RAW data to obtain second image data may also include: performing global pixel brightness processing on the second RAW data to obtain a second brightness ratio matrix.
  • the global brightness processing process is similar to the above-mentioned local brightness processing process, which is to adjust the brightness value of the overall image.
  • Specifically, the global brightness processing can extract a single brightness enhancement ratio for the entire image, or assign a brightness enhancement ratio to each gray value (0 to 255), that is, gamma correction (see the sketch below).
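  • A minimal NumPy sketch of the per-gray-value variant (gamma correction) is given below. The gamma value of 1/2.2 and the 256-entry lookup table are assumptions for illustration, and the luminance input is assumed to already be normalized to [0, 1].

```python
import numpy as np

def gamma_ratio_lut(gamma: float = 1.0 / 2.2) -> np.ndarray:
    """One brightness enhancement ratio per gray value 0..255: the factor that
    maps the normalized gray value g to g**gamma (gamma correction)."""
    g = np.arange(256) / 255.0
    ratio = np.ones(256)
    ratio[1:] = g[1:] ** gamma / g[1:]          # avoid division by zero at g = 0
    return ratio

def second_brightness_ratio(luma: np.ndarray, gamma: float = 1.0 / 2.2) -> np.ndarray:
    """Build the H x W second brightness ratio matrix by looking up each
    pixel's gray value in the gamma LUT."""
    lut = gamma_ratio_lut(gamma)
    gray = np.clip(np.round(luma * 255.0), 0, 255).astype(np.int32)
    return lut[gray]

luma = np.random.rand(1024, 1024)               # assumed luminance in [0, 1]
ratio_r2 = second_brightness_ratio(luma)        # (1024, 1024)
```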
  • It should be noted that both the local pixel processing and the global pixel processing in the embodiments of the present application can be implemented through corresponding neural networks, and a stable neural network model can be obtained by learning and training on a large amount of input data.
  • The embodiments of this application do not limit the specific structure of the neural networks.
  • The user can set the parameters and structures of the different neural networks according to the desired target image, and can adjust the color, brightness, and other attributes of the target image by adjusting the structures and parameters of the neural networks, which improves the flexibility and accuracy of image processing.
  • S204: Generate target image data from the first image data and the second image data.
  • The target image data can be the target RGB image displayed on the display screen of the device. Specifically, the target image data can be generated by the processor in the image processing device, and can be displayed by the display panel included in the multimedia component of the image processing device.
  • Generating the target image data from the first image data and the second image data may specifically be: when the first image data includes the linear RGB image data obtained by demosaicing and noise reduction and the brightness ratio matrix R obtained by local brightness processing, and the second image data includes the color conversion matrix T obtained by global color processing, the brightness ratio matrix R and the color conversion matrix T can be applied to the linear RGB image data to generate the target RGB image data.
  • For example, the target RGB image data can be generated by the formula F(linear RGB)*T*R, where F denotes a function that performs certain operations on the R, G, and B values of the linear RGB data, such as squaring and cross multiplication, and * denotes matrix multiplication.
  • Specifically, the F function may square the R, G, and B values to obtain R^2, G^2, and B^2, and cross-multiply them to obtain R*B, R*G, and B*G; then R, G, B, R^2, G^2, B^2, R*B, R*G, and B*G are combined in a certain order and multiplied by the color conversion matrix T to obtain color-processed image data, which is then multiplied by the brightness ratio matrix R to obtain the target RGB image.
  • For the processing of generating the target RGB image data, there is another embodiment, F(linear RGB*R)*T, that is, brightness enhancement processing is first performed on the linear RGB image data and color processing is performed afterwards to obtain the target RGB image; the embodiments of the present application do not specifically limit this. A sketch of this synthesis step is given below.
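  • The following NumPy sketch illustrates the F(linear RGB)*T*R composition under one set of assumptions: F is the nine-term expansion described above, T is taken to be a single global 9 x 3 transform applied to every pixel (the description equally allows an H x W, per-pixel color conversion matrix), and R is an H x W brightness ratio matrix applied as a per-pixel gain.

```python
import numpy as np

def polynomial_expand(rgb: np.ndarray) -> np.ndarray:
    """F(.): stack R, G, B, their squares, and their cross products into a
    9-channel feature image of shape (H, W, 9)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([r, g, b, r * r, g * g, b * b, r * b, r * g, b * g], axis=-1)

def synthesize(linear_rgb: np.ndarray, color_T: np.ndarray, ratio_R: np.ndarray) -> np.ndarray:
    """Target RGB = F(linear RGB) * T * R: apply the color transform to the
    expanded features, then scale each pixel by the brightness ratio."""
    colored = polynomial_expand(linear_rgb) @ color_T      # (H, W, 9) @ (9, 3) -> (H, W, 3)
    return colored * ratio_R[..., None]                    # per-pixel brightness gain

H, W = 1024, 1024
linear_rgb = np.random.rand(H, W, 3)       # output of the local pixel processing
color_T = np.random.rand(9, 3)             # assumed global 9 x 3 color transform
ratio_R = np.ones((H, W))                  # brightness ratio matrix (first or second)
target_rgb = synthesize(linear_rgb, color_T, ratio_R)
# The alternative order F(linear RGB * R) * T would instead apply the brightness
# ratio before the polynomial expansion and color transform.
```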
  • In the above technical solution, at least one local pixel processing step and at least one global pixel processing step are independent of each other and executed in parallel, and the parameters of the local pixel processing and the global pixel processing can be adjusted as needed, so each processing step can be adjusted individually and the flexibility and adjustability of image processing are improved. At the same time, the image data produced by each local pixel processing step and by each global pixel processing step do not depend on each other, so the original image information of the image to be processed is retained to the greatest extent, the processing accuracy of the image is improved, and the definition of the finally generated image is improved.
  • The embodiment of the present application also provides an image processing system. Referring to FIG. 7, the system may include: a preprocessing circuit 701, a local pixel processing network 702, a global pixel processing network 703, and an image synthesis circuit 704.
  • The preprocessing circuit 701 can be used to preprocess the original RAW data to obtain the first RAW data and the second RAW data, where the resolution of the image corresponding to the second RAW data is smaller than the resolution of the image corresponding to the first RAW data.
  • The local pixel processing network 702 can be used to receive the first RAW data output by the preprocessing circuit 701 and perform local pixel processing on the first RAW data to obtain the first image data.
  • The global pixel processing network 703 can be used to receive the second RAW data output by the preprocessing circuit 701 and perform global pixel processing on the second RAW data to obtain the second image data.
  • The image synthesis circuit 704 is configured to receive the first image data output by the local pixel processing network 702 and the second image data output by the global pixel processing network 703, and to generate the target image data from the first image data and the second image data.
  • The preprocessing circuit 701 can be specifically used to: perform black level correction, normalization, and channel splitting on the original image data to obtain the first RAW data; and perform down-sampling, black level correction, normalization, and channel splitting on the original image data to obtain the second RAW data.
  • Further, the first image data may be linear RGB image data, and the local pixel processing network 702 may include a primary processing network, which is specifically configured to: receive the first RAW data output by the preprocessing circuit 701, and perform at least one of noise reduction processing or demosaicing processing on local pixels of the first RAW data to obtain the linear RGB image data.
  • The first image data may further include a first brightness ratio matrix, and the local pixel processing network 702 may further include a first brightness processing network, which may be specifically used to: receive the first RAW data output by the preprocessing circuit 701, and perform brightness processing on local pixels of the first RAW data to obtain the first brightness ratio matrix.
  • The second image data may include a color conversion matrix, and the global pixel processing network 703 may include a color processing network, which may be specifically used to: receive the second RAW data output by the preprocessing circuit 701, and perform color processing on global pixels of the second RAW data to obtain the color conversion matrix.
  • The second image data may also include a second brightness ratio matrix, and the global pixel processing network 703 may also include a second brightness processing network, which may be specifically used to: receive the second RAW data output by the preprocessing circuit 701, and perform brightness processing on global pixels of the second RAW data to obtain the second brightness ratio matrix.
  • It should be noted that the primary processing network, the color processing network, the first brightness processing network, and the second brightness processing network may be obtained through artificial neural network training.
  • An artificial neural network abstracts the neuron network of the human brain from the perspective of information processing, establishes a certain computational model, and forms different networks according to different connection methods; it can be trained on a large amount of input data to obtain stable and adjustable parameters.
  • The operations of the neural network may include convolution operations, rectified linear unit (ReLU) operations, and excitation functions, which are not specifically limited in this application.
  • For example, the color processing network of the embodiment of the present application may be composed of N convolutional layers, M pooling layers, and K fully connected layers (see the sketch below).
  • The convolutional layers perform feature extraction on the input data and may contain multiple combination operations, convolution operations, and excitation function operations; after a convolutional layer performs feature extraction, the output feature map is passed to a pooling layer for feature selection.
  • A pooling layer can contain a preset pooling function whose role is to replace the result of a single point in the feature map with statistics of its neighboring region. The fully connected layers are usually built in the last part of the hidden layers of the convolutional neural network; in the fully connected layers the feature map loses its three-dimensional structure, is expanded into a vector, and is passed to the next layer through the activation function.
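  • Purely as an illustration of this N-convolution, M-pooling, K-fully-connected structure, the PyTorch sketch below uses N = 2, M = 2, and K = 2 and outputs a global 9 x 3 color transform compatible with the synthesis sketch above; all of these sizes, and the choice of a global rather than per-pixel output, are assumptions rather than limitations of this application.

```python
import torch
import torch.nn as nn

class ColorProcessingNet(nn.Module):
    """Toy global color processing network with N=2 convolutional layers,
    M=2 pooling layers, and K=2 fully connected layers; it maps the
    4-channel second RAW data to a global 9 x 3 color transform."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),          # global statistics of the feature map
        )
        self.head = nn.Sequential(
            nn.Flatten(),                      # feature map loses its 3-D structure here
            nn.Linear(32, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 9 * 3),
        )

    def forward(self, second_raw: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(second_raw)).view(-1, 9, 3)

color_T = ColorProcessingNet()(torch.rand(1, 4, 64, 64))   # (1, 9, 3)
```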
  • Further, the image synthesis circuit 704 may be specifically configured to: receive one of the first brightness ratio matrix output by the first brightness processing network and the second brightness ratio matrix output by the second brightness processing network, the linear RGB image data output by the primary processing network, and the color conversion matrix output by the color processing network, and generate the target image data from one of the first brightness ratio matrix and the second brightness ratio matrix, the linear RGB image data, and the color conversion matrix.
  • In the above technical solution, at least one local pixel processing network and at least one global pixel processing network are independent of each other and process data in parallel, and the parameters of the local pixel processing network and the global pixel processing network can be adjusted as needed, so each processing network can be adjusted individually and the flexibility and adjustability of image processing are improved. At the same time, the image data produced by each local pixel processing network and by each global pixel processing network do not depend on each other, so the original image information of the image to be processed is retained to the greatest extent, the processing accuracy of the image is improved, and the definition of the finally generated image can be improved.
  • An embodiment of the present application also provides an image processing device. As shown in FIG. 8, the device may include: a preprocessing unit 801, a pixel processing unit 802, and an image synthesis unit 803.
  • The preprocessing unit 801 may be used to preprocess the original RAW data to obtain the first RAW data and the second RAW data, where the resolution of the image corresponding to the second RAW data is smaller than the resolution of the image corresponding to the first RAW data.
  • The pixel processing unit 802 can be used to perform local pixel processing on the first RAW data to obtain the first image data, and to perform global pixel processing on the second RAW data to obtain the second image data.
  • The image synthesis unit 803 can be used to generate the target image data from the first image data and the second image data.
  • Further, the preprocessing unit 801 may be specifically used to: perform black level correction, normalization, and channel splitting on the original image data to obtain the first RAW data; and perform down-sampling, black level correction, normalization, and channel splitting on the original image data to obtain the second RAW data.
  • The first image data may be linear RGB image data, and the pixel processing unit 802 may be specifically configured to: perform at least one of noise reduction processing or demosaicing processing on local pixels of the first RAW data to obtain the linear RGB image data.
  • The first image data may further include a first brightness ratio matrix, and the pixel processing unit 802 may also be specifically configured to: perform brightness processing on local pixels of the first RAW data to obtain the first brightness ratio matrix.
  • The second image data may include a color conversion matrix, and the pixel processing unit 802 may also be specifically used to: perform color processing on global pixels of the second RAW data to obtain the color conversion matrix.
  • The second image data may further include a second brightness ratio matrix, and the pixel processing unit 802 may also be specifically configured to: perform brightness processing on global pixels of the second RAW data to obtain the second brightness ratio matrix.
  • The image synthesis unit 803 may be specifically configured to generate the target image data from one of the first brightness ratio matrix and the second brightness ratio matrix, the linear RGB image data, and the color conversion matrix.
  • In the above technical solution, the local pixel processing steps and global pixel processing steps performed by the pixel processing unit, such as local pixel brightness processing, global pixel color processing, and global pixel brightness processing, are independent of each other and processed in parallel, and the parameters of the local pixel processing and the global pixel processing can be adjusted as needed, so each processing step can be adjusted individually and the flexibility and adjustability of image processing are improved; at the same time, the image data of each local pixel processing step and the image data of each global pixel processing step do not depend on each other, so the original image information of the image to be processed is retained to the greatest extent, the processing accuracy of the image is improved, and the definition of the final image is improved.
  • The embodiment of the present application also provides an image processing device, the structure of which can be seen in FIG. 1.
  • The device may include a memory 101 and a processor 102 coupled with the memory. The memory 101 stores instructions and data, and the processor 102 runs the instructions in the memory 101; when the processor 102 runs the stored instructions, the device can execute the image processing method provided in steps S201 to S204 of the foregoing method embodiment.
  • It should be noted that the disclosed method, system, and device may be implemented in other ways.
  • The device embodiments described above are merely illustrative; for example, the division of units is only a logical functional division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in each embodiment of the present application may be integrated into one data processing unit, or each unit may be physically included separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
  • the above-mentioned integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium.
  • the above-mentioned software functional unit is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute some steps of the method described in each embodiment of the present application.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

This application provides an image processing method and device, relating to the field of computer processing technology, which are used to improve the flexibility and adjustability of image processing and to improve the definition and accuracy of the finally generated image. The method includes: preprocessing original RAW data to obtain first RAW data and second RAW data, where the resolution of the image corresponding to the second RAW data is smaller than the resolution of the image corresponding to the first RAW data; performing local pixel processing on the first RAW data to obtain first image data, and performing global pixel processing on the second RAW data to obtain second image data; and generating target image data from the first image data and the second image data.

Description

一种图像处理方法及装置 技术领域
本申请涉及计算机处理技术领域,尤其涉及一种图像处理方法及装置。
背景技术
图像作为人类感知世界的视觉基础,是人类获取信息、表达信息和传递信息的重要手段。图像信号处理器(Image Signal Processor,ISP),是拍照设备的重要组成部分。当我们在拍照图像时,拍照设备能够通过镜头获得目标景物对应的拜耳(Bayer)图像,将该拜耳图像经过模拟到数字的转换即可得到数字图像信号(即RAW数据),该RAW数据通过ISP进行一系列的计算优化,如降噪、颜色调整、亮度调整、以及曝光度调整等,最终生成在显示屏上进行展示的目标图像。
目前,图像处理通常采用深度学习技术来实现,深度学习主要是基于人工神经网络的各种方法来实现计算优化,其应用越来越广泛。现有技术中,通常是将降噪、颜色调整和亮度调整等多个处理步骤以串行方式来执行,比如,通过一个能够实现多个处理步骤的神经网络(多个处理步骤以串行的方式执行)实现RAW数据到目标图像的处理,或者通过串行的多个神经网络(每个神经网络能够实现不同的处理步骤)来实现RAW数据到目标图像的处理。
但是,上述通过一个神经网络或者多个串行的神经网络处理图像的方式中,每个神经网络的相关参数都是固定不变的,从而得到的目标图像的效果也是固定的,无法在目标图像的效果不佳或者需要切换不同的图像效果等情况下进行调节。此外,上述通过多个串行的神经网络处理图像的方式中,后一个神经网络的输入数据完全依赖于前一个神经网络的输出数据,即前后神经网络依赖性较强,从而导致比较靠后的神经网络的训练难度增加。
发明内容
本申请实施例提供一种图像处理方法及装置,用于提高图像处理的灵活性、可调节性和处理精度,同时提高被处理图像的清晰度。
为达到上述目的,本申请的实施例采用如下技术方案:
第一方面,提供一种图像处理方法,该方法包括:预处理原始RAW数据,以得到第一RAW数据和第二RAW数据,第二RAW数据对应图像的分辨率小于第一RAW数据对应图像的分辨率;对第一RAW数据进行局部像素处理以得到第一图像数据,以及对第二RAW数据进行全局像素处理,以得到第二图像数据;根据第一图像数据和第二图像数据,生成目标图像数据。
上述技术方案中,局部像素处理步骤和全局像素处理步骤之间是相互独立、并行执行的,且可以根据需要对局部像素处理和全局像素处理的参数进行调节,提高图像处理的灵活性和可调节性;同时,局部像素的图像数据和全局像素处理的图像数据之间并不相互依赖,从而对待处理图像最大程度的保留原始图像信息,提高了图像的处理精度,提高最终生成图像的清晰度。
在第一方面的一种可能的实现方式中,第一图像数据包括线性RGB图像数据,对第一RAW数据进行局部像素处理以得到第一图像数据,包括:对第一RAW数据的局部像素进行降噪处理或去马赛克处理中至少一个,以得到线性RGB图像数据。上述可能的实现方式,对原始图像处理的局部像素进行降噪处理或去马赛克处理,能够提高目标图像的清晰度。
在第一方面的一种可能的实现方式中,第一图像数据还包括第一亮度比率矩阵,对第一RAW数据进行局部像素处理以得到第一图像数据,还包括:对第一RAW数据的局部像素进行亮度处理,以得到第一亮度比率矩阵。上述可能的实现方式,对预处理后的图像数据进行亮度处理,并与其他局部像素处理和全局像素处理步骤之间是相互独立、并行执行的,且可以根据需要对局部像素处理的参数进行调节,提高图像处理的灵活性和可调节性。
在第一方面的一种可能的实现方式中,第二图像数据包括颜色转换矩阵,对第二RAW数据进行全局像素处理,以得到第二图像数据,包括:对第二RAW数据的全局像素进行颜色处理,以得到颜色转换矩阵。上述可能的实现方式,对预处理后的图像数据进行颜色处理,并与其他局部像素处理和全局像素处理步骤之间是相互独立、并行执行的,且可以根据需要对全局像素处理的参数进行调节,提高图像处理的灵活性和可调节性。
在第一方面的一种可能的实现方式中,第二图像数据还包括第二亮度比率矩阵,对第二RAW数据进行全局像素处理,以得到第二图像数据,包括:对第二RAW数据的全局像素进行亮度处理,以得到第二亮度比率矩阵。上述可能的实现方式,对预处理后的图像数据进行亮度处理,并与其他局部像素处理和全局像素处理步骤之间是相互独立、并行执行的,且可以根据需要对全局像素处理的参数进行调节,提高图像处理的灵活性和可调节性。
在第一方面的一种可能的实现方式中,根据第一图像数据和第二图像数据,生成目标图像数据,包括:根据第一亮度比率矩阵和第二亮度比率矩阵中的一个,线性RGB图像数据和颜色转换矩阵,生成目标图像数据。上述可能的实现方式,线性RGB图像数据、颜色转换矩阵、第一亮度比率矩阵或第二亮度比率矩阵都是基于原始图像数据处理生成的,能够最大程度的保留原始图像信息,且相互之间并不依赖,能够提高图像的处理精度,从而能够提高最终生成的图像的清晰度。
在第一方面的一种可能的实现方式中,预处理原始图像数据,以得到第一RAW数据和第二RAW数据,包括:对原始图像数据进行黑电平校正、归一化处理和通道拆分处理,以得到第一RAW数据;对原始图像数据进行降采样处理、黑电平校正、归一化处理和通道拆分处理,以得到第二RAW数据。上述可能的实现方式,对原始图像数据分两路进行不同的数据预处理,方便后续全局像素处理和局部像素处理的并行处理,提高图像处理的灵活性,也能够提高最终生成的图像的清晰度。
第二方面,提供一种图像处理系统,该系统包括:预处理电路,用于预处理原始RAW数据,以得到第一RAW数据和第二RAW数据,第二RAW数据对应图像的分辨率小于第一RAW数据对应图像的分辨率;局部像素处理网络,用于接收预处理电路输出的第一RAW数据,并对第一RAW数据进行局部像素处理以得到第一图像数据; 全局像素处理网络,用于接收预处理电路输出的第二RAW数据,并对第二RAW数据进行全局像素处理,以得到第二图像数据;图像合成电路,用于接收局部像素处理网络输出的第一图像数据和全局像素处理网络输出的第二图像数据,并根据第一图像数据和第二图像数据生成目标图像数据。
在第二方面的一种可能的实现方式中,第一图像数据为线性RGB图像数据,局部像素处理网络包括初级处理网络,初级处理网络具体用于:接收预处理电路输出的第一RAW数据,并对第一RAW数据的局部像素进行降噪处理或去马赛克处理中的至少一个,以得到线性RGB图像数据。
在第二方面的一种可能的实现方式中,第一图像数据还包括第一亮度比率矩阵,局部像素处理网络还包括第一亮度处理网络,第一亮度处理网络具体用于:接收预处理电路输出的第一RAW数据,并对第一RAW数据的局部像素进行亮度处理,以得到第一亮度比率矩阵。
在第二方面的一种可能的实现方式中,第二图像数据包括颜色转换矩阵,全局像素处理网络包括颜色处理网络,颜色处理网络具体用于:接收预处理电路输出的第二RAW数据,并对第二RAW数据的全局像素进行颜色处理,以得到颜色转换矩阵。
在第二方面的一种可能的实现方式中,第二图像数据还包括第二亮度比率矩阵,全局像素处理网络还包括第二亮度处理网络,第二亮度处理网络具体用于:接收预处理电路输出的第二RAW数据,并对第二RAW数据的全局像素进行亮度处理,以得到第二亮度比率矩阵。
在第二方面的一种可能的实现方式中,图像合成电路,具体用于:接收第一亮度处理网络输出的第一亮度比率矩阵和第二亮度处理网络输出的第二亮度比率矩阵中的一个,初级处理网络输出的线性RGB图像数据和颜色处理网络输出的颜色转换矩阵,并根据第一亮度比率矩阵和第二亮度比率矩阵中的一个,线性RGB图像数据和颜色转换矩阵,生成目标图像数据。
在第二方面的一种可能的实现方式中,预处理电路具体用于:对原始图像数据进行黑电平校正、归一化处理和通道拆分处理,以得到第一RAW数据;对原始图像数据进行降采样处理、黑电平校正、归一化处理和通道拆分处理,以得到第二RAW数据。
第三方面,提供一种图像处理装置,该装置包括:预处理单元,用于预处理原始RAW数据,以得到第一RAW数据和第二RAW数据,第二RAW数据对应图像的分辨率小于第一RAW数据对应图像的分辨率;像素处理单元,用于对第一RAW数据进行局部像素处理以得到第一图像数据,以及对第二RAW数据进行全局像素处理,以得到第二图像数据;图像合成单元,用于根据第一图像数据和第二图像数据,生成目标图像数据。
在第三方面的一种可能的实现方式中,第一图像数据为线性RGB图像数据,像素处理单元,具体用于:对第一RAW数据的局部像素进行降噪处理或去马赛克处理中的至少一个,以得到线性RGB图像数据。
在第三方面的一种可能的实现方式中,第一图像数据还包括第一亮度比率矩阵,像素处理单元,还具体用于:对第一RAW数据的局部像素进行亮度处理,以得到第 一亮度比率矩阵。
在第三方面的一种可能的实现方式中,第二图像数据包括颜色转换矩阵,像素处理单元,还具体用于:对第二RAW数据的全局像素进行颜色处理,以得到颜色转换矩阵。
在第三方面的一种可能的实现方式中,第二图像数据还包括第二亮度比率矩阵,像素处理单元,还具体用于:对第二RAW数据的全局像素进行亮度处理,以得到第二亮度比率矩阵。
在第三方面的一种可能的实现方式中,图像合成单元具体用于:根据第一亮度比率矩阵和第二亮度比率矩阵中的一个,线性RGB图像数据和颜色转换矩阵,生成目标图像数据。
在第三方面的一种可能的实现方式中,预处理单元具体用于:对原始图像数据进行黑电平校正、归一化处理和通道拆分处理,以得到第一RAW数据;对原始图像数据进行降采样处理、黑电平校正、归一化处理和通道拆分处理,以得到第二RAW数据。
第四方面,提供一种图像处理装置,该装置包括存储器、以及与存储器耦合的处理器,存储器存储指令和数据,处理器运行存储器中的指令,当该处理器运行存储的指令时,使得该装置执行上述第一方面或第一方面任一种可能的实现方式所提供的图像处理方法。
第五方面,提供一种计算机存储介质,计算机可读存储介质中存储有指令,当指令在计算机上运行时,使得计算机执行上述第一方面或第一方面任一种可能的实现方式所提供的图像处理方法。
第六方面,提供一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行上述第一方面或第一方面任一种可能的实现方式所提供的图像处理方法。
可以理解地,上述提供的任一种图像处理装置、系统、可读存储介质和计算机程序产品,均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
Brief Description of Drawings
FIG. 1 is a schematic structural diagram of an image processing device according to an embodiment of this application;
FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of this application;
FIG. 3 is a schematic diagram of the data format of original RAW data according to an embodiment of this application;
FIG. 4 is a schematic diagram of a preprocessing flow for original RAW data according to an embodiment of this application;
FIG. 5 is a schematic diagram of a processing procedure for first image data according to an embodiment of this application;
FIG. 6 is a schematic diagram of a brightness processing procedure according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of an image processing system according to an embodiment of this application;
FIG. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of this application.
Detailed Description of Embodiments
In this application, "at least one" means one or more, and "a plurality of" means two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate the following cases: only A exists, both A and B exist, and only B exists, where A and B may be singular or plural. "At least one of the following items" or a similar expression refers to any combination of these items, including a single item or any combination of a plurality of items. For example, "at least one of a, b, or c" may indicate a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c each may be single or multiple. The character "/" generally indicates an "or" relationship between the associated objects. In addition, in the embodiments of this application, terms such as "first" and "second" do not limit a quantity or an execution order.
It should be noted that in this application, words such as "exemplary" or "for example" are used to indicate an example, an illustration, or a description. Any embodiment or design scheme described as "exemplary" or "for example" in this application should not be construed as being preferred over, or more advantageous than, other embodiments or design schemes. Rather, the use of words such as "exemplary" or "for example" is intended to present the related concepts in a concrete manner.
FIG. 1 is a schematic structural diagram of an image processing device according to an embodiment of this application. The image processing device may be a mobile phone, a tablet computer, a computer, a notebook computer, a video camera, a camera, a wearable device, a vehicle-mounted device, a terminal device, or the like. For ease of description, the devices mentioned above are collectively referred to as image processing devices in this application. The embodiments of this application are described by using a mobile phone as an example of the image processing device. The mobile phone includes a memory 101, a processor 102, a sensor component 103, a multimedia component 104, an audio component 105, a power supply component 106, and the like.
The components of the mobile phone are described in detail below with reference to FIG. 1:
The memory 101 may be configured to store data, software programs and modules, and mainly includes a program storage area and a data storage area. The program storage area may store an operating system and an application program required by at least one function, such as a sound playback function or an image playback function; the data storage area may store data created during use of the mobile phone, such as audio data, image data, and a phone book. In addition, the mobile phone may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 102 is the control center of the mobile phone. It connects the parts of the entire device through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 101 and by invoking the data stored in the memory 101, thereby monitoring the mobile phone as a whole. In some feasible embodiments, the processor 102 may have a single-processor or multi-processor structure, and may be a single-threaded or multi-threaded processor. In some feasible embodiments, the processor 102 may include a central processing unit, a general-purpose processor, a digital signal processor, a microcontroller, or a microprocessor. In addition, the processor 102 may further include other hardware circuits or accelerators, such as an application-specific integrated circuit, a field-programmable gate array or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various example logical blocks, modules and circuits described with reference to the disclosure of this application. The processor 102 may also be a combination that implements a computing function, for example, a combination of one or more microprocessors, or a combination of a digital signal processor and a microprocessor.
The sensor component 103 includes one or more sensors, configured to provide status assessments of various aspects for the mobile phone. The sensor component 103 may include an optical sensor, such as a complementary metal-oxide-semiconductor (Complementary Metal-Oxide-Semiconductor, CMOS) or charge-coupled device (Charge Coupled Device, CCD) image sensor, used to detect the distance between an external object and the mobile phone, or used in imaging applications, that is, as a constituent part of a camera. In addition, the sensor component 103 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor. Through the sensor component 103, the acceleration/deceleration, orientation and open/closed state of the mobile phone, the relative positioning of components, the temperature change of the mobile phone, and the like can be detected.
The multimedia component 104 provides a screen serving as an output interface between the mobile phone and the user. The screen may be a touch panel, and when the screen is a touch panel, it may be implemented as a touchscreen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In addition, the multimedia component 104 further includes at least one camera; for example, the multimedia component 104 includes a front-facing camera and/or a rear-facing camera. When the mobile phone is in an operating mode, such as a shooting mode or a video mode, the front-facing camera and/or the rear-facing camera can receive external multimedia data. Each front-facing camera and rear-facing camera may be a fixed optical lens system or may have focusing and optical zoom capabilities.
The audio component 105 may provide an audio interface between the user and the mobile phone; for example, the audio component 105 may include an audio circuit, a loudspeaker and a microphone. The audio circuit may transmit an electrical signal converted from received audio data to the loudspeaker, and the loudspeaker converts it into a sound signal for output. Conversely, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit and converted into audio data; the audio data is then output to be sent, for example, to another mobile phone, or output to the processor 102 for further processing.
The power supply component 106 is configured to supply power to the components of the mobile phone. The power supply component 106 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the mobile phone.
Although not shown, the mobile phone may further include a wireless fidelity (Wireless Fidelity, WiFi) module, a Bluetooth module, and the like, which are not described again in this embodiment of this application. A person skilled in the art can understand that the mobile phone structure shown in FIG. 1 does not constitute a limitation on the mobile phone; the mobile phone may include more or fewer components than shown, or combine some components, or have a different component arrangement.
FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of this application. The method may be performed by the image processing device shown in FIG. 1. Referring to FIG. 2, the method may include the following steps.
S201: Preprocess original RAW data to obtain first RAW data and second RAW data, where the resolution of the image corresponding to the second RAW data is smaller than the resolution of the image corresponding to the first RAW data.
The original RAW data may also be referred to as original image data, and may specifically be Bayer (Bayer) image data corresponding to a target scene, or Bayer-format RAW data obtained after analog-to-digital conversion of the Bayer image data. The Bayer image may be acquired by the sensor component 103 of the image processing device shown in FIG. 1, and the processor 102 may perform a series of analog-to-digital conversions on the Bayer image to obtain a digital image signal, that is, Bayer-format RAW data. Here, RAW data denotes unprocessed image data.
Specifically, as shown in FIG. 3, the original RAW data may include a plurality of pixel array units. One pixel array unit may include two green (green, G) pixels, one blue (blue, B) pixel and one red (red, R) pixel; the four pixels in the dashed box in FIG. 3 form one pixel array unit. H denotes the height of the original RAW data, and W denotes its width. Each pixel of the original RAW data has only one color, that is, red, green or blue, and each pixel has one pixel value.
Further, preprocessing the original image data may specifically include: performing black level correction, normalization and channel splitting on the original RAW data to obtain the first RAW data; and performing downsampling, black level correction, normalization and channel splitting on the original RAW data to obtain the second RAW data.
Black level correction refers to the process of restoring the pixel value range of the pixels in the original RAW data to a standard pixel value range; for example, the pixel value range of the original RAW data is 5 to 255, and the standard pixel value range is 0 to 255. When the image data collected by the sensor undergoes analog-to-digital conversion, the conversion usually cannot provide enough precision to convert very small voltage values, so a fixed offset is added before the conversion so that the lowest input level is not zero; for example, with a fixed offset of 5, the resulting pixel value range is 5 to 255. Therefore, when processing the original RAW data, the pixel values can be restored so that the minimum of the pixel value range is adjusted back to zero; this adjustment is the black level correction.
Normalization refers to converting the pixel value range of the image data from 0 to 255 into 0 to 1. Normalization reduces the amount of computation in subsequent processing, thereby improving the efficiency of image data processing, and also makes it convenient for subsequent image processing operations to use floating-point arithmetic.
Channel splitting refers to splitting the image data into several single-pixel channels; for example, the original RAW data in pixel-array-unit form (the data format shown in FIG. 3) is split into the four single-pixel channels R, G, B, G, so that each channel can be processed separately afterwards.
Downsampling refers to resampling the original image data with a certain sampling factor to generate new image data. Specifically, the downsampling may use methods such as bilinear interpolation or bicubic interpolation. For example, a downsampling factor k means that, in the original RAW data, one pixel is taken every k pixels in each row and each column to resample the data and generate the new image data.
For example, FIG. 4 is a schematic diagram of a preprocessing flow for the original RAW data. The first RAW data may be obtained as follows: black level correction and normalization are applied to the original RAW data to output RAW floating-point data, whose pixel values are fractional values between 0 and 1; channel splitting is then applied to the RAW floating-point data to obtain the first RAW data. For example, the first RAW data may include image data of the four channels R, G, B, G, where each pixel value is between 0 and 1, and the image corresponding to each channel has a height of H/2 and a width of W/2. The second RAW data may be obtained as follows: the original RAW data is first downsampled by a factor of 8 to output downsampled RAW data, which then undergoes black level correction and normalization to output downsampled RAW floating-point data; channel splitting is then applied to the downsampled RAW floating-point data to output the second RAW data. For example, the second RAW data may also include image data of the four channels R, G, B, G, where each pixel value is between 0 and 1, and the image corresponding to each channel has a height of H/8 and a width of W/8.
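Purely as an illustration of the preprocessing flow of FIG. 4 (and not as part of the claimed method), the following Python/NumPy sketch performs black level correction, normalization, RGGB channel splitting and a Bayer-aware decimation. The black level of 5, the white level of 255, the factor k = 8, the simple cell decimation (instead of the bilinear or bicubic interpolation mentioned above) and all function names are assumptions of this sketch.

```python
import numpy as np

def black_level_and_normalize(raw, black_level=5, white_level=255):
    """Subtract the fixed black-level offset and scale the pixel values to [0, 1]."""
    raw = raw.astype(np.float32)
    return np.clip((raw - black_level) / (white_level - black_level), 0.0, 1.0)

def split_rggb(raw):
    """Split an H x W Bayer mosaic into four (H/2) x (W/2) planes: R, G, B, G."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    b  = raw[1::2, 1::2]
    g2 = raw[1::2, 0::2]
    return np.stack([r, g1, b, g2], axis=0)

def downsample_bayer(raw, k=8):
    """Keep one 2x2 RGGB cell out of every k x k cells so the mosaic pattern survives."""
    h, w = raw.shape
    cells = raw.reshape(h // 2, 2, w // 2, 2)   # (cell row, in-cell row, cell col, in-cell col)
    cells = cells[::k, :, ::k, :]               # every k-th cell in both directions
    a, _, c, _ = cells.shape
    return cells.reshape(a * 2, c * 2)

def preprocess(raw_bayer, k=8):
    """Produce the two RAW branches described above, both packed as R, G, B, G planes."""
    first  = split_rggb(black_level_and_normalize(raw_bayer))
    second = split_rggb(black_level_and_normalize(downsample_bayer(raw_bayer, k)))
    return first, second
```

The exact relation between the factor k and the per-channel sizes (H/2, W/2 versus H/8, W/8 in the example above) depends on whether the factor is counted on the mosaic or on the split channels; the sketch leaves k as a parameter rather than fixing that convention.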
S202: Perform local pixel processing on the first RAW data to obtain first image data.
Local pixel processing refers to processing local pixels in the first RAW data and can be used to change the image characteristics of a specific region of the image. Local pixel processing typically includes noise reduction, demosaicing, local brightness processing, and the like.
In a possible implementation, performing local pixel processing on the first RAW data to obtain the first image data may specifically include: performing noise reduction on local pixels of the first RAW data; or performing demosaicing on local pixels of the first RAW data; or performing both noise reduction and demosaicing on local pixels of the first RAW data. The processed image data may be linear RGB image data.
Noise reduction refers to eliminating or reducing the noise in the image data. Because the original RAW data is usually disturbed by the imaging device or by external environmental noise during digitization and data transmission, the original RAW data, as well as the preprocessed first RAW data and second RAW data, is usually RAW data containing noise, and therefore noise reduction is required. Demosaicing is image processing that reconstructs color: its purpose is to reconstruct a full-color image from image data with incomplete color samples, that is, to reconstruct complete RGB three-primary-color image data for each pixel.
RGB image data may also be called three-primary-color image data. Each of its pixels is a mixture of the three colors red (Red, R), green (Green, G) and blue (Blue, B); R, G and B each occupy one byte with a value range of 0 to 255, so after combination one pixel can represent 256 x 256 x 256 colors. For example, black: R = G = B = 0; white: R = G = B = 255; yellow: R = G = 255, B = 0; and so on. Therefore, to process the first RAW data into an RGB image, the color values around each pixel need to be interpolated, or the two missing colors filled in, to finally generate a color RGB image. Linear RGB image data is an image in which changes in pixel color can be represented by linear changes in the pixel value data.
For example, FIG. 5 shows a process for obtaining the first image data. The original RAW data is input; since the input data generally contains noise, a channel σ for adjusting the noise level can be added to make the noise adjustable. A series of local pixel processing operations (for example, two-dimensional convolution operations and rectified linear unit (Rectified Linear Unit, ReLU) operations) are performed, and the output is 3-channel noise-free linear RGB data. The embodiments of this application do not limit the specific local pixel processing procedure; common noise reduction and demosaicing methods are applicable to this application. By adjusting the operation parameters and the operation structure of the processing, different degrees of noise reduction and demosaicing can be achieved, so the parameters of the local pixel processing are adjustable, which improves the flexibility and accuracy of image processing.
It should be noted that the noise reduction in the embodiments of this application may be implemented by a corresponding neural network. When the neural network used for noise reduction is trained, noise can be added to the training samples by adjusting the value of the channel σ; the larger the value of σ, the larger the added noise. After training, a correspondence between the neural network and the noise level is established, so that when an image containing noise is processed, the noise reduction can be performed according to the value of the channel σ.
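As an illustrative sketch only, a small convolutional network of the kind described for FIG. 5 could look as follows in PyTorch. The depth, the channel widths, the pixel-shuffle upsampling and the way the noise-level channel σ is broadcast are assumptions made for this example; the embodiments do not fix any of these choices.

```python
import torch
import torch.nn as nn

class DenoiseDemosaicNet(nn.Module):
    """Sketch: 4-channel packed RAW plus a noise-level map -> 3-channel linear RGB."""

    def __init__(self, width=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(4 + 1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        # Predict 3 * 2 * 2 channels, then pixel-shuffle back to full resolution.
        layers += [nn.Conv2d(width, 3 * 4, 3, padding=1), nn.PixelShuffle(2)]
        self.body = nn.Sequential(*layers)

    def forward(self, packed_raw, sigma):
        # packed_raw: (N, 4, H/2, W/2); sigma: one noise level per image, broadcast to a map.
        noise_map = sigma.view(-1, 1, 1, 1).expand(-1, 1, *packed_raw.shape[-2:])
        return self.body(torch.cat([packed_raw, noise_map], dim=1))
```

For example, an input of shape (1, 4, 128, 128) together with a noise level such as torch.tensor([0.02]) yields an output of shape (1, 3, 256, 256), mirroring the 3-channel linear RGB output described above.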
Further, performing local pixel processing on the first RAW data to obtain the first image data may further include: performing brightness processing on local pixels of the first RAW data to obtain a first brightness ratio matrix.
Local brightness processing mainly processes the brightness information of local pixels of the image data in order to change the local brightness of the image. The first RAW data obtained after preprocessing and channel splitting undergoes local brightness processing to obtain the first brightness ratio matrix, which may also be called a first brightness lift ratio matrix and represents the brightness lift information of the local pixels. Specifically, the local brightness processing may extract brightness information for each pixel, or for a 3x3 or 5x5 pixel block, to obtain a local brightness lift ratio.
For example, FIG. 6 is a schematic diagram of brightness processing. The input is the four-channel first RAW data; the pixel values of the four channels R, G, B, G can serve as four brightness matrices, each with a height of H/2 and a width of W/2. These four brightness matrices are processed by a series of operations (for example, combination operations, convolution operations and activation function operations) to obtain the first brightness ratio matrix, whose height may be H and whose width may be W.
It should be noted that, for the specific brightness processing procedure, reference may be made to the related descriptions in the prior art; this is not specifically limited in the embodiments of this application. During brightness processing, different degrees of local brightness processing can be achieved by adjusting the operation parameters and the operation structure, which improves the flexibility and accuracy of image processing.
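A minimal sketch of such a local brightness branch is shown below. The layer sizes, the bilinear upsampling and the softplus output (chosen only to keep the lift ratio positive) are illustrative assumptions, not details taken from the embodiments.

```python
import torch.nn as nn

class LocalBrightnessNet(nn.Module):
    """Sketch of a local brightness branch: 4-channel packed RAW -> H x W lift-ratio map."""

    def __init__(self, width=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Softplus(),                      # keep the lift ratio positive
        )

    def forward(self, packed_raw):              # (N, 4, H/2, W/2) -> (N, 1, H, W)
        return self.body(packed_raw)
```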
In addition, in step S202, performing local pixel processing on the first RAW data to obtain the first image data may further include: performing color enhancement or detail enhancement on local pixels of the first RAW data. Color enhancement refers to techniques that use various methods and means to perform color synthesis or color display so as to highlight the differences between different objects and improve the display effect of the image. Detail enhancement refers to adjusting the details of the image, such as brightness, contrast and sharpness, through different methods and means; this is not elaborated in the embodiments of this application.
S203: Perform global pixel processing on the second RAW data to obtain second image data. S202 and S203 may be performed in any order; FIG. 2 uses the case in which S202 and S203 are executed in parallel as an example.
Global pixel processing refers to processing all pixels of the image data and can be used to adjust some image characteristic of the whole image, such as the color, contrast or exposure of the whole image. Specifically, performing global pixel processing on the second RAW data to obtain the second image data may include: performing color processing on global pixels of the second RAW data to obtain a color conversion matrix.
The color processing may be color correction (Color correction, CC) processing which, based on optical theory, accurately restores the overall color of the image to the true tones of the shooting scene as perceived by the human eye. The output color conversion matrix can be used to identify the color information of each pixel.
For example, the second RAW data includes the downsampled image data of the four channels R, G, B, G; the pixel values of these four channels can serve as four color matrices, each with a height of H/8 and a width of W/8. These four color matrices are processed by a series of operations (for example, combination operations, convolution operations and activation function operations) to obtain the color conversion matrix, whose height may be H and whose width may be W. By processing the global color of the image to different degrees, the flexibility and accuracy of image processing can be improved.
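For illustration, a global color branch that reduces the low-resolution thumbnail to a single color transform through convolution, pooling and a fully connected mapping might be sketched as follows. Predicting a 3 x 9 polynomial matrix is an assumption of this sketch (it matches the synthesis example given later), not a requirement of the embodiments.

```python
import torch.nn as nn

class GlobalColorNet(nn.Module):
    """Sketch of the global color branch: 4-channel thumbnail -> one color-transform matrix T."""

    def __init__(self, width=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),            # global statistics of the whole thumbnail
        )
        self.fc = nn.Linear(width, 3 * 9)

    def forward(self, thumb_raw):               # (N, 4, h, w) -> (N, 3, 9)
        feat = self.features(thumb_raw).flatten(1)
        return self.fc(feat).view(-1, 3, 9)
```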
Further, in step S203, performing global pixel processing on the second RAW data to obtain the second image data may further include: performing automatic white balance, automatic focusing and other processing on global pixels of the second RAW data. Automatic white balance processing may include restoring pixels whose color has shifted because of lighting or other reasons to their original colors through color restoration and toning. Automatic focusing may be processing that adjusts the sharpness of key positions of the image, so that the focused position becomes the sharpest place in the whole image. In practical applications, the global pixel processing may also include other processing in different manners, which is not specifically limited in the embodiments of this application.
Optionally, performing global pixel processing on the second RAW data to obtain the second image data may further include: performing global brightness processing on the second RAW data to obtain a second brightness ratio matrix. The global brightness processing is similar to the local brightness processing described above, but adjusts the brightness values of the whole image. Specifically, the global brightness processing may extract one brightness lift ratio for the whole image, or one brightness lift ratio for each gray level (0 to 255), that is, gamma correction. For the specific processing principle, reference may be made to the related description of the local brightness processing above, and details are not repeated here.
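As a small worked example of the per-gray-level variant (gamma correction expressed as lift ratios), the following NumPy sketch builds one lift ratio per gray level and applies it; taking the gray level from a simple channel mean is an assumption of this example.

```python
import numpy as np

def gamma_lift_ratios(gamma=1.0 / 2.2, levels=256):
    """One brightness-lift ratio per gray level, i.e. a gamma curve expressed as ratios."""
    x = np.linspace(0.0, 1.0, levels)
    lifted = np.power(x, gamma)
    ratios = np.ones_like(x)
    ratios[1:] = lifted[1:] / x[1:]            # avoid division by zero at black
    return ratios

def apply_global_brightness(linear_rgb, ratios):
    """Look up the lift ratio from each pixel's gray level and scale the pixel."""
    levels = len(ratios)
    gray = linear_rgb.mean(axis=-1)            # simple luminance proxy (assumption)
    idx = np.clip((gray * (levels - 1)).astype(int), 0, levels - 1)
    return np.clip(linear_rgb * ratios[idx][..., None], 0.0, 1.0)
```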
It should be noted that both the local pixel processing and the global pixel processing in the embodiments of this application may be implemented by corresponding neural networks, which learn from and are trained on a large amount of input data to obtain stable neural network models. The embodiments of this application do not limit the specific structure of the neural networks; users can set the parameters and structures of different neural networks according to the desired target image, and can adjust the color, brightness and other properties of the target image by adjusting the structures and parameters of the neural networks, thereby improving the flexibility and accuracy of image processing.
S204: Generate target image data according to the first image data and the second image data.
The target image data may be a target RGB image displayed on the display screen of the device. Generating the target image data according to the first image data and the second image data may be implemented by the processor of the image processing device; the display of the target RGB image may be implemented by a display panel included in the multimedia component of the image processing device.
Specifically, generating the target image data according to the first image data and the second image data may be as follows: when the first image data includes the linear RGB image data obtained by demosaicing and noise reduction and the brightness ratio matrix R obtained by local brightness processing, and the second image data includes the color conversion matrix T obtained by global color processing, the brightness ratio matrix R and the color conversion matrix T can be applied to the linear RGB image data to generate the target RGB image data. For example, the target RGB image data can be generated by the formula F(linear RGB) * T * R, where F denotes a function that performs certain operations on the R, G and B values of the linear RGB data, which may include squaring the R, G and B values and cross-multiplying them, and * denotes matrix multiplication.
Further, the operations performed by the function F on the linear RGB data may include squaring the R, G and B values to obtain R², G² and B², and cross-multiplying them to obtain R*B, R*G and B*G; then R, G, B, R², G², B², R*B, R*G and B*G are combined in a certain order and multiplied by the color conversion matrix T to obtain the color-processed image data, which is then multiplied by the brightness ratio matrix R to obtain the target RGB image.
For example, besides the example above, another embodiment of generating the target RGB image data from the linear RGB image data, the brightness ratio matrix R and the color conversion matrix T is F(linear RGB * R) * T, that is, the linear RGB image data is first brightness-lifted and then color-processed to obtain the target RGB image. This is not specifically limited in the embodiments of this application.
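The following NumPy sketch combines the three branch outputs in the F(linear RGB) * T * R order described above. The 9-term feature ordering and the 3 x 9 shape assumed for T are illustrative choices of this sketch; the embodiments describe F as possibly including squared and cross terms but do not fix the exact ordering or matrix shape.

```python
import numpy as np

def polynomial_features(rgb):
    """F(linear RGB): stack R, G, B, their squares and cross terms into 9 features per pixel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([r, g, b, r * r, g * g, b * b, r * b, r * g, b * g], axis=-1)

def synthesize(linear_rgb, T, lift_ratio):
    """Color-correct with T, then apply the brightness lift ratio.

    linear_rgb : (H, W, 3) output of the denoise/demosaic branch
    T          : (3, 9) color-transform matrix (shape is an assumption, see text)
    lift_ratio : (H, W) lift-ratio map, or a scalar for the global variant
    """
    feats = polynomial_features(linear_rgb)                 # (H, W, 9)
    color_corrected = np.einsum("hwk,ck->hwc", feats, T)    # (H, W, 3)
    out = color_corrected * np.asarray(lift_ratio)[..., None]
    return np.clip(out, 0.0, 1.0)
```

The alternative order F(linear RGB * R) * T mentioned above would simply scale linear_rgb by the lift ratio before calling polynomial_features.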
In the embodiments of this application, the at least one local pixel processing step and the at least one global pixel processing step are independent of each other and executed in parallel, and the parameters of the local pixel processing and of the global pixel processing can be adjusted as needed, so each processing step can be adjusted individually, which improves the flexibility and adjustability of image processing. At the same time, the image data of each local pixel processing and the image data of each global pixel processing do not depend on each other, so the original image information of the image to be processed is preserved to the greatest extent, which improves the processing accuracy and the clarity of the finally generated image.
An embodiment of this application further provides an image processing system. As shown in FIG. 7, the system may include a preprocessing circuit 701, a local pixel processing network 702, a global pixel processing network 703, and an image synthesis circuit 704. The preprocessing circuit 701 may be configured to preprocess original RAW data to obtain first RAW data and second RAW data, where the resolution of the image corresponding to the second RAW data is smaller than the resolution of the image corresponding to the first RAW data. The local pixel processing network 702 may be configured to receive the first RAW data output by the preprocessing circuit 701 and perform local pixel processing on the first RAW data to obtain first image data. The global pixel processing network 703 is configured to receive the second RAW data output by the preprocessing circuit 701 and perform global pixel processing on the second RAW data to obtain second image data. The image synthesis circuit 704 is configured to receive the first image data output by the local pixel processing network 702 and the second image data output by the global pixel processing network 703, and generate target image data according to the first image data and the second image data.
Further, the preprocessing circuit 701 may be specifically configured to: perform black level correction, normalization and channel splitting on the original image data to obtain the first RAW data; and perform downsampling, black level correction, normalization and channel splitting on the original image data to obtain the second RAW data.
Further, the first image data may be linear RGB image data, and the local pixel processing network 702 may include a primary processing network, which is specifically configured to: receive the first RAW data output by the preprocessing circuit 701, and perform at least one of noise reduction processing and demosaicing processing on local pixels of the first RAW data to obtain the linear RGB image data.
Optionally, the first image data may further include a first brightness ratio matrix, and the local pixel processing network 702 further includes a first brightness processing network, which may be specifically configured to: receive the first RAW data output by the preprocessing circuit 701, and perform brightness processing on local pixels of the first RAW data to obtain the first brightness ratio matrix.
Further, the second image data may include a color conversion matrix, and the global pixel processing network 703 may include a color processing network, which may be specifically configured to: receive the second RAW data output by the preprocessing circuit 701, and perform color processing on global pixels of the second RAW data to obtain the color conversion matrix.
Optionally, the second image data may further include a second brightness ratio matrix, and the global pixel processing network 703 may further include a second brightness processing network, which may be specifically configured to: receive the second RAW data output by the preprocessing circuit 701, and perform brightness processing on global pixels of the second RAW data to obtain the second brightness ratio matrix.
Specifically, the primary processing network, the color processing network, the first brightness processing network and the second brightness processing network may be obtained by training artificial neural networks. An artificial neural network abstracts the neuron network of the human brain from an information-processing perspective, establishes a certain operational model, and forms different networks with different connection patterns; it can be trained on a large amount of input data to obtain a stable operational model with adjustable parameters. The operations of such a neural network may include convolution operations, rectified linear unit (Rectified Linear Unit, ReLU) operations, activation functions and the like, which are not specifically limited in this application.
For example, the color processing network in this embodiment of this application may consist of N convolutional layers, M pooling layers and K fully connected layers. A convolutional layer extracts features from the input data and may internally contain multiple combination operations, convolution operations and activation function operations. After feature extraction in the convolutional layers, the output feature maps are passed to the pooling layers for feature selection and information filtering; a pooling layer may contain a preset pooling function whose role is to replace the result at a single point of the feature map with a statistic of the feature map over its neighboring region. The fully connected layers are usually built at the end of the hidden layers of the convolutional neural network; in the fully connected layers the feature maps lose their 3-dimensional structure, are unfolded into vectors, and are passed to the next layer through activation functions.
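To make the N convolutional, M pooling and K fully connected layers concrete, the sketch below builds such a network in PyTorch. Interleaving the pooling layers evenly among the convolutions, the chosen channel width and the output dimension are all assumptions of this example rather than details of the embodiments.

```python
import torch.nn as nn

def build_cnn(in_channels=4, n_conv=4, m_pool=2, k_fc=2, width=32, out_dim=27):
    """Illustrative N-conv / M-pool / K-FC network of the kind described above."""
    layers, c = [], in_channels
    pool_every = max(1, n_conv // max(1, m_pool))
    for i in range(n_conv):
        layers += [nn.Conv2d(c, width, 3, padding=1), nn.ReLU(inplace=True)]
        c = width
        if m_pool > 0 and (i + 1) % pool_every == 0 and (i + 1) // pool_every <= m_pool:
            layers.append(nn.MaxPool2d(2))
    layers.append(nn.AdaptiveAvgPool2d(1))      # collapse spatial dims before the FC head
    body = nn.Sequential(*layers)

    fc_layers, d = [], width
    for _ in range(k_fc - 1):
        fc_layers += [nn.Linear(d, width), nn.ReLU(inplace=True)]
        d = width
    fc_layers.append(nn.Linear(d, out_dim))
    head = nn.Sequential(nn.Flatten(), *fc_layers)
    return nn.Sequential(body, head)
```

For example, build_cnn(n_conv=4, m_pool=2, k_fc=2) maps an (N, 4, h, w) thumbnail to an (N, 27) vector, which could be reshaped into a 3 x 9 color transform of the kind used in the synthesis example above.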
In a possible implementation of this embodiment, the image synthesis circuit 704 may be specifically configured to: receive one of the first brightness ratio matrix output by the first brightness processing network and the second brightness ratio matrix output by the second brightness processing network, the linear RGB image data output by the primary processing network, and the color conversion matrix output by the color processing network, and generate the target image data according to the one of the first brightness ratio matrix and the second brightness ratio matrix, the linear RGB image data and the color conversion matrix.
In this embodiment of this application, the at least one local pixel processing network and the at least one global pixel processing network are independent of each other and process data in parallel, and the parameters of the local pixel processing network and of the global pixel processing network can be adjusted as needed, so each processing network can be adjusted individually, which improves the flexibility and adjustability of image processing. At the same time, the image data of each local pixel processing and the image data of each global pixel processing do not depend on each other, so the original image information of the image to be processed is preserved to the greatest extent, which improves the processing accuracy and can improve the clarity of the finally generated image.
An embodiment of this application further provides an image processing apparatus. As shown in FIG. 8, the apparatus may include a preprocessing unit 801, a pixel processing unit 802 and an image synthesis unit 803. The preprocessing unit 801 may be configured to preprocess original RAW data to obtain first RAW data and second RAW data, where the resolution of the image corresponding to the second RAW data is smaller than the resolution of the image corresponding to the first RAW data. The pixel processing unit 802 may be configured to perform local pixel processing on the first RAW data to obtain first image data, and perform global pixel processing on the second RAW data to obtain second image data. The image synthesis unit 803 may be configured to generate target image data according to the first image data and the second image data.
Further, the preprocessing unit 801 may be specifically configured to: perform black level correction, normalization and channel splitting on the original image data to obtain the first RAW data; and perform downsampling, black level correction, normalization and channel splitting on the original image data to obtain the second RAW data.
Further, the first image data may be linear RGB image data, and the pixel processing unit 802 may be specifically configured to: perform at least one of noise reduction processing and demosaicing processing on local pixels of the first RAW data to obtain the linear RGB image data.
Optionally, the first image data may further include a first brightness ratio matrix, and the pixel processing unit 802 may be further specifically configured to: perform brightness processing on local pixels of the first RAW data to obtain the first brightness ratio matrix.
Further, the second image data includes a color conversion matrix, and the pixel processing unit may be specifically configured to: perform color processing on global pixels of the second RAW data to obtain the color conversion matrix.
Optionally, the second image data further includes a second brightness ratio matrix, and the pixel processing unit 802 may be further specifically configured to: perform brightness processing on global pixels of the second RAW data to obtain the second brightness ratio matrix.
In a possible implementation of this embodiment, the image synthesis unit 803 may be specifically configured to: generate the target image data according to one of the first brightness ratio matrix and the second brightness ratio matrix, the linear RGB image data and the color conversion matrix.
In this embodiment of this application, the processes performed by the pixel processing unit, such as the brightness processing of local pixels, the color processing of global pixels and the brightness processing of global pixels, are independent of one another and processed in parallel, and the parameters of the local pixel processing and of the global pixel processing can be adjusted as needed, so each processing step can be adjusted individually, which improves the flexibility and adjustability of image processing. At the same time, the image data of each local pixel processing and the image data of each global pixel processing do not depend on each other, so the original image information of the image to be processed is preserved to the greatest extent, which improves the processing accuracy and the clarity of the finally generated image.
An embodiment of this application further provides an image processing apparatus whose structure may be as shown in FIG. 1. The apparatus may include a memory 101 and a processor 102 coupled to the memory. The memory 101 stores instructions and data, and the processor 102 runs the instructions in the memory 101; when the processor 102 runs the stored instructions, the apparatus is caused to perform the image processing method provided in steps S201 to S204 of the foregoing method embodiment.
In the several embodiments provided in this application, it should be understood that the disclosed method, system and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one data processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM for short), a random access memory (Random Access Memory, RAM for short), a magnetic disk, or an optical disc.
Finally, it should be noted that the above descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (24)

  1. An image processing method, characterized in that the method comprises:
    preprocessing original RAW data to obtain first RAW data and second RAW data, wherein a resolution of an image corresponding to the second RAW data is smaller than a resolution of an image corresponding to the first RAW data;
    performing local pixel processing on the first RAW data to obtain first image data, and performing global pixel processing on the second RAW data to obtain second image data;
    generating target image data according to the first image data and the second image data.
  2. The method according to claim 1, characterized in that the first image data comprises linear RGB image data, and the performing local pixel processing on the first RAW data to obtain first image data comprises:
    performing at least one of noise reduction processing and demosaicing processing on local pixels of the first RAW data to obtain the linear RGB image data.
  3. The method according to claim 1 or 2, characterized in that the first image data further comprises a first brightness ratio matrix, and the performing local pixel processing on the first RAW data to obtain first image data further comprises:
    performing brightness processing on local pixels of the first RAW data to obtain the first brightness ratio matrix.
  4. The method according to any one of claims 1 to 3, characterized in that the second image data comprises a color conversion matrix, and the performing global pixel processing on the second RAW data to obtain second image data comprises:
    performing color processing on global pixels of the second RAW data to obtain the color conversion matrix.
  5. The method according to any one of claims 1 to 4, characterized in that the second image data further comprises a second brightness ratio matrix, and the performing global pixel processing on the second RAW data to obtain second image data comprises:
    performing brightness processing on global pixels of the second RAW data to obtain the second brightness ratio matrix.
  6. The method according to claim 5, characterized in that the generating target image data according to the first image data and the second image data comprises:
    generating the target image data according to one of the first brightness ratio matrix and the second brightness ratio matrix, the linear RGB image data, and the color conversion matrix.
  7. The method according to any one of claims 1 to 6, characterized in that the preprocessing original image data to obtain first RAW data and second RAW data comprises:
    performing black level correction, normalization processing and channel splitting processing on the original image data to obtain the first RAW data;
    performing downsampling processing, black level correction, normalization processing and channel splitting processing on the original image data to obtain the second RAW data.
  8. An image processing system, characterized in that the system comprises:
    a preprocessing circuit, configured to preprocess original RAW data to obtain first RAW data and second RAW data, wherein a resolution of an image corresponding to the second RAW data is smaller than a resolution of an image corresponding to the first RAW data;
    a local pixel processing network, configured to receive the first RAW data output by the preprocessing circuit, and perform local pixel processing on the first RAW data to obtain first image data;
    a global pixel processing network, configured to receive the second RAW data output by the preprocessing circuit, and perform global pixel processing on the second RAW data to obtain second image data;
    an image synthesis circuit, configured to receive the first image data output by the local pixel processing network and the second image data output by the global pixel processing network, and generate target image data according to the first image data and the second image data.
  9. The system according to claim 8, characterized in that the first image data is linear RGB image data, the local pixel processing network comprises a primary processing network, and the primary processing network is specifically configured to:
    receive the first RAW data output by the preprocessing circuit, and perform at least one of noise reduction processing and demosaicing processing on local pixels of the first RAW data to obtain the linear RGB image data.
  10. The system according to claim 8 or 9, characterized in that the first image data further comprises a first brightness ratio matrix, the local pixel processing network further comprises a first brightness processing network, and the first brightness processing network is specifically configured to:
    receive the first RAW data output by the preprocessing circuit, and perform brightness processing on local pixels of the first RAW data to obtain the first brightness ratio matrix.
  11. The system according to any one of claims 8 to 10, characterized in that the second image data comprises a color conversion matrix, the global pixel processing network comprises a color processing network, and the color processing network is specifically configured to:
    receive the second RAW data output by the preprocessing circuit, and perform color processing on global pixels of the second RAW data to obtain the color conversion matrix.
  12. The system according to any one of claims 8 to 11, characterized in that the second image data further comprises a second brightness ratio matrix, the global pixel processing network further comprises a second brightness processing network, and the second brightness processing network is specifically configured to:
    receive the second RAW data output by the preprocessing circuit, and perform brightness processing on global pixels of the second RAW data to obtain the second brightness ratio matrix.
  13. The system according to claim 12, characterized in that the image synthesis circuit is specifically configured to:
    receive one of the first brightness ratio matrix output by the first brightness processing network and the second brightness ratio matrix output by the second brightness processing network, the linear RGB image data output by the primary processing network, and the color conversion matrix output by the color processing network, and generate the target image data according to the one of the first brightness ratio matrix and the second brightness ratio matrix, the linear RGB image data and the color conversion matrix.
  14. The system according to any one of claims 8 to 13, characterized in that the preprocessing circuit is specifically configured to:
    perform black level correction, normalization processing and channel splitting processing on the original image data to obtain the first RAW data;
    perform downsampling processing, black level correction, normalization processing and channel splitting processing on the original image data to obtain the second RAW data.
  15. An image processing apparatus, characterized in that the apparatus comprises:
    a preprocessing unit, configured to preprocess original RAW data to obtain first RAW data and second RAW data, wherein a resolution of an image corresponding to the second RAW data is smaller than a resolution of an image corresponding to the first RAW data;
    a pixel processing unit, configured to perform local pixel processing on the first RAW data to obtain first image data, and perform global pixel processing on the second RAW data to obtain second image data;
    an image synthesis unit, configured to generate target image data according to the first image data and the second image data.
  16. The apparatus according to claim 15, characterized in that the first image data is linear RGB image data, and the pixel processing unit is specifically configured to:
    perform at least one of noise reduction processing and demosaicing processing on local pixels of the first RAW data to obtain the linear RGB image data.
  17. The apparatus according to claim 15 or 16, characterized in that the first image data further comprises a first brightness ratio matrix, and the pixel processing unit is further specifically configured to:
    perform brightness processing on local pixels of the first RAW data to obtain the first brightness ratio matrix.
  18. The apparatus according to any one of claims 15 to 17, characterized in that the second image data comprises a color conversion matrix, and the pixel processing unit is further specifically configured to:
    perform color processing on global pixels of the second RAW data to obtain the color conversion matrix.
  19. The apparatus according to any one of claims 15 to 18, characterized in that the second image data further comprises a second brightness ratio matrix, and the pixel processing unit is further specifically configured to:
    perform brightness processing on global pixels of the second RAW data to obtain the second brightness ratio matrix.
  20. The apparatus according to claim 19, characterized in that the image synthesis unit is specifically configured to:
    generate the target image data according to one of the first brightness ratio matrix and the second brightness ratio matrix, the linear RGB image data and the color conversion matrix.
  21. The apparatus according to any one of claims 15 to 20, characterized in that the preprocessing unit is specifically configured to:
    perform black level correction, normalization processing and channel splitting processing on the original image data to obtain the first RAW data;
    perform downsampling processing, black level correction, normalization processing and channel splitting processing on the original image data to obtain the second RAW data.
  22. An image processing apparatus, characterized in that the apparatus comprises a memory and a processor coupled to the memory, wherein the memory stores instructions and data, and the processor runs the instructions in the memory to cause the processor to perform the image processing method according to any one of claims 1 to 7.
  23. A computer-readable storage medium, wherein the computer-readable storage medium stores instructions which, when run on a computer, cause the computer to perform the image processing method according to any one of claims 1 to 7.
  24. A computer program product, characterized in that, when the computer program product runs on a computer, the computer is caused to perform the image processing method according to any one of claims 1 to 7.
PCT/CN2019/084158 2019-04-24 2019-04-24 一种图像处理方法及装置 WO2020215263A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/084158 WO2020215263A1 (zh) 2019-04-24 2019-04-24 一种图像处理方法及装置
CN201980088470.9A CN113287147A (zh) 2019-04-24 2019-04-24 一种图像处理方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/084158 WO2020215263A1 (zh) 2019-04-24 2019-04-24 一种图像处理方法及装置

Publications (1)

Publication Number Publication Date
WO2020215263A1 true WO2020215263A1 (zh) 2020-10-29

Family

ID=72941246

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/084158 WO2020215263A1 (zh) 2019-04-24 2019-04-24 一种图像处理方法及装置

Country Status (2)

Country Link
CN (1) CN113287147A (zh)
WO (1) WO2020215263A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808037A (zh) * 2021-09-02 2021-12-17 深圳东辉盛扬科技有限公司 一种图像优化方法及装置


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101141571A (zh) * 2006-09-06 2008-03-12 三星电子株式会社 图像产生系统、方法和介质
CN103202022A (zh) * 2010-11-08 2013-07-10 佳能株式会社 图像处理设备及其控制方法
CN103238336A (zh) * 2010-12-01 2013-08-07 佳能株式会社 图像处理设备和图像处理方法
CN105338338A (zh) * 2014-07-17 2016-02-17 诺基亚技术有限公司 用于成像条件检测的方法和装置
CN104469191A (zh) * 2014-12-03 2015-03-25 东莞宇龙通信科技有限公司 图像降噪的方法及其装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022223875A1 (en) * 2021-04-23 2022-10-27 Varjo Technologies Oy Selective image signal processing
US11688046B2 (en) 2021-04-23 2023-06-27 Varjo Technologies Oy Selective image signal processing

Also Published As

Publication number Publication date
CN113287147A (zh) 2021-08-20

Similar Documents

Publication Publication Date Title
WO2020192483A1 (zh) 图像显示方法和设备
JP6929047B2 (ja) 画像処理装置、情報処理方法及びプログラム
Delbracio et al. Mobile computational photography: A tour
EP3816929B1 (en) Method and apparatus for restoring image
JP2022517444A (ja) 映像フレーム補間のための特徴ピラミッドワーピング
JP4941285B2 (ja) 撮像装置、撮像システム、撮像方法及び画像処理装置
US11317070B2 (en) Saturation management for luminance gains in image processing
JP2019534520A (ja) 画像処理用のニューラルネットワークモデルのトレーニング方法、装置、及び記憶媒体
Chakrabarti et al. Modeling radiometric uncertainty for vision with tone-mapped color images
WO2023151511A1 (zh) 模型训练方法、图像去摩尔纹方法、装置及电子设备
WO2024027287A1 (zh) 图像处理系统及方法、计算机可读介质和电子设备
CN113168673A (zh) 图像处理方法、装置和电子设备
WO2020215263A1 (zh) 一种图像处理方法及装置
CN111429371A (zh) 图像处理方法、装置及终端设备
WO2024067461A1 (zh) 图像处理方法、装置、计算机设备和存储介质
CN112489144A (zh) 图像处理方法、图像处理装置、终端设备及存储介质
TWI471848B (zh) 顏色校正方法與影像處理裝置
CN114556897B (zh) 原始到rgb的图像转换
WO2021179142A1 (zh) 一种图像处理方法及相关装置
CN115187488A (zh) 图像处理方法及装置、电子设备、存储介质
CN115187487A (zh) 图像处理方法及装置、电子设备、存储介质
US11669939B1 (en) Burst deblurring with kernel estimation networks
CN115205168A (zh) 图像处理方法、装置、电子设备和存储介质、产品
CN111383171B (zh) 一种图片处理方法、系统及终端设备
CN113971629A (zh) 图像恢复方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19925664

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19925664

Country of ref document: EP

Kind code of ref document: A1