CN115460343A - Image processing method, apparatus and storage medium - Google Patents

Image processing method, apparatus and storage medium

Info

Publication number
CN115460343A
Authority
CN
China
Prior art keywords
image data
processor
image
camera
noise reduction
Prior art date
Legal status
Granted
Application number
CN202210912803.9A
Other languages
Chinese (zh)
Other versions
CN115460343B (en)
Inventor
李子荣
殷仕帆
刘琰培
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202210912803.9A priority Critical patent/CN115460343B/en
Publication of CN115460343A publication Critical patent/CN115460343A/en
Application granted granted Critical
Publication of CN115460343B publication Critical patent/CN115460343B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses an image processing method, an image processing device, and a storage medium, and belongs to the technical field of image processing. The method includes: a first processor obtains first image data collected by a first camera and second image data collected by a second camera, where the first image data is black-and-white image data and the second image data is color image data; the first processor performs noise reduction on the first image data and the second image data respectively, performs image fusion on the noise-reduced first image data and the noise-reduced second image data to obtain third image data, and sends the third image data to an integrated processor; and the integrated processor performs image enhancement processing on the third image data to obtain target image data. According to the application, the definition, color, and brightness of images captured by the cameras can be enhanced to obtain an image with higher definition, stronger color reproduction, and more uniform brightness, improving the shooting effect, which is particularly suitable for night shooting scenes.

Description

Image processing method, apparatus and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
With the development of technology, the shooting capability of a mobile phone has become an important performance indicator of the mobile phone. Night shooting is a common scenario in which users shoot with a mobile phone. When a mobile phone shoots in a night scene, the environment is dark, the pictures or videos shot by the mobile phone are dark, and the visual experience of the user is poor.
In the related art, a mobile phone is configured with a camera and a general-purpose integrated processor, and the integrated processor integrates various processors such as a CPU and a GPU. For example, the general-purpose integrated processor may be a system on chip (SOC). In order to improve the night shooting effect, after the mobile phone shoots through the camera, the integrated processor can acquire the color image data collected by the camera, perform image enhancement processing on the color image data to obtain target image data, and store or display the target image data.
However, because the image enhancement processing capability of the general-purpose integrated processor is limited, in a night shooting scene, after the image data collected by the camera is subjected to image enhancement processing by the integrated processor, the obtained target image data has high noise and poor color reproduction, and bright areas tend to be too bright while dark areas are too dark.
Disclosure of Invention
The application provides an image processing method, an image processing device, and a storage medium, which can enhance the definition, color, and brightness of an image shot by a camera to obtain an image with higher definition, stronger color reproduction, and more uniform brightness. The technical solution is as follows:
in a first aspect, an image processing method is provided, which is applied to an electronic device, where the electronic device includes a first camera, a second camera, a first processor, and an integrated processor, the first camera is a black-and-white camera, and the second camera is a color camera, and the method includes:
the first processor acquires black-and-white image data acquired by the first camera and color image data acquired by the second camera, respectively performs noise reduction on the black-and-white image data and the color image data, performs image fusion on the black-and-white image data and the color image data to obtain fused image data, and sends the fused image data to the integrated processor. And the integrated processor performs image enhancement processing on the fused image data to obtain target image data.
In the embodiment of the application, an independent first processor is additionally configured outside the integrated processor, and a dual-camera scheme in which a black-and-white camera and a color camera shoot separately is adopted. The first processor performs noise reduction on the black-and-white image data collected by the black-and-white camera and on the color image data collected by the color camera respectively, so that black-and-white image data and color image data with a high signal-to-noise ratio can be obtained and the definition of the image data is improved. By performing image fusion on the noise-reduced black-and-white image data and the noise-reduced color image data, the brightness and detail information of the noise-reduced black-and-white image data can be fused with the color information of the noise-reduced color image data, and image data with a high signal-to-noise ratio, clear details, and accurate colors can be obtained. In this way, the definition, color, and brightness of the image can be comprehensively enhanced, an image with higher definition, stronger color reproduction, and more uniform brightness is obtained, and the shooting effect of the camera is improved, particularly in low-light shooting scenes such as night scenes, indoor scenes, and cloudy scenes.
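For illustration only, the following Python/NumPy sketch shows the overall data flow described above: denoise both streams, fuse them, then enhance the fused result. The simple filters standing in for the AI models, the function names, and the array shapes are assumptions, not part of the claimed method.

```python
import numpy as np

def denoise(img: np.ndarray) -> np.ndarray:
    # Stand-in for the AI noise reduction: average each pixel with its four neighbours.
    acc = img.astype(np.float32).copy()
    for axis in (0, 1):
        acc += np.roll(img, 1, axis=axis) + np.roll(img, -1, axis=axis)
    return acc / 5.0

def fuse(mono: np.ndarray, color: np.ndarray) -> np.ndarray:
    # Stand-in for the AI fusion: rescale the color frame so its luminance follows the mono frame.
    luma = color.mean(axis=-1, keepdims=True) + 1e-6
    return np.clip(color * (mono[..., None] / luma), 0.0, 1.0)

def enhance(img: np.ndarray) -> np.ndarray:
    # Stand-in for the integrated processor's image enhancement: a simple contrast stretch.
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / max(hi - lo, 1e-6)

# Synthetic stand-ins for one black-and-white frame and one color frame.
mono_frame = np.random.rand(480, 640).astype(np.float32)      # first camera (black-and-white)
color_frame = np.random.rand(480, 640, 3).astype(np.float32)  # second camera (color)

third_image = fuse(denoise(mono_frame), denoise(color_frame))  # fused image data
target_image = enhance(third_image)                            # target image data
print(target_image.shape)  # (480, 640, 3)
```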
In a possible embodiment, before the first processor performs noise reduction on the black-and-white image data and the color image data respectively, the first processor may also preprocess the black-and-white image data and the color image data respectively, then perform noise reduction on the preprocessed black-and-white image data to obtain noise-reduced black-and-white image data, and perform noise reduction on the preprocessed color image data to obtain noise-reduced color image data.
The preprocessing includes one or more of black level correction, dead pixel correction, lens shading correction, and automatic white balance. By preprocessing the black-and-white image data and the color image data respectively, image correction can be performed on both types of image data, and image data with more accurate image information can be obtained.
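A minimal sketch of this kind of preprocessing is shown below. The black level, the radial shading model, and the white-balance gains are illustrative assumptions, and dead pixel correction is omitted.

```python
import numpy as np

def preprocess(raw: np.ndarray, black_level: float = 64.0,
               awb_gains=(1.9, 1.0, 1.6)) -> np.ndarray:
    """Illustrative preprocessing: black level correction, a radial lens shading
    correction, and (for color data) automatic white balance gains."""
    img = np.clip(raw.astype(np.float32) - black_level, 0.0, None)   # black level correction

    # Lens shading correction: brighten pixels in proportion to their distance from the center.
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0) / np.hypot(h / 2.0, w / 2.0)
    gain = 1.0 + 0.4 * r ** 2
    img = img * (gain[..., None] if img.ndim == 3 else gain)

    if img.ndim == 3:                                                # automatic white balance
        img = img * np.asarray(awb_gains, dtype=np.float32)
    return img

demo = preprocess(np.random.randint(0, 1024, (120, 160, 3)).astype(np.float32))
print(demo.shape)  # (120, 160, 3)
```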
In a possible embodiment, the first processor includes a first IFE and a second IFE, and the first processor preprocesses the black-and-white image data by the first IFE to obtain preprocessed black-and-white image data; and preprocessing the color image data through the second IFE to obtain preprocessed color image data.
As one example, the first processor may employ an AI algorithm to perform noise reduction on the first image data and the second image data, respectively. Therefore, the operation efficiency of the first processor can be improved.
In one possible embodiment, the first processor includes an NPU including a first neural network for denoising black and white image data and a second neural network for denoising color image data. The first processor performs noise reduction on the black-and-white image data through a first neural network to obtain noise-reduced black-and-white image data; and denoising the color image data through a second neural network to obtain denoised color image data.
By running the AI noise reduction algorithm on the special NPU, the operation speed of the algorithm can be improved, thereby further improving the operation efficiency of the first processor.
In a possible embodiment, the black-and-white image data is first video frame data collected by a first camera, and the color image data is second video frame data collected by a second camera; the first processor takes the first video frame data and the third video frame data as input of a first neural network, the first video frame data and the third video frame data are processed through the first neural network to obtain black-and-white image data after noise reduction, and the third video frame data are obtained after the noise reduction is carried out on the video frame data collected by the first camera before the first video frame data; the first processor takes the second video frame data and the fourth video frame data as input of a second neural network, the second video frame data and the fourth video frame data are processed through the second neural network to obtain color image data after noise reduction, and the fourth video frame data are obtained after the noise reduction is carried out on the video frame data collected by the second camera before the second video frame data.
The noise reduction accuracy can be improved by determining the noise reduction result of the current frame according to the noise reduction results of the current frame and the previous frame.
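The recurrent structure described above can be sketched as follows. The actual neural networks are not disclosed here, so a simple temporal blend stands in for them, and the blend weights are assumptions.

```python
import numpy as np
from typing import Optional

def denoise_step(frame: np.ndarray, prev_denoised: Optional[np.ndarray]) -> np.ndarray:
    """Stand-in for the first (or second) neural network: blend the current frame
    with the previously denoised frame when one is available (temporal averaging)."""
    if prev_denoised is None:
        return frame.copy()
    return 0.6 * frame + 0.4 * prev_denoised   # blend weights are illustrative assumptions

rng = np.random.default_rng(0)
prev = None
for n in range(5):
    # Each frame is a flat gray scene plus random noise (standard deviation 0.1).
    noisy = 0.5 + rng.normal(0.0, 0.1, (240, 320)).astype(np.float32)
    prev = denoise_step(noisy, prev)           # current frame + previous denoised frame
print(float(prev.std()))                       # residual noise is below the per-frame 0.1
```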
As an example, the first processor may perform image fusion on the noise-reduced black-and-white image data and the color image data using an AI algorithm. Therefore, the operation efficiency of the first processor can be improved.
In one possible embodiment, the first processor comprises an NPU, the NPU comprising a third neural network for image fusion of the black and white image data and the color image data; and the first processor performs image fusion on the black-and-white image data subjected to noise reduction and the color image data subjected to noise reduction through a third neural network to obtain third image data.
By operating the AI image fusion algorithm on the special NPU, the operation speed of the algorithm can be improved, and the operation efficiency of the first processor is further improved.
In a possible embodiment, the first processor performs scale alignment on the black-and-white image data after noise reduction and the color image data after noise reduction to obtain the black-and-white image data and the color image data after scale alignment, wherein the black-and-white image data and the color image data after scale alignment have the same scale; and then the black-and-white image data and the color image data after the scale alignment are used as the input of a third neural network, and the black-and-white image data and the color image data after the scale alignment are processed through the third neural network to obtain third image data.
In one possible embodiment, the first processor obtains key parameters of the first camera and the second camera, the key parameters including one or more of a focal length, a pixel size, and a field of view; determines the scale difference between the noise-reduced black-and-white image data and the noise-reduced color image data according to the key parameters of the first camera and the second camera; and performs scale alignment on the noise-reduced black-and-white image data and the noise-reduced color image data according to the scale difference to obtain the scale-aligned black-and-white image data and color image data.
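One possible way to derive and apply such a scale difference is sketched below. The pixels-per-unit-angle model, the nearest-neighbor resampling, and the example camera parameters are assumptions for illustration only.

```python
import numpy as np

def scale_ratio(focal_mm_a: float, pixel_um_a: float,
                focal_mm_b: float, pixel_um_b: float) -> float:
    """Pixels-per-unit-angle of camera A relative to camera B: proportional to focal
    length and inversely proportional to pixel pitch (a simplified illustrative model)."""
    return (focal_mm_a / pixel_um_a) / (focal_mm_b / pixel_um_b)

def align_to(img: np.ndarray, ratio: float) -> np.ndarray:
    """Nearest-neighbor resampling so the frame matches the other camera's scale."""
    h, w = img.shape[:2]
    ys = np.clip((np.arange(h) / ratio).round().astype(int), 0, h - 1)
    xs = np.clip((np.arange(w) / ratio).round().astype(int), 0, w - 1)
    return img[np.ix_(ys, xs)]

# Hypothetical key parameters: mono camera 27 mm / 1.0 um pixels, color camera 24 mm / 1.2 um pixels.
ratio = scale_ratio(27.0, 1.0, 24.0, 1.2)          # scale difference between the two streams
color = np.random.rand(480, 640, 3).astype(np.float32)
aligned_color = align_to(color, ratio)             # same pixel dimensions, rescaled content
print(round(ratio, 3), aligned_color.shape)        # 1.35 (480, 640, 3)
```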
In a possible embodiment, the first processor may first perform dynamic range compression on the third image data to obtain fifth image data, where the dynamic range of the fifth image data is lower than that of the third image data, and then send the fifth image data to the integrated processor so that the integrated processor performs image enhancement processing on the fifth image data to obtain the target image data.
As one example, the first processor may employ an AI algorithm to perform dynamic range compression on the third image data. Therefore, the operation efficiency of the first processor can be improved.
In one possible embodiment, the first processor comprises an NPU comprising a fourth neural network for dynamic range compression of the image data; and the first processor performs dynamic range compression on the third image data through a fourth neural network to obtain fifth image data.
By operating the AI dynamic range compression algorithm on the dedicated NPU, the operation speed of the algorithm can be increased, thereby further improving the operation efficiency of the first processor.
In a possible embodiment, the first processor uses the third image data as an input of a fourth neural network, and performs tone mapping on the third image data through the fourth neural network to obtain fifth image data.
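A global tone-mapping operator can stand in for the fourth neural network in a sketch like the following; the Reinhard-style curve and the key value are assumptions, not the disclosed network.

```python
import numpy as np

def tone_map(hdr: np.ndarray, key: float = 0.25) -> np.ndarray:
    """Stand-in for the fourth neural network: a Reinhard-style global operator that
    compresses high-dynamic-range values into the [0, 1) low-dynamic-range domain."""
    luminance = hdr.mean(axis=-1) if hdr.ndim == 3 else hdr
    log_average = np.exp(np.log(luminance + 1e-6).mean())   # geometric mean luminance
    scaled = hdr * (key / log_average)                       # expose for a mid-gray "key"
    return scaled / (1.0 + scaled)                           # compressive tone curve

hdr_frame = np.random.rand(240, 320, 3).astype(np.float32) * 16.0   # synthetic HDR data
ldr_frame = tone_map(hdr_frame)
print(float(ldr_frame.min()), float(ldr_frame.max()))               # both within [0, 1)
```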
In a possible embodiment, after performing noise reduction on the color image data, the first processor demosaics the noise-reduced color image data to obtain sixth image data, and then performs image fusion on the noise-reduced black-and-white image data and the sixth image data to obtain the third image data.
By demosaicing the noise-reduced color image data, the color image data can be converted from the RAW domain to the RGB domain.
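For illustration, a very rough demosaic of an RGGB Bayer mosaic might look like the following; the 2x2 averaging is only a stand-in for a real demosaicing algorithm.

```python
import numpy as np

def demosaic_rggb(raw: np.ndarray) -> np.ndarray:
    """Very rough demosaic of an RGGB Bayer mosaic: average each color plane over
    2x2 cells, then repeat back to full resolution (a stand-in for a real algorithm)."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    rgb_half = np.stack([r, g, b], axis=-1)                  # half-resolution RGB
    return rgb_half.repeat(2, axis=0).repeat(2, axis=1)      # back to the RAW resolution

raw_bayer = np.random.rand(480, 640).astype(np.float32)      # noise-reduced RAW-domain frame
sixth_image = demosaic_rggb(raw_bayer)                       # RGB-domain color image data
print(sixth_image.shape)                                     # (480, 640, 3)
```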
In a possible embodiment, the integrated processor includes an IPE, and the integrated processor performs image enhancement processing on the third image data through the IPE to obtain the target image data.
The image enhancement processing may include image processing operations such as hardware noise reduction, image cropping, color enhancement, or detail enhancement, and may also include other image processing operations, which is not limited in this embodiment of the present application.
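The sketch below illustrates the kind of operations listed above with simple stand-ins; the crop margin, saturation factor, and unsharp-mask strength are illustrative assumptions, and hardware noise reduction is omitted.

```python
import numpy as np

def enhance(img: np.ndarray, crop: int = 8, saturation: float = 1.2,
            sharpen: float = 0.5) -> np.ndarray:
    """Illustrative stand-ins for image cropping, color enhancement, and detail enhancement."""
    img = img[crop:-crop, crop:-crop]                               # image cropping
    mean = img.mean(axis=-1, keepdims=True)
    img = np.clip(mean + saturation * (img - mean), 0.0, 1.0)       # color enhancement
    blur = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return np.clip(img + sharpen * (img - blur), 0.0, 1.0)          # detail enhancement (unsharp mask)

enhanced = enhance(np.random.rand(480, 640, 3).astype(np.float32))
print(enhanced.shape)  # (464, 624, 3)
```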
In a possible embodiment, after the first processor acquires the black-and-white image data collected by the first camera and the color image data collected by the second camera, it may also send the black-and-white image data and the color image data to the integrated processor. The integrated processor determines a first 3A value according to the black-and-white image data and a second 3A value according to the color image data, controls the first camera according to the first 3A value, and controls the second camera according to the second 3A value.
The first 3A value and the second 3A value each include an auto focus (AF) value, an auto exposure (AE) value, and an auto white balance (AWB) value. The integrated processor may determine the 3A values from the image data by using a 3A algorithm, where the 3A algorithm may be preset; this is not limited in the embodiments of the application.
As one example, the integrated processor may adjust the 3A value of the first camera based on the first 3A value and adjust the 3A value of the second camera based on the second 3A value. Alternatively, the integrated processor may send the first 3A value to the first camera, which adjusts its own 3A value according to the first 3A value, and send the second 3A value to the second camera, which adjusts its own 3A value according to the second 3A value.
Therefore, automatic exposure, automatic white balance and automatic focusing can be carried out on the first camera and the second camera according to the image information of the first image data and the second image data, and the shooting effect of subsequent images is improved.
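A toy illustration of deriving exposure and white-balance adjustments from a frame is sketched below; the target luminance and the gray-world white balance are assumptions, and autofocus is omitted.

```python
import numpy as np

def compute_3a(frame: np.ndarray, target_luma: float = 0.45):
    """Toy 3A statistics: an exposure gain that pulls the mean luminance towards a
    target, and gray-world white balance gains (autofocus is omitted)."""
    ae_gain = target_luma / max(float(frame.mean()), 1e-6)                # auto exposure
    channel_means = frame.reshape(-1, frame.shape[-1]).mean(axis=0)
    awb_gains = channel_means.mean() / np.maximum(channel_means, 1e-6)   # auto white balance
    return ae_gain, awb_gains

preview = np.random.rand(120, 160, 3).astype(np.float32) * 0.3   # dark night-scene preview frame
ae, awb = compute_3a(preview)
print(round(ae, 2), np.round(awb, 2))   # e.g. an exposure gain of about 3 and near-unity WB gains
```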
In one possible embodiment, the first processor is an ISP, which includes an NPU.
In one possible embodiment, the integrated processor is an SOC.
In a second aspect, there is provided an image processing apparatus having a function of implementing the behaviors of the image processing method in the first aspect described above. The image processing device comprises at least one module, and the at least one module is used for realizing the image processing method provided by the first aspect.
In a third aspect, an image processing apparatus is provided, which includes a processor and a memory, and the memory is used for storing a program for supporting the image processing apparatus to execute the image processing method provided in the first aspect, and storing data for implementing the image processing method in the first aspect. The processor is configured to execute programs stored in the memory. The image processing apparatus may further include a communication bus for establishing a connection between the processor and the memory.
In a fourth aspect, there is provided a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to execute the image processing method of the first aspect described above.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the image processing method of the first aspect described above.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described herein again.
Drawings
Fig. 1 is a schematic structural diagram of a mobile phone provided in the related art;
fig. 2 shows a schematic structural diagram of a mobile phone provided in an embodiment of the present application;
fig. 3 is a schematic diagram showing a comparison between a target image processed by using an image processing method provided in the related art and a target image processed by using an image processing method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of another mobile phone provided in the embodiment of the present application;
fig. 5 is a schematic diagram illustrating an AI noise reduction module provided in an embodiment of the present application;
fig. 6 is a schematic diagram illustrating an AI image fusion module provided in an embodiment of the present application;
FIG. 7 is a diagram illustrating an AI dynamic range compression module according to an embodiment of the application;
fig. 8 shows a schematic hardware structure diagram of a mobile phone provided in an embodiment of the present application;
fig. 9 is a block diagram illustrating a software system of a mobile phone according to an embodiment of the present application;
FIG. 10 is a flow chart illustrating an image processing method provided by an embodiment of the present application;
fig. 11 shows a flowchart of another image processing method provided in the embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that reference to "a plurality" in this application means two or more. In the description of this application, "/" indicates an "or" relationship; for example, A/B may indicate A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, both A and B exist, or B exists alone. In addition, for the convenience of clearly describing the technical solutions of the present application, the words "first", "second", and the like are used to distinguish identical or similar items having substantially the same functions and effects. Those skilled in the art will appreciate that the words "first", "second", and the like do not denote any order or importance.
It should be noted that the image processing method provided in the embodiment of the present application is applicable to any electronic device that has a shooting function and multiple cameras, such as a mobile phone, a tablet computer, a camera, and a smart wearable device, and the embodiment of the present application does not limit this. In addition, the image processing method provided in the embodiment of the present application may be applied to various shooting scenes, such as a night scene, an indoor scene, a cloudy scene, and the like, and may also be applied to other shooting scenes, which is not limited in the embodiment of the present application. For convenience of description, the following description will be given by taking the example of shooting in a night scene with a mobile phone.
As described in the background, when people use a mobile phone to shoot in a night scene, the environment is dark, so the image shot by the mobile phone is dark and the visual experience of the user is poor. In the related art, in order to improve the image shooting effect in a night scene, the image data collected by the camera may be subjected to image enhancement processing by a general-purpose integrated processor in the mobile phone.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a mobile phone provided in the related art. As shown in fig. 1, the mobile phone includes a color camera 11 and a general-purpose integrated processor 12, and the integrated processor 12 includes an Image Front End (IFE) and an Image Processing Engine (IPE). The color camera 11 is configured to capture a color image. After the color camera 11 is started, the integrated processor 12 acquires the color image data collected by the color camera 11, preprocesses the color image data through the IFE, and then performs image enhancement processing on the preprocessed color image data through the IPE to obtain the target color image data to be displayed.
However, because the general-purpose integrated processor has limited image processing capabilities such as preprocessing and image enhancement, in a night shooting scene, after the image data collected by the camera is processed by the integrated processor, the resulting target image data may suffer from problems such as an unclear picture, large noise, bright areas that are too bright, dark areas that are too dark, and poor color reproduction. In order to solve this technical problem, an embodiment of the present application provides an image processing method.
Referring to fig. 2, fig. 2 shows a schematic structural diagram of a mobile phone 100 according to an embodiment of the present disclosure, and as shown in fig. 2, the mobile phone 100 includes a camera 10, a camera 20, a processor 30, and an integrated processor 40.
The camera 10 is a black-and-white camera and is used for shooting black-and-white images.
The camera 20 is a color camera, and is used for capturing a color image.
The integrated processor 40 integrates a plurality of processors, such as a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), and the like. The integrated processor 40 is typically integrated on an integrated circuit chip, such as a general-purpose System on Chip (SOC).
The integrated processor 40 includes an image enhancement module 41, and the image enhancement module 41 is configured to perform enhancement processing on the image data. For example, the image enhancement module 41 may be an IPE or the like.
The processor 30 is a separate processor independent of the integrated processor 40 and may be located at the front end of the integrated processor 40. That is, in the embodiment of the present application, a processor dedicated to processing images captured by the cameras is additionally configured before the integrated processor 40, so as to improve the image processing effect and further improve the image shooting effect.
For example, the processor 30 may be an ISP, which is a processor dedicated to image processing and configured in addition to the ISP in the integrated processor 40. For example, the processor 30 is an Artificial Intelligence (AI) ISP including a neural Network Processing Unit (NPU). Illustratively, the processor 30 is integrated on a first chip, which is a chip other than an SOC.
The processor 30 includes a first noise reduction module 31, a second noise reduction module 32, and an image fusion module 33. The first noise reduction module 31 is configured to reduce noise in the black-and-white image data, the second noise reduction module 32 is configured to reduce noise in the color image data, and the image fusion module 33 is configured to perform image fusion on the black-and-white image data and the color image data to obtain, through fusion, color image data with clear details and accurate colors.
As an example, the first noise reduction module 31, the second noise reduction module 32 and the image fusion module 33 are AI modules, and an AI algorithm may be adopted to implement the corresponding functions.
In the embodiment of the present application, when the mobile phone 100 performs shooting, the camera 10 may collect black-and-white image data, and the camera 20 may collect color image data. The processor 30 may obtain black-and-white image data collected by the camera 10 and color image data collected by the camera 20, perform noise reduction on the black-and-white image data through the first noise reduction module 31, perform noise reduction on the color image data through the second noise reduction module 32, perform image fusion on the noise-reduced black-and-white image data and the noise-reduced color image data through the image fusion module 33 to obtain fused image data, and send the fused image data to the integrated processor 40. The integrated processor 40 performs image enhancement processing on the fused image data through the image enhancement module 41 to obtain target image data. Then, the target image data is stored or displayed.
By configuring an additional processor 30 outside the integrated processor 40 and adopting a dual-camera scheme in which the black-and-white camera and the color camera shoot separately, before the image enhancement of the integrated processor 40, the processor 30 can respectively perform noise reduction on the black-and-white image data collected by the camera 10 and the color image data collected by the camera 20, and perform image fusion on the noise-reduced black-and-white image data and color image data. In this way, the noise reduction effect, picture definition, and color reproduction of the image can be improved, so that the shooting effect of the camera is improved, especially in night scenes.
Referring to fig. 3, fig. 3 is a schematic diagram comparing a target image processed by the image processing method provided in the related art with a target image processed by the image processing method provided in the embodiment of the present application. Diagram (a) in fig. 3 is a target image obtained by processing an image captured by a camera with the image processing method provided in the related art, and diagram (b) in fig. 3 is a target image obtained by processing an image captured by a camera with the image processing method provided in the embodiment of the present application. As can be seen from diagram (a) in fig. 3, the target image processed by the related-art method has problems such as an unclear picture, large noise, and poor color reproduction, with bright areas that are too bright and dark areas that are too dark. Comparing diagram (b) with diagram (a) in fig. 3, the target image processed by the image processing method provided in the embodiment of the present application is clearer, has uniform brightness and better color reproduction, and provides a better image effect and a better visual experience for the user.
As an example, the camera 10 and the camera 20 are connected to the processor 30 through a processor interface, and the collected image data is transmitted to the processor 30 through the processor interface. The processor 30 is connected to the integrated processor 40 through a processor interface, and the fused image data is transmitted to the integrated processor 40 through the processor interface. The processor interface may be a mobile industry processor interface (Mipi) or the like.
As an example, a first preprocessing module may also be configured before the first noise reduction module 31 to preprocess the black-and-white image data before noise reduction, and a second preprocessing module may be configured before the second noise reduction module 32 to preprocess the color image data before noise reduction. The first and second preprocessing modules may be IFEs or the like.
As an example, a demosaicing module may be further configured after the second noise reduction module 32 to demosaic the noise-reduced color image data, converting it from the RAW domain into the RGB domain.
As an example, a dynamic range compression (DRC) module may also be configured after the image fusion module 33 to perform dynamic range compression on the fused image data, so as to reduce the dynamic range of the fused image data, compress the fused image data from high dynamic range imaging to low dynamic range imaging, and retain the local contrast and details of the image. Illustratively, the dynamic range compression module may be an AI module, and the corresponding function may be implemented by an AI algorithm.
As one example, the processor 30 is an ISP that includes an NPU, and the modules described above are AI modules in the NPU.
Referring to fig. 4, fig. 4 is a schematic structural diagram of another mobile phone 100 according to an embodiment of the present disclosure. As shown in fig. 4, the cellular phone 100 includes a camera 10, a camera 20, an AI ISP30, and a SOC40, and the SOC40 includes an ISP42.
The AI ISP30 includes a routing module 34, an IFE35, an IFE36, an AI noise reduction module 31, an AI noise reduction module 32, a demosaicing module 37, an AI image fusion module 33, and an AI dynamic range compression module 38. The ISP42 includes an IPE41 and a 3A module 43, where 3A refers to auto exposure (AE), auto white balance (AWB), and auto focus (AF).
In addition, the AI ISP30 also includes a plurality of processor interfaces, for example Mipi0, Mipi1, and Mipi2. The camera 10 is connected to the routing module 34 through Mipi0, and the camera 20 is connected to the routing module 34 through Mipi1. The AI dynamic range compression module 38 is connected to the IPE41 through Mipi0. The routing module 34 is connected to the 3A module 43 through Mipi1 and Mipi2.
The routing module 34 is used to copy the image data collected by the cameras. For example, the routing module 34 is a standard image format (SIF) routing module. The black-and-white image data collected by the camera 10 is sent to the routing module 34 through Mipi0, and the color image data collected by the camera 20 is sent to the routing module 34 through Mipi1. The routing module 34 copies the black-and-white image data and the color image data respectively to obtain two paths of black-and-white image data and two paths of color image data. The routing module 34 sends one path of black-and-white image data to the IFE35 and one path of color image data to the IFE36. The other path of black-and-white image data is sent to the 3A module 43 through Mipi1, and the other path of color image data is sent to the 3A module 43 through Mipi2.
The IFE35 preprocesses the black-and-white image data and sends the preprocessed black-and-white image data to the AI noise reduction module 31. The IFE36 preprocesses the color image data and sends the preprocessed color image data to the AI noise reduction module 32. The preprocessing is used to correct the image data; for example, the preprocessing includes one or more of black level correction (BLC), dead pixel correction (BPC), lens shading correction (LSC), and auto white balance (AWB). It should be understood that the preprocessing may also include other image processing operations, which are not limited in this application.
The AI noise reduction module 31 performs noise reduction on the preprocessed black-and-white image data to obtain noise-reduced black-and-white image data, and sends the noise-reduced black-and-white image data to the AI image fusion module 33. The AI denoising module 32 performs denoising on the preprocessed color image data to obtain denoised color image data, sends the denoised color image data to the demosaicing module 37, performs demosaicing on the denoised color image data through the demosaicing module 37, and sends the demosaiced color image data to the AI image fusion module 33. The AI noise reduction module 31 and the AI noise reduction module 32 are noise reduction modules using an AI algorithm.
The AI image fusion module 33 performs image fusion on the black-and-white image data after noise reduction and the color image data after demosaicing to obtain fused image data, and sends the fused image data to the AI dynamic range compression module 38. The AI image fusion module 33 is an image fusion module that employs an AI algorithm.
The AI dynamic range compression module 38 performs dynamic range compression on the fused image data, and transmits the fused image data after dynamic range compression to the IPE41 in the ISP42 through the Mipi 0. The AI dynamic range compression module 38 is a dynamic range compression module that employs an AI algorithm.
The IPE41 performs image enhancement processing on the fused image data after dynamic range compression to obtain target image data. The image enhancement processing may include image processing operations such as hardware noise reduction, image cropping, color enhancement, or detail enhancement.
The 3A module 43 is used to calculate 3A values (an AE value, an AWB value, and an AF value) from the image data by using a 3A algorithm. After the routing module 34 sends the other path of black-and-white image data to the 3A module 43 through Mipi1 and the other path of color image data to the 3A module 43 through Mipi2, the 3A module 43 may calculate a first 3A value according to the received black-and-white image data and control the camera 10 according to the first 3A value, for example by adjusting the 3A value of the camera 10 according to the first 3A value, or by sending the first 3A value to the camera 10 so that the camera 10 adjusts its own 3A value. In addition, the 3A module 43 may calculate a second 3A value according to the received color image data and control the camera 20 according to the second 3A value, for example by adjusting the 3A value of the camera 20 according to the second 3A value, or by sending the second 3A value to the camera 20 so that the camera 20 adjusts its own 3A value.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating an AI noise reduction module according to an embodiment of the present disclosure. As shown in fig. 5, the AI noise reduction module includes a neural network 1, and the neural network 1 is used to reduce noise in an image. Image data may be input to the neural network 1, and the noise-reduced image data is output by the neural network 1. For example, referring to fig. 5, taking the image data as video frame data, the Nth frame of video data and the noise-reduced (N-1)th frame of video data may be used as input to the neural network 1, and the noise-reduced Nth frame of video data is output by the neural network 1.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating an AI image fusion module according to an embodiment of the present disclosure. As shown in fig. 6, the AI image fusion module includes a neural network 2, and the neural network 2 is configured to perform image fusion on the black-and-white image data and the color image data to obtain fused image data. For example, the black-and-white image data and the color image data are subjected to scale alignment, and then the black-and-white image data and the color image data after the scale alignment are used as the input of the neural network 2, and the fused image data is output through the neural network 2.
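A luminance-transfer fusion of this kind can be sketched as follows. The BT.601 luminance weights are standard, but using this simple transfer as a stand-in for neural network 2 is an assumption, not the disclosed network.

```python
import numpy as np

def fuse_mono_color(mono: np.ndarray, color: np.ndarray) -> np.ndarray:
    """Stand-in for neural network 2: keep the chrominance of the color frame and
    replace its luminance with the cleaner, more detailed monochrome frame."""
    # BT.601-style luminance of the color frame.
    y = 0.299 * color[..., 0] + 0.587 * color[..., 1] + 0.114 * color[..., 2]
    chroma = color - y[..., None]                    # per-channel chrominance offsets
    return np.clip(mono[..., None] + chroma, 0.0, 1.0)

mono = np.random.rand(480, 640).astype(np.float32)       # scale-aligned black-and-white frame
color = np.random.rand(480, 640, 3).astype(np.float32)   # scale-aligned color frame
fused = fuse_mono_color(mono, color)
print(fused.shape)  # (480, 640, 3)
```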
Referring to fig. 7, fig. 7 is a schematic diagram illustrating an AI dynamic range compression module according to an embodiment of the disclosure. As shown in fig. 7, the AI dynamic range compression module includes a neural network 3, and the neural network 3 is used to perform dynamic range compression on image data. For example, the high dynamic range image data may be input to the neural network 3, and the low dynamic range image data may be output through the neural network 3.
It should be noted that the neural network may be a convolutional neural network, a U-Net neural network, or the like, and may also be another type of neural network, which is not limited in this embodiment of the present application.
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a hardware structure of a mobile phone 100 according to an embodiment of the present disclosure. Referring to fig. 8, the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the mobile phone 100. In other embodiments of the present application, the handset 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc.
The different processing units may be separate devices or may be integrated into one or more processors. For example, processor 110 includes a separate processor 111 and an integrated processor 112. The processor 111 is configured to perform noise reduction and image fusion on image data acquired by a plurality of cameras in the camera 193 to obtain fused image data. The integrated processor 112 is configured to perform image enhancement processing on the fused image data obtained by the first processor 111. Illustratively, the processor 111 is an AI ISP including an NPU, and the integrated processor 112 includes a general purpose ISP.
Wherein the controller may be a neural center and a command center of the cell phone 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces, such as an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the camera function of the handset 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the mobile phone 100.
It should be understood that the interface connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the mobile phone 100. In other embodiments of the present application, the mobile phone 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive a charging input from a charger. The charger can be a wireless charger or a wired charger. The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the mobile phone 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. Such as: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the handset 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave.
The wireless communication module 160 may provide solutions for wireless communication applied to the mobile phone 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module.
The mobile phone 100 implements the display function through the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the cell phone 100 may include 1 or N display screens 194, where N is an integer greater than 1.
The mobile phone 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the handset 100 may include 1 or N cameras 193, N being an integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the handset 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. Handset 100 may support one or more video codecs. Thus, the mobile phone 100 can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor, which processes input information quickly by referring to a biological neural network structure, for example, by referring to a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can realize applications such as intelligent recognition of the mobile phone 100, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving files of music, video, etc. in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the cellular phone 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area can store data (such as audio data, phone book, etc.) created by the mobile phone 100 during use. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The mobile phone 100 can implement audio functions, such as music playing, recording, etc., through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The cellular phone 100 may receive a key input, and generate a key signal input related to user setting and function control of the cellular phone 100. The motor 191 may generate a vibration cue. Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 195 is used to connect a SIM card.
Next, a software system of the cellular phone 100 will be described.
The software system of the mobile phone 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example to describe the software system of the mobile phone 100.
Fig. 9 shows a block diagram of a software system of the mobile phone 100 according to an embodiment of the present application. Referring to fig. 9, the layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application Layer, an application framework Layer, an Android runtime (Android runtime) and system Layer, a kernel Layer and a Hardware Abstraction Layer (HAL), from top to bottom.
The application layer may include a series of application packages. As shown in fig. 9, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions. As shown in FIG. 9, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like. The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like. The content provider is used to store and retrieve data, which may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc., and makes the data accessible to applications. The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system can be used for constructing a display interface of an application program, and the display interface can be composed of one or more views, such as a view for displaying a short message notification icon, a view for displaying text and a view for displaying pictures. The phone manager is used to provide communication functions of the handset 100, such as management of call states (including connection, disconnection, etc.). The resource manager provides various resources, such as localized strings, icons, pictures, layout files, video files, etc., to the application. The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a brief dwell, and does not require user interaction. For example, a notification manager is used to notify that a download is complete, a message alert, etc. The notification manager may also be a notification that appears in the form of a chart or scrollbar text at the top status bar of the system, such as a notification of a background running application. The notification manager may also be a notification that appears on the screen in the form of a dialog window, such as prompting a text message in a status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
The Android runtime comprises a core library and a virtual machine, and is responsible for scheduling and managing the Android system. The core library comprises two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, such as a surface manager (surface manager), media libraries (Media Libraries), a three-dimensional graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL). The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files; it may support multiple audio and video encoding formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises device drivers such as a camera driver, a processor driver, a display driver, an audio driver and the like. The device driver is an interface between the I/O system and the associated hardware for driving the corresponding hardware device.
The hardware abstraction layer is an interface layer between the kernel layer and the hardware circuitry. The HAL is a kernel-mode module that hides hardware-related details such as I/O interfaces, interrupt controllers, and multiprocessor communication mechanisms, provides a uniform service interface for the operating system on different hardware platforms, and thereby enables portability across hardware platforms.
The hardware layer includes a camera group, a separate processor, an integrated processor, a display, an audio device, and the like. The camera group includes a plurality of cameras, for example a black-and-white camera and a color camera. For example, the separate processor may be an AI ISP, and the integrated processor may integrate a CPU, a GPU, a general ISP, and the like.
It should be noted that, in the embodiment of the present application, only the Android system is used as an example; in other operating systems (for example, an iOS system, a Windows system, and the like), as long as the functions implemented by the functional modules are similar to those in the embodiment of the present application, the scheme of the present application can also be implemented.
The following describes exemplary workflow of the software and hardware of the mobile phone 100 in connection with capturing a photo scene.
When the touch sensor receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the time stamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the raw input event. Taking the touch operation as a click operation and the control corresponding to the click operation as the control of the camera application icon as an example, the camera application calls an interface of the application framework layer to start the camera application, and then calls the kernel layer to start the camera driver, so that a still image or a video is captured through the camera. After the camera captures a still image or a video, the camera application may also call the separate processor and the integrated processor through the HAL layer, and perform image processing on the still image or video captured by the camera through the separate processor and the integrated processor.
Next, the image processing method provided in the embodiment of the present application will be described in detail.
Fig. 10 shows a flowchart of an image processing method provided in an embodiment of the present application, where the method is applied to an electronic device such as a mobile phone, where the electronic device includes a first camera, a second camera, a first processor, and an integrated processor. As shown in fig. 10, the method includes the steps of:
step 1001: the first camera collects first image data, and the first image data are black and white image data.
The first camera is a black-and-white camera for shooting black-and-white images. The first image data is single-channel original image data acquired by the first camera.
Step 1002: the first camera sends the first image data to the first processor.
The first camera can be connected with the first processor through an associated interface, and the first image data is sent to the first processor through the associated interface. For example, the associated interface may be a Mipi interface.
Step 1003: the second camera acquires second image data, and the second image data is color image data.
The second camera is a color camera for shooting color images. The second image data is original image data in the RAW domain collected by the second camera.
Step 1004: the second camera sends the second image data to the first processor.
The second camera can be connected with the first processor through an associated interface, and the second image data is sent to the first processor through the associated interface. For example, the associated interface may be a Mipi interface.
Step 1005: the first processor performs noise reduction on the first image data and the second image data after receiving the first image data and the second image data, respectively.
The first processor is a separate processor outside the integrated processor, that is, a processor that is arranged separately from the integrated processor in the embodiment of the present application and is used for processing images acquired by the cameras. For example, the first processor is an AI processor including an NPU; for example, the first processor is an ISP that includes an NPU, that is, an AI ISP.
By reducing noise of the first image data and the second image data, image data with high signal to noise ratio can be obtained, and the definition of the image data is improved.
As one example, the first processor may employ an AI algorithm to perform noise reduction on the first image data and the second image data, respectively. Therefore, the operation efficiency of the first processor can be improved.
For example, the first processor includes an NPU including a first neural network for denoising black and white image data and a second neural network for denoising color image data. The first processor can perform noise reduction on the first image data through the first neural network to obtain noise-reduced first image data; and denoising the second image data through a second neural network to obtain denoised second image data.
The first neural network and the second neural network may be the same neural network or different neural networks, which is not limited in the embodiment of the present application.
The operating efficiency of the first processor can be further improved by running an AI noise reduction algorithm on the dedicated NPU.
As one example, the first neural network and the second neural network may be as described above in fig. 5. Taking a video shooting scene as an example, the first neural network and the second neural network may determine the noise reduction frame of the current frame according to the current frame and the noise reduction frame of an adjacent frame. For example, the first neural network and the second neural network may determine the noise reduction frame of the current frame according to the time domain information of the current frame and the time domain information of the noise reduction frame of the adjacent frame.
For example, the first image data is first video frame data collected by a first camera. For the first video frame data, the first processor may use the first video frame data and the third video frame data as inputs of the first neural network, and process the first video frame data and the third video frame data through the first neural network to obtain the first image data after noise reduction. The third video frame data is obtained by performing noise reduction on video frame data acquired by the first camera before the first video frame data, namely the third video frame data is noise reduction data of a previous frame of video frame data of the first video frame data.
For example, the second image data is second video frame data collected by a second camera. For the second video frame data, the first processor may use the second video frame data and the fourth video frame data as input of the second neural network, and process the second video frame data and the fourth video frame data through the second neural network to obtain the second image data after noise reduction. The fourth video frame data is obtained by performing noise reduction on video frame data acquired by the second camera before the second video frame data, that is, the fourth video frame data is noise reduction data of a previous frame of video frame data of the second video frame data.
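To make this per-frame recurrence concrete, the following Python sketch (using PyTorch) shows one minimal way such temporal noise reduction could be structured: the current noisy frame and the noise reduction result of the previous frame are concatenated and mapped to the denoised current frame. The class name, layer sizes and helper function here are illustrative assumptions, not the actual first or second neural network of fig. 5.

    import torch
    import torch.nn as nn

    class TemporalDenoiser(nn.Module):
        # Illustrative stand-in for the first/second neural network: the current
        # noisy frame and the previous denoised frame are concatenated along the
        # channel axis and mapped to a denoised frame.
        def __init__(self, channels):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, channels, 3, padding=1),
            )

        def forward(self, noisy, prev_denoised):
            return self.net(torch.cat([noisy, prev_denoised], dim=1))

    def denoise_stream(frames, model):
        # frames: iterable of tensors shaped (1, C, H, W). The first frame has no
        # denoised predecessor, so it serves as its own reference frame.
        prev = None
        for frame in frames:
            prev = frame if prev is None else prev
            prev = model(frame, prev)
            yield prev

In this sketch a black-and-white stream would use channels=1 and a color RAW stream the number of planes it is packed into; in the embodiment the two streams are handled by the first and second neural networks, which may or may not share weights.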
As an example, the first processor may also pre-process the first image data and the second image data, respectively, before de-noising the first image data and the second image data, respectively. Then, denoising the preprocessed first image data to obtain denoised first image data; and denoising the preprocessed second image data to obtain denoised second image data.
Wherein the preprocessing is used to correct the image data. For example, the preprocessing includes one or more of black-and-white level correction, dead pixel correction, lens shading correction, and automatic white balance, and may also include other image processing operations, which is not limited in this embodiment.
As one example, the first processor includes a first IFE and a second IFE. The first processor preprocesses the first image data through the first IFE to obtain preprocessed first image data. And the first processor preprocesses the second image data through the second IFE to obtain preprocessed second image data.
The first IFE and the second IFE may be the same IFE or different IFEs, which is not limited in this embodiment of the application.
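As a rough illustration of what such preprocessing does, the sketch below applies black level correction and a simple white balance to an RGGB RAW frame with NumPy. The black level, white level and gain values are placeholder assumptions, and dead pixel and lens shading correction are omitted; the IFE pipeline of the embodiment is not limited to these steps.

    import numpy as np

    def preprocess_raw(raw, black_level=64.0, white_level=1023.0,
                       gains=(2.0, 1.0, 1.5)):
        # Black level correction: remove the sensor pedestal and normalize.
        img = np.clip(raw.astype(np.float32) - black_level, 0.0, None)
        img /= (white_level - black_level)
        # Simple white balance on an RGGB Bayer mosaic: scale R, G and B sites.
        r_gain, g_gain, b_gain = gains
        img[0::2, 0::2] *= r_gain   # R sites
        img[0::2, 1::2] *= g_gain   # G sites (first row of each 2x2 cell)
        img[1::2, 0::2] *= g_gain   # G sites (second row of each 2x2 cell)
        img[1::2, 1::2] *= b_gain   # B sites
        return np.clip(img, 0.0, 1.0)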
As an example, after the first processor performs noise reduction on the second image data, demosaicing processing may be performed on the second image data after noise reduction, so as to obtain sixth image data.
The second image data after noise reduction is image data in a RAW domain, and the sixth image data is image data in an RGB domain. The second image data after noise reduction may be converted from the RAW domain into the RGB domain by performing demosaicing processing on the second image data after noise reduction.
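A minimal sketch of this RAW-to-RGB conversion, assuming an RGGB mosaic, is shown below; each 2x2 Bayer cell is collapsed into one RGB pixel, which halves the resolution. A production pipeline would use an interpolating demosaic instead, but the domain change is the same.

    import numpy as np

    def demosaic_half(raw):
        # Collapse each 2x2 RGGB cell into one RGB pixel (greens are averaged).
        r = raw[0::2, 0::2]
        g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
        b = raw[1::2, 1::2]
        return np.stack([r, g, b], axis=-1)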
Step 1006: and the first processor performs image fusion on the first image data subjected to noise reduction and the second image data subjected to noise reduction to obtain third image data.
The third image data is fusion image data and is color image data.
By carrying out image fusion on the noise-reduced first image data and the noise-reduced second image data, the brightness information and the detail information of the noise-reduced first image data and the color information of the noise-reduced second image data can be fused, and fused image data with high signal-to-noise ratio, clear details and accurate colors is obtained.
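The intent of this fusion step can be illustrated without a neural network: take the luminance and detail from the black-and-white image and the chrominance from the color image. The sketch below does this with a fixed BT.601 split; the embodiment itself performs the fusion with the third neural network described next, so this is only a conceptual stand-in and the blend weight alpha is an assumption.

    import numpy as np

    def fuse_mono_color(mono_y, color_rgb, alpha=0.6):
        # Split the color image into luminance (y) and chrominance (u, v),
        # blend the mono luminance in, then convert back to RGB.
        r, g, b = color_rgb[..., 0], color_rgb[..., 1], color_rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.169 * r - 0.331 * g + 0.500 * b
        v = 0.500 * r - 0.419 * g - 0.081 * b
        y = alpha * mono_y + (1.0 - alpha) * y
        fused = np.stack([y + 1.402 * v,
                          y - 0.344 * u - 0.714 * v,
                          y + 1.772 * u], axis=-1)
        return np.clip(fused, 0.0, 1.0)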
As an example, the first processor may perform image fusion on the noise-reduced first image data and the noise-reduced second image data using an AI algorithm. Therefore, the operation efficiency of the first processor can be improved.
For example, the first processor includes an NPU, and the NPU includes a third neural network for image fusion of the black and white image data and the color image data. The first processor can perform image fusion on the noise-reduced first image data and the noise-reduced second image data through a third neural network to obtain third image data.
By running the AI image fusion algorithm on the special NPU, the operation speed of the algorithm can be improved, thereby further improving the operation efficiency of the first processor.
As an example, the first processor may first perform scale alignment on the noise-reduced first image data and the noise-reduced second image data to obtain the scale-aligned first image data and second image data, where the scales of the scale-aligned first image data and second image data are the same. Then the scale-aligned first image data and second image data are used as the input of the third neural network, and the scale-aligned first image data and second image data are processed through the third neural network to obtain the third image data.
The scale of the image data with a larger scale may be reduced, or the scale of the image data with a smaller scale may be increased, so as to perform scale alignment on the first image data after noise reduction and the second image data after noise reduction.
As an example, the first processor may obtain key parameters of the first camera and the second camera, and determine a scale difference between the noise-reduced first image data and the noise-reduced second image data according to the key parameters of the first camera and the second camera. And then, carrying out scale alignment on the first image data subjected to noise reduction and the second image data subjected to noise reduction according to the scale difference to obtain the first image data and the second image data subjected to scale alignment.
The key parameters of the camera include one or more of a focal length, a pixel size, and a field angle, and may also include other key parameters, which is not limited in the embodiment of the present application.
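One plausible way to turn those key parameters into a scale difference is sketched below: the ratio of focal length to pixel size approximates how many pixels a given angular extent covers on each sensor, and the larger image is shrunk to match the smaller one. The dictionary keys and the use of this particular ratio are assumptions for illustration, not the exact formula of the embodiment.

    import cv2

    def align_scales(mono, color, mono_cam, color_cam):
        # mono_cam / color_cam: dicts with assumed keys "focal_length_mm" and
        # "pixel_size_um" describing each camera's key parameters.
        s_mono = mono_cam["focal_length_mm"] / mono_cam["pixel_size_um"]
        s_color = color_cam["focal_length_mm"] / color_cam["pixel_size_um"]
        ratio = s_mono / s_color
        if ratio > 1.0:
            # The mono image spans more pixels per unit angle; reduce it.
            h, w = mono.shape[:2]
            mono = cv2.resize(mono, (round(w / ratio), round(h / ratio)))
        elif ratio < 1.0:
            h, w = color.shape[:2]
            color = cv2.resize(color, (round(w * ratio), round(h * ratio)))
        return mono, color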
In addition, after the first processor performs noise reduction on the second image data, demosaicing processing may be performed on the second image data after noise reduction to obtain sixth image data. And then, carrying out image fusion on the first image data and the sixth image data after noise reduction to obtain third image data.
Step 1007: the first processor sends the third image data to the integrated processor.
The integrated processor integrates a plurality of processors, such as a general ISP, a CPU, and a GPU. The integrated processor may be integrated on an integrated circuit; for example, the integrated processor is integrated on a SOC.
The first processor may be coupled to the integrated processor via an associated interface, and the third image data may be transmitted to the integrated processor via the associated interface. For example, the associated interface may be a Mipi interface.
As an example, the first processor may further perform dynamic range compression on the third image data to obtain fifth image data, and then send the fifth image data to the integrated processor.
Wherein the dynamic range of the fifth image data is lower than the dynamic range of the third image data. For example, the third image data is High Dynamic Range (HDR) image data, and the fifth image data is Low Dynamic Range (LDR) image data.
By performing dynamic range compression on the third image data, the third image data can be compressed from a high-bit-width image to a low-bit-width image, and the local contrast and detail information of the image can be retained.
As one example, the first processor may perform dynamic range compression on the third image data using an AI algorithm. Therefore, the operation efficiency of the first processor can be improved.
For example, the first processor includes an NPU that includes a fourth neural network for performing dynamic range compression on the image data. The first processor may perform dynamic range compression on the third image data through a fourth neural network to obtain fifth image data.
By running the AI dynamic range compression algorithm on the dedicated NPU, the operation speed of the algorithm can be increased, thereby further improving the operation efficiency of the first processor.
As one example, the fourth neural network may employ Tone Mapping (TM) for dynamic range compression of the third image data. For example, the first processor uses the third image data as an input of the fourth neural network, and performs tone mapping on the third image data through the fourth neural network to obtain fifth image data.
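As a simple stand-in for the learned tone mapping, the sketch below applies a global Reinhard-style curve and quantizes the result to 8 bits; the exposure factor is an illustrative parameter, and the fourth neural network of the embodiment would in practice learn a more local, content-dependent mapping.

    import numpy as np

    def tone_map(hdr, exposure=1.0):
        # Global Reinhard-style curve: compresses high-dynamic-range values
        # into [0, 1], then quantizes to an 8-bit (low bit-width) image.
        x = np.clip(hdr, 0.0, None) * exposure
        ldr = x / (1.0 + x)
        return (ldr * 255.0 + 0.5).astype(np.uint8)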
In addition, after the first processor receives the first image data and the second image data, the first image data and the second image data can also be sent to the integrated processor.
For example, the first processor is connected to the integrated processor through an associated interface, and the first image data and the second image data are transmitted to the integrated processor through the associated interface. For example, the associated interface may be a Mipi interface.
Step 1008: and the integrated processor performs image enhancement processing on the third image data to obtain target image data.
The image enhancement processing may include image processing operations such as hardware noise reduction, image cropping, color enhancement, or detail enhancement, or may also include other image processing operations, which is not limited in this embodiment of the present application.
As an example, the integrated processor includes an IPE, and the integrated processor performs image enhancement processing on the third image data through the IPE to obtain target image data.
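A rough Python sketch of two of the enhancement operations named above (detail enhancement via unsharp masking and a mild color boost) is given below; hardware noise reduction and image cropping are omitted, and the gain values are placeholders rather than the IPE's actual settings.

    import cv2
    import numpy as np

    def enhance(rgb_u8):
        # Detail enhancement: unsharp masking (original plus high-pass residue).
        blur = cv2.GaussianBlur(rgb_u8, (0, 0), 2.0)
        sharp = cv2.addWeighted(rgb_u8, 1.5, blur, -0.5, 0)
        # Color enhancement: scale saturation in HSV space.
        hsv = cv2.cvtColor(sharp, cv2.COLOR_RGB2HSV).astype(np.float32)
        hsv[..., 1] = np.clip(hsv[..., 1] * 1.2, 0, 255)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)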
After the target image data is obtained, the integrated processor may save or render the target image data. For example, the integrated processor may convert the target image data from the RGB domain to the YUV domain, and then store or display the target image data in the YUV domain.
In addition, the integrated processor can also receive the first image data and the second image data sent by the first processor. After receiving the first image data and the second image data, the integrated processor can determine a first 3A value according to the first image data, and control the first camera according to the first 3A value; and determining a second 3A value according to the second image data, and controlling the second camera according to the second 3A value.
Wherein the first 3A value and the second 3A value include an AF value, an AE value, and an AWB value. The integrated processor may determine the 3A values by using a 3A algorithm according to the image data, where the 3A algorithm may be preset, and the embodiment of the present application does not limit this.
As one example, the integrated processor may adjust the 3A value of the first camera based on the first 3A value and adjust the 3A value of the second camera based on the second 3A value. Or, the integrated processor may also send the first 3A value to the first camera, and the first camera adjusts its own 3A value according to the first 3A value; and sending the second 3A value to a second camera, and adjusting the 3A value of the second camera according to the second 3A value.
Therefore, automatic exposure, automatic white balance and automatic focusing can be carried out on the first camera and the second camera according to the image information of the first image data and the second image data, and the shooting effect of subsequent images is improved.
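The 3A algorithm itself is not specified here, but the flavor of deriving control values from image statistics can be sketched as follows. The 18% gray target and gray-world white balance are common heuristics used purely as assumptions, and autofocus (which typically relies on contrast or phase statistics) is left out.

    import numpy as np

    def simple_3a_stats(rgb, target_luma=0.18):
        # AE: gain that pushes the mean luminance toward an 18% gray target.
        luma = rgb.mean()
        ae_gain = target_luma / max(float(luma), 1e-6)
        # AWB: gray-world gains that equalize the per-channel means.
        means = rgb.reshape(-1, 3).mean(axis=0)
        awb_gains = means.mean() / np.maximum(means, 1e-6)
        return {"ae_gain": ae_gain, "awb_gains": awb_gains}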
As an example, the electronic device may process image data captured by a camera in a specific capturing mode by using the image processing method provided in the embodiment of the present application. The specific shooting mode may be preset, for example, the specific shooting mode may be a night shooting mode, an indoor shooting mode, a cloudy shooting mode, and the like, which is not limited in the embodiment of the present application.
In the embodiment of the present application, an additional first processor is configured outside the integrated processor, and a dual-camera scheme in which the black-and-white camera and the color camera shoot separately is adopted. Before the image enhancement processing of the integrated processor, the first processor respectively reduces noise in the black-and-white image data collected by the black-and-white camera and the color image data collected by the color camera, performs image fusion on the noise-reduced black-and-white image data and color image data, and then sends the fused image data to the integrated processor, which further performs image enhancement processing on the fused image data. By having the first processor respectively reduce noise in the black-and-white image data collected by the black-and-white camera and the color image data collected by the color camera, black-and-white image data and color image data with a high signal-to-noise ratio can be obtained, and the definition of the image data is improved. By performing image fusion on the noise-reduced black-and-white image data and color image data, the brightness information and detail information of the noise-reduced black-and-white image data and the color information of the noise-reduced color image data can be fused, yielding image data with a high signal-to-noise ratio, clear details, and accurate colors. In this way, the definition, color, and brightness of the image can be enhanced in an all-round manner, so that an image with higher definition, stronger color reproduction, and relatively uniform brightness is obtained, which improves the shooting effect of the camera, especially in shooting scenes with weak light such as night scenes, indoor scenes, and cloudy scenes.
Next, with reference to fig. 4, a video shooting scene is taken as an example, and the image processing method provided in the embodiment of the present application is described in detail.
Fig. 11 is a flowchart illustrating another image processing method provided in the embodiment of the present application, and the method is applied to the mobile phone 100 shown in fig. 4. As shown in fig. 11, the method includes the steps of:
step 1101: the camera 10 captures a video frame 1, where the video frame 1 is a black and white video frame.
The camera 10 is a camera for taking black-and-white images. The camera 20 is a camera for taking color images.
Step 1102: the camera 10 sends the video frame 1 to the routing module 34.
For example, the camera 10 sends the video frame 1 to the routing module 34 through the Mipi 0.
Step 1103: the camera 20 captures a video frame 2, and the video frame 2 is a color video frame.
As one example, video frame 1 and video frame 2 are video frames captured at the same time.
Step 1104: camera 20 sends video frame 2 to routing module 34.
For example, the camera 20 sends the video frame 2 to the routing module 34 through Mipi 0.
As an example, the camera application may invoke the camera 10 and the camera 20 at the same time after receiving the video shooting instruction, and simultaneously capture a black-and-white video frame and a color video frame in a dual-shooting manner.
It should be noted that the video frame 1 and the video frame 2 are both original video data acquired by the camera, the video frame 1 is a single-channel video frame, and the video frame 2 is a video frame in a RAW domain.
Step 1105: the routing module 34 copies the video frame 1 to obtain a video frame 3, and copies the video frame 2 to obtain a video frame 4.
Step 1106: routing module 34 sends video frame 1 to IFE35 and video frame 2 to IFE36.
Step 1107: the routing module 34 sends video frames 3 and 4 to the 3A module 43.
For example, the routing module 34 may send video frames 3 and 4 to the 3A module 43 over the same or different interfaces. For example, video frame 3 is sent to 3A module 43 through Mipi1, and video frame 4 is sent to 3A module 43 through Mipi2.
Step 1108: the IFE35 preprocesses the video frame 1 to obtain a preprocessed video frame 1.
Wherein the preprocessing is used for correcting the image data, such as preprocessing including one or more of black and white level correction, dead pixel correction, lens shading correction, and automatic white balance. It should be understood that the preprocessing may also include other image processing operations, which are not limited in this application.
Step 1109: the IFE35 sends the preprocessed video frame 1 to the AI noise reduction module 31.
Step 1110: the IFE36 preprocesses the video frame 2 to obtain a preprocessed video frame 2.
It should be noted that the IFE35 and the IFE36 may be the same IFE or different IFEs, which is not limited in the embodiment of the present application.
Step 1111: the IFE36 sends the preprocessed video frame 2 to the AI noise reduction module 32.
Step 1112: the AI denoising module 31 denoises the preprocessed video frame 1 to obtain a denoised video frame 1.
The AI denoising module 31 may denoise the preprocessed video frame 1 by using an AI denoising algorithm.
For example, the AI noise reduction module 31 includes a first neural network, and the first neural network is used for reducing noise of black-and-white image data. The AI denoising module 31 performs denoising on the preprocessed video frame 1 through the first neural network to obtain a denoised video frame 1.
Illustratively, the first neural network may be the neural network 1 shown in fig. 5. The video frame 1 and the noise reduction result of the previous video frame of the video frame 1 may be used as the input of the first neural network, and the noise reduction result of the video frame 1, that is, the noise-reduced video frame 1, is output through the first neural network.
Step 1113: the AI denoising module 31 sends the denoised video frame 1 to the AI image fusion module 33.
Step 1114: the AI denoising module 32 denoises the preprocessed video frame 2 to obtain a denoised video frame 2.
The AI noise reduction module 32 may reduce the noise of the preprocessed video frame 2 by using an AI noise reduction algorithm.
For example, AI noise reduction module 32 includes a second neural network for noise reducing the color image data. The AI denoising module 32 performs denoising on the preprocessed video frame 2 through the second neural network to obtain a denoised video frame 2.
For example, the second neural network may be the neural network 1 shown in fig. 5. The video frame 2 and the noise reduction result of the previous video frame of the video frame 2 can be used as the input of the second neural network, and the noise reduction result of the video frame 2, that is, the noise-reduced video frame 2, is output through the second neural network.
By performing noise reduction on the video frame 1 and the video frame 2, the noise in the video frame 1 and the video frame 2 can be reduced, the video frame 1 and the video frame 2 with a high signal-to-noise ratio can be obtained, and the definition of the video frames is improved.
Step 1115: AI denoising module 32 sends denoised video frame 2 to demosaicing module 37.
Step 1116: the demosaicing module 37 performs demosaicing processing on the video frame 2 subjected to noise reduction, so as to obtain the video frame 2 subjected to demosaicing processing.
By demosaicing the de-noised video frame 2, the de-noised video frame 2 can be converted from the RAW domain to the RGB domain, and the video frame 2 with high signal-to-noise ratio in the RGB domain is obtained.
Step 1117: the demosaicing module 37 sends the demosaiced video frame 2 to the AI image fusion module 33.
Step 1118: the AI image fusion module 33 performs image fusion on the video frame 1 after noise reduction and the video frame 2 after demosaicing processing to obtain a fused video frame.
The fused video frame is a color video frame in the RGB domain.
The AI image fusion module 33 may perform image fusion on the de-noised video frame 1 and the demosaiced video frame 2 by using AI image fusion.
For example, the AI image fusion module 33 includes a third neural network, and the third neural network is used for performing image fusion on the black-and-white image data and the color image data. The AI image fusion module 33 performs image fusion on the video frame 1 after noise reduction and the video frame 2 after demosaicing processing through a third neural network to obtain a fused video frame.
Illustratively, the third neural network may be the neural network 2 shown in fig. 6. The video frame 1 after noise reduction and the video frame 2 after demosaicing processing can be subjected to scale alignment, then the two video frames after scale alignment are used as the input of a third neural network, and the fused video frame is output through the third neural network.
By carrying out image fusion on the video frame 1 subjected to noise reduction and the video frame 2 subjected to demosaicing processing, the brightness information and the detail information of the video frame 1 subjected to noise reduction and the color information of the video frame 2 subjected to demosaicing processing can be fused, and a fused video frame with high signal-to-noise ratio, clear details and accurate colors is obtained.
Step 1119: the AI image fusion module 33 sends the fused video frames to the AI dynamic range compression module 38.
Step 1120: the AI dynamic range compression module 38 performs dynamic range compression on the fused video frame to obtain a fused video frame after dynamic range compression.
And the dynamic range of the fused video frame after the dynamic range compression is lower than that of the fused video frame. For example, the fused video frame is an HDR video frame, and the fused video frame after dynamic range compression is an LDR video frame.
The AI dynamic range compression module 38 may perform dynamic range compression on the fused video frame by using AI dynamic range compression.
For example, the AI dynamic range compression module 38 includes a fourth neural network for performing dynamic range compression on the image data. The AI dynamic range compression module 38 performs dynamic range compression on the fused video frame through the fourth neural network to obtain a fused video frame after dynamic range compression.
Illustratively, the fourth neural network may be the neural network 3 shown in fig. 7. The fused video frame may be used as an input of the fourth neural network, and the fused video frame after dynamic range compression may be output through the fourth neural network.
By compressing the dynamic range of the fused video frame, the fused video frame can be compressed from a high-bit-width image to a low-bit-width image, and the local contrast and detail information of the image can be reserved.
Step 1121: the AI dynamic range compression module 38 sends the fused video frames after dynamic range compression to the IPE41.
For example, the AI dynamic range compression module 38 may send the fused video frame after dynamic range compression to the IPE41 through Mipi 0.
Step 1122: and the IPE41 performs image enhancement processing on the fused video frame after the dynamic range compression to obtain a target video frame.
The image enhancement processing may include image processing operations such as hardware noise reduction, image cropping, color enhancement, or detail enhancement, or may also include other image processing operations, which is not limited in this embodiment of the present application.
Step 1123: the IPE41 stores or displays the target video frame.
As an example, the IPE41 may convert the target video frame from RGB domain to YUV domain, and then save or display the target video frame in YUV domain.
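The RGB-to-YUV conversion mentioned here is a fixed linear transform; a minimal sketch with BT.601 full-range coefficients is shown below. The exact matrix used by the IPE is not stated in the embodiment, so these coefficients are an assumption.

    import numpy as np

    def rgb_to_yuv(rgb):
        # BT.601 full-range RGB -> YUV, applied per pixel.
        m = np.array([[ 0.299,  0.587,  0.114],
                      [-0.169, -0.331,  0.500],
                      [ 0.500, -0.419, -0.081]], dtype=np.float32)
        return rgb.astype(np.float32) @ m.T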
Step 1124: the 3A module 43 determines a first 3A value from the video frame 3 using a 3A algorithm and a second 3A value from the video frame 4 using a 3A algorithm.
Wherein the 3A value includes an AF value, an AE value, and an AWB value.
Step 1125: the 3A module 43 sends the first 3A value to the camera 10 and the second 3A value to the camera 20.
Step 1126: the camera 10 adjusts its 3A value according to the first 3A value.
Step 1127: the camera 20 adjusts its 3A value according to the second 3A value.
Therefore, automatic exposure, automatic white balance and automatic focusing can be carried out on the camera 10 and the camera 20 according to the original video frames, and the shooting effect of subsequent videos is improved.
It should be noted that the AI ISP30 includes an NPU, and the AI noise reduction module 31, the AI noise reduction module 32, the AI image fusion module 33, and the AI dynamic range compression module 38 can all run on the dedicated NPU, so that the operation speed of the algorithms can be increased, thereby improving the operation efficiency of the AI ISP30.
In the embodiment of the present application, by configuring the AI ISP30 outside the ISP of the SOC40 and adopting a dual-camera scheme in which the black-and-white camera and the color camera shoot separately, before the image enhancement processing of the SOC40, the AI ISP30 can respectively reduce noise in the black-and-white image data acquired by the black-and-white camera and the color image data acquired by the color camera, perform image fusion on the noise-reduced black-and-white image data and color image data, and then send the fused image data to the integrated processor, which further performs image enhancement processing. By having the AI ISP30 respectively reduce noise in the black-and-white image data acquired by the black-and-white camera and the color image data acquired by the color camera, black-and-white image data and color image data with a high signal-to-noise ratio can be obtained, and the definition of the image data is improved. By performing image fusion on the noise-reduced black-and-white image data and color image data, the brightness information and detail information of the noise-reduced black-and-white image data and the color information of the noise-reduced color image data can be fused to obtain image data with a high signal-to-noise ratio, clear details, and accurate colors. In this way, the definition, color, and brightness of the image can be enhanced in an all-round manner, so that an image with higher definition, stronger color reproduction, and relatively uniform brightness is obtained, which improves the shooting effect of the camera, especially in shooting scenes with weak light such as night scenes, indoor scenes, and cloudy scenes.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may be wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that includes one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., digital versatile disc (DVD)), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The above description is provided for the alternative embodiments of the present application and not intended to limit the present application, and any modification, equivalent replacement, improvement, etc. made within the technical scope of the present application should be included in the protection scope of the present application.

Claims (17)

1. An image processing method, applied to an electronic device, wherein the electronic device comprises a first camera, a second camera, a first processor and an integrated processor, and the method comprises the following steps:
the first processor acquires first image data acquired by the first camera and second image data acquired by the second camera, wherein the first image data is black-and-white image data, and the second image data is color image data;
the first processor respectively performs noise reduction on the first image data and the second image data, performs image fusion on the noise-reduced first image data and the noise-reduced second image data to obtain third image data, and sends the third image data to the integrated processor;
and the integrated processor performs image enhancement processing on the third image data to obtain target image data.
2. The method of claim 1, wherein prior to the first processor denoising the first image data and the second image data, respectively, further comprising:
the first processor respectively preprocesses the first image data and the second image data, wherein the preprocessing comprises one or more of black-and-white level correction, dead pixel correction, lens shading correction and automatic white balance;
the first processor performs noise reduction on the first image data and the second image data, respectively, and includes:
the first processor performs noise reduction on the preprocessed first image data to obtain the noise-reduced first image data;
and the first processor performs noise reduction on the preprocessed second image data to obtain the noise-reduced second image data.
3. The method of claim 1 or 2, wherein the first processor comprises a first image front end IFE and a second IFE, the first processor pre-processing the first image data and the second image data, respectively, comprising:
the first processor preprocesses the first image data through the first IFE to obtain the preprocessed first image data;
and the first processor preprocesses the second image data through the second IFE to obtain the preprocessed second image data.
4. The method of any of claims 1-3, wherein the first processor comprises a neural network processing unit (NPU), the NPU comprising a first neural network and a second neural network, the first neural network to denoise black and white image data, the second neural network to denoise color image data;
the first processor performs noise reduction on the first image data and the second image data, respectively, and includes:
the first processor performs noise reduction on the first image data through the first neural network to obtain the noise-reduced first image data;
and the first processor performs noise reduction on the second image data through the second neural network to obtain the second image data subjected to noise reduction.
5. The method of claim 4, wherein the first image data is first video frame data captured by the first camera and the second image data is second video frame data captured by the second camera;
the first processor performs noise reduction on the first image data through the first neural network to obtain the noise-reduced first image data, and the noise-reduced first image data includes:
the first processor takes the first video frame data and the third video frame data as input of the first neural network, and processes the first video frame data and the third video frame data through the first neural network to obtain the first image data after noise reduction, wherein the third video frame data is obtained after noise reduction is performed on video frame data collected by the first camera before the first video frame data;
the first processor performs noise reduction on the second image data through the second neural network to obtain the noise-reduced second image data, and the method includes:
the first processor takes the second video frame data and the fourth video frame data as the input of the second neural network, and processes the second video frame data and the fourth video frame data through the second neural network to obtain the second image data after noise reduction, wherein the fourth video frame data is obtained after the noise reduction is performed on the video frame data collected by the second camera before the second video frame data.
6. The method of any of claims 1-5, wherein the first processor comprises an NPU, the NPU comprising a third neural network for image fusion of black and white image data and color image data;
the first processor performs image fusion on the first image data subjected to noise reduction and the second image data subjected to noise reduction to obtain third image data, and the method comprises the following steps:
and the first processor performs image fusion on the noise-reduced first image data and the noise-reduced second image data through the third neural network to obtain third image data.
7. The method of claim 6, wherein prior to the image fusing, by the first processor, the denoised first image data and the denoised second image data via the third neural network, further comprising:
the first processor performs scale alignment on the first image data subjected to noise reduction and the second image data subjected to noise reduction to obtain first image data and second image data subjected to scale alignment, wherein the scales of the first image data and the second image data subjected to scale alignment are the same;
the first processor performs image fusion on the noise-reduced first image data and the noise-reduced second image data through the third neural network to obtain third image data, and the image fusion method includes:
and the first processor takes the first image data and the second image data after the scale alignment as the input of the third neural network, and processes the first image data and the second image data after the scale alignment through the third neural network to obtain the third image data.
8. The method of claim 7, wherein the first processor performs scale alignment on the noise-reduced first image data and the noise-reduced second image data to obtain the scale-aligned first image data and second image data, and comprises:
the first processor acquires key parameters of the first camera and the second camera, wherein the key parameters comprise one or more of a focal length, a pixel size and a field angle;
the first processor determines the scale difference between the first image data subjected to noise reduction and the second image data subjected to noise reduction according to the key parameters of the first camera and the second camera;
and the first processor performs scale alignment on the first image data subjected to noise reduction and the second image data subjected to noise reduction according to the scale difference to obtain the first image data and the second image data subjected to scale alignment.
9. The method of any of claims 1-8, wherein the first processor sending the third image data to the integrated processor comprises:
the first processor performs dynamic range compression on the third image data to obtain fifth image data, wherein the dynamic range of the fifth image data is lower than that of the third image data;
the first processor sending the fifth image data to the integrated processor;
the integrated processor performs image enhancement processing on the third image data to obtain target image data, and the image enhancement processing includes:
and the integrated processor performs image enhancement processing on the fifth image data to obtain the target image data.
10. The method of claim 9, wherein the first processor comprises an NPU comprising a fourth neural network for dynamic range compression of image data;
the first processor performs dynamic range compression on the third image data to obtain fifth image data, and the method includes:
and the first processor performs dynamic range compression on the third image data through the fourth neural network to obtain fifth image data.
11. The method of claim 10, wherein the first processor performs dynamic range compression on the third image data through the fourth neural network to obtain the fifth image data, comprising:
and the first processor takes the third image data as the input of the fourth neural network, and performs tone mapping on the third image data through the fourth neural network to obtain fifth image data.
12. The method of any of claims 1-11, wherein after the first processor performs noise reduction on the second image data, further comprising:
the first processor performs demosaicing processing on the second image data subjected to noise reduction to obtain sixth image data;
the first processor performs image fusion on the first image data subjected to noise reduction and the second image data subjected to noise reduction to obtain third image data, and the image fusion method comprises the following steps:
and the first processor performs image fusion on the first image data subjected to noise reduction and the sixth image data to obtain third image data.
13. The method of any one of claims 1-12, wherein the integrated processor comprises an Image Processing Engine (IPE), and wherein the integrated processor performs image enhancement processing on the third image data to obtain target image data, comprising:
and the integrated processor performs image enhancement processing on the third image data through the IPE to obtain the target image data.
14. The method of any of claims 1-13, wherein after the first processor acquires first image data acquired by the first camera and second image data acquired by the second camera, further comprising:
the first processor sending the first image data and the second image data to the integrated processor;
the integrated processor determining a first 3A value from the first image data and a second 3A value from the second image data, the first 3A value and the second 3A value comprising an auto focus AF value, an auto exposure AE value, and an auto white balance AWB value;
and the integrated processor controls the first camera according to the first 3A value and controls the second camera according to the second 3A value.
15. The method of any of claims 1-14, wherein the first processor is an image signal processor, ISP, comprising an NPU.
16. An electronic device, comprising a first camera for capturing black and white image data, a second camera for capturing color image data, a memory, a first processor, an integrated processor, a first computer program stored in the memory and executable on the first processor, and a second computer program stored in the memory and executable on the integrated processor, wherein the first computer program, when executed by the first processor, implements the method as performed by the first processor in any one of claims 1-15, and the second computer program, when executed by the integrated processor, implements the method as performed by the integrated processor in any one of claims 1-15.
17. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-15.
CN202210912803.9A 2022-07-31 2022-07-31 Image processing method, device and storage medium Active CN115460343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210912803.9A CN115460343B (en) 2022-07-31 2022-07-31 Image processing method, device and storage medium


Publications (2)

Publication Number Publication Date
CN115460343A true CN115460343A (en) 2022-12-09
CN115460343B CN115460343B (en) 2023-06-13

Family

ID=84297510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210912803.9A Active CN115460343B (en) 2022-07-31 2022-07-31 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN115460343B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002301447B2 (en) * 2001-10-12 2005-04-14 Canon Kabushiki Kaisha Interactive Animation of Sprites in a Video Production
JP2009157647A (en) * 2007-12-26 2009-07-16 Sony Corp Image processing circuit, imaging apparatus, method and program
CN102881004A (en) * 2012-08-31 2013-01-16 电子科技大学 Digital image enhancement method based on optic nerve network
WO2017090837A1 (en) * 2015-11-24 2017-06-01 Samsung Electronics Co., Ltd. Digital photographing apparatus and method of operating the same
CN106878605A (en) * 2015-12-10 2017-06-20 北京奇虎科技有限公司 The method and electronic equipment of a kind of image generation based on electronic equipment
CN107147837A (en) * 2017-06-30 2017-09-08 维沃移动通信有限公司 The method to set up and mobile terminal of a kind of acquisition parameters
WO2018082165A1 (en) * 2016-11-03 2018-05-11 华为技术有限公司 Optical imaging method and apparatus
US10511908B1 (en) * 2019-03-11 2019-12-17 Adobe Inc. Audio denoising and normalization using image transforming neural network
CN111586312A (en) * 2020-05-14 2020-08-25 Oppo(重庆)智能科技有限公司 Automatic exposure control method and device, terminal and storage medium
WO2020207262A1 (en) * 2019-04-09 2020-10-15 Oppo广东移动通信有限公司 Image processing method and apparatus based on multiple frames of images, and electronic device
CN112217962A (en) * 2019-07-10 2021-01-12 杭州海康威视数字技术股份有限公司 Camera and image generation method
CN113810600A (en) * 2021-08-12 2021-12-17 荣耀终端有限公司 Terminal image processing method and device and terminal equipment
CN113962884A (en) * 2021-10-10 2022-01-21 杭州知存智能科技有限公司 HDR video acquisition method and device, electronic equipment and storage medium
CN114693569A (en) * 2020-12-25 2022-07-01 华为技术有限公司 Method for fusing videos of two cameras and electronic equipment
CN114693857A (en) * 2020-12-30 2022-07-01 华为技术有限公司 Ray tracing multi-frame noise reduction method, electronic equipment, chip and readable storage medium


Also Published As

Publication number Publication date
CN115460343B (en) 2023-06-13

Similar Documents

Publication Publication Date Title
CN109559270B (en) Image processing method and electronic equipment
CN115473957B (en) Image processing method and electronic equipment
US11949978B2 (en) Image content removal method and related apparatus
CN113194242B (en) Shooting method in long-focus scene and mobile terminal
CN112532892B (en) Image processing method and electronic device
CN113810603B (en) Point light source image detection method and electronic equipment
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
CN113630558B (en) Camera exposure method and electronic equipment
CN113452898A (en) Photographing method and device
CN114095666B (en) Photographing method, electronic device, and computer-readable storage medium
CN113891009B (en) Exposure adjusting method and related equipment
CN117278850A (en) Shooting method and electronic equipment
WO2023056795A1 (en) Quick photographing method, electronic device, and computer readable storage medium
CN113891008B (en) Exposure intensity adjusting method and related equipment
CN115686182B (en) Processing method of augmented reality video and electronic equipment
WO2021204103A1 (en) Picture preview method, electronic device, and storage medium
CN115359105A (en) Depth-of-field extended image generation method, depth-of-field extended image generation device, and storage medium
CN115460343B (en) Image processing method, device and storage medium
CN115550556A (en) Exposure intensity adjusting method and related device
CN115802144B (en) Video shooting method and related equipment
CN116048323B (en) Image processing method and electronic equipment
US20240137659A1 (en) Point light source image detection method and electronic device
WO2024078275A1 (en) Image processing method and apparatus, electronic device and storage medium
CN116723382B (en) Shooting method and related equipment
CN116347217A (en) Image processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant