CN115460343B - Image processing method, device and storage medium

Info

Publication number: CN115460343B (application number CN202210912803.9A)
Authority: CN (China)
Prior art keywords: image data, processor, image, camera, video frame
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN115460343A
Inventors: 李子荣, 殷仕帆, 刘琰培
Current assignee: Honor Device Co Ltd
Original assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority: CN202210912803.9A
Publication of application: CN115460343A; publication of granted patent: CN115460343B

Classifications

    • G06T 5/70
    • G06T 5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • H04M 1/72439 — User interfaces specially adapted for cordless or mobile telephones, with means for local support of applications that increase the functionality, with interactive means for internal management of messages, for image or video messaging
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/10024 — Image acquisition modality: color image
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/20221 — Special algorithmic details: image combination; image fusion; image merging
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses an image processing method, an image processing device, and a storage medium, belonging to the technical field of image processing. The method comprises the following steps: a first processor acquires first image data collected by a first camera and second image data collected by a second camera, where the first image data is black-and-white image data and the second image data is color image data; the first processor denoises the first image data and the second image data separately, performs image fusion on the denoised first image data and the denoised second image data to obtain third image data, and sends the third image data to an integrated processor; and the integrated processor performs image enhancement processing on the third image data to obtain target image data. The method and device can enhance the sharpness, color, and brightness of images captured by the cameras, yielding images with higher sharpness, better color reproduction, and more uniform brightness, thereby improving the shooting effect; they are particularly suitable for night shooting scenes.

Description

Image processing method, device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, and storage medium.
Background
With the development of technology, the shooting capability of a mobile phone has become an important performance indicator. Night shooting is a common scenario in which a user shoots with a mobile phone. Because the environment is dark when a mobile phone shoots in a night scene, the photos or videos it captures are dark, and the user's visual experience is poor.
In the related art, a mobile phone is configured with a camera and a general-purpose integrated processor, and the integrated processor integrates multiple processors such as a CPU and a GPU. For example, the general-purpose integrated processor may be a system on chip (SOC). To improve the night shooting effect, after the mobile phone shoots through the camera, the integrated processor can acquire the color image data collected by the camera, perform image enhancement processing on the color image data to obtain target image data, and then store or display the target image data.
However, because the general-purpose integrated processor has limited image enhancement capability, when it performs image enhancement processing on image data collected by the camera in a night shooting scene, the resulting target image data is noisy, its color reproduction is poor, and over-bright and over-dark areas readily appear.
Disclosure of Invention
The application provides an image processing method, an image processing device, and a storage medium, which can enhance the sharpness, color, and brightness of images captured by a camera, yielding images with higher sharpness, better color reproduction, and more uniform brightness. The technical scheme is as follows:
in a first aspect, an image processing method is provided, applied to an electronic device, where the electronic device includes a first camera, a second camera, a first processor and an integrated processor, the first camera is a black-and-white camera, and the second camera is a color camera, and the method includes:
the first processor acquires the black-and-white image data collected by the first camera and the color image data collected by the second camera, denoises the black-and-white image data and the color image data separately, performs image fusion on the denoised black-and-white image data and the denoised color image data to obtain fused image data, and sends the fused image data to the integrated processor. The integrated processor performs image enhancement processing on the fused image data to obtain target image data.
In this embodiment, an independent first processor is configured in addition to the integrated processor, and a dual-camera scheme using a black-and-white camera and a color camera is adopted. The first processor denoises the black-and-white image data collected by the black-and-white camera and the color image data collected by the color camera separately, so that black-and-white and color image data with a high signal-to-noise ratio can be obtained and image sharpness improved. By fusing the denoised black-and-white image data with the denoised color image data, the brightness and detail information of the black-and-white data can be combined with the color information of the color data, producing image data with a high signal-to-noise ratio, clear detail, and accurate color. The sharpness, color, and brightness of the image are therefore enhanced comprehensively, yielding images with higher sharpness, better color reproduction, and more uniform brightness, which improves the camera's shooting effect, especially in low-light scenes such as night, indoor, and overcast scenes.
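As a rough, non-authoritative illustration of what this fusion step combines (the patent itself performs fusion with an NPU neural network, described below), the following numpy sketch transfers the luminance of a denoised black-and-white frame onto a denoised color frame while keeping the color frame's chrominance; the frame sizes and data are placeholders, not values from the patent.

```python
import numpy as np

def fuse_mono_color(mono, color_rgb, eps=1e-6):
    """Keep the colour frame's chrominance but take brightness/detail from the
    cleaner black-and-white frame (simple per-pixel luminance transfer)."""
    luma = (0.299 * color_rgb[..., 0]
            + 0.587 * color_rgb[..., 1]
            + 0.114 * color_rgb[..., 2])
    gain = (mono + eps) / (luma + eps)
    return np.clip(color_rgb * gain[..., None], 0.0, 1.0)

mono = np.random.rand(480, 640).astype(np.float32)       # stand-in denoised B&W frame
color = np.random.rand(480, 640, 3).astype(np.float32)   # stand-in denoised colour frame
fused = fuse_mono_color(mono, color)
```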
In one possible embodiment, the first processor first preprocesses the black-and-white image data and the color image data; it then denoises the preprocessed black-and-white image data to obtain the denoised black-and-white image data, and denoises the preprocessed color image data to obtain the denoised color image data.
The preprocessing includes one or more of black level correction, dead pixel correction, lens shading correction, and automatic white balance. By preprocessing the black-and-white image data and the color image data separately, both sets of image data are corrected, and image data with more accurate image information is obtained.
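For illustration only, the following numpy sketch shows two of the listed corrections, black level correction and a simple gray-world automatic white balance; the pedestal value and bit depth are assumed, not parameters from the patent.

```python
import numpy as np

def black_level_correction(raw, black_level=64.0, white_level=1023.0):
    """Subtract the sensor's black pedestal and renormalise (10-bit RAW assumed)."""
    corrected = np.clip(raw.astype(np.float32) - black_level, 0.0, None)
    return corrected / (white_level - black_level)

def gray_world_awb(rgb, eps=1e-6):
    """Gray-world automatic white balance: scale each channel so that all
    channel means match the overall mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / (means + eps)
    return np.clip(rgb * gains, 0.0, 1.0)

raw = np.random.randint(0, 1024, (480, 640)).astype(np.float32)  # fake 10-bit RAW frame
normalised = black_level_correction(raw)
balanced = gray_world_awb(np.random.rand(480, 640, 3).astype(np.float32))
```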
In one possible embodiment, the first processor includes a first image front end (IFE) and a second IFE. The first processor preprocesses the black-and-white image data through the first IFE to obtain the preprocessed black-and-white image data, and preprocesses the color image data through the second IFE to obtain the preprocessed color image data.
As one example, the first processor may employ an AI algorithm to denoise the black-and-white image data and the color image data respectively, which improves the operating efficiency of the first processor.
In one possible embodiment, the first processor includes an NPU including a first neural network for denoising black and white image data and a second neural network for denoising color image data. The first processor performs noise reduction on the black-and-white image data through a first neural network to obtain noise-reduced black-and-white image data; and denoising the color image data through a second neural network to obtain the denoised color image data.
By running the AI noise reduction algorithm on a dedicated NPU, the algorithm runs faster, further improving the operating efficiency of the first processor.
In one possible embodiment, the black-and-white image data is first video frame data collected by the first camera, and the color image data is second video frame data collected by the second camera. The first processor takes the first video frame data and third video frame data as inputs of the first neural network and processes them through the first neural network to obtain the denoised black-and-white image data, where the third video frame data is the denoised result of the video frame collected by the first camera before the first video frame data. Similarly, the first processor takes the second video frame data and fourth video frame data as inputs of the second neural network and processes them through the second neural network to obtain the denoised color image data, where the fourth video frame data is the denoised result of the video frame collected by the second camera before the second video frame data.
Determining the denoising result of the current frame from the current frame and the denoising result of the previous frame improves denoising accuracy.
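A toy PyTorch sketch of this recursive two-frame structure is shown below; the layer sizes are arbitrary and the network is untrained, so it only illustrates how the previous denoised frame is fed back in alongside the current frame, not the patent's actual NPU networks.

```python
import torch
import torch.nn as nn

class TemporalDenoiser(nn.Module):
    """Toy stand-in for the first/second neural network: takes the current noisy
    frame and the previous denoised frame, predicts the denoised current frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, curr, prev_denoised):
        x = torch.cat([curr, prev_denoised], dim=1)   # (N, 2, H, W)
        return self.net(x)

model = TemporalDenoiser()
prev = torch.zeros(1, 1, 64, 64)            # no history for the first frame
for _ in range(3):                           # denoised frame N-1 feeds frame N
    curr = torch.rand(1, 1, 64, 64)          # stand-in for a captured frame
    prev = model(curr, prev)
```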
As one example, the first processor may employ an AI algorithm to fuse the denoised black-and-white image data and the denoised color image data, which improves the operating efficiency of the first processor.
In one possible embodiment, the first processor includes an NPU including a third neural network for image fusion of black and white image data and color image data; the first processor performs image fusion on the noise-reduced black-and-white image data and the noise-reduced color image data through a third neural network to obtain third image data.
By running the AI image fusion algorithm on a dedicated NPU, the algorithm runs faster, further improving the operating efficiency of the first processor.
In one possible embodiment, the first processor performs scale alignment on the denoised black-and-white image data and the denoised color image data so that the two have the same scale, then takes the scale-aligned black-and-white image data and color image data as inputs of the third neural network and processes them through the third neural network to obtain the third image data.
In one possible embodiment, the first processor obtains key parameters of the first camera and the second camera, the key parameters including one or more of focal length, pixel size, and field of view; determines the scale difference between the denoised black-and-white image data and the denoised color image data according to these key parameters; and performs scale alignment on the two according to the scale difference to obtain the scale-aligned black-and-white image data and color image data.
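The sketch below shows one plausible way to derive a scale factor from focal length and pixel size and then resize and crop one frame onto the other's scale; the camera parameters are invented for illustration, and the crop assumes the computed scale is at least 1.

```python
import cv2
import numpy as np

def scale_difference(focal_mm_a, pixel_um_a, focal_mm_b, pixel_um_b):
    """Relative image scale between two cameras viewing the same scene:
    proportional to focal length divided by pixel pitch."""
    return (focal_mm_a / pixel_um_a) / (focal_mm_b / pixel_um_b)

def align_to(reference, image, scale):
    """Resize `image` by `scale` and centre-crop to the reference size
    (assumes scale >= 1 for simplicity)."""
    h, w = image.shape[:2]
    resized = cv2.resize(image, (int(round(w * scale)), int(round(h * scale))))
    rh, rw = reference.shape[:2]
    y0 = (resized.shape[0] - rh) // 2
    x0 = (resized.shape[1] - rw) // 2
    return resized[y0:y0 + rh, x0:x0 + rw]

mono = np.random.rand(1080, 1440).astype(np.float32)      # illustrative denoised frames
color = np.random.rand(1080, 1440, 3).astype(np.float32)
s = scale_difference(focal_mm_a=2.4, pixel_um_a=1.0, focal_mm_b=2.2, pixel_um_b=1.0)
mono_aligned = align_to(color, mono, s)                    # same height/width as color
```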
In one possible embodiment, the first processor may perform dynamic range compression on the third image data to obtain fifth image data whose dynamic range is lower than that of the third image data, and send the fifth image data to the integrated processor, so that the integrated processor performs image enhancement processing on the fifth image data to obtain the target image data.
As one example, the first processor may employ an AI algorithm to perform the dynamic range compression, which improves the operating efficiency of the first processor.
In one possible embodiment, the first processor includes an NPU including a fourth neural network for dynamic range compression of the image data; the first processor performs dynamic range compression on the third image data through a fourth neural network to obtain fifth image data.
By running the AI dynamic range compression algorithm on a dedicated NPU, the algorithm runs faster, further improving the operating efficiency of the first processor.
In one possible embodiment, the first processor takes the third image data as an input of a fourth neural network, and performs tone mapping on the third image data through the fourth neural network to obtain fifth image data.
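As an illustration of what tone-mapping-based dynamic range compression does (the patent performs it with the fourth neural network on the NPU), here is a minimal global Reinhard-style operator in numpy; it is not the patent's algorithm.

```python
import numpy as np

def reinhard_tone_map(hdr, exposure=1.0):
    """Global Reinhard-style tone mapping: compresses arbitrarily large linear
    values into [0, 1) while preserving relative contrast in dark regions."""
    scaled = hdr * exposure
    return scaled / (1.0 + scaled)

hdr = np.random.rand(480, 640, 3).astype(np.float32) * 16.0   # values above 1.0 = HDR
ldr = reinhard_tone_map(hdr)                                   # low dynamic range output
```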
In one possible embodiment, after denoising the color image data, the first processor demosaics the denoised color image data to obtain sixth image data, and then fuses the denoised black-and-white image data with the sixth image data to obtain the third image data.
Demosaicing converts the denoised color image data from the RAW domain to the RGB domain.
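For intuition about the RAW-to-RGB conversion, the following deliberately crude numpy sketch demosaics an RGGB Bayer mosaic at half resolution by collapsing each 2x2 quad into one RGB pixel; real demosaicing, including whatever the patent's module does, interpolates at full resolution.

```python
import numpy as np

def demosaic_half_res(bayer):
    """Naive RGGB demosaic: each 2x2 quad becomes one RGB pixel
    (the two green samples are averaged), so the output is half resolution."""
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
    b = bayer[1::2, 1::2]
    return np.dstack([r, g, b])

bayer = np.random.rand(480, 640).astype(np.float32)   # stand-in RAW mosaic
rgb = demosaic_half_res(bayer)                         # shape (240, 320, 3)
```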
In one possible embodiment, the integrated processor includes an image processing engine (IPE), and the integrated processor performs image enhancement processing on the third image data through the IPE to obtain the target image data.
The image enhancement processing may include operations such as hardware noise reduction, image cropping, color enhancement, or detail enhancement, and may of course also include other image processing operations; this is not limited in the embodiments of the present application.
In one possible embodiment, after the first processor acquires the black-and-white image data collected by the first camera and the color image data collected by the second camera, it may also send the black-and-white image data and the color image data to the integrated processor. The integrated processor determines a first 3A value from the black-and-white image data and a second 3A value from the color image data, controls the first camera according to the first 3A value, and controls the second camera according to the second 3A value.
The first 3A value and the second 3A value each include an auto focus (AF) value, an auto exposure (AE) value, and an auto white balance (AWB) value. The integrated processor may determine the 3A values from the image data using a 3A algorithm, which may be preset; this is not limited in the embodiments of the present application.
As one example, the integrated processor may adjust the 3A parameters of the first camera based on the first 3A value and adjust the 3A parameters of the second camera based on the second 3A value. Alternatively, the integrated processor may send the first 3A value to the first camera, which adjusts its own 3A parameters accordingly, and send the second 3A value to the second camera, which adjusts its own 3A parameters accordingly.
In this way, the first camera and the second camera can perform automatic exposure, automatic white balance, and automatic focus according to the image information of the first image data and the second image data, improving the shooting effect of subsequent images.
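As a hedged illustration of the AE part of the 3A loop (the AF and AWB parts are analogous but omitted), the sketch below measures the mean luminance of a frame and nudges the exposure time toward a target brightness; the target value and step limit are invented, not taken from the patent's 3A algorithm.

```python
import numpy as np

def next_exposure(frame, current_exposure_ms, target_mean=0.45, max_step=2.0):
    """One iteration of a very simple auto-exposure loop: scale the exposure
    time so the mean frame brightness moves toward `target_mean`."""
    mean = float(frame.mean()) + 1e-6
    ratio = float(np.clip(target_mean / mean, 1.0 / max_step, max_step))
    return current_exposure_ms * ratio

frame = np.random.rand(480, 640).astype(np.float32) * 0.2   # dark preview frame
print(next_exposure(frame, current_exposure_ms=10.0))        # exposure is increased
```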
In one possible embodiment, the first processor is an ISP, which includes an NPU.
In one possible embodiment, the integrated processor is a SOC.
In a second aspect, there is provided an image processing apparatus having a function of realizing the behavior of the image processing method in the first aspect described above. The image processing apparatus comprises at least one module for implementing the image processing method provided in the first aspect.
In a third aspect, an image processing device is provided that includes a processor and a memory. The memory stores a program that supports the image processing device in executing the image processing method provided in the first aspect, and stores data used to implement that method. The processor is configured to execute the program stored in the memory. The image processing device may further comprise a communication bus for establishing a connection between the processor and the memory.
In a fourth aspect, there is provided a computer-readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the image processing method of the first aspect described above.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the image processing method of the first aspect described above.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described in detail herein.
Drawings
Fig. 1 shows a schematic structural diagram of a mobile phone provided in the related art;
fig. 2 shows a schematic structural diagram of a mobile phone according to an embodiment of the present application;
fig. 3 is a schematic diagram showing a comparison of a target image processed by the image processing method provided by the related art and a target image processed by the image processing method provided by the embodiment of the present application;
fig. 4 shows a schematic structural diagram of another mobile phone according to an embodiment of the present application;
FIG. 5 shows a schematic diagram of an AI noise reduction module provided by an embodiment of the application;
FIG. 6 is a schematic diagram of an AI image fusion module provided in an embodiment of the disclosure;
FIG. 7 is a schematic diagram of an AI dynamic range compression module provided by an embodiment of the application;
fig. 8 shows a schematic hardware structure of a mobile phone according to an embodiment of the present application;
fig. 9 shows a block diagram of a software system of a mobile phone according to an embodiment of the present application;
fig. 10 shows a flowchart of an image processing method provided in an embodiment of the present application;
fig. 11 shows a flowchart of another image processing method provided in an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that reference herein to "a plurality" means two or more. In the description of the present application, "/" means "or" unless otherwise indicated; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, to describe the technical solutions of the present application clearly, the words "first", "second", and the like are used to distinguish between identical or similar items having substantially the same function and effect. Those skilled in the art will appreciate that the words "first", "second", and the like do not limit quantity or execution order, and do not necessarily indicate a difference.
It should be noted that, the image processing method provided in the embodiment of the present application is applicable to any electronic device having a shooting function and multiple cameras, such as a mobile phone, a tablet computer, a camera, an intelligent wearable device, etc., which is not limited in this embodiment of the present application. In addition, the image processing method provided in the embodiment of the present application may be applied to various shooting scenes, such as a night scene, an indoor scene, a cloudy day scene, and the like, and of course, may also be applied to other shooting scenes, which is not limited in the embodiment of the present application. Hereinafter, for convenience of description, a case in which a mobile phone photographs in a night scene will be described.
As described in the background, when people use a mobile phone to shoot in a night scene, the user's visual experience is poor because the environment is dark and the images captured by the mobile phone are dark. In the related art, to improve the shooting effect in night scenes, the image data collected by the camera is subjected to image enhancement processing by the general-purpose integrated processor in the mobile phone.
Referring to fig. 1, fig. 1 shows a schematic structural diagram of a mobile phone provided in the related art. As shown in fig. 1, the mobile phone includes a color camera 11 and a general-purpose integrated processor 12, and the integrated processor 12 includes an image front end (IFE) and an image processing engine (IPE). The color camera 11 is used for shooting color images. After the color camera 11 is started, the integrated processor 12 acquires the color image data collected by the color camera 11, preprocesses the color image data through the IFE, and then performs image enhancement processing on the preprocessed color image data through the IPE to obtain the target color image data to be displayed.
However, because the preprocessing and image enhancement of the general-purpose integrated processor have limited image processing capability, the target image data obtained after the integrated processor processes the image data collected by the camera in a night shooting scene may have problems such as an unclear picture, heavy noise, bright areas that are too bright, dark areas that are too dark, and poor color reproduction. To solve this technical problem, an embodiment of the present application provides an image processing method.
Referring to fig. 2, fig. 2 shows a schematic structural diagram of a mobile phone 100 according to an embodiment of the present application, and as shown in fig. 2, the mobile phone 100 includes a camera 10, a camera 20, a processor 30 and an integrated processor 40.
The camera 10 is a black-and-white camera, and is used for capturing black-and-white images.
The camera 20 is a color camera, and is used for capturing color images.
The integrated processor 40 integrates multiple processors, such as a central processing unit (CPU), a graphics processing unit (GPU), and an image signal processor (ISP). The integrated processor 40 is typically integrated on an integrated circuit chip, such as a general-purpose system on chip (SOC).
The integrated processor 40 comprises an image enhancement module 41, the image enhancement module 41 being arranged to enhance the image data. For example, the image enhancement module 41 may be an IPE, or the like.
The processor 30 is a separate processor other than the integrated processor 40 and may be located at the front end of the integrated processor 40. That is, in the embodiment of the present application, a processor dedicated to processing the images captured by the cameras is additionally configured in front of the integrated processor 40 to improve the image processing effect and thus the shooting effect.
For example, the processor 30 may be an ISP, which is a processor specially configured for image processing in addition to the ISP in the integrated processor 40. For example, the processor 30 is an artificial intelligence (artificial intelligence, AI) ISP that includes a neural network processing unit (neural network processing unit, NPU). The processor 30 is illustratively integrated on a first chip, which is a chip other than the SOC.
The processor 30 includes a first noise reduction module 31, a second noise reduction module 32, and an image fusion module 33. The first noise reduction module 31 is used for reducing noise of black-and-white image data, the second noise reduction module 32 is used for reducing noise of color image data, and the image fusion module 33 is used for carrying out image fusion on the black-and-white image data and the color image data so as to obtain color image data with clear details and accurate colors through fusion.
As an example, the first noise reduction module 31, the second noise reduction module 32, and the image fusion module 33 are AI modules, and the corresponding functions may be implemented using an AI algorithm.
In this embodiment, when the mobile phone 100 shoots, black-and-white image data is collected by the camera 10 and color image data is collected by the camera 20. The processor 30 acquires the black-and-white image data collected by the camera 10 and the color image data collected by the camera 20, denoises the black-and-white image data through the first noise reduction module 31, denoises the color image data through the second noise reduction module 32, fuses the denoised black-and-white image data with the denoised color image data through the image fusion module 33 to obtain fused image data, and sends the fused image data to the integrated processor 40. The integrated processor 40 performs image enhancement processing on the fused image data through the image enhancement module 41 to obtain target image data, which is then stored or displayed.
By configuring the additional processor 30 outside the integrated processor 40 and adopting a dual-camera scheme that shoots with a black-and-white camera and a color camera, the black-and-white image data collected by the camera 10 and the color image data collected by the camera 20 can each be denoised by the processor 30 before the image enhancement performed by the integrated processor 40, and the denoised black-and-white and color image data can be fused. This improves the denoising effect, picture sharpness, and color reproduction of the image, thereby improving the camera's shooting effect, especially in night scenes.
Referring to fig. 3, fig. 3 is a schematic comparison of a target image processed by the image processing method of the related art and a target image processed by the image processing method of the embodiment of the present application. Image (a) in fig. 3 is the target image obtained by processing the image collected by the camera with the related-art method, and image (b) in fig. 3 is the target image obtained by processing the image collected by the camera with the method of the embodiment of the present application. As can be seen from image (a), the related-art result has an unclear picture, heavy noise, and poor color reproduction, with bright areas that are too bright and dark areas that are too dark. Comparing image (b) with image (a), the target image processed by the method of the embodiment of the present application has a clearer picture, uniform brightness, better color reproduction, a better image effect, and a better user visual experience.
As an example, the camera 10 and the camera 20 are connected to the processor 30 through processor interfaces and send the collected image data to the processor 30 through those interfaces. The processor 30 is connected to the integrated processor 40 through a processor interface and sends the fused image data to the integrated processor 40 through that interface. The processor interface may be, for example, a mobile industry processor interface (MIPI).
As an example, a first preprocessing module may be configured before the first noise reduction module 31 to preprocess the black-and-white image data before noise reduction, and a second preprocessing module may be configured before the second noise reduction module 32 to preprocess the color image data before noise reduction. The first preprocessing module and the second preprocessing module may be IFEs or the like.
As an example, a demosaicing module may be configured after the second noise reduction module 32 to demosaic the denoised color image data, converting it from the RAW domain to the RGB domain.
As an example, a dynamic range compression (DRC) module may be configured after the image fusion module 33 to compress the dynamic range of the fused image data, converting it from high-dynamic-range to low-dynamic-range imaging while preserving local contrast and detail. For example, the dynamic range compression module may be an AI module whose function is implemented through an AI algorithm.
As one example, processor 30 is an ISP that includes an NPU, and the above modules are AI modules in the NPU.
Referring to fig. 4, fig. 4 is a schematic structural diagram of another mobile phone 100 according to an embodiment of the present application. As shown in fig. 4, the mobile phone 100 includes a camera 10, a camera 20, an AI ISP 30, and an SOC 40, and the SOC 40 includes an ISP 42.
The AI ISP 30 includes a routing module 34, an IFE 35, an IFE 36, an AI noise reduction module 31, an AI noise reduction module 32, a demosaicing module 37, an AI image fusion module 33, and an AI dynamic range compression module 38. The ISP 42 includes an IPE 41 and a 3A (auto exposure (AE), auto white balance (AWB), auto focus (AF)) module 43.
In addition, the AI ISP 30 includes a plurality of processor interfaces, for example Mipi0, Mipi1, and Mipi2. The camera 10 is connected to the routing module 34 via Mipi0, and the camera 20 is connected to the routing module 34 via Mipi1. The AI dynamic range compression module 38 is connected to the IPE 41 via Mipi0. The routing module 34 is connected to the 3A module 43 via Mipi1 and Mipi2.
The routing module 34 is used to copy the image data collected by the cameras. For example, the routing module 34 is a standard image format (SIF) routing module. The black-and-white image data collected by the camera 10 is sent to the routing module 34 through Mipi0, and the color image data collected by the camera 20 is sent to the routing module 34 through Mipi1. The routing module 34 copies the black-and-white image data and the color image data to obtain two paths of black-and-white image data and two paths of color image data. It sends one path of black-and-white image data to the IFE 35 and one path of color image data to the IFE 36, and sends the other path of black-and-white image data to the 3A module 43 via Mipi1 and the other path of color image data to the 3A module 43 via Mipi2.
The IFE 35 preprocesses the black-and-white image data and sends the preprocessed black-and-white image data to the AI noise reduction module 31. The IFE 36 preprocesses the color image data and sends the preprocessed color image data to the AI noise reduction module 32. The preprocessing is used to correct the image data and includes, for example, one or more of black level correction (BLC), dead pixel correction (bad pixel correction, BPC), lens shading correction (LSC), and auto white balance (AWB). It should be appreciated that the preprocessing may also include other image processing operations, which are not limited in the embodiments of the present application.
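As a small illustration of one of these corrections, the numpy sketch below implements a naive dead/hot pixel correction that replaces outliers with the median of their 3x3 neighbourhood; the threshold is an assumed value and this is not the IFE's actual implementation.

```python
import numpy as np

def correct_dead_pixels(img, threshold=0.25):
    """Replace any pixel that differs from the median of its 3x3 neighbourhood
    by more than `threshold` with that median value."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)], axis=0)
    median = np.median(stack, axis=0)
    bad = np.abs(img - median) > threshold
    out = img.copy()
    out[bad] = median[bad]
    return out

frame = (0.4 + 0.02 * np.random.randn(480, 640)).astype(np.float32)
frame[100, 200] = 1.0                      # simulate a stuck (hot) pixel
cleaned = correct_dead_pixels(frame)
```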
The AI noise reduction module 31 denoises the preprocessed black-and-white image data to obtain denoised black-and-white image data and sends it to the AI image fusion module 33. The AI noise reduction module 32 denoises the preprocessed color image data to obtain denoised color image data and sends it to the demosaicing module 37; the demosaicing module 37 demosaics the denoised color image data and sends the result to the AI image fusion module 33. The AI noise reduction module 31 and the AI noise reduction module 32 are noise reduction modules employing an AI algorithm.
The AI image fusion module 33 fuses the denoised black-and-white image data with the demosaiced color image data to obtain fused image data and sends the fused image data to the AI dynamic range compression module 38. The AI image fusion module 33 is an image fusion module employing an AI algorithm.
The AI dynamic range compression module 38 performs dynamic range compression on the fused image data and sends the compressed fused image data to the IPE 41 in the ISP 42 through Mipi0. The AI dynamic range compression module 38 is a dynamic range compression module employing an AI algorithm.
The IPE41 performs image enhancement processing on the fusion image data after dynamic range compression to obtain target image data. The image enhancement processing may include hardware noise reduction, image cropping, color enhancement, detail enhancement, or other image processing operations.
The 3A module 43 is configured to calculate 3A values (an AE value, an AWB value, and an AF value) from image data using a 3A algorithm. After the routing module 34 sends the other path of black-and-white image data to the 3A module 43 through Mipi1 and the other path of color image data to the 3A module 43 through Mipi2, the 3A module 43 may calculate a first 3A value from the received black-and-white image data and control the camera 10 accordingly, for example by adjusting the 3A parameters of the camera 10 according to the first 3A value, or by sending the first 3A value to the camera 10 so that the camera 10 adjusts its own 3A parameters. Likewise, the 3A module 43 may calculate a second 3A value from the received color image data and control the camera 20 accordingly, for example by adjusting the 3A parameters of the camera 20 according to the second 3A value, or by sending the second 3A value to the camera 20 so that the camera 20 adjusts its own 3A parameters.
Referring to fig. 5, fig. 5 shows a schematic diagram of an AI noise reduction module according to an embodiment of the present application. As shown in fig. 5, the noise reduction module includes a neural network 1, which is used to denoise an image: image data is input to the neural network 1, and the denoised image data is output. For example, referring to fig. 5 and taking video frame data as an example, the N-th frame of video data and the denoised (N-1)-th frame of video data may be used as inputs of the neural network 1, and the denoised N-th frame of video data is output through the neural network 1.
Referring to fig. 6, fig. 6 is a schematic diagram of an AI image fusion module according to an embodiment of the present application. As shown in fig. 6, the AI image fusion module includes a neural network 2, and the neural network 2 is configured to perform image fusion on black-and-white image data and color image data to obtain fused image data. For example, the black-and-white image data and the color image data are first aligned in scale, and then the black-and-white image data and the color image data after the alignment in scale are used as inputs of the neural network 2, and the fused image data is output through the neural network 2.
Referring to fig. 7, fig. 7 is a schematic diagram of an AI dynamic range compression module according to an embodiment of the disclosure. As shown in fig. 7, the AI dynamic range compression module includes a neural network 3, and the neural network 3 is used for dynamic range compression of image data. For example, high dynamic range image data may be input to the neural network 3, and low dynamic range image data may be output through the neural network 3.
The neural network may be a convolutional neural network, a Net neural network, or the like, but may be other neural networks, which is not limited in the embodiment of the present application.
Referring to fig. 8, fig. 8 shows a hardware structure schematic of a mobile phone 100 according to an embodiment of the present application. Referring to fig. 8, the handset 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the mobile phone 100. In other embodiments of the present application, the handset 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components may be provided. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc.
The different processing units may be separate devices or may be integrated in one or more processors. For example, the processor 110 includes a separate processor 111 and an integrated processor 112. The processor 111 is configured to denoise and fuse the image data collected by multiple cameras among the cameras 193 to obtain fused image data. The integrated processor 112 is configured to perform image enhancement processing on the fused image data obtained by the processor 111. Illustratively, the processor 111 is an AI ISP that includes an NPU, and the integrated processor 112 includes a general-purpose ISP.
The controller may be a neural center or a command center of the mobile phone 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can be fetched directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces, such as may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the camera function of cell phone 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display function of the handset 100.
It should be understood that the connection relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not limited to the structure of the mobile phone 100. In other embodiments of the present application, the mobile phone 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the mobile phone 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. Such as: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to the handset 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. applied to the handset 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module.
The mobile phone 100 implements display functions through a GPU, a display 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the mobile phone 100 may include 1 or N display screens 194, where N is an integer greater than 1.
The mobile phone 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the light signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the cell phone 100 may include 1 or N cameras 193, N being an integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the handset 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, etc.
Video codecs are used to compress or decompress digital video. The mobile phone 100 may support one or more video codecs, so that it can play or record video in multiple coding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, such as referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the mobile phone 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capabilities of the handset 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. Such as storing files of music, video, etc. in an external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The processor 110 performs various functional applications of the cellular phone 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data (e.g., audio data, phonebook, etc.) created by the handset 100 during use, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The mobile phone 100 may implement audio functions such as music playing, recording, etc. through the audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone interface 170D, and application processor, etc. The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys or touch keys. The handset 100 may receive key inputs, generating key signal inputs related to user settings and function control of the handset 100. The motor 191 may generate a vibration cue. The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card.
Next, a software system of the mobile phone 100 will be described.
The software system of the mobile phone 100 may employ a layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In this embodiment, a software system of the mobile phone 100 is exemplarily described by taking an Android (Android) system with a layered architecture as an example.
Fig. 9 shows a block diagram of the software system of the mobile phone 100 according to an embodiment of the present application. Referring to fig. 9, the layered architecture divides the software into several layers, each with a clear role and division of labor, and the layers communicate with each other through software interfaces. In some embodiments, the Android system is divided, from top to bottom, into an application layer, an application framework layer, an Android runtime (Android Runtime) and system library layer, a kernel layer, and a hardware abstraction layer (HAL).
The application layer may include a series of application packages. As shown in fig. 9, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 9, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like. The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like. The content provider is used to store and retrieve data, which may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc., and make such data accessible to the application. The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to construct a display interface for an application, which may be comprised of one or more views, such as a view that includes displaying a text notification icon, a view that includes displaying text, and a view that includes displaying a picture. The phone manager is used to provide communication functions of the handset 100, such as management of call status (including on, off, etc.). The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like. The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. For example, a notification manager is used to inform that the download is complete, a message alert, etc. The notification manager may also be a notification that appears in the system top status bar in the form of a chart or a scroll bar text, such as a notification of a background running application. The notification manager may also be a notification that appears on the screen in the form of a dialog window, such as a text message being prompted in a status bar, a notification sound being emitted, the electronic device vibrating, a flashing indicator light, etc.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system. The core library consists of two parts: one part is the functional functions that the Java language needs to call, and the other part is the core library of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, such as a surface manager (surface manager), media libraries (Media Libraries), three-dimensional graphics processing libraries (e.g., OpenGL ES), and 2D graphics engines (e.g., SGL). The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is used for implementing three-dimensional graphics drawing, image rendering, synthesis, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises camera drivers, processor drivers, display drivers, audio drivers and other device drivers. The device driver is an interface between the I/O system and related hardware for driving the corresponding hardware device.
The hardware abstraction layer is an interface layer between the kernel layer and the hardware circuitry. The HAL is a kernel-mode module that hides hardware-related details such as I/O interfaces, interrupt controllers, and multiprocessor communication mechanisms, provides a uniform service interface to the layers above it across different hardware platforms, and thus enables portability across a variety of hardware platforms.
The hardware layer includes a camera group, a separate processor, an integrated processor, a display, an audio device, and the like. The camera group includes a plurality of cameras, for example a black-and-white camera and a color camera. By way of example, the separate processor may be an AI ISP, and the integrated processor may integrate a CPU, a GPU, a general-purpose ISP, and the like.
It should be noted that, in the embodiments of the present application, only the Android system is used as an example; in other operating systems (such as a Windows system, an iOS system, etc.), the schemes of the present application can also be implemented, as long as the functions implemented by the respective functional modules are similar to those in the embodiments of the present application.
The workflow of the handset 100 software and hardware is illustrated below in connection with capturing a photo scene.
When the touch sensor receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and the time stamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the original input event. Taking as an example a touch operation that is a click operation on the control of the camera application icon, the camera application calls an interface of the application framework layer to start the camera application, then calls the kernel layer to start the camera driver, and captures a still image or video through the camera. After the camera captures the still image or video, the camera application may also invoke the separate processor and the integrated processor through the HAL layer, and the still image or video captured by the camera is image-processed by the separate processor and the integrated processor.
Next, an image processing method provided in the embodiment of the present application will be described in detail.
Fig. 10 shows a flowchart of an image processing method according to an embodiment of the present application, where the method is applied to an electronic device such as a mobile phone, and the electronic device includes a first camera, a second camera, a first processor, and an integrated processor. As shown in fig. 10, the method includes the steps of:
Step 1001: the first camera collects first image data, and the first image data is black-and-white image data.
The first camera is a black-and-white camera for shooting black-and-white images. The first image data is single-channel original image data acquired by the first camera.
Step 1002: the first camera sends the first image data to the first processor.
The first camera can be connected with the first processor through a relevant interface, and the first image data can be sent to the first processor through the relevant interface. For example, the relevant interface may be a Mipi interface.
Step 1003: the second camera collects second image data, and the second image data is color image data.
The second camera is a color camera for shooting color images. The second image data is original image data of the RAW domain acquired by the second camera.
Step 1004: the second camera transmits the color image data to the first processor.
The second camera can be connected with the second processor through the relevant interface, and the second image data is sent to the first processor through the relevant interface. For example, the relevant interface may be a Mipi interface.
Step 1005: the first processor performs noise reduction on the first image data and the second image data after receiving the first image data and the second image data.
The first processor is a separate processor arranged outside the integrated processor, and in the embodiments of the present application it is used for performing image processing on the images acquired by the cameras. For example, the first processor is an AI processor including an NPU. For example, the first processor is an ISP that includes an NPU, i.e., the first processor is an AI ISP that includes an NPU.
By denoising the first image data and the second image data, the image data with high signal to noise ratio can be obtained, and the definition of the image data is improved.
As one example, the first processor may employ an AI algorithm to denoise the first image data and the second image data, respectively. Thus, the operation efficiency of the first processor can be improved.
For example, the first processor includes an NPU that includes a first neural network for denoising black and white image data and a second neural network for denoising color image data. The first processor can reduce noise of the first image data through the first neural network to obtain the first image data after noise reduction; and denoising the second image data through a second neural network to obtain denoised second image data.
The first neural network and the second neural network may be the same neural network or different neural networks, which is not limited in the embodiment of the present application.
The operation efficiency of the first processor can be further improved by running the AI noise reduction algorithm on the dedicated NPU.
As one example, the first and second neural networks may be as shown in fig. 5 above. Taking a video shooting scene as an example, the first neural network and the second neural network may determine a noise reduction frame of the current frame according to noise reduction frames of the current frame and adjacent frames. For example, the first neural network and the second neural network may determine the noise reduction frame of the current frame based on the time domain information of the current frame and the time domain information of the noise reduction frame of the neighboring frame.
For example, the first image data is first video frame data acquired by a first camera. For the first video frame data, the first processor may use the first video frame data and the third video frame data as inputs of the first neural network, and process the first video frame data and the third video frame data through the first neural network, to obtain first image data after noise reduction. The third video frame data is obtained after noise reduction of the video frame data acquired by the first camera before the first video frame data, namely the third video frame data is noise reduction data of the video frame data of the previous frame of the first video frame data.
For example, the second image data is second video frame data acquired by a second camera. For the second video frame data, the first processor may use the second video frame data and the fourth video frame data as inputs of the second neural network, and process the second video frame data and the fourth video frame data through the second neural network to obtain noise-reduced second image data. The fourth video frame data is obtained after noise reduction of the video frame data acquired by the second camera before the second video frame data, that is, the fourth video frame data is noise reduction data of the video frame data of the previous frame of the second video frame data.
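As a purely illustrative aside (not part of the claimed method), the recurrent structure described above — the current noisy frame and the previous frame's noise-reduced output fed jointly into a neural network — can be sketched as follows. The class and function names (TemporalDenoiser, denoise_stream) and the tiny convolutional body are assumptions for illustration; the actual first and second neural networks are not specified beyond their inputs and outputs.

```python
import torch
import torch.nn as nn

class TemporalDenoiser(nn.Module):
    """Sketch of a recurrent video denoiser: the current noisy frame is processed
    together with the previous frame's denoised output (channels=1 for a
    black-and-white stream, channels=3 for an RGB stream)."""
    def __init__(self, channels: int):
        super().__init__()
        # Placeholder network body; the actual first/second neural network is
        # not specified beyond its inputs (current frame + previous denoised
        # frame) and output (denoised current frame).
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, current: torch.Tensor, prev_denoised: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([current, prev_denoised], dim=1))

def denoise_stream(frames, denoiser):
    """Denoise a sequence of frames; the first frame has no temporal reference,
    so it is paired with itself (one possible convention)."""
    prev = None
    for frame in frames:
        prev = denoiser(frame, prev if prev is not None else frame)
        yield prev
```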
As an example, the first processor may further pre-process the first image data and the second image data, respectively, before denoising the first image data and the second image data, respectively. Then, denoising the preprocessed first image data to obtain denoised first image data; and denoising the preprocessed second image data to obtain denoised second image data.
Wherein the preprocessing is used for correcting the image data. For example, the preprocessing includes one or more of black level correction, dead pixel correction, lens shading correction, and automatic white balance, and may of course also include other image processing operations, which is not limited in the embodiments of the present application.
As one example, the first processor includes a first IFE and a second IFE. The first processor pre-processes the first image data through the first IFE to obtain the pre-processed first image data. The first processor preprocesses the second image data through a second IFE to obtain preprocessed second image data.
The first IFE and the second IFE may be the same IFE or different IFEs, which is not limited in this embodiment of the present application.
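For illustration only, a much-simplified version of this pre-processing can be sketched as below. The black level, white level, and gain-map handling are assumed example values rather than the parameters of the IFE in this embodiment, and dead pixel correction and automatic white balance are omitted for brevity.

```python
from typing import Optional
import numpy as np

def preprocess_raw(raw: np.ndarray,
                   black_level: float = 64.0,
                   white_level: float = 1023.0,
                   shading_gain: Optional[np.ndarray] = None) -> np.ndarray:
    """Simplified pre-processing: black level correction and lens shading
    correction on a normalized sensor frame. Dead pixel correction and
    automatic white balance are omitted; all constants are illustrative."""
    img = (raw.astype(np.float32) - black_level) / (white_level - black_level)
    img = np.clip(img, 0.0, 1.0)
    if shading_gain is not None:  # per-pixel gain map, e.g. from calibration
        img = np.clip(img * shading_gain, 0.0, 1.0)
    return img
```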
As an example, after the first processor performs noise reduction on the second image data, the demosaicing process may be further performed on the second image data after the noise reduction, to obtain sixth image data.
The second image data after noise reduction is the image data of the RAW domain, and the sixth image data is the image data of the RGB domain. The demosaicing process is performed on the noise-reduced second image data, so that the noise-reduced second image data can be converted from the RAW domain to the RGB domain.
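As an illustrative sketch of this RAW-to-RGB conversion (assuming an RGGB Bayer layout, which the embodiment does not specify), a minimal half-resolution demosaic can be written as below; a real demosaicing module would interpolate the missing color samples at full resolution.

```python
import numpy as np

def demosaic_rggb_halfres(raw: np.ndarray) -> np.ndarray:
    """Very simplified demosaic for an RGGB Bayer mosaic: each 2x2 cell is
    collapsed into one RGB pixel (half resolution). A production ISP would
    interpolate missing color samples at full resolution; this only
    illustrates the RAW-to-RGB domain conversion."""
    r  = raw[0::2, 0::2].astype(np.float32)
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)
```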
Step 1006: the first processor performs image fusion on the first image data after noise reduction and the second image data after noise reduction to obtain third image data.
The third image data is fusion image data and is color image data.
By carrying out image fusion on the first image data after noise reduction and the second image data after noise reduction, the brightness information and the detail information of the first image data after noise reduction can be fused with the color information of the second image data after noise reduction, so that fused image data with high signal to noise ratio, clear detail and accurate color can be obtained.
As one example, the first processor may employ an AI algorithm to image fuse the denoised first image data and the denoised second image data. Thus, the operation efficiency of the first processor can be improved.
For example, the first processor includes an NPU that includes a third neural network for image blending black and white image data and color image data. The first processor can perform image fusion on the first image data after noise reduction and the second image data after noise reduction through a third neural network to obtain third image data.
By running the AI image fusion algorithm on the special NPU, the operation speed of the algorithm can be improved, so that the operation efficiency of the first processor is further improved.
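To make the idea of fusing mono luminance with color chrominance concrete, the following is a naive, non-learned sketch under the assumption that both inputs are already registered and scale-aligned (scale alignment is described next); it is not the third neural network itself, only an illustration of what such a fusion aims at.

```python
import numpy as np

def fuse_mono_color(mono: np.ndarray, color_rgb: np.ndarray) -> np.ndarray:
    """Naive mono/color fusion: keep the chrominance of the color frame and
    transfer the (higher-SNR, sharper) luminance of the mono frame onto it.
    Assumes float inputs in [0, 1] that are already registered and have the
    same scale; a learned fusion network would replace this heuristic."""
    # BT.601 luma of the color frame
    y = 0.299 * color_rgb[..., 0] + 0.587 * color_rgb[..., 1] + 0.114 * color_rgb[..., 2]
    gain = mono / (y + 1e-6)                 # per-pixel luminance transfer
    fused = color_rgb * gain[..., None]      # rescale RGB to match mono luma
    return np.clip(fused, 0.0, 1.0)
```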
As an example, the first processor may first perform scale alignment on the first image data after noise reduction and the second image data after noise reduction to obtain scale-aligned first image data and second image data, where the scales of the scale-aligned first image data and second image data are the same. The scale-aligned first image data and second image data are then used as the input of the third neural network, and are processed through the third neural network to obtain the third image data.
The scale of the image data with larger scale can be reduced, or the scale of the image data with smaller scale can be increased, so that the first image data after noise reduction and the second image data after noise reduction are subjected to scale alignment.
As an example, the first processor may obtain key parameters of the first camera and the second camera, and determine a scale difference between the noise-reduced first image data and the noise-reduced second image data according to the key parameters of the first camera and the second camera. And then, carrying out scale alignment on the first image data after noise reduction and the second image data after noise reduction according to the scale difference to obtain the first image data and the second image data after scale alignment.
The key parameters of the camera include one or more of focal length, pixel size and field angle, and of course, other key parameters may also be included, which is not limited in the embodiment of the present application.
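A minimal sketch of such parameter-driven scale alignment is given below; approximating the relative scale as focal length divided by pixel size and resizing with OpenCV are illustrative assumptions, not the method actually used by the first processor.

```python
import cv2
import numpy as np

def scale_align(img_a: np.ndarray, img_b: np.ndarray,
                focal_a_mm: float, pixel_a_um: float,
                focal_b_mm: float, pixel_b_um: float):
    """Approximate each camera's image scale as focal length / pixel size and
    downscale the larger-scale image to match the other. The scale model and
    the use of plain resizing are illustrative simplifications."""
    ratio = (focal_a_mm / pixel_a_um) / (focal_b_mm / pixel_b_um)
    if ratio > 1.0:                  # camera A has the larger scale: shrink A
        h, w = img_a.shape[:2]
        img_a = cv2.resize(img_a, (int(round(w / ratio)), int(round(h / ratio))))
    elif ratio < 1.0:                # camera B has the larger scale: shrink B
        h, w = img_b.shape[:2]
        img_b = cv2.resize(img_b, (int(round(w * ratio)), int(round(h * ratio))))
    return img_a, img_b
```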
In addition, after the first processor performs noise reduction on the second image data, demosaicing processing may be performed on the noise-reduced second image data to obtain sixth image data. Then, image fusion is performed on the noise-reduced first image data and the sixth image data to obtain the third image data.
Step 1007: the first processor sends the third image data to the integrated processor.
Wherein the integrated processor is integrated with a plurality of processors, such as with a general purpose ISP, CPU and GPU. The integrated processor may be integrated on an integrated circuit, such as an integrated processor integrated on an SOC.
The first processor may be connected to the integrated processor through a correlation interface, and the third image data may be transmitted to the integrated processor through the correlation interface. For example, the relevant interface may be a Mipi interface.
As an example, the first processor may also perform dynamic range compression on the third image data to obtain the fifth image data, and then send the fifth image data to the integrated processor.
Wherein the dynamic range of the fifth image data is lower than the dynamic range of the third image data. For example, the third image data is high dynamic range (high dynamic range, HDR) image data, and the fifth image data is low dynamic range (low dynamic range, LDR) image data.
By performing dynamic range compression on the third image data, the third image data can be compressed from a high-bit-width image to a low-bit-width image, and the local contrast and detail information of the image can be maintained.
As one example, the first processor may employ an AI algorithm to perform dynamic range compression on the third image data. Thus, the operation efficiency of the first processor can be improved.
For example, the first processor includes an NPU that includes a fourth neural network for dynamic range compression of image data. The first processor may perform dynamic range compression on the third image data through a fourth neural network to obtain fifth image data.
By running the AI dynamic range compression algorithm on the dedicated NPU, the operation speed of the algorithm can be increased, thereby further increasing the operation efficiency of the first processor.
As one example, the fourth neural network may employ Tone Mapping (TM) to perform dynamic range compression on the third image data. For example, the first processor takes the third image data as an input of a fourth neural network, and performs tone mapping on the third image data through the fourth neural network to obtain fifth image data.
In addition, after the first processor receives the first image data and the second image data, the first image data and the second image data may be further transmitted to the integrated processor.
For example, the first processor is connected to the integrated processor through the relevant interface, and the first image data and the second image data are sent to the integrated processor through the relevant interface. For example, the relevant interface may be a Mipi interface.
Step 1008: and the integrated processor performs image enhancement processing on the third image data to obtain target image data.
The image enhancement processing may include image processing operations such as hardware noise reduction, image clipping, color enhancement or detail enhancement, and may of course also include other image processing operations, which are not limited in this embodiment of the present application.
As one example, the integrated processor includes an IPE, and the integrated processor performs image enhancement processing on the third image data by means of the IPE, to obtain the target image data.
After the target image data is obtained, the integrated processor may save or send the target image data. For example, the integrated processor may convert the target image data from the RGB domain to the YUV domain, and then save or send the target image data in the YUV domain.
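One possible realization of the RGB-to-YUV conversion mentioned above is a full-range BT.601 matrix transform, sketched below for illustration; the embodiment does not specify which color matrix the integrated processor actually uses.

```python
import numpy as np

def rgb_to_yuv_bt601(rgb: np.ndarray) -> np.ndarray:
    """Full-range BT.601 RGB->YUV conversion; input is float RGB in [0, 1],
    output has Y in [0, 1] and U/V centered on 0.5."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]], dtype=np.float32)
    yuv = rgb.astype(np.float32) @ m.T
    yuv[..., 1:] += 0.5              # center the chroma channels
    return np.clip(yuv, 0.0, 1.0)
```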
In addition, the integrated processor may also receive the first image data and the second image data transmitted by the first processor. After the integrated processor receives the first image data and the second image data, a first 3A value can be determined according to the first image data, and the first camera is controlled according to the first 3A value; and determining a second 3A value according to the second image data, and controlling the second camera according to the second 3A value.
Wherein the first 3A value and the second 3A value include an AF value, an AE value, and an AWB value. The integrated processor may determine the 3A values from the image data using a 3A algorithm, and the 3A algorithm may be preset, which is not limited in the embodiments of the present application.
As one example, the integrated processor may adjust the 3A value of the first camera based on the first 3A value and adjust the 3A value of the second camera based on the second 3A value. Alternatively, the integrated processor may send the first 3A value to the first camera, and the first camera adjusts its own 3A value according to the first 3A value; and send the second 3A value to the second camera, and the second camera adjusts its own 3A value according to the second 3A value.
In this way, automatic exposure, automatic white balance, and automatic focusing can be performed for the first camera and the second camera according to the image information of the first image data and the second image data, improving the shooting effect of subsequent images.
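Although the 3A algorithm itself is left unspecified (it may be preset), the kind of statistics it works from can be illustrated with the sketch below: an exposure correction from mean luminance, gray-world white-balance gains, and a contrast-based focus score. All of these are assumptions for illustration, not the algorithm of this embodiment.

```python
import numpy as np

def compute_3a_stats(rgb: np.ndarray, target_luma: float = 0.18):
    """Illustrative 3A statistics from a preview frame: an exposure gain from
    mean luminance (AE), gray-world white-balance gains (AWB), and a
    contrast-based focus score (AF)."""
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    ae_gain = target_luma / max(float(luma.mean()), 1e-6)      # AE correction
    means = rgb.reshape(-1, 3).mean(axis=0)
    awb_gains = means.mean() / np.maximum(means, 1e-6)         # gray-world AWB
    gy, gx = np.gradient(luma)
    af_score = float((gx ** 2 + gy ** 2).mean())               # focus sharpness
    return {"ae_gain": ae_gain, "awb_gains": awb_gains, "af_score": af_score}
```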
As an example, the electronic device may process image data captured by the camera in a specific capturing mode by using the image processing method provided in the embodiment of the present application. The specific shooting mode may be preset, for example, may be a night shooting mode, an indoor shooting mode, a cloudy day shooting mode, and the like, which is not limited in the embodiment of the present application.
In this embodiment of the present application, by configuring an additional first processor outside the integrated processor and adopting a dual-camera scheme in which the black-and-white camera and the color camera are respectively used for shooting, before the image enhancement processing of the integrated processor, the first processor may firstly perform noise reduction on the black-and-white image data collected by the black-and-white camera and the color image data collected by the color camera, and perform image fusion on the black-and-white image data and the color image data after noise reduction, and then send the fused image data to the integrated processor, where the integrated processor further performs image enhancement processing on the fused image data. The first processor is used for respectively reducing noise of the black-and-white image data acquired by the black-and-white camera and the color image data acquired by the color camera, so that the black-and-white image data and the color image data with high signal-to-noise ratio can be obtained, and the definition of the image data is improved. By performing image fusion on the denoised black-and-white image data and the color image data, the brightness information and the detail information of the denoised black-and-white image data and the color information of the denoised color image data can be fused, so that the image data with high signal-to-noise ratio, clear detail and accurate color can be obtained. Therefore, the definition, color and brightness of the image can be enhanced in all directions, the image with higher definition, stronger color restoration capability and more uniform brightness is obtained, the shooting effect of the camera is further improved, and especially the shooting effect of shooting scenes with weaker light rays such as night scenes, indoor scenes, overcast scenes and the like can be improved.
Next, in conjunction with fig. 4, a video shooting scene is taken as an example, and an image processing method provided in an embodiment of the present application will be described in detail.
Fig. 11 shows a flowchart of another image processing method according to an embodiment of the present application, which is applied to the mobile phone 100 shown in fig. 4. As shown in fig. 11, the method includes the steps of:
step 1101: the camera 10 collects a video frame 1, and the video frame 1 is a black-and-white video frame.
Wherein the camera 10 is a black-and-white camera for capturing black-and-white images, and the camera 20 is a color camera for capturing color images.
Step 1102: the camera 10 sends the video frame 1 to the routing module 34.
For example, camera 10 sends video frame 1 to routing module 34 via Mipi 0.
Step 1103: the camera 20 captures a video frame 2, the video frame 2 being a color video frame.
As one example, video frame 1 and video frame 2 are video frames acquired at the same time.
Step 1104: camera 20 sends video frame 2 to routing module 34.
For example, camera 20 sends video frame 2 to routing module 34 via Mipi 0.
As an example, the camera application may call the camera 10 and the camera 20 at the same time after receiving the video shooting instruction, and collect black and white video frames and color video frames at the same time in a double-shot mode.
It should be noted that, the video frame 1 and the video frame 2 are both original video data collected by the camera, the video frame 1 is a single channel video frame, and the video frame 2 is a video frame of the RAW domain.
Step 1105: the routing module 34 copies the video 1 to obtain video frame 3 and copies the video frame 2 to obtain video frame 4.
Step 1106: routing module 34 sends video frame 1 to IFE35 and video frame 2 to IFE36.
Step 1107: routing module 34 sends video frame 3 and video frame 4 to 3A module 43.
For example, routing module 34 may send video frame 3 and video frame 4 to 3A module 43 via the same or different interfaces. For example, video frame 3 is sent to 3A module 43 via Mipi1 and video frame 4 is sent to 3A module 43 via Mipi 2.
Step 1108: IFE35 pre-processes video frame 1 to obtain pre-processed video frame 1.
Wherein the preprocessing is used for correcting the image data; for example, the preprocessing includes one or more of black level correction, dead pixel correction, lens shading correction, and automatic white balance. It should be appreciated that the preprocessing may also include other image processing operations, which is not limited in the embodiments of the present application.
Step 1109: IFE35 sends preprocessed video frame 1 to AI noise reduction module 31.
Step 1110: IFE36 pre-processes video frame 2 to obtain pre-processed video frame 2.
It should be noted that the IFE35 and the IFE36 may be the same IFE or may be different IFEs, which is not limited in the embodiment of the present application.
Step 1111: IFE36 sends preprocessed video frame 2 to AI noise reduction module 32.
Step 1112: the AI noise reduction module 31 performs noise reduction on the preprocessed video frame 1 to obtain a noise-reduced video frame 1.
The AI noise reduction module 31 may use an AI noise reduction algorithm to reduce noise of the preprocessed video frame 1.
For example, the AI noise reduction module 31 includes a first neural network for reducing noise of black-and-white image data. The AI noise reduction module 31 performs noise reduction on the preprocessed video frame 1 through the first neural network to obtain a noise-reduced video frame 1.
The first neural network may be, for example, the neural network 1 shown in fig. 5. The video frame 1 and the noise reduction result of the previous video frame of the video frame 1 may be used as the input of the first neural network, and the noise reduction result of the video frame 1, that is, the noise-reduced video frame 1, is output through the first neural network.
Step 1113: the AI noise reduction module 31 sends the denoised video frame 1 to the AI image fusion module 33.
Step 1114: the AI noise reduction module 32 reduces noise of the preprocessed video frame 2 to obtain a noise-reduced video frame 2.
The AI noise reduction module 32 may use an AI noise reduction algorithm to reduce noise of the preprocessed video frame 2.
For example, the AI noise reduction module 32 includes a second neural network for denoising color image data. The AI noise reduction module 32 performs noise reduction on the preprocessed video frame 2 through the second neural network to obtain a noise-reduced video frame 2.
The second neural network may be, for example, the neural network 1 shown in fig. 5. The video frame 2 and the noise reduction result of the previous video frame of the video frame 2 may be used as the input of the second neural network, and the noise reduction result of the video frame 2, that is, the noise-reduced video frame 2, is output through the second neural network.
By reducing the noise of the video frame 1 and the video frame 2, a video frame 1 and a video frame 2 with a high signal-to-noise ratio are obtained, improving the definition of the video frames.
Step 1115: the AI noise reduction module 32 sends the noise reduced video frame 2 to the demosaicing module 37.
Step 1116: the demosaicing module 37 performs demosaicing processing on the video frame 2 after noise reduction, to obtain the video frame 2 after demosaicing processing.
The demosaicing process is performed on the video frame 2 after noise reduction, so that the video frame 2 after noise reduction can be converted from a RAW domain to an RGB domain, and the video frame 2 with high signal to noise ratio in the RGB domain can be obtained.
Step 1117: the demosaicing module 37 sends the demosaiced video frame 2 to the AI image fusion module 33.
Step 1118: the AI image fusion module 33 performs image fusion on the video frame 1 after noise reduction and the video frame 2 after demosaicing processing to obtain a fused video frame.
The fusion video frame is a color video frame and is a color video frame in RGB domain.
The AI image fusion module 33 may perform image fusion on the video frame 1 after noise reduction and the video frame 2 after demosaicing processing by using AI image fusion.
For example, the AI image fusion module 33 includes a third neural network for image fusion of black and white image data and color image data. The AI image fusion module 33 performs image fusion on the video frame 1 after noise reduction and the video frame 2 after demosaicing processing through a third neural network to obtain a fused video frame.
The third neural network may be, for example, the neural network 2 shown in fig. 6. The video frame 1 after noise reduction and the video frame 2 after demosaicing can be subjected to scale alignment, and then the two video frames after scale alignment are used as the input of a third neural network, and the fused video frame is output through the third neural network.
The image fusion is carried out on the video frame 1 after noise reduction and the video frame 2 after demosaicing, so that the brightness information and the detail information of the video frame 1 after noise reduction and the color information of the video frame 2 after demosaicing can be fused, and the fused video frame with high signal to noise ratio, clear detail and accurate color can be obtained.
Step 1119: the AI image fusion module 33 sends the fused video frame to the AI dynamic range compression module 38.
Step 1120: the AI dynamic range compression module 38 performs dynamic range compression on the fusion video frames to obtain the fusion video frames after dynamic range compression.
The dynamic range of the fusion video frame after the dynamic range compression is lower than that of the fusion video frame. For example, the fusion video frame is an HDR video frame, and the fusion video frame after dynamic range compression is an LDR video frame.
The AI dynamic range compression module 38 may employ AI dynamic range compression to perform dynamic range compression on the fused video frames, among other things.
For example, the AI dynamic range compression module 38 includes a fourth neural network that is used to perform dynamic range compression on image data. The AI dynamic range compression module 38 performs dynamic range compression on the fused video frame through the fourth neural network to obtain the fused video frame after dynamic range compression.
The fourth neural network may be, for example, the neural network 3 shown in fig. 7. The fused video frame can be used as an input of a fourth neural network, and the fused video frame with the compressed dynamic range is output through the fourth neural network.
The fusion video frame can be compressed from a high-bit-width image to a low-bit-width image by carrying out dynamic range compression on the fusion video frame, and the local contrast and detail information of the image can be reserved.
Step 1121: the AI dynamic range compression module 38 sends the dynamic range compressed fusion video frames to IPE41.
For example, the AI dynamic range compression module 38 may send the dynamic range compressed fusion video frame to IPE41 via Mipi 0.
Step 1122: the IPE41 performs image enhancement processing on the fusion video frame after dynamic range compression to obtain a target video frame.
The image enhancement processing may include image processing operations such as hardware noise reduction, image clipping, color enhancement or detail enhancement, and may of course also include other image processing operations, which are not limited in this embodiment of the present application.
Step 1123: the IPE41 saves or displays the target video frame.
As an example, IPE41 may convert the target video frame from RGB domain to YUV domain, and then save or send the target video frame in YUV domain.
Step 1124: the 3A module 43 determines a first 3A value from video frame 3 using a 3A algorithm and a second 3A value from video frame 4 using a 3A algorithm.
Wherein the 3A value includes an AF value, an AE value, and an AWB value.
Step 1125: the 3A module 43 sends the first 3A value to the camera 10 and the second 3A value to the camera 20.
Step 1126: the camera 10 adjusts its own 3A value according to the first 3A value.
Step 1127: the camera 20 adjusts its own 3A value according to the second 3A value.
In this way, the cameras can perform automatic exposure, automatic white balance, and automatic focusing according to the original video frames, improving the shooting effect of the subsequent video.
It should be noted that the AI ISP30 includes an NPU, and the AI noise reduction module 31, the AI noise reduction module 32, the AI image fusion module 33, and the AI dynamic range compression module 38 may all run on the dedicated NPU, so that the operation speed of the algorithms can be improved, thereby further improving the operation efficiency of the AI ISP30.
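To summarize the data flow of fig. 11 in one place, the following illustrative sketch wires the stages together as opaque callables (pre-processing, AI noise reduction, demosaicing, scale alignment, AI fusion, AI dynamic range compression); the stage names and the dictionary-based wiring are assumptions for illustration, not the actual interfaces of the AI ISP30.

```python
def process_frame_pair(mono_raw, color_raw, stages, prev_mono, prev_color):
    """Illustrative wiring of the fig. 11 data flow on the separate AI
    processor; each stage is an opaque callable standing in for a module
    described above."""
    mono = stages["preprocess"](mono_raw)                   # IFE35
    color = stages["preprocess"](color_raw)                 # IFE36
    mono_dn = stages["denoise_mono"](mono, prev_mono)       # AI noise reduction module 31
    color_dn = stages["denoise_color"](color, prev_color)   # AI noise reduction module 32
    color_rgb = stages["demosaic"](color_dn)                # demosaicing module 37
    mono_al, color_al = stages["scale_align"](mono_dn, color_rgb)
    fused = stages["fuse"](mono_al, color_al)               # AI image fusion module 33
    ldr = stages["tone_map"](fused)                         # AI dynamic range compression module 38
    return ldr, mono_dn, color_dn   # LDR frame for the SOC40 + recurrent denoiser state
```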
In this embodiment of the present application, by configuring the AI ISP30 outside the ISP of the SOC40 and adopting a dual-camera scheme in which the black-and-white camera and the color camera are respectively used for photographing, before the image enhancement processing of the SOC40, the black-and-white image data collected by the black-and-white camera and the color image data collected by the color camera may be first noise reduced by the AI ISP30, and the noise reduced black-and-white image data and the noise reduced color image data may be subjected to image fusion, and then the fused image data may be sent to the integrated processor, where the integrated processor may further perform the image enhancement processing. The black-and-white image data collected by the black-and-white camera and the color image data collected by the color camera are respectively noise-reduced through the AI ISP30, so that the black-and-white image data and the color image data with high signal-to-noise ratio can be obtained, and the definition of the image data is improved. By performing image fusion on the denoised black-and-white image data and the color image data, the brightness information and the detail information of the denoised black-and-white image data and the color information of the denoised color image data can be fused, so that the image data with high signal-to-noise ratio, clear detail and accurate color can be obtained. Therefore, the definition, color and brightness of the image can be enhanced in all directions, the image with higher definition, stronger color restoration capability and more uniform brightness is obtained, the shooting effect of the camera is further improved, and especially the shooting effect of shooting scenes with weaker light rays such as night scenes, indoor scenes, overcast scenes and the like can be improved.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, data subscriber line (Digital Subscriber Line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium such as a floppy Disk, a hard Disk, a magnetic tape, an optical medium such as a digital versatile Disk (Digital Versatile Disc, DVD), or a semiconductor medium such as a Solid State Disk (SSD), etc.
The above embodiments are not intended to limit the present application, and any modifications, equivalent substitutions, improvements, etc. within the technical scope of the present disclosure should be included in the protection scope of the present application.

Claims (13)

1. The image processing method is characterized by being applied to electronic equipment, wherein the electronic equipment comprises a first camera, a second camera, a first processor and an integrated processor, the first processor comprises a neural network processing unit NPU, the NPU comprises a first neural network and a second neural network, the first neural network is used for reducing noise of black and white image data, and the second neural network is used for reducing noise of color image data; the method comprises the following steps:
the first processor acquires first image data acquired by the first camera and second image data acquired by the second camera, wherein the first image data is black-and-white image data, and the second image data is color image data; the first image data are first video frame data acquired by the first camera, and the second image data are second video frame data acquired by the second camera;
The first processor takes the first video frame data and third video frame data as input of the first neural network, processes time domain information of the first video frame data and the third video frame data through the first neural network to obtain first image data after noise reduction, and the third video frame data is obtained after noise reduction of video frame data acquired by the first camera before the first video frame data; the first processor takes the second video frame data and fourth video frame data as input of the second neural network, processes time domain information of the second video frame data and the fourth video frame data through the second neural network to obtain the denoised second image data, and the fourth video frame data is obtained after noise reduction of video frame data acquired by the second camera before the second video frame data;
the first processor performs scale alignment on the first image data after noise reduction and the second image data after noise reduction to obtain first image data and second image data after scale alignment, wherein the scales of the first image data and the second image data after scale alignment are the same; the first processor performs image fusion on the first image data with the aligned scales and the second image data with the aligned scales to obtain third image data, and the third image data is sent to the integrated processor; the scale alignment means that the scale of the image data with larger scale is reduced or the scale of the image data with smaller scale is increased;
The integrated processor performs image enhancement processing on the third image data to obtain target image data;
the first processor sends the first image data and the second image data to the integrated processor;
the integrated processor determines a first 3A value according to the first image data, determines a second 3A value according to the second image data, and the first 3A value and the second 3A value comprise an automatic focusing AF value, an automatic exposure AE value and an automatic white balance AWB value;
the integrated processor controls the first camera according to the first 3A value and controls the second camera according to the second 3A value.
2. The method of claim 1, wherein prior to the first processor denoising the first image data and the second image data, respectively, further comprising:
the first processor respectively preprocesses the first image data and the second image data, wherein the preprocessing comprises one or more of black level correction, dead pixel correction, lens shading correction and automatic white balance;
the first processor performs noise reduction on the first image data and the second image data respectively, and includes:
The first processor performs noise reduction on the preprocessed first image data to obtain noise-reduced first image data;
the first processor performs noise reduction on the preprocessed second image data to obtain the noise-reduced second image data.
3. The method of claim 1, wherein the first processor includes a first IFE and a second IFE, the first processor pre-processing the first image data and the second image data, respectively, comprising:
the first processor preprocesses the first image data through the first IFE to obtain preprocessed first image data;
and the first processor preprocesses the second image data through the second IFE to obtain the preprocessed second image data.
4. The method of claim 1, wherein the first processor comprises an NPU comprising a third neural network for image fusion of black and white image data and color image data;
the first processor performs image fusion on the first image data with the aligned scales and the second image data with the aligned scales to obtain third image data, and the method comprises the following steps:
And the first processor performs image fusion on the first image data with the aligned scales and the second image data with the aligned scales through the third neural network to obtain third image data.
5. The method of claim 1, wherein the first processor performs scale alignment on the denoised first image data and the denoised second image data to obtain scale-aligned first image data and second image data, comprising:
the first processor obtains key parameters of the first camera and the second camera, wherein the key parameters comprise one or more of focal length, pixel size and field angle;
the first processor determines the scale difference between the first image data after noise reduction and the second image data after noise reduction according to the key parameters of the first camera and the second camera;
and the first processor performs scale alignment on the first image data after noise reduction and the second image data after noise reduction according to the scale difference to obtain the first image data and the second image data after scale alignment.
6. The method of any of claims 1-5, wherein the first processor sending the third image data to the integrated processor comprises:
The first processor performs dynamic range compression on the third image data to obtain fifth image data, wherein the dynamic range of the fifth image data is lower than that of the third image data;
the first processor sends the fifth image data to the integrated processor;
the integrated processor performs image enhancement processing on the third image data to obtain target image data, including:
and the integrated processor performs image enhancement processing on the fifth image data to obtain the target image data.
7. The method of claim 6, wherein the first processor comprises an NPU comprising a fourth neural network for dynamic range compression of image data;
the first processor performs dynamic range compression on the third image data to obtain fifth image data, including:
and the first processor performs dynamic range compression on the third image data through the fourth neural network to obtain the fifth image data.
8. The method of claim 7, wherein the first processor dynamically range compresses the third image data via the fourth neural network to obtain the fifth image data, comprising:
The first processor takes the third image data as the input of the fourth neural network, and performs tone mapping on the third image data through the fourth neural network to obtain the fifth image data.
9. The method of claim 1, wherein after the first processor denoises the second image data, further comprising:
the first processor performs demosaicing processing on the second image data after noise reduction to obtain sixth image data;
the first processor performs image fusion on the first image data after noise reduction and the second image data after noise reduction to obtain third image data, and the method comprises the following steps:
and the first processor performs image fusion on the first image data after noise reduction and the sixth image data to obtain third image data.
10. The method of claim 1, wherein the integrated processor includes an image processing engine IPE, and wherein the integrated processor performs image enhancement processing on the third image data to obtain target image data, and includes:
and the integrated processor performs image enhancement processing on the third image data through the IPE to obtain the target image data.
11. The method of claim 1, wherein the first processor is an image signal processor ISP, the ISP comprising an NPU.
12. An electronic device comprising a first camera for acquiring black and white image data, a second camera for acquiring color image data, a memory, a first processor, an integrated processor, a first computer program stored in the memory and executable on the first processor, and a second computer program stored in the memory and executable on the integrated processor, the first computer program when executed by the first processor implementing a method for the first processor to execute according to any one of claims 1-11, the second computer program when executed by the integrated processor implementing a method for the integrated processor to execute according to any one of claims 1-11.
13. A computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any of claims 1-11.
CN202210912803.9A 2022-07-31 2022-07-31 Image processing method, device and storage medium Active CN115460343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210912803.9A CN115460343B (en) 2022-07-31 2022-07-31 Image processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN115460343A CN115460343A (en) 2022-12-09
CN115460343B true CN115460343B (en) 2023-06-13

Family

ID=84297510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210912803.9A Active CN115460343B (en) 2022-07-31 2022-07-31 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN115460343B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002301447B2 (en) * 2001-10-12 2005-04-14 Canon Kabushiki Kaisha Interactive Animation of Sprites in a Video Production
JP2009157647A (en) * 2007-12-26 2009-07-16 Sony Corp Image processing circuit, imaging apparatus, method and program
WO2017090837A1 (en) * 2015-11-24 2017-06-01 Samsung Electronics Co., Ltd. Digital photographing apparatus and method of operating the same
CN106878605A (en) * 2015-12-10 2017-06-20 北京奇虎科技有限公司 The method and electronic equipment of a kind of image generation based on electronic equipment
US10511908B1 (en) * 2019-03-11 2019-12-17 Adobe Inc. Audio denoising and normalization using image transforming neural network
WO2020207262A1 (en) * 2019-04-09 2020-10-15 Oppo广东移动通信有限公司 Image processing method and apparatus based on multiple frames of images, and electronic device
CN112217962A (en) * 2019-07-10 2021-01-12 杭州海康威视数字技术股份有限公司 Camera and image generation method
CN113962884A (en) * 2021-10-10 2022-01-21 杭州知存智能科技有限公司 HDR video acquisition method and device, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881004A (en) * 2012-08-31 2013-01-16 电子科技大学 Digital image enhancement method based on optic nerve network
EP3531689B1 (en) * 2016-11-03 2021-08-18 Huawei Technologies Co., Ltd. Optical imaging method and apparatus
CN107147837A (en) * 2017-06-30 2017-09-08 维沃移动通信有限公司 The method to set up and mobile terminal of a kind of acquisition parameters
CN111586312B (en) * 2020-05-14 2022-03-04 Oppo(重庆)智能科技有限公司 Automatic exposure control method and device, terminal and storage medium
CN114693569A (en) * 2020-12-25 2022-07-01 华为技术有限公司 Method for fusing videos of two cameras and electronic equipment
CN114693857A (en) * 2020-12-30 2022-07-01 华为技术有限公司 Ray tracing multi-frame noise reduction method, electronic equipment, chip and readable storage medium
CN113810600B (en) * 2021-08-12 2022-11-11 荣耀终端有限公司 Terminal image processing method and device and terminal equipment

Also Published As

Publication number Publication date
CN115460343A (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN115473957B (en) Image processing method and electronic equipment
CN109559270B (en) Image processing method and electronic equipment
US11949978B2 (en) Image content removal method and related apparatus
CN112532892B (en) Image processing method and electronic device
CN114095666B (en) Photographing method, electronic device, and computer-readable storage medium
CN113810603B (en) Point light source image detection method and electronic equipment
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
CN113630558B (en) Camera exposure method and electronic equipment
CN115526787B (en) Video processing method and device
CN115967851A (en) Quick photographing method, electronic device and computer readable storage medium
CN115460343B (en) Image processing method, device and storage medium
CN116095512B (en) Photographing method of terminal equipment and related device
CN116761082B (en) Image processing method and device
CN115686182B (en) Processing method of augmented reality video and electronic equipment
CN116051351B (en) Special effect processing method and electronic equipment
US20240137659A1 (en) Point light source image detection method and electronic device
CN115802144B (en) Video shooting method and related equipment
CN116708751B (en) Method and device for determining photographing duration and electronic equipment
CN116723382B (en) Shooting method and related equipment
CN116723410B (en) Method and device for adjusting frame interval
CN116347217A (en) Image processing method, device and storage medium
CN117880645A (en) Image processing method and device, electronic equipment and storage medium
CN117692693A (en) Multi-screen display method and related equipment
CN117115003A (en) Method and device for removing noise

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant